WorldWideScience

Sample records for regression method electronic

  1. Alternative Methods of Regression

    CERN Document Server

    Birkes, David

    2011-01-01

    Of related interest: Nonlinear Regression Analysis and its Applications, Douglas M. Bates and Donald G. Watts. "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models...highly recommend[ed]...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics. This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

  2. Regression methods for medical research

    CERN Document Server

    Tai, Bee Choo

    2013-01-01

    Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

  3. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  4. BOX-COX REGRESSION METHOD IN TIME SCALING

    Directory of Open Access Journals (Sweden)

    ATİLLA GÖKTAŞ

    2013-06-01

    The Box-Cox regression method, with a power transformation λj for j = 1, 2, ..., k, can be used when the dependent variable and error term of the linear regression model do not satisfy the continuity and normality assumptions. The situation of obtaining the smallest mean square error with the optimum power transformation λj of Y is discussed. The Box-Cox regression method is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of the Box-Cox regression method are discussed in the differentiation and differential analysis of the time-scale concept.
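
    As a hedged illustration of the technique this record describes — not the authors' own code — the sketch below estimates the optimal Box-Cox power λ by maximum likelihood on synthetic, positively skewed data and then fits OLS on the transformed response (assumes NumPy and SciPy):

```python
# A minimal sketch of Box-Cox-transformed regression, assuming a strictly
# positive response y and a single predictor x (both synthetic here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 100)
y = np.exp(0.3 * x + rng.normal(0, 0.2, x.size))  # skewed, positive response

# Estimate the optimal power lambda by maximum likelihood.
y_bc, lam = stats.boxcox(y)
print(f"optimal lambda: {lam:.3f}")

# Ordinary least squares on the transformed response.
slope, intercept, r, p, se = stats.linregress(x, y_bc)
print(f"slope={slope:.3f}, intercept={intercept:.3f}, R^2={r**2:.3f}")
```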

  5. Single-electron multiplication statistics as a combination of Poissonian pulse height distributions using constraint regression methods

    International Nuclear Information System (INIS)

    Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.

    1976-01-01

    Analysing the histogram of anode pulse amplitudes allows a discussion of the hypothesis that has been proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution comprising two Poisson distributions of distinct mean values. This first approximation led to a search for a method which could give the weights of several Poisson distributions of distinct mean values. Three methods are briefly presented: classical linear regression, constraint regression (d'Esopo's method), and regression on variables subject to error. These methods give an approximation of the frequency function which represents the dispersion of the punctual mean gain around the whole first-dynode mean gain value. Comparison between this function and the one employed in the Polya distribution shows that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: Does the frequency function represent the dynode structure and the interdynode collection process? Is the model (in which the multiplication process of all dynodes but the first is Poissonian) valid whatever the photomultiplier and the conditions of use? (Auth.)

  6. Quantile Regression Methods

    DEFF Research Database (Denmark)

    Fitzenberger, Bernd; Wilke, Ralf Andreas

    2015-01-01

    Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based...
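
    A minimal sketch of what the record describes, on synthetic heteroscedastic data (assumes statsmodels; the labor-market application is not reproduced here): slopes are estimated at several quantiles and compared.

```python
# Quantile regression at several quantiles; a slope that varies with tau
# indicates a quantile-dependent partial effect, as the abstract describes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 10, n)
# Heteroscedastic noise: the spread of y grows with x.
y = 1.0 + 0.5 * x + rng.normal(0, 0.2 + 0.1 * x, n)
df = pd.DataFrame({"x": x, "y": y})

for q in (0.1, 0.5, 0.9):
    res = smf.quantreg("y ~ x", df).fit(q=q)
    print(f"tau={q}: slope={res.params['x']:.3f}")
# Slopes increasing in tau here reflect the heteroscedastic design.
```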

  7. QUANTITATIVE ELECTRONIC STRUCTURE - ACTIVITY RELATIONSHIP OF ANTIMALARIAL COMPOUND OF ARTEMISININ DERIVATIVES USING PRINCIPAL COMPONENT REGRESSION APPROACH

    Directory of Open Access Journals (Sweden)

    Paul Robert Martin Werfette

    2010-06-01

    An analysis of the quantitative structure-activity relationship (QSAR) for a series of antimalarial artemisinin derivatives has been done using principal component regression. The descriptors for the QSAR study were representations of electronic structure, i.e. atomic net charges of the artemisinin skeleton calculated by the AM1 semi-empirical method. The antimalarial activity of the compounds was expressed in log 1/IC50, which is experimental data. The main purpose of the principal component analysis approach is to transform a large data set of atomic net charges into a simplified data set known as latent variables. The best QSAR equation for log 1/IC50 can be obtained from the regression method as a linear function of several latent variables, i.e. x1, x2, x3, x4 and x5. Keywords: QSAR, antimalarial, artemisinin, principal component regression
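
    The following is a generic principal component regression sketch, assuming scikit-learn; the synthetic matrix stands in for the paper's AM1 atomic net charges and log 1/IC50 activities:

```python
# Principal component regression: compress correlated descriptors into a
# few latent variables, then regress the activity on those latents.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 15))          # stand-in for atomic net charges
y = X[:, :3] @ np.array([1.2, -0.7, 0.4]) + rng.normal(0, 0.1, 30)  # "log 1/IC50"

pcr = make_pipeline(PCA(n_components=5), LinearRegression())
pcr.fit(X, y)
print("R^2 on training data:", pcr.score(X, y))
```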

  8. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can inversely recover the intrinsic state associated with a superficial state observed from a power plant. The diagnosis method proposed herein comprises three processes: 1) simulations of degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix with the given superficial states, that is, component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant.

  9. Regression and Sparse Regression Methods for Viscosity Estimation of Acid Milk from its SLS Features

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara; Skytte, Jacob Lercke; Nielsen, Otto Højager Attermann

    2012-01-01

    Statistical solutions find widespread use in food and medicine quality control. We investigate the effect of different regression and sparse regression methods on a viscosity estimation problem using the spectro-temporal features from a new Sub-Surface Laser Scattering (SLS) vision system. ... with the sparse LAR, lasso and Elastic Net (EN) sparse regression methods. Due to the inconsistent measurement conditions, Locally Weighted Scatter plot Smoothing (Loess) has been employed to alleviate the undesired variation in the estimated viscosity. The experimental results of applying the different methods show...

  10. Finding-equal regression method and its application in prediction of U resources

    International Nuclear Information System (INIS)

    Cao Huimo

    1995-03-01

    The deposit-model method commonly adopted in mineral resource prediction has two main parts: one is model data that express the geological mineralization law of the deposit; the other is a statistical prediction method suited to the character of those data, namely a regression method. This kind of regression may be called finding-equal regression, which is composed of linear regression and the distribution finding-equal method. The distribution finding-equal method is a data pretreatment that satisfies a mathematical precondition for linear regression, namely the equal-distribution theory, and this kind of pretreatment can be realized in practice. Finding-equal regression therefore not only overcomes the nonlinear limitations commonly encountered in traditional linear regression (and other regressions, which often have no solution), but can also identify outliers and eliminate their influence, which usually appears when robust regression encounters outliers in the independent variables. This newly proposed finding-equal regression thus stands out among regression methods. Finally, two good examples of quantitative prediction of U resources are provided.

  11. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

    Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
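
    For context, a sketch of the ordinary ridge regression (ORR) baseline from which URR and the proposed MUR are derived — a standard closed form, not the paper's MUR estimator:

```python
# Ordinary ridge regression in closed form: beta = (X'X + kI)^{-1} X'y.
import numpy as np

def ridge(X, y, k):
    """Return the ORR shrinkage estimator for ridge constant k."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
X[:, 3] = X[:, 2] + rng.normal(0, 0.01, 50)   # near-collinear columns
y = X @ np.array([1.0, 0.5, 2.0, 0.0]) + rng.normal(0, 0.1, 50)

print("OLS :", np.linalg.lstsq(X, y, rcond=None)[0])
print("ORR :", ridge(X, y, k=1.0))   # shrunken, more stable under collinearity
```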

  12. Methods for identifying SNP interactions: a review on variations of Logic Regression, Random Forest and Bayesian logistic regression.

    Science.gov (United States)

    Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula

    2011-01-01

    Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.

  13. Stochastic development regression using method of moments

    DEFF Research Database (Denmark)

    Kühnel, Line; Sommer, Stefan Horst

    2017-01-01

    This paper considers the estimation problem arising when inferring parameters in the stochastic development regression model for manifold-valued non-linear data. Stochastic development regression captures the relation between manifold-valued response and Euclidean covariate variables using the stochastic development construction. It is thereby able to incorporate several covariate variables and random effects. The model is intrinsically defined using the connection of the manifold, and the use of stochastic development avoids linearizing the geometry. We propose to infer parameters using the Method of Moments procedure that matches known constraints on moments of the observations conditional on the latent variables. The performance of the model is investigated in a simulation example using data on finite-dimensional landmark manifolds.

  14. On two flexible methods of 2-dimensional regression analysis

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2012-01-01

    Roč. 18, č. 4 (2012), s. 154-164 ISSN 1803-9782 Grant - others:GA ČR(CZ) GAP209/10/2045 Institutional support: RVO:67985556 Keywords : regression analysis * Gordon surface * prediction error * projection pursuit Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/SI/volf-on two flexible methods of 2-dimensional regression analysis.pdf

  15. A multiple regression method for genomewide association studies ...

    Indian Academy of Sciences (India)

    Bujun Mei

    2018-06-07

    Jun 7, 2018 ... Similar to the typical genomewide association tests using LD ... the new approach performed validly when the multiple regression based on the linkage method was employed ... the model, two groups of scenarios were simulated.

  16. Linear regression methods according to objective functions

    OpenAIRE

    Yasemin Sisman; Sebahattin Bektas

    2012-01-01

    The aim of the study is to explain the parameter estimation methods and the regression analysis. The simple linear regression methods grouped according to the objective function are introduced. The numerical solution is achieved for the simple linear regression methods according to the objective functions of the Least Squares and the Least Absolute Value adjustment methods. The success of the applied methods is analyzed using their objective function values.

  17. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression ... and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric...

  18. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    Science.gov (United States)

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  19. Statistical methods in regression and calibration analysis of chromosome aberration data

    International Nuclear Information System (INIS)

    Merkle, W.

    1983-01-01

    The method of iteratively reweighted least squares for the regression analysis of Poisson-distributed chromosome aberration data is reviewed in the context of other fit procedures used in the cytogenetic literature. As an application of the resulting regression curves, methods for calculating confidence intervals on dose from aberration yield are described and compared, and, for the linear-quadratic model, a confidence interval is given. Emphasis is placed on the rational interpretation and the limitations of the various methods from a statistical point of view. (orig./MG)
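
    A compact sketch of iteratively reweighted least squares for Poisson regression, here with a log link on synthetic dose-response data (the cytogenetic literature often uses a linear-quadratic yield curve; that detail is simplified away):

```python
# IRLS for a Poisson GLM with log link: iterate weighted least squares on a
# working response until the coefficients stabilize.
import numpy as np

def irls_poisson(X, y, n_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        W = mu                                  # Poisson working weights
        z = eta + (y - mu) / mu                 # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(4)
dose = rng.uniform(0.1, 4.0, 200)
X = np.column_stack([np.ones_like(dose), dose])
y = rng.poisson(np.exp(0.2 + 0.6 * dose))
print("IRLS estimates:", irls_poisson(X, y))    # approx [0.2, 0.6]
```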

  20. An improved partial least-squares regression method for Raman spectroscopy

    Science.gov (United States)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, and the importance of each variable in the sorted list is then evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our Improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and Genetic Algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either similar or better performance compared to the genetic algorithm.
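
    A simplified sketch of backward variable selection around PLS in the spirit of the record: variables are ranked by the magnitude of their PLS regression coefficients and pruned while cross-validated RMSEP improves. The exact IBVSPLS sorting and evaluation details are assumptions here (scikit-learn based):

```python
# Backward variable selection for PLS: repeatedly drop the variable with the
# smallest |coefficient| as long as cross-validated RMSEP keeps improving.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rmsep(X, y, n_comp=2):
    pred = cross_val_predict(PLSRegression(n_components=n_comp), X, y, cv=5)
    return np.sqrt(np.mean((y - pred.ravel()) ** 2))

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 20))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 60)

keep = list(range(X.shape[1]))
best = rmsep(X[:, keep], y)
while len(keep) > 2:
    pls = PLSRegression(n_components=2).fit(X[:, keep], y)
    weakest = keep[np.argmin(np.abs(pls.coef_).ravel())]
    trial = [j for j in keep if j != weakest]
    err = rmsep(X[:, trial], y)
    if err >= best:
        break
    keep, best = trial, err
print("selected variables:", keep, "RMSEP:", round(best, 4))
```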

  1. FATAL, General Experiment Fitting Program by Nonlinear Regression Method

    International Nuclear Information System (INIS)

    Salmon, L.; Budd, T.; Marshall, M.

    1982-01-01

    1 - Description of problem or function: A generalized fitting program with a free-format keyword interface to the user. It permits experimental data to be fitted by non-linear regression methods to any function describable by the user. The user requires the minimum of computer experience but needs to provide a subroutine to define his function. Some statistical output is included as well as 'best' estimates of the function's parameters. 2 - Method of solution: The regression method used is based on a minimization technique devised by Powell (Harwell Subroutine Library VA05A, 1972) which does not require the use of analytical derivatives. The method employs a quasi-Newton procedure balanced with a steepest descent correction. Experience shows this to be efficient for a very wide range of application. 3 - Restrictions on the complexity of the problem: The current version of the program permits functions to be defined with up to 20 parameters. The function may be fitted to a maximum of 400 points, preferably with estimated values of weight given
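
    A rough modern analogue of FATAL's approach, not the original code: a user-defined model fitted by derivative-free Powell minimization of the weighted sum of squares (SciPy's Powell implementation, standing in for Harwell VA05A):

```python
# Fit a user-defined function by minimizing the weighted sum of squared
# residuals with a derivative-free Powell search, as FATAL's design suggests.
import numpy as np
from scipy.optimize import minimize

def model(params, t):
    a, b, c = params
    return a * np.exp(-b * t) + c          # user-supplied fitting function

rng = np.random.default_rng(6)
t = np.linspace(0, 5, 40)
y = model([2.0, 1.3, 0.5], t) + rng.normal(0, 0.05, t.size)
w = np.full_like(t, 1 / 0.05**2)           # estimated weights, as in FATAL

res = minimize(lambda p: np.sum(w * (y - model(p, t)) ** 2),
               x0=[1.0, 1.0, 0.0], method="Powell")
print("best estimates:", res.x)            # approx [2.0, 1.3, 0.5]
```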

  2. Regression Methods for Virtual Metrology of Layer Thickness in Chemical Vapor Deposition

    DEFF Research Database (Denmark)

    Purwins, Hendrik; Barak, Bernd; Nagi, Ahmed

    2014-01-01

    The quality of wafer production in semiconductor manufacturing cannot always be monitored by a costly physical measurement. Instead of measuring a quantity directly, it can be predicted by a regression method (Virtual Metrology). In this paper, a survey on regression methods is given to predict the average Silicon Nitride cap layer thickness for the Plasma Enhanced Chemical Vapor Deposition (PECVD) dual-layer metal passivation stack process. Process and production equipment Fault Detection and Classification (FDC) data are used as predictor variables. Various variable sets are compared: one most ... algorithm, and Support Vector Regression (SVR). On a test set, SVR outperforms the other methods by a large margin, being more robust towards changes in the production conditions. The method performs better on high-dimensional multivariate input data than on the most predictive variables alone. Process...
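
    A hedged sketch of the Virtual Metrology setup in the record: predicting a layer thickness from high-dimensional FDC-style predictors with Support Vector Regression. Data and hyperparameters are synthetic placeholders, not the paper's setup:

```python
# SVR on standardized high-dimensional process data to predict a thickness.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 40))                # stand-in for FDC variables
thickness = 100 + X[:, :5] @ rng.normal(size=5) + rng.normal(0, 0.5, 300)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:200], thickness[:200])           # train on the first 200 wafers
print("test R^2:", model.score(X[200:], thickness[200:]))
```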

  3. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test...

  4. Methods for measurement of electron emission yield under low energy electron-irradiation by collector method and Kelvin probe method

    Energy Technology Data Exchange (ETDEWEB)

    Tondu, Thomas; Belhaj, Mohamed; Inguimbert, Virginie [Onera, DESP, 2 Avenue Edouard Belin, 31400 Toulouse (France); Onera, DESP, 2 Avenue Edouard Belin, 31400 Toulouse, France and Fondation STAE, 4 allee Emile Monso, BP 84234-31432, Toulouse Cedex 4 (France); Onera, DESP, 2 Avenue Edouard Belin, 31400 Toulouse (France)

    2010-09-15

    Secondary electron emission yield of gold under electron impact at normal incidence below 50 eV was investigated by the classical collector method and by the Kelvin probe method. The authors show that biasing a collector to ensure secondary electron collection while keeping the target grounded can lead to primary electron beam perturbations. Thus reliable secondary electron emission yield at low primary electron energy cannot be obtained with a biased collector. The authors present two collector-free methods based on current measurement and on electron pulse surface potential buildup (Kelvin probe method). These methods are consistent, but at very low energy (below 10 eV) measurements become sensitive to the Earth's magnetic field. For gold, the authors can extrapolate the total emission yield at 0 eV to 0.5, while a total electron emission yield of 1 is obtained at 40±1 eV.

  5. Methods for measurement of electron emission yield under low energy electron-irradiation by collector method and Kelvin probe method

    International Nuclear Information System (INIS)

    Tondu, Thomas; Belhaj, Mohamed; Inguimbert, Virginie

    2010-01-01

    Secondary electron emission yield of gold under electron impact at normal incidence below 50 eV was investigated by the classical collector method and by the Kelvin probe method. The authors show that biasing a collector to ensure secondary electron collection while keeping the target grounded can lead to primary electron beam perturbations. Thus reliable secondary electron emission yield at low primary electron energy cannot be obtained with a biased collector. The authors present two collector-free methods based on current measurement and on electron pulse surface potential buildup (Kelvin probe method). These methods are consistent, but at very low energy (below 10 eV) measurements become sensitive to the Earth's magnetic field. For gold, the authors can extrapolate the total emission yield at 0 eV to 0.5, while a total electron emission yield of 1 is obtained at 40±1 eV.

  6. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin

    2017-01-01

    ... the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine the transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most tested scenarios these new methods perform similarly to or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models.

  7. Mapping urban environmental noise: a land use regression method.

    Science.gov (United States)

    Xie, Dan; Liu, Yi; Chen, Jining

    2011-09-01

    Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, however, have limited capacity to explain the association between urban growth and the corresponding noise change. Therefore, these conventional methods can hardly forecast urban noise for a given outlook of the development layout. This paper, for the first time, introduces a land use regression method, which has been applied to simulating urban air quality for a decade, to construct an urban noise model (LUNOS) in Dalian Municipality, Northwest China. The LUNOS model describes noise as a dependent variable of the various surrounding land areas via a regressive function. The results suggest that a linear model performs better in fitting the monitoring data, and that there is no significant difference in the LUNOS outputs when applied at different spatial scales. As the LUNOS facilitates a better understanding of the association between land use and urban environmental noise in comparison to conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and an aid to smart decision-making.

  8. Treating experimental data of inverse kinetic method by unitary linear regression analysis

    International Nuclear Information System (INIS)

    Zhao Yusen; Chen Xiaoliang

    2009-01-01

    The theory of treating experimental data from the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity, but also the effective neutron source intensity can be calculated by this method. A computer code was compiled based on the inverse kinetic method and unitary linear regression analysis. The data from the zero-power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data from the inverse kinetic method using unitary linear regression analysis, and that the precision of the reactivity measurement is improved. The central element efficiency can be calculated from the reactivity. The results also show that the effect on the reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)

  9. Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding

    Science.gov (United States)

    de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228

  10. A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.

    Science.gov (United States)

    Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem

    2018-06-12

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
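
    The closed form at the heart of the methods being unified — kernel ridge regression with dual coefficients alpha = (K + lambda*I)^{-1} y — sketched with a plain RBF kernel (the Kronecker and pairwise structure of the paper is omitted here):

```python
# Kernel ridge regression in closed form on a 1-D toy problem.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 80)

lam = 0.1
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients

X_new = np.array([[0.0], [1.5]])
print("predictions:", rbf_kernel(X_new, X) @ alpha)    # ~ sin(0), sin(1.5)
```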

  11. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis, to estimate the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article, however, discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
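
    A compact Gauss-Newton iteration for nonlinear least squares, one of the methods the paper surveys; the exponential model and data below are illustrative only:

```python
# Gauss-Newton: linearize the model via its Jacobian and solve the normal
# equations J'J delta = J'r at each step.
import numpy as np

def gauss_newton(t, y, beta, n_iter=20):
    for _ in range(n_iter):
        a, b = beta
        f = a * np.exp(b * t)                        # model predictions
        r = y - f                                    # residuals
        J = np.column_stack([np.exp(b * t),          # df/da
                             a * t * np.exp(b * t)]) # df/db
        beta = beta + np.linalg.solve(J.T @ J, J.T @ r)
    return beta

rng = np.random.default_rng(9)
t = np.linspace(0, 2, 50)
y = 3.0 * np.exp(0.8 * t) + rng.normal(0, 0.05, 50)
# Start reasonably close to the solution, as Gauss-Newton requires.
print("Gauss-Newton estimates:", gauss_newton(t, y, np.array([2.0, 0.5])))
```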

  12. Ordinary Least Squares and Quantile Regression: An Inquiry-Based Learning Approach to a Comparison of Regression Methods

    Science.gov (United States)

    Helmreich, James E.; Krog, K. Peter

    2018-01-01

    We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…

  13. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
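
    A numerical sketch of the classical correction the article describes: the observed slope is attenuated by the reliability ratio, which repeated measurements in a reliability study can estimate (synthetic data, not the insulin example):

```python
# Regression dilution: slope_observed ~= lambda * slope_true, where
# lambda = var(true) / (var(true) + var(error)) is the reliability ratio.
import numpy as np

rng = np.random.default_rng(10)
n = 2000
x_true = rng.normal(0, 1, n)
y = 1.0 + 0.8 * x_true + rng.normal(0, 0.5, n)
x_obs = x_true + rng.normal(0, 0.6, n)        # risk factor measured with error

b_obs = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)   # attenuated slope

# Reliability study: a second measurement on a subsample estimates lambda.
x_rep = x_true[:300] + rng.normal(0, 0.6, 300)
lam = np.cov(x_obs[:300], x_rep)[0, 1] / np.var(x_obs[:300], ddof=1)

print(f"observed slope {b_obs:.3f}, corrected {b_obs / lam:.3f} (true 0.8)")
```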

  14. Prediction of gas chromatography/electron capture detector retention times of chlorinated pesticides, herbicides, and organohalides by multivariate chemometrics methods

    International Nuclear Information System (INIS)

    Ghasemi, Jahanbakhsh; Asadpour, Saeid; Abdolmaleki, Azizeh

    2007-01-01

    A quantitative structure-retention relationship (QSRR) study has been carried out on the gas chromatography/electron capture detector (GC/ECD) retention times (t_R) of 38 diverse chlorinated pesticides, herbicides, and organohalides by using molecular structural descriptors. Modeling of the retention times of these compounds as a function of the theoretically derived descriptors was established by multiple linear regression (MLR) and partial least squares (PLS) regression. Stepwise regression in SPSS was used for the selection of the variables that resulted in the best-fitted models. Appropriate models with low standard errors and high correlation coefficients were obtained. Three types of molecular descriptors, electronic, steric and thermodynamic, were used to develop a quantitative relationship between the retention times and structural properties. MLR and PLS analyses were carried out to derive the best QSRR models. After variable selection, the MLR and PLS methods were used with leave-one-out cross-validation for building the regression models. The predictive quality of the QSRR models was tested on an external prediction set of 12 compounds randomly chosen from the 38 compounds. The PLS regression method was used to model the structure-retention relationships more accurately. However, the results surprisingly showed more or less the same quality for MLR and PLS modeling according to the squared regression coefficients R^2, which were 0.951 and 0.948 for MLR and PLS, respectively.

  15. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Fontan, J.L.; Costa, J.; Ruso, J.M.; Prieto, G. [Dept. of Applied Physics, Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Sarmiento, F. [Dept. of Mathematics, Faculty of Informatics, Univ. of A Coruna, A Coruna (Spain)

    2004-02-01

    The application of a statistical method, the local polynomial regression method (LPRM), based on a nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the subjacent structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and other methods were found. (orig.)
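
    A minimal local linear smoother in the LPRM spirit, assuming only NumPy; the kink in the synthetic curve stands in for the abrupt property change at the cmc:

```python
# Local linear regression: at each evaluation point, fit a weighted line
# using a Gaussian kernel and return the fitted value there.
import numpy as np

def local_linear(x, y, x0, h=0.05):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                # fitted value at x0

rng = np.random.default_rng(11)
conc = np.linspace(0, 1, 200)
prop = np.where(conc < 0.4, 2.0 * conc, 0.8 + 0.2 * (conc - 0.4))  # kink at 0.4
prop += rng.normal(0, 0.02, conc.size)

grid = np.linspace(0.05, 0.95, 10)
fit = [local_linear(conc, prop, g) for g in grid]
print(np.round(fit, 3))   # the slope change near 0.4 survives the smoothing
```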

  16. Fuzzy Linear Regression for the Time Series Data which is Fuzzified with SMRGT Method

    Directory of Open Access Journals (Sweden)

    Seçil YALAZ

    2016-10-01

    Our work on regression and classification provides a new contribution to the analysis of time series used in many areas for years. Because convergence could not be obtained with the methods used in the autocorrelation-fixing process faced in time series regression applications, success is not achieved, or one is forced to change the model's degree. Changing the model's degree may not be desirable in every situation. In our study, recommended for these situations, the time series data were fuzzified by using the simple membership function and fuzzy rule generation technique (SMRGT), and, to estimate the future, an equation was created by applying the fuzzy least squares regression (FLSR) method, which is a simple linear regression method, to these data. Although SMRGT is successful in determining the flow discharge in open channels and can be used confidently for flow discharge modeling in open canals, as well as in pipe flow with some modifications, there was no evidence that this technique is successful in fuzzy linear regression modeling. Therefore, in order to address the lack of such a model, a new hybrid model is described in this study. In conclusion, to demonstrate our method's efficiency, classical linear regression for time series data and linear regression for fuzzy time series data were applied to two different data sets, and the performances of these two approaches were compared using different measures.

  17. Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel

    Directory of Open Access Journals (Sweden)

    Roland Pfister

    2013-10-01

    Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.

  18. An Application of Robust Method in Multiple Linear Regression Model toward Credit Card Debt

    Science.gov (United States)

    Amira Azmi, Nur; Saifullah Rusiman, Mohd; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Salleh, Rohayu Mohd; Hamzah, Nur Shamsidah Amir

    2018-04-01

    The credit card is a convenient alternative to cash or cheques, and an essential component of electronic and internet commerce. In this study, the researchers attempt to determine the relationship and significant variables between credit card debt and demographic variables such as age, household income, education level, years with current employer, years at current address, debt-to-income ratio and other debt. The data cover information on 850 customers. Three methods are applied to the credit card debt data: multiple linear regression (MLR) models, MLR models with the least quartile difference (LQD) method, and MLR models with the mean absolute deviation method. After comparing the three methods, it is found that the MLR model with the LQD method is the best model, with the lowest mean square error (MSE). According to the final model, years with current employer, years at current address, household income in thousands, and debt-to-income ratio are positively associated with the amount of credit debt, while age, level of education, and other debt are negatively associated with it. This study may serve as a reference for banks using robust methods, so that they can better understand the options and choices best aligned with their goals for inference regarding credit card debt.

  19. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, the paper first demonstrates the broad usage of polynomial functions and derives their parameters by ordinary least squares estimation. A significance test method for the polynomial regression function is then derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
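
    A sketch of polynomial regression by ordinary least squares together with an overall F test of the regression function, on synthetic decay-heat-style data (not the authors' measurements):

```python
# Fit a degree-2 polynomial by OLS and test H0: all non-constant
# coefficients are zero via the overall regression F statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
t = np.linspace(0, 10, 40)
y = 5.0 - 0.8 * t + 0.05 * t**2 + rng.normal(0, 0.1, 40)

deg = 2
X = np.vander(t, deg + 1)                       # columns t^2, t, 1
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

ss_reg = np.sum((y_hat - y.mean()) ** 2)        # regression sum of squares
ss_res = np.sum((y - y_hat) ** 2)               # residual sum of squares
df1, df2 = deg, len(t) - deg - 1
F = (ss_reg / df1) / (ss_res / df2)
p = stats.f.sf(F, df1, df2)
print(f"coefficients {np.round(beta, 3)}, F={F:.1f}, p={p:.2e}")
```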

  20. A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary

    Science.gov (United States)

    Gillis, Nicolas; Luce, Robert

    2018-01-01

    A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.

  1. Convert a low-cost sensor to a colorimeter using an improved regression method

    Science.gov (United States)

    Wu, Yifeng

    2008-01-01

    Closed-loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed-loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to get accurate colorimetric measurements from the data measured by the low-cost sensor. In order to get highly accurate colorimetric measurements, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method to estimate the color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and the output data. In this paper, we propose to use 1D pre-linearization tables to improve the linearity between the input sensor measurement data and the output colorimetric data. Using this method, we can increase the accuracy of the regression method, and so improve the accuracy of the color conversion.

  2. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. Keywords: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models

  3. 31 CFR 203.10 - Electronic payment methods.

    Science.gov (United States)

    2010-07-01

    31 Money and Finance: Treasury 2 2010-07-01. TAX AND LOAN PROGRAM, Electronic Federal Tax Payments, § 203.10 Electronic payment methods. (a) General. Electronic payment methods for Federal tax payments available under this subpart include ACH debit entries...

  4. Statistical approach for selection of regression model during validation of bioanalytical method

    Directory of Open Access Journals (Sweden)

    Natalija Nakov

    2014-06-01

    The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration range frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models, because only slight differences in the error (presented through the relative residuals, RR) were obtained. Estimation of the significance of the differences in the RR was achieved using the Wilcoxon signed rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple nonparametric statistical test provides statistical confirmation of the choice of an adequate regression model.
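
    A hedged numerical illustration of why weighting matters for heteroscedastic calibration data: an unweighted and a 1/x^2-weighted linear fit are compared through relative residuals at low concentrations, a common bioanalytical diagnostic (the record's specific models and tests are not reproduced):

```python
# Compare unweighted vs 1/x^2-weighted linear calibration fits when the
# response error grows with concentration.
import numpy as np

rng = np.random.default_rng(13)
conc = np.tile([1, 5, 10, 50, 100, 500], 3).astype(float)
resp = 0.02 * conc + rng.normal(0, 0.002 * conc)   # error grows with conc.

for name, w in [("unweighted", np.ones_like(conc)), ("1/x^2", 1 / conc**2)]:
    X = np.column_stack([np.ones_like(conc), conc])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * resp))
    back = (resp - beta[0]) / beta[1]              # back-calculated conc.
    rr = 100 * (back - conc) / conc                # relative residuals (%)
    print(f"{name:10s} max |RR| at low conc: {np.max(np.abs(rr[conc <= 5])):.1f}%")
```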

  5. An NCME Instructional Module on Data Mining Methods for Classification and Regression

    Science.gov (United States)

    Sinharay, Sandip

    2016-01-01

    Data mining methods for classification and regression are becoming increasingly popular in various scientific fields. However, these methods have not been explored much in educational measurement. This module first provides a review, which should be accessible to a wide audience in education measurement, of some of these methods. The module then…

  6. Correcting for cryptic relatedness by a regression-based genomic control method

    Directory of Open Access Journals (Sweden)

    Yang Yaning

    2009-12-01

    Background: The genomic control (GC) method is a useful tool to correct for cryptic relatedness in population-based association studies. It was originally proposed for correcting the variance inflation of Cochran-Armitage's additive trend test by using information from unlinked null markers, and was later generalized to other tests with the additional requirement that the null markers be matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby aggravate the effect of GC correction. Results: In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation in the allele frequencies of the null markers is adjusted by a regression method. Conclusion: The proposed method can be readily applied to the Cochran-Armitage trend tests other than the additive trend test, the Pearson chi-square test, and other robust efficiency tests. Simulation results show that the proposed method is effective in controlling type I error in the presence of population substructure.

  7. Comparing the index-flood and multiple-regression methods using L-moments

    Science.gov (United States)

    Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.

    In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward's cluster and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was done using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward's clustering approach. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust among the five candidate distributions for all proposed sub-regions of the study area; in general, the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve-fitting (plotting position) method. In general, the index-flood method gives more reliable estimations for various flood magnitudes of different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin

  8. The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies

    Science.gov (United States)

    O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.

    2011-01-01

    The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R^2 = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…

  9. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of the mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the parameters of the state vector, so the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, with unscented Kalman filter (UKF) support vector regression algorithms, and with genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.

  10. Statistical methods and regression analysis of stratospheric ozone and meteorological variables in Isfahan

    Science.gov (United States)

    Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.

    2008-04-01

    Data on seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 were analyzed jointly using both methods. For all periods, the temperature-dependent variables were highly correlated with each other, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model. In 1999, 2001 and 2002 one of the meteorological variables was weakly influenced predominantly by the ozone concentrations. However, the model did not predict that the meteorological variables for the year 2000 were not influenced predominantly by the ozone concentrations, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.

  11. Regression analysis with categorized regression calibrated exposure: some interesting findings

    Directory of Open Access Journals (Sweden)

    Hjartåker Anette

    2006-07-01

    Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g., quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on a quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is, however, vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a categorical scale.

  12. Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting

    Science.gov (United States)

    Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD

    2018-01-01

    Heavy rainfall can cause disasters, so a forecast is needed to predict rainfall intensity. The main factor causing floods is high rainfall intensity, which pushes the river beyond its capacity and floods the surrounding area. Rainfall is a dynamic factor, which makes it very interesting to study. To support rainfall forecasting, methods ranging from artificial intelligence (AI) to statistics can be used. In this research, we used Adaline as the AI method and regression as the statistical method. The more accurate forecast result indicates which method is better suited for forecasting rainfall; through these methods, we determine the best method for rainfall forecasting here.
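
    For readers unfamiliar with Adaline, the Python sketch below contrasts a minimal Adaline (adaptive linear neuron, trained by gradient descent on squared error) with an ordinary least-squares fit; the learning rate, epoch count, and data are hypothetical, not taken from the study.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 1, size=(100, 3))           # e.g., lagged rainfall features
    y = X @ np.array([1.5, -0.7, 2.0]) + 0.1 * rng.standard_normal(100)

    # Adaline: linear unit trained by batch gradient descent on squared error.
    Xb = np.hstack([np.ones((100, 1)), X])         # bias column
    w = np.zeros(Xb.shape[1])
    eta, epochs = 0.05, 500                        # hypothetical hyperparameters
    for _ in range(epochs):
        err = y - Xb @ w
        w += eta * Xb.T @ err / len(y)

    # Multiple linear regression: closed-form least squares on the same data.
    w_ols, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    print("Adaline weights:", np.round(w, 3))
    print("OLS weights:    ", np.round(w_ols, 3))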

  13. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myoung Keon [Agency for Defense Development, Daejeon (Korea, Republic of); Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2016-10-15

    This paper provides the compressive failure strength value of composite laminate developed by using the regression analysis method. The composite material in this document is a Carbon/Epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature is −60°F to +200°F (−55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, −45° and 90°). The ASTM D6484 standard test method was used. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being the two ply orientations (0° and ±45°).

  14. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    International Nuclear Information System (INIS)

    Lee, Myoung Keon; Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon

    2016-01-01

    This paper provides the compressive failure strength value of composite laminate developed by using the regression analysis method. The composite material in this document is a Carbon/Epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature is −60°F to +200°F (−55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, −45° and 90°). The ASTM D6484 standard test method was used. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being the two ply orientations (0° and ±45°).
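
    Since the record describes the regression setup but not the computation, the following Python sketch shows the generic least-squares step: regressing laminate strength on the two ply-orientation regressors. The layup fractions and strength values are fabricated for illustration; they are not the paper's 56 test results.

    import numpy as np

    # Hypothetical layup fractions and strengths; response = ultimate fracture
    # strength, regressors = proportions of 0° and ±45° plies in the laminate.
    rng = np.random.default_rng(3)
    p0 = rng.uniform(0.1, 0.6, 56)                 # fraction of 0° plies
    p45 = rng.uniform(0.2, 0.7, 56)                # fraction of ±45° plies
    strength = 40 + 120 * p0 + 35 * p45 + 3 * rng.standard_normal(56)

    # Least-squares fit of strength on the two ply-orientation regressors.
    A = np.column_stack([np.ones(56), p0, p45])
    coef, *_ = np.linalg.lstsq(A, strength, rcond=None)
    print("intercept, b(0°), b(±45°):", np.round(coef, 2))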

  15. ON THE EFFECTS OF THE PRESENCE AND METHODS OF THE ELIMINATION HETEROSCEDASTICITY AND AUTOCORRELATION IN THE REGRESSION MODEL

    Directory of Open Access Journals (Sweden)

    Nina L. Timofeeva

    2014-01-01

    Full Text Available The article presents the methodological and technical bases for the creation of regression models that adequately reflect reality. The focus is on methods for removing residual autocorrelation from models. Algorithms for eliminating heteroscedasticity and autocorrelation of the regression model residuals are given: the reweighted least squares method and the Cochrane-Orcutt method. A "pure" regression model is built, as well as a standardized form of the regression equation, in order to compare the effects on the dependent variable of the different explanatory variables when the latter are expressed in different units. A scheme of techniques for eliminating heteroscedasticity and autocorrelation is developed for the creation of regression models specific to the social and cultural sphere.
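
    The Cochrane-Orcutt procedure mentioned above can be summarized in a short Python sketch: fit OLS, estimate the AR(1) coefficient of the residuals, quasi-difference the data, and refit. The data are synthetic, and only a single pass is shown; in practice the procedure is iterated to convergence.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    x = rng.uniform(0, 10, n)
    # Synthetic AR(1) errors (rho = 0.8) contaminating a linear relationship.
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = 0.8 * u[t - 1] + rng.standard_normal()
    y = 1.0 + 2.0 * x + u

    def ols(X, y):
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # Step 1: OLS, then estimate rho from the lag-1 autocorrelation of residuals.
    X = np.column_stack([np.ones(n), x])
    b = ols(X, y)
    e = y - X @ b
    rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])

    # Step 2: quasi-difference the data and refit (one Cochrane-Orcutt pass).
    ys = y[1:] - rho * y[:-1]
    Xs = np.column_stack([np.ones(n - 1) * (1 - rho), x[1:] - rho * x[:-1]])
    b_co = ols(Xs, ys)
    print("rho_hat:", round(rho, 3), "slope OLS vs CO:", round(b[1], 3), round(b_co[1], 3))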

  16. Linear regression based on Minimum Covariance Determinant (MCD) and TELBS methods on the productivity of phytoplankton

    Science.gov (United States)

    Gusriani, N.; Firdaniza

    2018-03-01

    The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is forced onto such data, it produces a model that cannot represent most of the data. What is needed instead is a regression method that is robust against outliers. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contains outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
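
    A minimal Python sketch of MCD-based regression, assuming the usual construction in which the robust slope is derived from an MCD estimate of location and scatter; the phytoplankton data are replaced by synthetic outlier-contaminated data, and the TELBS comparison is omitted.

    import numpy as np
    from sklearn.covariance import MinCovDet

    rng = np.random.default_rng(5)
    n = 200
    x = rng.standard_normal(n)
    y = 3.0 * x + 0.5 * rng.standard_normal(n)
    y[:20] += 15.0                                 # inject gross outliers

    Z = np.column_stack([x, y])
    mcd = MinCovDet(random_state=0).fit(Z)

    # Robust regression coefficients derived from the MCD location/scatter:
    # slope = S_xy / S_xx, intercept from the robust means.
    S, m = mcd.covariance_, mcd.location_
    slope = S[0, 1] / S[0, 0]
    intercept = m[1] - slope * m[0]
    print("robust slope/intercept:", round(slope, 3), round(intercept, 3))

    # Ordinary least squares for comparison (pulled toward the outliers).
    b = np.polyfit(x, y, 1)
    print("OLS slope/intercept:   ", round(b[0], 3), round(b[1], 3))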

  17. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

    2012-01-01

    In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
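
    The comparison in this abstract can be reproduced in outline with scikit-learn; the sketch below uses a synthetic imbalanced binary outcome in place of the (non-public) cohort data and omits the restricted-cubic-spline logistic model.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                                  RandomForestClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic imbalanced binary outcome standing in for 30-day mortality.
    X, y = make_classification(n_samples=4000, n_features=15, weights=[0.9],
                               random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "single tree  ": DecisionTreeClassifier(max_depth=5, random_state=0),
        "bagged trees ": BaggingClassifier(DecisionTreeClassifier(max_depth=5),
                                           n_estimators=100, random_state=0),
        "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "boosted trees": GradientBoostingClassifier(random_state=0),
        "logistic reg.": LogisticRegression(max_iter=1000),
    }
    for name, m in models.items():
        auc = roc_auc_score(yte, m.fit(Xtr, ytr).predict_proba(Xte)[:, 1])
        print(name, "out-of-sample AUC =", round(auc, 3))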

  18. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of other methods, and the uncertainties are reasonably estimated; the others have no uncertainty analysis option. To validate further, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR regardless of Gaussian or non-Gaussian skewed data. However, it is expected that GPR has a limitation in applications to severely corrupted data by outliers owing to its non-robustness. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool of anomaly detection or outlier identification in groundwater state data such as the groundwater level and contaminant concentration.
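
    Subagging itself is simple to express in code. The Python sketch below estimates a linear trend in a synthetic groundwater-level series by aggregating fits over random subsamples, with the ensemble spread serving as the uncertainty estimate; the subsample fraction and ensemble size are hypothetical choices.

    import numpy as np

    rng = np.random.default_rng(6)
    t = np.linspace(0, 10, 120)                            # time [yr]
    level = 20 - 0.3 * t + 0.8 * rng.standard_normal(120)  # synthetic groundwater level

    # Subagging: fit a linear trend on many random subsamples and aggregate.
    n_sub, frac = 500, 0.5
    slopes = np.empty(n_sub)
    for i in range(n_sub):
        idx = rng.choice(len(t), size=int(frac * len(t)), replace=False)
        slopes[i] = np.polyfit(t[idx], level[idx], 1)[0]

    print("trend estimate  :", round(slopes.mean(), 3), "units/yr")
    print("90% uncertainty :", np.round(np.percentile(slopes, [5, 95]), 3))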

  19. Using a Regression Method for Estimating Performance in a Rapid Serial Visual Presentation Target-Detection Task

    Science.gov (United States)

    2017-12-01

    Simulations show that this regression method results in an unbiased and accurate estimate of target detection performance. The simulation process (Fig. 2 of the report) was repeated 250 times per combination of hit rate (HR) and false-alarm rate (FAR).

  20. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    Directory of Open Access Journals (Sweden)

    Yi-Ming Kuo

    2011-06-01

    Full Text Available Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005–2007.

  1. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.

  2. An Introduction to Graphical and Mathematical Methods for Detecting Heteroscedasticity in Linear Regression.

    Science.gov (United States)

    Thompson, Russel L.

    Homoscedasticity is an important assumption of linear regression. This paper explains what it is and why it is important to the researcher. Graphical and mathematical methods for testing the homoscedasticity assumption are demonstrated. Sources and types of heteroscedasticity are discussed, and methods for correction are…

  3. Real-time prediction of respiratory motion based on local regression methods

    International Nuclear Information System (INIS)

    Ruan, D; Fessler, J A; Balter, J M

    2007-01-01

    Recent developments in modulation techniques enable conformal delivery of radiation doses to small, localized target volumes. One of the challenges in using these techniques is real-time tracking and predicting target motion, which is necessary to accommodate system latencies. For image-guided-radiotherapy systems, it is also desirable to minimize sampling rates to reduce imaging dose. This study focuses on predicting respiratory motion, which can significantly affect lung tumours. Predicting respiratory motion in real-time is challenging, due to the complexity of breathing patterns and the many sources of variability. We propose a prediction method based on local regression. There are three major ingredients of this approach: (1) forming an augmented state space to capture system dynamics, (2) local regression in the augmented space to train the predictor from previous observation data using semi-periodicity of respiratory motion, (3) local weighting adjustment to incorporate fading temporal correlations. To evaluate prediction accuracy, we computed the root mean square error between predicted tumor motion and its observed location for ten patients. For comparison, we also investigated commonly used predictive methods, namely linear prediction, neural networks and Kalman filtering to the same data. The proposed method reduced the prediction error for all imaging rates and latency lengths, particularly for long prediction lengths
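
    A toy Python version of the local-regression idea, with the three ingredients in the abstract reduced to a delay-embedded state space, a nearest-neighbour lookup, and a weighted local linear fit; the embedding dimension, neighbour count, and weighting are hypothetical, not the authors' settings.

    import numpy as np

    rng = np.random.default_rng(7)
    dt = 0.2                                           # sampling interval [s]
    t = np.arange(0, 60, dt)
    trace = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.standard_normal(len(t))

    d, horizon = 4, 5                                  # embedding dim., steps ahead
    n = len(trace) - d - horizon + 1
    X = np.array([trace[i:i + d] for i in range(n)])   # augmented state space
    y = trace[d + horizon - 1 : d + horizon - 1 + n]   # future displacement

    query = trace[-d:]                                 # most recent window
    dist = np.linalg.norm(X - query, axis=1)
    nn = np.argsort(dist)[:30]                         # local neighbourhood
    w = np.exp(-(dist[nn] / (dist[nn].max() + 1e-12)) ** 2)

    # Weighted local linear regression on the neighbours (rows scaled by sqrt(w)).
    A = np.column_stack([np.ones(len(nn)), X[nn]])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y[nn] * sw, rcond=None)
    pred = np.concatenate([[1.0], query]) @ beta
    print(f"predicted displacement {horizon * dt:.1f}s ahead: {pred:.3f}")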

  4. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.

  5. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often do not lend themselves to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age.
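
    The slope estimators discussed above have closed forms in terms of the sample moments; the Python sketch below computes them on synthetic data with errors in both variables (the formulas follow the standard definitions, and the data are hypothetical).

    import numpy as np

    rng = np.random.default_rng(8)
    true = rng.uniform(0, 10, 300)
    x = true + 0.5 * rng.standard_normal(300)        # error in x
    y = 2.0 * true + 0.5 * rng.standard_normal(300)  # error in y

    sx, sy = x.std(), y.std()
    r = np.corrcoef(x, y)[0, 1]

    b_yx = r * sy / sx                   # OLS of Y on X (attenuated by error in x)
    b_xy = sy / (r * sx)                 # OLS of X on Y, expressed as a Y-on-X slope
    b_gm = np.sign(r) * sy / sx          # geometric mean of the two OLS slopes
    dlt = (sy**2 - sx**2) / (2 * r * sx * sy)
    b_orth = dlt + np.sign(r) * np.sqrt(dlt**2 + 1)  # orthogonal (major-axis) slope

    print("OLS(Y|X), OLS(X|Y), GM, orthogonal:",
          [round(b, 3) for b in (b_yx, b_xy, b_gm, b_orth)])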

  6. A SOCIOLOGICAL ANALYSIS OF THE CHILDBEARING COEFFICIENT IN THE ALTAI REGION BASED ON METHOD OF FUZZY LINEAR REGRESSION

    Directory of Open Access Journals (Sweden)

    Sergei Vladimirovich Varaksin

    2017-06-01

    Full Text Available Purpose. Construction of a mathematical model of the dynamics of childbearing change in the Altai region in 2000–2016, and analysis of the dynamics of changes in birth rates for multiple age categories of women of childbearing age. Methodology. An auxiliary element of the analysis is the construction of linear mathematical models of the dynamics of childbearing using the fuzzy linear regression method based on fuzzy numbers. Fuzzy linear regression is considered as an alternative to standard statistical linear regression for short time series with an unknown distribution law. The parameters of the fuzzy linear and standard statistical regressions for the childbearing time series were determined using an algorithm implemented in the MatLab language. The method of fuzzy linear regression has not previously been used in sociological research. Results. Conclusions are drawn about the socio-demographic changes in society, the high efficiency of the demographic policy of the leadership of the region and the country, and the applicability of the method of fuzzy linear regression for sociological analysis.
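
    The record does not spell out which fuzzy-regression formulation was used; the Python sketch below implements one common variant, Tanaka-style possibilistic regression with symmetric triangular coefficients, as a linear program. The h-level and the birth-rate series are hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    # Tanaka-style possibilistic regression: each y_i must lie inside the fuzzy
    # band c.x_i +/- (1 - h) * s.|x_i|, while the total spread is minimized.
    # Decision variables: centers c (free) and spreads s >= 0.
    rng = np.random.default_rng(9)
    t = np.arange(17)                                  # years since 2000
    y = 10 + 0.4 * t + rng.uniform(-1, 1, 17)          # hypothetical birth-rate series

    h = 0.5                                            # hypothetical h-level
    X = np.column_stack([np.ones(17), t])              # design matrix [1, t]
    absX = np.abs(X)

    # Objective: minimize total spread  sum_i s.|x_i|.
    cost = np.concatenate([np.zeros(2), absX.sum(axis=0)])
    # Inclusion constraints rewritten as A_ub z <= b_ub with z = [c, s]:
    #   c.x_i - (1-h) s.|x_i| <= y_i   and   -c.x_i - (1-h) s.|x_i| <= -y_i
    A_ub = np.vstack([np.hstack([X, -(1 - h) * absX]),
                      np.hstack([-X, -(1 - h) * absX])])
    b_ub = np.concatenate([y, -y])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 2 + [(0, None)] * 2)
    print("centers:", np.round(res.x[:2], 3), "spreads:", np.round(res.x[2:], 4))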

  7. Understanding poisson regression.

    Science.gov (United States)

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
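
    A minimal Poisson regression fit in Python, using a hypothetical count outcome; for overdispersed counts, the alternatives named in the abstract (an overdispersion parameter or negative binomial regression) would be fitted with a different library, e.g. statsmodels.

    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    # Hypothetical count outcome (e.g., number of symptom episodes) and predictors.
    rng = np.random.default_rng(10)
    X = rng.standard_normal((500, 2))
    mu = np.exp(0.3 + 0.5 * X[:, 0] - 0.2 * X[:, 1])   # log-linear mean
    y = rng.poisson(mu)

    model = PoissonRegressor(alpha=0.0).fit(X, y)      # log link, no regularization
    print("intercept:", round(model.intercept_, 3), "coefs:", np.round(model.coef_, 3))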

  8. Assessing the performance of variational methods for mixed logistic regression models

    Czech Academy of Sciences Publication Activity Database

    Rijmen, F.; Vomlel, Jiří

    2008-01-01

    Roč. 78, č. 8 (2008), s. 765-779 ISSN 0094-9655 R&D Projects: GA MŠk 1M0572 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Mixed models * Logistic regression * Variational methods * Lower bound approximation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.353, year: 2008

  9. Method for nonlinear exponential regression analysis

    Science.gov (United States)

    Junkin, B. G.

    1972-01-01

    Two computer programs developed according to two general types of exponential models for conducting nonlinear exponential regression analysis are described. Least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. Program is written in FORTRAN 5 for the Univac 1108 computer.

  10. Direct integral linear least square regression method for kinetic evaluation of hepatobiliary scintigraphy

    International Nuclear Information System (INIS)

    Shuke, Noriyuki

    1991-01-01

    In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters like hepatic extraction or excretion rate, has been done for quantitative evaluation of liver function. In this analysis, unknown model parameters are usually determined using the nonlinear least square regression method (NLS method), where iterative calculation and initial estimates for the unknown parameters are required. As a simple alternative to the NLS method, the direct integral linear least square regression method (DILS method), which can determine model parameters by a simple calculation without an initial estimate, is proposed, and its applicability to the analysis of hepatobiliary scintigraphy was tested. In order to see whether the DILS method could determine model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a 1-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values which were used for data generation. The effect of various weights on the error of the parameter estimates was examined, and the inverse of time was found to be the best weight to minimize the error. When using this weight, the DILS method could give parameter values close to those obtained by the NLS method, and both sets of parameter values were very close to the prefixed values. With appropriate weighting, the DILS method could provide reliable parameter estimates which are relatively insensitive to data noise. In conclusion, the DILS method could be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)
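
    The direct-integral idea generalizes beyond scintigraphy; the Python sketch below applies it to a generic 1-compartment model dy/dt = a·y + b, where integrating both sides yields a relation linear in (a, b) that can be solved by weighted linear least squares, here with the inverse-of-time weighting found best above. The parameter values and noise level are hypothetical.

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    # Generate noisy data from dy/dt = a*y + b with known parameters.
    a_true, b_true = -0.3, 5.0
    t = np.linspace(0.1, 10, 50)
    y_clean = (b_true / -a_true) * (1 - np.exp(a_true * t))
    rng = np.random.default_rng(11)
    y = y_clean + 0.05 * y_clean.max() * rng.standard_normal(len(t))

    # Direct integral form: y(t) - y(t0) = a * int(y ds) + b * (t - t0),
    # which is linear in (a, b); no iteration or initial guess is needed.
    Y = cumulative_trapezoid(y, t, initial=0.0)
    A = np.column_stack([Y, t - t[0]])
    rhs = y - y[0]
    w = 1.0 / t                                    # inverse-of-time weighting
    a_hat, b_hat = np.linalg.lstsq(A * w[:, None], rhs * w, rcond=None)[0]
    print("estimated a, b:", round(a_hat, 3), round(b_hat, 3))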

  11. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the post- and preoperative Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression we found between APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.

  12. Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William

    2012-01-01

    Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.

  13. Drafting method of electricity and electron design

    International Nuclear Information System (INIS)

    Gungbon, Junchun

    1989-11-01

    This book concentrates on drafting for electrical and electronic design. It covers the meaning of electrical and electronic drafting; JIS standard regulations; types of drafting, lines and lettering; basic drafting with projection methods, plan projection and development elevation; drafting of shop drawings; practical methods of design and drafting; design and drafting of technical illustrations; connection diagrams; drafting of wiring diagrams for lighting and illumination; drafting of development connection diagrams for sequence control; drafting of logic circuit signs for flow charts and manuals; drafting of electronic circuit diagrams; and drawing of PC boards.

  14. Post-processing through linear regression

    Science.gov (United States)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.

  15. A simultaneous electron energy and dosimeter calibration method for an electron beam irradiator

    International Nuclear Information System (INIS)

    Tanaka, R.; Sunaga, H.; Kojima, T.

    1991-01-01

    In radiation processing using electron accelerators, the reproducibility of absorbed dose in the product depends not only on the variation of beam current and conveyor speed, but also on variations of other accelerator parameters. This requires routine monitoring of the beam current and the scan width, and also requires periodical calibration of routine dosimeters usually in the shape of film, electron energy, and other radiation field parameters. The electron energy calibration is important especially for food processing. The dose calibration method using partial absorption calorimeters provides only information about absorbed dose. Measurement of average electron current density provides basic information about the radiation field formed by the beam scanning and scattering at the beam window, though it does not allow direct dose calibration. The total absorption calorimeter with a thick absorber allows dose and dosimeter calibration, if the depth profile of relative dose in a reference absorber is given experimentally. It also allows accurate calibration of the average electron energy at the surface of the calorimeter core, if electron fluence received by the calorimeter is measured at the same time. This means that both electron energy and dosimeters can be simultaneously calibrated by irradiation of a combined system including the calorimeter, the detector of the electron current density meter, and a thick reference absorber for depth profile measurement of relative dose. We have developed a simple and multifunctional system using the combined calibration method for 5 MeV electron beams. The paper describes a simultaneous calibration method for electron energy and film dosimeters, and describes the electron current density meter, the total absorption calorimeter, and the characteristics of this method. (author). 13 refs, 7 figs, 3 tabs

  16. Cox regression with missing covariate data using a modified partial likelihood method

    DEFF Research Database (Denmark)

    Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.

    2016-01-01

    Missing covariate values is a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits that the observed hazard function is multiplicative in the baseline hazard...

  17. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    Science.gov (United States)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.

  18. Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three

    Science.gov (United States)

    Steinhardt, Charles L.; Jermyn, Adam S.

    2018-02-01

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
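
    As one example of the model-independent techniques the article surveys (its own library and new algorithms are not reproduced here), the Python sketch below implements Nadaraya-Watson kernel regression; the bandwidth and data are hypothetical.

    import numpy as np

    def nadaraya_watson(x_query, x, y, bandwidth):
        """Gaussian-kernel regression estimate of E[y | x] at each query point."""
        w = np.exp(-0.5 * ((x_query[:, None] - x[None, :]) / bandwidth) ** 2)
        return (w * y).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(12)
    x = np.sort(rng.uniform(0, 4, 150))
    y = np.log1p(x) + 0.1 * rng.standard_normal(150)   # unknown nonlinear relation

    grid = np.linspace(0.1, 3.9, 20)
    print(np.round(nadaraya_watson(grid, x, y, bandwidth=0.3), 3))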

  19. Linear regression in astronomy. I

    Science.gov (United States)

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.

  20. Fungible weights in logistic regression.

    Science.gov (United States)

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. Geographically weighted regression based methods for merging satellite and gauge precipitation

    Science.gov (United States)

    Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo

    2018-03-01

    Real-time precipitation data with high spatiotemporal resolutions are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale coarser satellite precipitation to a finer resolution (PS); (2) the (mixed) geographically weighted regression methods coupled with a weighting function are then used to estimate biases of PS as functions of gauge observations (PO) and PS; and (3) biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, a combination of two geographically weighted regression methods and four weighting functions, are developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. The geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of the MGWR and BI-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but also is easy to implement, which is valuable for hydrological modeling and other applications.
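
    The core of geographically weighted regression is a separate weighted least-squares fit at each location, with weights decaying with distance. The Python sketch below shows that core step on fabricated gauge/satellite data; it is not the paper's full three-step merging scheme, and the bandwidth and covariates are hypothetical.

    import numpy as np

    rng = np.random.default_rng(13)
    n = 300
    coords = rng.uniform(0, 100, size=(n, 2))            # gauge locations [km]
    elev = rng.uniform(0, 2, n)                          # a covariate (e.g., elevation)
    sat = rng.gamma(2.0, 3.0, n)                         # satellite precipitation
    # True gauge precipitation: the satellite bias varies smoothly in space.
    bias = 1.0 + 0.01 * coords[:, 0]
    gauge = bias * sat + 0.5 * elev + 0.3 * rng.standard_normal(n)

    def gwr_coefficients(site, bandwidth=20.0):
        """Local weighted least-squares fit of gauge ~ [1, sat, elev] at one site."""
        d = np.linalg.norm(coords - site, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian distance decay
        A = np.column_stack([np.ones(n), sat, elev])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], gauge * sw, rcond=None)
        return beta

    print("local coefficients at (10, 10):", np.round(gwr_coefficients(np.array([10.0, 10.0])), 3))
    print("local coefficients at (90, 90):", np.round(gwr_coefficients(np.array([90.0, 90.0])), 3))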

  2. Post-processing through linear regression

    Directory of Open Access Journals (Sweden)

    B. Van Schaeybroeck

    2011-03-01

    Full Text Available Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified.

    These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.

  3. COMPARISON OF PARTIAL LEAST SQUARES REGRESSION METHOD ALGORITHMS: NIPALS AND PLS-KERNEL AND AN APPLICATION

    Directory of Open Access Journals (Sweden)

    ELİF BULUT

    2013-06-01

    Full Text Available Partial Least Squares Regression (PLSR) is a multivariate statistical method that consists of partial least squares and multiple linear regression analysis. Explanatory variables, X, having multicollinearity are reduced to components which explain a great amount of the covariance between the explanatory and response variables. These components are few in number and do not suffer from the multicollinearity problem. Then multiple linear regression analysis is applied to those components to model the response variable Y. There are various PLSR algorithms. In this study the NIPALS and PLS-Kernel algorithms will be studied and illustrated on a real data set.
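
    A minimal NIPALS implementation for a single response (PLS1) in Python; the component count and data are hypothetical, and the deflation scheme follows the textbook algorithm rather than any particular software package.

    import numpy as np

    def pls1_nipals(X, y, n_components):
        """Minimal NIPALS algorithm for PLS with a single response (PLS1)."""
        X, y = X - X.mean(axis=0), y - y.mean()
        W, P, q = [], [], []
        for _ in range(n_components):
            w = X.T @ y
            w /= np.linalg.norm(w)             # weight vector
            t = X @ w                          # score vector
            p = X.T @ t / (t @ t)              # X loading
            qk = y @ t / (t @ t)               # y loading
            X = X - np.outer(t, p)             # deflate X and y
            y = y - qk * t
            W.append(w); P.append(p); q.append(qk)
        W, P, q = np.array(W).T, np.array(P).T, np.array(q)
        # Regression coefficients in terms of the original (centered) X.
        return W @ np.linalg.solve(P.T @ W, q)

    rng = np.random.default_rng(14)
    X = rng.standard_normal((100, 6))
    X[:, 3] = X[:, 0] + 0.01 * rng.standard_normal(100)   # collinear column
    y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(100)
    print("PLS1 coefficients:", np.round(pls1_nipals(X, y, 3), 3))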

  4. The Research of Regression Method for Forecasting Monthly Electricity Sales Considering Coupled Multi-factor

    Science.gov (United States)

    Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui

    2018-01-01

    Monthly electricity sales forecasting is a basic task for ensuring the safety of the power system. This paper presents a monthly electricity sales forecasting method that comprehensively considers the coupled multiple factors of temperature, economic growth, electric power replacement and business expansion. The mathematical model is constructed using a regression method. The simulation results show that the proposed method is accurate and effective.

  5. Applications of Monte Carlo method to nonlinear regression of rheological data

    Science.gov (United States)

    Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo

    2018-02-01

    In rheological studies, it is often necessary to determine the parameters of rheological models from experimental data. Since both the rheological data and the values of the parameters vary on a logarithmic scale and the number of parameters is quite large, conventional methods of nonlinear regression such as the Levenberg-Marquardt (LM) method are usually ineffective. A gradient-based method such as LM is apt to be caught in local minima, which give unphysical values of the parameters, whenever the initial guess is far from the global optimum. Although this problem could be solved by simulated annealing (SA), the Monte Carlo (MC) method needs an adjustable parameter which would otherwise have to be determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which results in effective values of the parameters of the most complicated rheological models, such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum and the zero-shear viscosity as a function of concentration and molecular weight.
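
    A toy annealed random search in log-parameter space, in the spirit of the simplified SA described above (not the authors' exact scheme), fitting the Carreau-Yasuda model to synthetic viscosity data; the cooling schedule, step size, and data are hypothetical.

    import numpy as np

    def carreau_yasuda(gdot, eta0, eta_inf, lam, a, n):
        return eta_inf + (eta0 - eta_inf) * (1 + (lam * gdot) ** a) ** ((n - 1) / a)

    rng = np.random.default_rng(15)
    gdot = np.logspace(-2, 3, 40)                       # shear rates [1/s]
    truth = (1e3, 1e-1, 2.0, 2.0, 0.4)
    eta = carreau_yasuda(gdot, *truth) * np.exp(0.03 * rng.standard_normal(40))

    def loss(p):                                        # fit in log-log space
        return np.mean((np.log(carreau_yasuda(gdot, *np.exp(p))) - np.log(eta)) ** 2)

    best = np.log(np.array([1e2, 1e-2, 1.0, 1.0, 0.5]))  # crude initial guess
    f_best = loss(best)
    T = 1.0
    for step in range(20000):                           # annealed random search
        q = best + T * 0.1 * rng.standard_normal(5)     # log-space proposal
        f_q = loss(q)
        if f_q < f_best:                                # greedy accept (simplified SA)
            best, f_best = q, f_q
        T *= 0.9997                                     # cooling schedule
    print("fitted parameters:", np.round(np.exp(best), 3))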

  6. Electronic computer prediction of properties of binary refractory transition metal compounds on the base of their simplificated electronic structure

    International Nuclear Information System (INIS)

    Kutolin, S.A.; Kotyukov, V.I.

    1979-01-01

    An attempt is made to obtain calculation equations for the macroscopic physico-chemical properties of transition-metal refractory compounds (density, melting temperature, Debye characteristic temperature, microhardness, standard formation enthalpy, thermo-emf) using the method of regression analysis. Apart from the compound composition, the arguments of the regression equations are the distribution of electron bands of d-transition metals, created from the energy distribution of electrons in the simplified zone structure of the transition metals and approximated by Chebyshev polynomials, and the position of the Fermi energy on the map of electron band energies as a function of quasi-impulse, for multiples of the first, second and third Brillouin zones of the transition metals. The maximum relative error of the regressions obtained, as compared with the literature data, is 15-20 rel.%.

  7. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    Science.gov (United States)

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  8. Multi-step polynomial regression method to model and forecast malaria incidence.

    Directory of Open Access Journals (Sweden)

    Chandrajit Chatterjee

    Full Text Available Malaria is one of the most severe problems faced by the world even today. Understanding the causative factors such as age, sex, social factors, environmental variability etc., as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a shorter time series (deaths due to Plasmodium vivax) of one year; and spatial data (zonal distribution of P. vivax deaths) for the city, along with the climatic factors, population and previous incidence of the disease. We performed variable selection by a simple correlation study, identified the initial relationships between variables through non-linear curve fitting, and used multi-step methods for the induction of variables in the non-linear regression analysis, along with applied Gauss-Markov models and ANOVA for testing the predictions, validity and construction of the confidence intervals. The results demonstrate the applicability of our method for different types of data, the autoregressive nature of the forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, disease incidence at the zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city.

  9. Comparing treatment effects after adjustment with multivariable Cox proportional hazards regression and propensity score methods

    NARCIS (Netherlands)

    Martens, Edwin P; de Boer, Anthonius; Pestman, Wiebe R; Belitser, Svetlana V; Stricker, Bruno H Ch; Klungel, Olaf H

    PURPOSE: To compare adjusted effects of drug treatment for hypertension on the risk of stroke from propensity score (PS) methods with a multivariable Cox proportional hazards (Cox PH) regression in an observational study with censored data. METHODS: From two prospective population-based cohort

  10. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    Science.gov (United States)

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In a second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategy, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparison of the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques over classical approaches for survival data. We compare svm-based survival models based on ranking constraints, based on regression constraints and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints above models only based on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints. Our experiments show a comparable performance for methods

  11. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium); Raisee, Mehrdad [School of Mechanical Engineering, College of Engineering, University of Tehran, P.O. Box: 11155-4563, Tehran (Iran, Islamic Republic of); Ghorbaniasl, Ghader; Contino, Francesco; Lacor, Chris [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium)

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimension. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic method. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.

  12. Discussion on Regression Methods Based on Ensemble Learning and Applicability Domains of Linear Submodels.

    Science.gov (United States)

    Kaneko, Hiromasa

    2018-02-26

    To develop a new ensemble learning method and construct highly predictive regression models in chemoinformatics and chemometrics, applicability domains (ADs) are introduced into the ensemble learning process of prediction. When estimating values of an objective variable using subregression models, only the submodels with ADs that cover a query sample, i.e., the sample is inside the model's AD, are used. By constructing submodels and changing a list of selected explanatory variables, the union of the submodels' ADs, which defines the overall AD, becomes large, and the prediction performance is enhanced for diverse compounds. By analyzing a quantitative structure-activity relationship data set and a quantitative structure-property relationship data set, it is confirmed that the ADs can be enlarged and the estimation performance of regression models is improved compared with traditional methods.

  13. The synthesis method for design of electron flow sources

    Science.gov (United States)

    Alexahin, Yu I.; Molodozhenzev, A. Yu

    1997-01-01

    The synthesis method for designing a relativistic, magnetically focused beam source is described in this paper. It allows one to find the shape of the electrodes necessary to produce laminar space-charge flows. Electron guns with shielded cathodes designed with this method were analyzed using the EGUN code. The results obtained have shown the agreement of the synthesis and analysis calculations [1]. This method of electron gun calculation may be applied to immersed electron flows, which are of interest for EBIS electron gun design.

  14. Regression analysis by example

    CERN Document Server

    Chatterjee, Samprit

    2012-01-01

    Praise for the Fourth Edition: ""This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable."" -Journal of the American Statistical Association Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

  15. An evaluation of regression methods to estimate nutritional condition of canvasbacks and other water birds

    Science.gov (United States)

    Sparling, D.W.; Barzen, J.A.; Lovvorn, J.R.; Serie, J.R.

    1992-01-01

    Regression equations that use mensural data to estimate body condition have been developed for several water birds. These equations often have been based on data that represent different sexes, age classes, or seasons, without being adequately tested for intergroup differences. We used proximate carcass analysis of 538 adult and juvenile canvasbacks (Aythya valisineria) collected during fall migration, winter, and spring migrations in 1975-76 and 1982-85 to test regression methods for estimating body condition.

  16. Regression modeling of ground-water flow

    Science.gov (United States)

    Cooley, R.L.; Naff, R.L.

    1985-01-01

    Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)

  17. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    Science.gov (United States)

    Gorgees, HazimMansoor; Mahdi, FatimahAssim

    2018-05-01

    This article is concerned with comparing the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the Tagi gas filling company during the period 2008-2010. The main result we reached is that the method based on the condition number performs better than the other methods, since it has a smaller mean square error (MSE) than the other stated methods.
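
    The effect that motivates ridge regression is easy to demonstrate: with near-exact linear relationships among the explanatory variables, the OLS coefficients become unstable while ridge shrinks them toward a stable solution. The Python sketch below uses fabricated data, not the gas-filling-company data.

    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge

    rng = np.random.default_rng(16)
    n = 100
    x1 = rng.standard_normal(n)
    x2 = x1 + 0.01 * rng.standard_normal(n)      # near-exact linear relationship
    X = np.column_stack([x1, x2])
    y = x1 + x2 + 0.1 * rng.standard_normal(n)

    print("condition number of X:", round(np.linalg.cond(X), 1))
    print("OLS coefs  :", np.round(LinearRegression().fit(X, y).coef_, 2))
    print("ridge coefs:", np.round(Ridge(alpha=1.0).fit(X, y).coef_, 2))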

  18. Efficient electronic structure methods applied to metal nanoparticles

    DEFF Research Database (Denmark)

    Larsen, Ask Hjorth

    ...of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared... The basis set method is used to study the electronic effects for the contiguous range of clusters up to several hundred atoms. The s-electrons hybridize to form electronic shells consistent with the jellium model, leading to electronic magic numbers for clusters with full shells. Large electronic gaps and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic...

  19. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
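
    The ENLR models have their own solvers (MATLAB code at the link above). As a rough illustration of the baseline they build on, the sketch below regresses a conventional zero-one target matrix with a generic elastic-net penalty and classifies by the largest predicted score; the data and hyperparameters are assumptions, and this is not the authors' algorithm:

```python
# Generic sketch: regression to strict zero-one targets with an elastic-net
# penalty, then classification by the largest score. Not the ENLR method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

T = np.eye(3)[ytr]                        # conventional zero-one target matrix
model = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(Xtr, T)
pred = model.predict(Xte).argmax(axis=1)  # assign the class with largest score
print("test accuracy:", round((pred == yte).mean(), 3))
```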

  20. 14 CFR 1260.69 - Electronic funds transfer payment methods.

    Science.gov (United States)

    2010-01-01

    14 Aeronautics and Space (2010), Cooperative Agreements, General Special Conditions, § 1260.69 Electronic funds transfer payment methods: ... Government by electronic funds transfer through the Treasury Fedline Payment System (FEDLINE) or the ...

  1. A method to determine the necessity for global signal regression in resting-state fMRI studies.

    Science.gov (United States)

    Chen, Gang; Chen, Guangyu; Xie, Chunming; Ward, B Douglas; Li, Wenjun; Antuono, Piero; Li, Shi-Jiang

    2012-12-01

    In resting-state functional MRI studies, the global signal (operationally defined as the global average of resting-state functional MRI time courses) is often considered a nuisance effect and commonly removed in preprocessing. This global signal regression method can introduce artifacts, such as false anticorrelated resting-state networks in functional connectivity analyses. Therefore, the efficacy of this technique as a correction tool remains questionable. In this article, we establish that the accuracy of the estimated global signal is determined by the level of global noise (i.e., non-neural noise that has a global effect on the resting-state functional MRI signal). When the global noise level is low, the global signal resembles the resting-state functional MRI time courses of the largest cluster, but not those of the global noise. Using real data, we demonstrate that the global signal is strongly correlated with the default mode network components and has biological significance. These results call into question whether or not global signal regression should be applied. We introduce a method to quantify global noise levels. We show that a criterion for global signal regression can be derived from this method. Using the criterion, one can determine whether to include or exclude the global signal regression in minimizing errors in functional connectivity measures. Copyright © 2012 Wiley Periodicals, Inc.
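
    For readers unfamiliar with the operation under discussion, here is a minimal sketch of the global signal regression step itself, on synthetic data standing in for resting-state fMRI; the record's noise-quantification criterion is not implemented:

```python
# Global signal regression: regress the global mean time course out of every
# voxel time course by least squares. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
T, V = 200, 500                                  # time points, voxels
data = rng.normal(size=(T, V)) + rng.normal(size=(T, 1))  # shared global noise

g = data.mean(axis=1, keepdims=True)             # global signal: mean time course
G = np.hstack([np.ones((T, 1)), g])              # intercept + global regressor
beta = np.linalg.lstsq(G, data, rcond=None)[0]
cleaned = data - G @ beta                        # residuals after regression

def mean_abs_corr(sig, dat):
    # mean |correlation| between the global signal and every voxel time course
    return np.abs(np.corrcoef(sig.ravel(), dat.T)[0, 1:]).mean()

print("before:", round(mean_abs_corr(g, data), 3),
      " after:", round(mean_abs_corr(g, cleaned), 3))
```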

  2. Variational methods in electron-atom scattering theory

    CERN Document Server

    Nesbet, Robert K

    1980-01-01

    The investigation of scattering phenomena is a major theme of modern physics. A scattered particle provides a dynamical probe of the target system. The practical problem of interest here is the scattering of a low-energy electron by an N-electron atom. It has been difficult in this area of study to achieve theoretical results that are even qualitatively correct, yet quantitative accuracy is often needed as an adjunct to experiment. The present book describes a quantitative theoretical method, or class of methods, that has been applied effectively to this problem. Quantum mechanical theory relevant to the scattering of an electron by an N-electron atom, which may gain or lose energy in the process, is summarized in Chapter 1. The variational theory itself is presented in Chapter 2, both as currently used and in forms that may facilitate future applications. The theory of multichannel resonance and threshold effects, which provide a rich structure to observed electron-atom scattering data, is presented in Cha...

  3. System for cooling hybrid vehicle electronics, method for cooling hybrid vehicle electronics

    Science.gov (United States)

    France, David M.; Yu, Wenhua; Singh, Dileep; Zhao, Weihuan

    2017-11-21

    The invention provides a single radiator cooling system for use in hybrid electric vehicles, the system comprising a surface in thermal communication with electronics, and subcooled boiling fluid contacting the surface. The invention also provides a single radiator method for simultaneously cooling electronics and an internal combustion engine in a hybrid electric vehicle, the method comprising separating a coolant fluid into a first portion and a second portion; directing the first portion to the electronics and the second portion to the internal combustion engine for a time sufficient to maintain the temperature of the electronics at or below 175.degree. C.; combining the first and second portion to reestablish the coolant fluid; and directing the reestablished coolant fluid to the single radiator for a time sufficient to decrease the temperature of the reestablished coolant fluid to the temperature it had before separation.

  4. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
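
    As a minimal sketch of the implicit-constraint idea only (ordinary least squares with one fixed knot, rather than the record's linear mixed model), a truncated power basis builds the continuity and smoothness side conditions directly into the regressors; the data and knot location below are assumptions:

```python
# Fixed-knot regression spline via a truncated power basis: the fit is
# continuous with a continuous first derivative at the knot by construction.
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 120))
y = np.where(x < 5, 0.5 * x, 2.5 + 2.0 * (x - 5)) + rng.normal(scale=0.4, size=x.size)

knot = 5.0  # assumed fixed knot

def basis(x, knot):
    # columns: 1, x, x^2, (x - knot)_+^2
    return np.column_stack([np.ones_like(x), x, x ** 2,
                            np.clip(x - knot, 0.0, None) ** 2])

coef, *_ = np.linalg.lstsq(basis(x, knot), y, rcond=None)
print("fitted spline coefficients:", np.round(coef, 3))
```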

  5. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    Science.gov (United States)

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
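
    Of the two robust routes named above, the M-estimation one is easy to sketch with statsmodels' RLM and Huber weights. The simple one-level moderation model and heavy-tailed data below are assumptions, not the article's two-level model or its algorithm:

```python
# M-estimation with Huber-type weights for a toy moderation model
# y ~ x + m + x*m, compared against ordinary least squares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
x, m = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.3 * m + 0.4 * x * m + rng.standard_t(df=3, size=n)

X = sm.add_constant(np.column_stack([x, m, x * m]))  # intercept, x, m, x*m
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
ols = sm.OLS(y, X).fit()
print("Huber M-estimates:", np.round(huber.params, 3))
print("OLS estimates:    ", np.round(ols.params, 3))
```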

  6. Boosted beta regression.

    Directory of Open Access Journals (Sweden)

    Matthias Schmid

    Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.

  7. Electron microscopy methods in studies of cultural heritage sites

    Science.gov (United States)

    Vasiliev, A. L.; Kovalchuk, M. V.; Yatsishina, E. B.

    2016-11-01

    The history of the development and application of scanning electron microscopy (SEM), transmission electron microscopy (TEM), and energy-dispersive X-ray microanalysis (EDXMA) in studies of cultural heritage sites is considered. In fact, investigations based on these methods began when electron microscopes became a commercial product. Currently, these methods, being developed and improved, help solve many historical enigmas. To date, electron microscopy combined with microanalysis makes it possible to investigate any object, from parchment and wooden articles to pigments, tools, and objects of art. Studies by these methods have revealed that some articles were made by ancient masters using ancient "nanotechnologies"; hence, their comprehensive analysis calls for the latest achievements in the corresponding instrumental methods and sample preparation techniques.

  8. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    Science.gov (United States)

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an over-reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R²) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocess functions and various
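
    The pipeline being evaluated is easy to reproduce in outline: Savitzky-Golay pretreatments (quadratic polynomials, as in the record) followed by PLS regression. The sketch below scores each pretreatment by plain cross-validated RMSE on synthetic spectra, a stand-in for a residual-error term rather than the record's graphical diagnostic; window width and factor count are assumptions:

```python
# Savitzky-Golay smoothing/derivatives followed by PLS regression,
# scored by cross-validated RMSE on synthetic spectra.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, wl = 60, 400
spectra = rng.normal(size=(n, wl)).cumsum(axis=1)   # smooth-ish fake spectra
analyte = spectra[:, 150:160].mean(axis=1) + rng.normal(scale=0.5, size=n)

for deriv in (0, 1, 2):   # smoothing only, first derivative, second derivative
    Xp = savgol_filter(spectra, window_length=11, polyorder=2,
                       deriv=deriv, axis=1)
    rmse = -cross_val_score(PLSRegression(n_components=5), Xp, analyte,
                            scoring="neg_root_mean_squared_error", cv=5).mean()
    print(f"Savitzky-Golay deriv={deriv}: CV RMSE = {rmse:.3f}")
```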

  9. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    Science.gov (United States)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.

  10. Study (Prediction of Main Pipes Break Rates in Water Distribution Systems Using Intelligent and Regression Methods

    Directory of Open Access Journals (Sweden)

    Massoud Tabesh

    2011-07-01

    Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the issues of increasing efficiency and decreasing water losses. One of the key subjects in the optimum operational management of water distribution systems is preparing rehabilitation and replacement schemes, predicting pipe break rates and evaluating their reliability. Several approaches have been presented in recent years regarding the prediction of pipe failure rates, each of which requires particular data sets. Deterministic models based on age and deterministic multi-variable and stochastic group modeling are examples of the solutions which relate pipe break rates to parameters like age, material and diameter. In this paper, besides the mentioned parameters, further factors such as pipe depth and hydraulic pressure are considered as well. Then, using the multi-variable regression method, intelligent approaches (artificial neural network and neuro-fuzzy models) and the Evolutionary Polynomial Regression (EPR) method, pipe burst rates are predicted. To evaluate the results of the different approaches, a case study is carried out in a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods to predict pipe break rates, in comparison with the neuro-fuzzy and multi-variable regression methods.

  11. A Trajectory Regression Clustering Technique Combining a Novel Fuzzy C-Means Clustering Algorithm with the Least Squares Method

    Directory of Open Access Journals (Sweden)

    Xiangbing Zhou

    2018-04-01

    Rapidly growing GPS (Global Positioning System) trajectories hide much valuable information, such as city road planning, urban travel demand, and population migration. In order to mine the hidden information and to capture better clustering results, a trajectory regression clustering method (an unsupervised trajectory clustering method) is proposed to reduce local information loss of the trajectory and to avoid getting stuck in the local optimum. Using this method, we first define our new concept of trajectory clustering and construct a novel (angle-based) partitioning method for line segments; second, the Lagrange-based method and Hausdorff-based K-means++ are integrated in fuzzy C-means (FCM) clustering, which are used to maintain the stability and the robustness of the clustering process; finally, a least squares regression model is employed to achieve regression clustering of the trajectory. In our experiment, the performance and effectiveness of our method is validated against real-world taxi GPS data. When comparing our clustering algorithm with the partition-based clustering algorithms (K-means, K-median, and FCM), our experimental results demonstrate that the presented method is more effective and generates a more reasonable trajectory.

  12. Development of K-Nearest Neighbour Regression Method in Forecasting River Stream Flow

    Directory of Open Access Journals (Sweden)

    Mohammad Azmi

    2012-07-01

    Different statistical, non-statistical and black-box methods have been used in forecasting processes. Among statistical methods, the K-nearest neighbour non-parametric regression method (K-NN), due to its natural simplicity and mathematical basis, is one of the recommended methods for forecasting processes. In this study, the K-NN method is explained completely. Besides, development and improvement approaches such as best-neighbour estimation, data transformation functions, distance functions and a proposed extrapolation method are described. The K-NN method, together with its development approaches, is used in streamflow forecasting of the Zayandeh-Rud Dam upper basin. Comparing the final results of the classic K-NN method and the modified K-NN (number of neighbours 5, Range Scaling transformation function, Mahalanobis distance function and the proposed extrapolation method) shows that the modified K-NN yields performance improvements of 45%, 59% and 17%, respectively, in the criteria of goodness of fit, root mean square error, percentage of volume of error and correlation. These results confirm the necessity of applying the mentioned approaches to derive more accurate forecasts.
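
    The core forecasting step is compact enough to sketch: K-NN regression on lagged flows, with the record's reported choices of k = 5 neighbours and Mahalanobis distance approximated via scikit-learn. The lag construction and the simulated series are assumptions:

```python
# K-NN regression for one-step streamflow forecasting with Mahalanobis
# distance; the series and feature construction are synthetic stand-ins.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
t = np.arange(300)
flow = 10 + 3 * np.sin(t / 10.0) + rng.normal(scale=0.5, size=t.size)

lags = 3
X = np.column_stack([flow[i:i - lags] for i in range(lags)])  # flow[t-3..t-1]
y = flow[lags:]

VI = np.linalg.inv(np.cov(X, rowvar=False))   # inverse covariance for Mahalanobis
knn = KNeighborsRegressor(n_neighbors=5, metric="mahalanobis",
                          metric_params={"VI": VI}, algorithm="brute")
knn.fit(X[:-50], y[:-50])                     # hold out the last 50 steps
rmse = np.sqrt(np.mean((knn.predict(X[-50:]) - y[-50:]) ** 2))
print(f"hold-out RMSE: {rmse:.3f}")
```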

  13. Comparison of multinomial logistic regression and logistic regression: which is more efficient in allocating land use?

    Science.gov (United States)

    Lin, Yingzhi; Deng, Xiangzheng; Li, Xing; Ma, Enjun

    2014-12-01

    Spatially explicit simulation of land use change is the basis for estimating the effects of land use and cover change on energy fluxes, ecology and the environment. At the pixel level, logistic regression is one of the most common approaches used in spatially explicit land use allocation models to determine the relationship between land use and its causal factors in driving land use change, and thereby to evaluate land use suitability. However, these models have a drawback in that they do not determine/allocate land use based on the direct relationship between land use change and its driving factors. Consequently, a multinomial logistic regression method was introduced to address this flaw and thereby judge the suitability of a type of land use in any given pixel in a case study area of the Jiangxi Province, China. A comparison of the two regression methods indicated that the proportion of correctly allocated pixels using multinomial logistic regression was 92.98%, which was 8.47% higher than that obtained using logistic regression. Paired t-test results also showed that pixels were more clearly distinguished by multinomial logistic regression than by logistic regression. In conclusion, multinomial logistic regression is a more efficient and accurate method for the spatial allocation of land use changes. The application of this method in future land use change studies may improve the accuracy of predicting the effects of land use and cover change on energy fluxes, ecology, and environment.
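
    A minimal sketch of the comparison, with synthetic stand-ins for the driving factors: per-class binary (one-vs-rest) logistic regressions versus a single multinomial logistic regression, scored by the share of correctly allocated cases. Nothing here reproduces the Jiangxi land-use data:

```python
# One-vs-rest binary logistic regression versus multinomial logistic
# regression on a synthetic multiclass allocation problem.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           n_classes=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(Xtr, ytr)
mnl = LogisticRegression(max_iter=1000).fit(Xtr, ytr)  # multinomial by default
print("binary one-vs-rest accuracy:", round(ovr.score(Xte, yte), 3))
print("multinomial accuracy:       ", round(mnl.score(Xte, yte), 3))
```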

  14. Electron microscopy methods in studies of cultural heritage sites

    Energy Technology Data Exchange (ETDEWEB)

    Vasiliev, A. L., E-mail: a.vasiliev56@gmail.com; Kovalchuk, M. V.; Yatsishina, E. B. [National Research Centre “Kurchatov Institute” (Russian Federation)

    2016-11-15

    The history of the development and application of scanning electron microscopy (SEM), transmission electron microscopy (TEM), and energy-dispersive X-ray microanalysis (EDXMA) in studies of cultural heritage sites is considered. In fact, investigations based on these methods began when electron microscopes became a commercial product. Currently, these methods, being developed and improved, help solve many historical enigmas. To date, electron microscopy combined with microanalysis makes it possible to investigate any object, from parchment and wooden articles to pigments, tools, and objects of art. Studies by these methods have revealed that some articles were made by ancient masters using ancient “nanotechnologies”; hence, their comprehensive analysis calls for the latest achievements in the corresponding instrumental methods and sample preparation techniques.

  15. Electron microscopy methods in studies of cultural heritage sites

    International Nuclear Information System (INIS)

    Vasiliev, A. L.; Kovalchuk, M. V.; Yatsishina, E. B.

    2016-01-01

    The history of the development and application of scanning electron microscopy (SEM), transmission electron microscopy (TEM), and energy-dispersive X-ray microanalysis (EDXMA) in studies of cultural heritage sites is considered. In fact, investigations based on these methods began when electron microscopes became a commercial product. Currently, these methods, being developed and improved, help solve many historical enigmas. To date, electron microscopy combined with microanalysis makes it possible to investigate any object, from parchment and wooden articles to pigments, tools, and objects of art. Studies by these methods have revealed that some articles were made by ancient masters using ancient “nanotechnologies”; hence, their comprehensive analysis calls for the latest achievements in the corresponding instrumental methods and sample preparation techniques.

  16. Computational methods of electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.

    1983-01-01

    A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated

  17. Prevalence and Determinants of Preterm Birth in Tehran, Iran: A Comparison between Logistic Regression and Decision Tree Methods.

    Science.gov (United States)

    Amini, Payam; Maroufizadeh, Saman; Samani, Reza Omani; Hamidi, Omid; Sepidarkish, Mahdi

    2017-06-01

    Preterm birth (PTB) is a leading cause of neonatal death and the second biggest cause of death in children under five years of age. The objective of this study was to determine the prevalence of PTB and its associated factors using logistic regression and decision tree classification methods. This cross-sectional study was conducted on 4,415 pregnant women in Tehran, Iran, from July 6-21, 2015. Data were collected by a researcher-developed questionnaire through interviews with mothers and review of their medical records. To evaluate the accuracy of the logistic regression and decision tree methods, several indices such as sensitivity, specificity, and the area under the curve were used. The PTB rate was 5.5% in this study. The logistic regression outperformed the decision tree for the classification of PTB based on risk factors. Logistic regression showed that multiple pregnancies, mothers with preeclampsia, and those who conceived with assisted reproductive technology had an increased risk for PTB (p < 0.05), supporting the use of the logistic regression model for the classification of risk groups for PTB.

  18. Method of fabricating a cooled electronic system

    Science.gov (United States)

    Chainer, Timothy J; Gaynes, Michael A; Graybill, David P; Iyengar, Madhusudan K; Kamath, Vinod; Kochuparambil, Bejoy J; Schmidt, Roger R; Schultz, Mark D; Simco, Daniel P; Steinke, Mark E

    2014-02-11

    A method of fabricating a liquid-cooled electronic system is provided which includes an electronic assembly having an electronics card and a socket with a latch at one end. The latch facilitates securing of the card within the socket. The method includes providing a liquid-cooled cold rail at the one end of the socket, and a thermal spreader to couple the electronics card to the cold rail. The thermal spreader includes first and second thermal transfer plates coupled to first and second surfaces on opposite sides of the card, and thermally conductive extensions extending from end edges of the plates, which couple the respective transfer plates to the liquid-cooled cold rail. The extensions are disposed to the sides of the latch, and the card is securable within or removable from the socket using the latch without removing the cold rail or the thermal spreader.

  19. Time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

    2008-01-01

    An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.

  20. An electron moiré method for a common SEM

    Institute of Scientific and Technical Information of China (English)

    Y.M.Xing; S.Kishimoto; Y.R.Zhao

    2006-01-01

    In the electron moiré method, a high-frequency grating is used to measure microscopic deformation, which promises significant potential applications for the method in the microscopic analysis of materials. However, a special beam scanning control device is required to produce a grating and generate a moiré fringe pattern for the scanning electron microscope (SEM). Because only a few SEMs used in materials science studies are equipped with this device, the use of the electron moiré method is limited. In this study, an electron moiré method for a common SEM without the beam control device is presented. A grating based on a multi-scanning concept is fabricated in any observing mode. A real-time moiré pattern can also be generated in the SEM or an optical filtering system. Without the beam control device being a prerequisite, the electron moiré method can be more widely used. The experimental results from three different types of SEMs show that high-quality gratings with uniform lines and less pitch error can be fabricated by this method, and moiré patterns can also be correctly generated.

  1. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    Science.gov (United States)

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L⁻¹. The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
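
    The key modelling decision in the record, weighted rather than ordinary least squares when calibration noise grows with concentration, fits in a few lines. The 1/x² weights and the synthetic calibration data below are assumptions, not the paper's weighting scheme or UV-filter measurements:

```python
# Ordinary versus weighted least squares on heteroscedastic calibration data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
conc = np.tile([10.0, 50.0, 100.0, 500.0, 1000.0], 3)  # ng/L, triplicates
resp = 2.0 * conc + rng.normal(scale=0.05 * conc)      # noise grows with level

X = sm.add_constant(conc)
ols = sm.OLS(resp, X).fit()
wls = sm.WLS(resp, X, weights=1.0 / conc ** 2).fit()   # assumed 1/x^2 weights
print("OLS slope:", round(ols.params[1], 4),
      " WLS slope:", round(wls.params[1], 4))
```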

  2. Analyzing Big Data with the Hybrid Interval Regression Methods

    Directory of Open Access Journals (Sweden)

    Chia-Hui Huang

    2014-01-01

    Big data is a new trend at present, forcing significant impacts on information technologies. In big data applications, one of the most concerning issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. Recently, the smooth support vector machine was proposed as an alternative to the standard SVM and has been proved more efficient than the traditional SVM in processing large-scale data. In addition, the soft margin method is proposed to modify the excursion of the separation margin and to be effective in the gray zone, where the distribution of the data becomes hard to describe and the separation margin between classes is blurred.

  3. Dimension Reduction and Discretization in Stochastic Problems by Regression Method

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    1996-01-01

    The chapter mainly deals with dimension reduction and field discretizations based directly on the concept of linear regression. Several examples of interesting applications in stochastic mechanics are also given. Keywords: Random fields discretization, Linear regression, Stochastic interpolation, ...

  4. Methods of Analysis of Electronic Money in Banks

    Directory of Open Access Journals (Sweden)

    Melnychenko Oleksandr V.

    2014-03-01

    The article identifies methods of analysis of electronic money, formalises its instruments and offers an integral indicator, which should be calculated by issuing banks and those banks which carry out operations with electronic money issued by other banks. Calculation of the integral indicator would allow complex assessment of the activity of the studied bank with electronic money and would allow comparison of the parameters of different banks by the aggregate of indicators for the study of the electronic money market, its level of development, etc. The article presents methods which envisage economic analysis of electronic money in banks in the following directions: solvency and liquidity, efficiency of electronic money issue, business activity of the bank and social responsibility. Moreover, the proposed indicators for each of the directions are offered to be taken into account when building the integral indicators with the help of which banks are studied: business activity, profitability, solvency, liquidity and so on.

  5. A study of machine learning regression methods for major elemental analysis of rocks using laser-induced breakdown spectroscopy

    Science.gov (United States)

    Boucher, Thomas F.; Ozanne, Marie V.; Carmosino, Marco L.; Dyar, M. Darby; Mahadevan, Sridhar; Breves, Elly A.; Lepore, Kate H.; Clegg, Samuel M.

    2015-05-01

    The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other types of LIBS data are calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. These results are attributed to the high dimensionality of the data (6144 channels
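
    In the same spirit as the record's comparison, not a reproduction of it, the sketch below cross-validates a handful of linear and nonlinear regressors on synthetic high-dimensional "spectra" and reports PRESS for each; only a subset of the nine models appears, and all data and hyperparameters are assumptions:

```python
# Comparing several regressors by cross-validated predicted residual sum of
# squares (PRESS) on synthetic high-dimensional data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n, d = 100, 1000                                  # 100 samples, many channels
X = rng.normal(size=(n, d))
y = X[:, :10] @ rng.normal(size=10) + rng.normal(scale=0.3, size=n)

models = {
    "PLS":          PLSRegression(n_components=8),
    "lasso":        Lasso(alpha=0.05),
    "elastic net":  ElasticNet(alpha=0.05, l1_ratio=0.5),
    "SVR (linear)": SVR(kernel="linear"),
    "kNN":          KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)
    press = np.sum((y - pred.ravel()) ** 2)       # predicted residual sum of squares
    print(f"{name:12s} PRESS = {press:8.1f}")
```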

  6. Survey of electronic payment methods and systems

    NARCIS (Netherlands)

    Havinga, Paul J.M.; Smit, Gerardus Johannes Maria; Helme, A.; Verbraeck, A.

    1996-01-01

    In this paper an overview of electronic payment methods and systems is given. This survey is done as part of the Moby Dick project. Electronic payment systems can be grouped into three broad classes: traditional money transactions, digital currency and credit-debit payments. Such payment systems have...

  7. Direct and regression methods do not give different estimates of digestible and metabolizable energy of wheat for pigs.

    Science.gov (United States)

    Bolarinwa, O A; Adeola, O

    2012-12-01

    Digestible and metabolizable energy contents of feed ingredients for pigs can be determined by direct or indirect methods. There are situations when only the indirect approach is suitable, and the regression method is a robust indirect approach. This study was conducted to compare the direct and regression methods for determining the energy value of wheat for pigs. Twenty-four barrows with an average initial BW of 31 kg were assigned to 4 diets in a randomized complete block design. The 4 diets consisted of 969 g wheat/kg plus minerals and vitamins (sole wheat) for the direct method, a corn (Zea mays)-soybean (Glycine max) meal reference diet (RD), RD + 300 g wheat/kg, and RD + 600 g wheat/kg. The 3 corn-soybean meal diets were used for the regression method, and wheat replaced the energy-yielding ingredients, corn and soybean meal, so that the same ratio of corn and soybean meal across the experimental diets was maintained. The wheat used was analyzed to contain 883 g DM, 15.2 g N, and 3.94 Mcal GE/kg. Each diet was fed to 6 barrows in individual metabolism crates for a 5-d acclimation followed by a 5-d total but separate collection of feces and urine. The DE and ME for the sole wheat diet were 3.83 and 3.77 Mcal/kg DM, respectively. Because the sole wheat diet contained 969 g wheat/kg, these translate to 3.95 Mcal DE/kg DM and 3.89 Mcal ME/kg DM. The RD used for the regression approach yielded 4.00 Mcal DE and 3.91 Mcal ME/kg DM diet. Increasing levels of wheat in the RD linearly reduced (P < 0.05) dietary DE and ME. The DE and ME of wheat obtained using the direct method (3.95 and 3.89 Mcal/kg DM) did not differ (0.78 < P < 0.89) from those obtained using the regression method (3.96 and 3.88 Mcal/kg DM).

  8. First-principles method for electron-phonon coupling and electron mobility

    DEFF Research Database (Denmark)

    Gunst, Tue; Markussen, Troels; Stokbro, Kurt

    2016-01-01

    We present density functional theory calculations of the phonon-limited mobility in n-type monolayer graphene, silicene, and MoS2. The material properties, including the electron-phonon interaction, are calculated from first principles. We provide a detailed description of the normalized full-band relaxation time approximation for the linearized Boltzmann transport equation (BTE) that includes inelastic scattering processes. The bulk electron-phonon coupling is evaluated by a supercell method. The method employed is fully numerical and does therefore not require a semianalytic treatment of part of the problem and, importantly, it keeps the anisotropy information stored in the coupling as well as the band structure. In addition, we perform calculations of the low-field mobility and its dependence on carrier density and temperature to obtain a better understanding of transport in graphene, silicene...

  9. EDM 1.0: electron direct methods.

    Science.gov (United States)

    Kilaas, R; Marks, L D; Own, C S

    2005-02-01

    A computer program designed to provide a number of quantitative analysis tools for high-resolution imaging and electron diffraction data is described. The program includes basic image manipulation, both real-space and reciprocal-space image processing, Wiener filtering, symmetry averaging, methods for quantification of electron diffraction patterns and two-dimensional direct methods. The program consists of a number of sub-programs written in a combination of C++, C and Fortran. It can be downloaded either as GNU source code or as binaries and has been compiled and verified on a wide range of platforms, both Unix-based and PCs. Elements of the design philosophy as well as possible future extensions are described.

  10. Prediction of protein binding sites using physical and chemical descriptors and the support vector machine regression method

    International Nuclear Information System (INIS)

    Sun Zhong-Hua; Jiang Fan

    2010-01-01

    In this paper a new continuous variable called core-ratio is defined to describe the probability for a residue to be in a binding site, thereby replacing the previous binary description of the interface residue using 0 and 1. So we can use the support vector machine regression method to fit the core-ratio value and predict the protein binding sites. We also design a new group of physical and chemical descriptors to characterize the binding sites. The new descriptors are more effective, with an averaging procedure used. Our test shows that much better prediction results can be obtained by the support vector regression (SVR) method than by the support vector classification method. (rapid communication)

  11. A Simple Linear Regression Method for Quantitative Trait Loci Linkage Analysis With Censored Observations

    OpenAIRE

    Anderson, Carl A.; McRae, Allan F.; Visscher, Peter M.

    2006-01-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using...

  12. Estimating Penetration Resistance in Agricultural Soils of Ardabil Plain Using Artificial Neural Network and Regression Methods

    Directory of Open Access Journals (Sweden)

    Gholam Reza Sheykhzadeh

    2017-02-01

    Introduction: Penetration resistance is one of the criteria for evaluating soil compaction. It correlates with several soil properties such as vehicle trafficability, resistance to root penetration, seedling emergence, and soil compaction by farm machinery. Direct measurement of penetration resistance is time consuming and difficult because of high temporal and spatial variability. Therefore, many different regression and artificial neural network pedotransfer functions have been proposed to estimate penetration resistance from readily available soil variables such as particle size distribution, bulk density (Db) and gravimetric water content (θm). The lands of Ardabil Province are one of the main production regions of potato in Iran; thus, obtaining the soil penetration resistance in these regions helps with the management of potato production. The objective of this research was to derive pedotransfer functions using regression and artificial neural networks to predict penetration resistance from some soil variables in the agricultural soils of Ardabil plain and to compare the performance of artificial neural networks with regression models. Materials and methods: Disturbed and undisturbed soil samples (n = 105) were systematically taken from 0-10 cm soil depth with nearly 3000 m distance in the agricultural lands of the Ardabil plain (lat 38°15' to 38°40' N, long 48°16' to 48°61' E). The contents of sand, silt and clay (hydrometer method), CaCO3 (titration method), bulk density (cylinder method), particle density (Dp; pycnometer method), organic carbon (wet oxidation method), total porosity (calculated from Db and Dp), and saturated (θs) and field (θf) soil water (gravimetric method) were measured in the laboratory. Mean geometric diameter (dg) and standard deviation (σg) of soil particles were computed using the percentages of sand, silt and clay. Penetration resistance was measured in situ using a cone penetrometer (analog model at 10

  13. Landslide susceptibility mapping on a global scale using the method of logistic regression

    Directory of Open Access Journals (Sweden)

    L. Lin

    2017-08-01

    This paper proposes a statistical model for mapping global landslide susceptibility based on logistic regression. After investigating explanatory factors for landslides in the existing literature, five factors were selected to model landslide susceptibility: relative relief, extreme precipitation, lithology, ground motion and soil moisture. When building the model, 70 % of landslide and non-landslide points were randomly selected for logistic regression, and the others were used for model validation. To evaluate the accuracy of the predictive models, this paper adopts several criteria including the receiver operating characteristic (ROC) curve method. Logistic regression experiments found all five factors to be significant in explaining landslide occurrence on a global scale. During the modeling process, the percentage correct in the confusion matrix of landslide classification was approximately 80 % and the area under the curve (AUC) was nearly 0.87. During the validation process, the above statistics were about 81 % and 0.88, respectively. Such a result indicates that the model has strong robustness and stable performance. This model found that, at a global scale, soil moisture can be dominant in the occurrence of landslides and the topographic factor may be secondary.
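
    The record's workflow (70/30 split, percentage correct, ROC/AUC) is standard and can be sketched directly; the five simulated predictors below merely stand in for the relief, precipitation, lithology, ground-motion and soil-moisture factors, and all coefficients are assumptions:

```python
# Logistic-regression susceptibility model with a 70/30 split, evaluated by
# percentage correct, confusion matrix, and ROC AUC. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 5000
X = rng.normal(size=(n, 5))
logit = X @ np.array([1.2, 0.8, 0.5, 0.7, 0.4]) - 0.5
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))   # landslide occurrence

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(Xtr, ytr)

pred, prob = model.predict(Xte), model.predict_proba(Xte)[:, 1]
print("percentage correct:", round((pred == yte).mean() * 100, 1))
print("AUC:", round(roc_auc_score(yte, prob), 3))
print("confusion matrix:\n", confusion_matrix(yte, pred))
```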

  14. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    Science.gov (United States)

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
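
    The same multiple linear regression can be moved from Excel's LINEST to explicit code. Assuming the usual vibronic term expression G(v) = we(v + 1/2) - wexe(v + 1/2)^2, the two constants are the coefficients of a two-column least-squares fit; the simulated level energies and constants below are assumptions, not iodine data:

```python
# Least-squares fit of vibronic term values to recover we and wexe.
import numpy as np

we_true, wexe_true = 125.0, 0.75                 # assumed constants, cm^-1
v = np.arange(10, 40)
x = v + 0.5
G = (we_true * x - wexe_true * x ** 2
     + np.random.default_rng(9).normal(scale=0.2, size=v.size))

A = np.column_stack([x, -x ** 2])                # regressors for we and wexe
(we, wexe), *_ = np.linalg.lstsq(A, G, rcond=None)
print(f"we = {we:.2f} cm^-1, wexe = {wexe:.3f} cm^-1")
```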

  15. COMPUTER-BASED PREDICTION OF TOXICITY USING THE ELECTRON-CONFORMATIONAL METHOD. APPLICATION TO FRAGRANCE ALLERGENS AND OTHER ENVIRONMENTAL POLLUTANTS

    Directory of Open Access Journals (Sweden)

    Natalia N. Gorinchoy

    2012-06-01

    The electron-conformational (EC) method is employed for toxicophore (Tph) identification and quantitative prediction of toxicity using a training set of 24 compounds that are considered fragrance allergens. The values of a = LD50 in oral exposure of rats were chosen as a measure of toxicity. EC parameters are evaluated on the basis of conformational analysis and ab initio electronic structure calculations (including solvent influence). The Tph consists of four sites which in this series of compounds are represented by three carbon atoms and one oxygen atom, but may be any other atoms that have the same electronic and geometric features within the tolerance limits. The regression model, taking into consideration the Tph flexibility, anti-Tph shielding, and the influence of out-of-Tph functional groups, predicts well the experimental values of toxicity (R2 = 0.93) with a reasonable leave-one-out cross-validation.

  16. A study of machine learning regression methods for major elemental analysis of rocks using laser-induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    Boucher, Thomas F.; Ozanne, Marie V.; Carmosino, Marco L.; Dyar, M. Darby; Mahadevan, Sridhar; Breves, Elly A.; Lepore, Kate H.; Clegg, Samuel M.

    2015-01-01

    The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other types of LIBS data are calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. These results are attributed to the high dimensionality of the data (6144

  17. New method of ionization energy calculation for two-electron ions

    International Nuclear Information System (INIS)

    Ershov, D.K.

    1997-01-01

    A new method for calculating the ionization energy of two-electron ions is proposed. The method is based on calculating the energy of the second electron's interaction with the field of a one-electron ion, whose potential is well known

  18. Predicting Taxi-Out Time at Congested Airports with Optimization-Based Support Vector Regression Methods

    Directory of Open Access Journals (Sweden)

    Guan Lian

    2018-01-01

    Accurate prediction of taxi-out time is a significant precondition for improving the operationality of the departure process at an airport, as well as for reducing long taxi-out times, congestion, and excessive emission of greenhouse gases. Unfortunately, several of the traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three of those conventional methods, which include the Generalized Linear Model, the Softmax Regression Model, and the Artificial Neural Network method, and two improved Support Vector Regression (SVR) approaches based on swarm intelligence algorithm optimization, which include Particle Swarm Optimization (PSO) and the Firefly Algorithm. In order to improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as one significant factor at congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the two proposed SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only achieve the best modelling accuracy compared with the representative forecast models, but also achieve better predictive performance when dealing with abnormal taxi-out time states.

  19. Hybrid ARIMAX quantile regression method for forecasting short term electricity consumption in east java

    Science.gov (United States)

    Prastuti, M.; Suhartono; Salehah, NA

    2018-04-01

    The need for energy supply, especially electricity, in Indonesia has been increasing over the past years. Furthermore, high electricity usage by people at different times leads to the occurrence of heteroscedasticity. Estimating the electricity supply that can fulfil the community's need is very important, but the heteroscedasticity issue often makes electricity forecasting hard to do. An accurate forecast of electricity consumption is one of the key challenges for an energy provider in making better resource and service planning and in taking control actions to balance the electricity supply and demand for the community. In this paper, a hybrid ARIMAX Quantile Regression (ARIMAX-QR) approach is proposed to predict short-term electricity consumption in East Java. This method is also compared to time series regression using RMSE, MAPE, and MdAPE criteria. The data used in this research were the electricity consumption per half-hour during the period of September 2015 to April 2016. The results show that the proposed approach can be a competitive alternative for forecasting short-term electricity in East Java. ARIMAX-QR using lag values and dummy variables as predictors yields more accurate predictions on both in-sample and out-of-sample data. Moreover, both the time series regression and ARIMAX-QR methods with the addition of lag values as predictors accurately capture the patterns in the data. Hence, they produce better predictions compared to the models that do not use additional lag variables.
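
    Only the quantile-regression half of the hybrid is sketched here, via statsmodels' QuantReg on lagged consumption values and a dummy regressor; the simulated half-hourly series, the lag choices, and the peak-hour dummy are assumptions, not the paper's ARIMAX-QR specification:

```python
# Quantile regression on lagged values plus a dummy, for several quantiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
t = np.arange(500)
load = 100 + 20 * np.sin(2 * np.pi * t / 48) + rng.normal(scale=5, size=t.size)

df = pd.DataFrame({"y": load})
df["lag1"] = df["y"].shift(1)                 # previous half-hour
df["lag48"] = df["y"].shift(48)               # same half-hour, previous day
df["peak"] = ((t % 48) > 36).astype(int)      # assumed evening-peak dummy
df = df.dropna()

for q in (0.1, 0.5, 0.9):                     # lower, median, upper quantiles
    fit = smf.quantreg("y ~ lag1 + lag48 + peak", df).fit(q=q)
    print(f"q = {q}:", np.round(fit.params.values, 3))
```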

  20. Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method

    International Nuclear Information System (INIS)

    Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui

    1991-01-01

    A multiple linear regression method was used to analyse γ spectra of uranium ore samples and to calculate the contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed for obtaining the response coefficients
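
    The underlying computation can be sketched as ordinary least squares on component response spectra: the measured spectrum is modelled as a linear combination of the U, Ra, Th and K responses, and the contents are the fitted coefficients. The response matrix and noise below are synthetic placeholders, not calibrated detector responses:

```python
# Recovering component contents from a gamma spectrum by least squares.
import numpy as np

rng = np.random.default_rng(10)
channels = 256
R = np.abs(rng.normal(size=(channels, 4)))    # response spectra: U, Ra, Th, K
true = np.array([2.0, 1.5, 0.8, 3.0])         # true contents, arbitrary units
spectrum = R @ true + rng.normal(scale=0.5, size=channels)

contents, *_ = np.linalg.lstsq(R, spectrum, rcond=None)
for name, c in zip(("U", "Ra", "Th", "K"), contents):
    print(f"{name}: {c:.2f}")
```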

  1. Time-Dependent Close-Coupling Methods for Electron-Atom/Molecule Scattering

    International Nuclear Information System (INIS)

    Colgan, James

    2014-01-01

    The time-dependent close-coupling (TDCC) method centers on an accurate representation of the interaction between two outgoing electrons moving in the presence of a Coulomb field. It has been extensively applied to many problems of electrons, photons, and ions scattering from light atomic targets. Theoretical Description: The TDCC method centers on a solution of the time-dependent Schrödinger equation for two interacting electrons. The advantages of a time-dependent approach are two-fold; one treats the electron-electron interaction essentially in an exact manner (within numerical accuracy) and a time-dependent approach avoids the difficult boundary condition encountered when two free electrons move in a Coulomb field (the classic three-body Coulomb problem). The TDCC method has been applied to many fundamental atomic collision processes, including photon-, electron- and ion-impact ionization of light atoms. For application to electron-impact ionization of atomic systems, one decomposes the two-electron wavefunction in a partial wave expansion and represents the subsequent two-electron radial wavefunctions on a numerical lattice. The number of partial waves required to converge the ionization process depends on the energy of the incoming electron wavepacket and on the ionization threshold of the target atom or ion.

  2. The regression-calibration method for fitting generalized linear models with additive measurement error

    OpenAIRE

    James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll

    2003-01-01

    This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...

  3. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
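
    A minimal sketch of the logistic-regression recipe just described: shift the overall logit down and up by beta times the covariate SD, so the two group parameters differ by beta times twice the SD (centring around the overall probability is an assumed approximation), then apply the usual two-proportion sample size formula:

```python
# Approximate sample size for logistic regression via an equivalent
# two-sample (two-proportion) problem.
import numpy as np
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    # normal-approximation sample size per group for comparing two proportions
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (za * np.sqrt(2 * pbar * (1 - pbar))
           + zb * np.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

def logistic_total_n(beta, sd_x, p_overall, alpha=0.05, power=0.80):
    expit = lambda t: 1.0 / (1.0 + np.exp(-t))
    logit = np.log(p_overall / (1 - p_overall))
    p1, p2 = expit(logit - beta * sd_x), expit(logit + beta * sd_x)
    return 2 * n_per_group(p1, p2, alpha, power)

print("total n:", round(logistic_total_n(beta=0.5, sd_x=1.0, p_overall=0.3)))
```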

  4. Comparing lagged linear correlation, lagged regression, Granger causality, and vector autoregression for uncovering associations in EHR data.

    Science.gov (United States)

    Levine, Matthew E; Albers, David J; Hripcsak, George

    2016-01-01

    Time series analysis methods have been shown to reveal clinical and biological associations in data collected in the electronic health record. We wish to develop reliable high-throughput methods for identifying adverse drug effects that are easy to implement and produce readily interpretable results. To move toward this goal, we used univariate and multivariate lagged regression models to investigate associations between twenty pairs of drug orders and laboratory measurements. Multivariate lagged regression models exhibited higher sensitivity and specificity than univariate lagged regression in the 20 examples, and incorporating autoregressive terms for labs and drugs produced more robust signals in cases of known associations among the 20 example pairings. Moreover, including inpatient admission terms in the model attenuated the signals for some cases of unlikely associations, demonstrating how multivariate lagged regression models' explicit handling of context-based variables can provide a simple way to probe for health-care processes that confound analyses of EHR data.
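    A minimal sketch of a multivariate lagged regression with autoregressive terms, in the spirit of the models described above; the synthetic data, lag depth and effect sizes are illustrative assumptions.

```python
# Lagged regression of a lab value on its own history and on lagged
# drug orders (synthetic stand-in for EHR data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 500
drug = (rng.random(T) < 0.1).astype(float)      # daily drug-order indicator
lab = np.zeros(T)
for t in range(1, T):                           # synthetic lab series with a drug effect
    lab[t] = 0.7 * lab[t - 1] - 0.5 * drug[t - 1] + rng.normal(scale=0.3)

max_lag = 3
idx = np.arange(max_lag, T)
X = np.column_stack(
    [lab[idx - k] for k in range(1, max_lag + 1)] +    # autoregressive lab terms
    [drug[idx - k] for k in range(1, max_lag + 1)]     # lagged drug terms
)
fit = sm.OLS(lab[idx], sm.add_constant(X)).fit()
print(fit.params[max_lag + 1:])                 # lagged-drug coefficients probe the association
```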

  5. A study of machine learning regression methods for major elemental analysis of rocks using laser-induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Boucher, Thomas F., E-mail: boucher@cs.umass.edu [School of Computer Science, University of Massachusetts Amherst, 140 Governor's Drive, Amherst, MA 01003 (United States); Ozanne, Marie V. [Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075 (United States); Carmosino, Marco L. [School of Computer Science, University of Massachusetts Amherst, 140 Governor's Drive, Amherst, MA 01003 (United States); Dyar, M. Darby [Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075 (United States); Mahadevan, Sridhar [School of Computer Science, University of Massachusetts Amherst, 140 Governor's Drive, Amherst, MA 01003 (United States); Breves, Elly A.; Lepore, Kate H. [Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075 (United States); Clegg, Samuel M. [Los Alamos National Laboratory, P.O. Box 1663, MS J565, Los Alamos, NM 87545 (United States)

    2015-05-01

    The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other type of LIBS data is calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO{sub 2}, Fe{sub 2}O{sub 3}, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na{sub 2}O, K{sub 2}O, TiO{sub 2}, and P{sub 2}O{sub 5}, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. These results are attributed to the high
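    A hedged sketch of this kind of model comparison using scikit-learn stand-ins; the synthetic spectra, hyper-parameters and model subset below are illustrative and do not reproduce the study's settings.

```python
# Cross-validated comparison of linear regression models for
# spectra-to-composition calibration (synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((100, 600))                    # 100 spectra x 600 channels (synthetic)
y = 3.0 * X[:, 40] + X[:, 200] + rng.normal(scale=0.1, size=100)  # pseudo "wt% oxide"

models = {
    "PLS": PLSRegression(n_components=10),
    "lasso": Lasso(alpha=0.01, max_iter=10000),
    "elastic net": ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=10000),
    "linear SVR": SVR(kernel="linear", C=1.0),
}
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:12s} CV RMSE: {rmse:.3f}")
```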

  6. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime

  7. Method for controlling low-energy high current density electron beams

    International Nuclear Information System (INIS)

    Lee, J.N.; Oswald, R.B. Jr.

    1977-01-01

    A method and an apparatus for controlling the angle of incidence of low-energy, high current density electron beams are disclosed. The apparatus includes a current generating diode arrangement with a mesh anode for producing a drifting electron beam. An auxiliary grounded screen electrode is placed between the anode and a target for controlling the average angle of incidence of electrons in the drifting electron beam. According to the method of the present invention, movement of the auxiliary screen electrode relative to the target and the anode permits reliable and reproducible adjustment of the average angle of incidence of the electrons in low energy, high current density relativistic electron beams

  8. A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression and fractional logit regression at recovering average difference measures in a two sample design.

    Science.gov (United States)

    Meaney, Christopher; Moineddin, Rahim

    2014-01-24

    In biomedical research, response variables are often encountered which have bounded support on the open unit interval--(0,1). Traditionally, researchers have attempted to estimate covariate effects on these types of response data using linear regression. Alternative modelling strategies may include: beta regression, variable-dispersion beta regression, and fractional logit regression models. This study employs a Monte Carlo simulation design to compare the statistical properties of the linear regression model to that of the more novel beta regression, variable-dispersion beta regression, and fractional logit regression models. In the Monte Carlo experiment we assume a simple two sample design. We assume observations are realizations of independent draws from their respective probability models. The randomly simulated draws from the various probability models are chosen to emulate average proportion/percentage/rate differences of pre-specified magnitudes. Following simulation of the experimental data we estimate average proportion/percentage/rate differences. We compare the estimators in terms of bias, variance, type-1 error and power. Estimates of Monte Carlo error associated with these quantities are provided. If response data are beta distributed with constant dispersion parameters across the two samples, then all models are unbiased and have reasonable type-1 error rates and power profiles. If the response data in the two samples have different dispersion parameters, then the simple beta regression model is biased. When the sample size is small (N0 = N1 = 25) linear regression has superior type-1 error rates compared to the other models. Small sample type-1 error rates can be improved in beta regression models using bias correction/reduction methods. In the power experiments, variable-dispersion beta regression and fractional logit regression models have slightly elevated power compared to linear regression models. Similar results were observed if the
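    The Monte Carlo design can be sketched as follows, comparing linear regression with a fractional logit fit via a quasi-binomial GLM in statsmodels; sample sizes, precision and group means are placeholders, not the paper's simulation grid.

```python
# Simulate beta-distributed responses in two groups and recover the
# average difference with OLS and a fractional logit GLM (sketch only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, mu0, mu1, phi = 25, 0.4, 0.5, 10.0          # group means and precision (illustrative)

def draw(mu, size):
    return rng.beta(mu * phi, (1 - mu) * phi, size=size)

bias_ols, bias_flogit = [], []
for _ in range(500):
    y = np.concatenate([draw(mu0, n), draw(mu1, n)])
    g = np.repeat([0.0, 1.0], n)
    X = sm.add_constant(g)
    ols = sm.OLS(y, X).fit()
    flog = sm.GLM(y, X, family=sm.families.Binomial()).fit()   # fractional logit
    p0, p1 = flog.predict(np.array([[1.0, 0.0], [1.0, 1.0]]))
    bias_ols.append(ols.params[1] - (mu1 - mu0))
    bias_flogit.append((p1 - p0) - (mu1 - mu0))

print(np.mean(bias_ols), np.mean(bias_flogit))   # both near zero in this setting
```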

  9. Quantile regression theory and applications

    CERN Document Server

    Davino, Cristina; Vistocco, Domenico

    2013-01-01

    A guide to the implementation and interpretation of Quantile Regression models This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensivedescription of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on validity of the model, diagnostic tools. Each methodological aspect is explored and

  10. Regression of environmental noise in LIGO data

    International Nuclear Information System (INIS)

    Tiwari, V; Klimenko, S; Mitselmakher, G; Necula, V; Drago, M; Prodi, G; Frolov, V; Yakushin, I; Re, V; Salemi, F; Vedovato, G

    2015-01-01

    We address the problem of noise regression in the output of gravitational-wave (GW) interferometers, using data from the physical environmental monitors (PEM). The objective of the regression analysis is to predict environmental noise in the GW channel from the PEM measurements. One of the most promising regression methods is based on the construction of Wiener–Kolmogorov (WK) filters. Using this method, the seismic noise cancellation from the LIGO GW channel has already been performed. In the presented approach the WK method has been extended, incorporating banks of Wiener filters in the time–frequency domain, multi-channel analysis and regulation schemes, which greatly enhance the versatility of the regression analysis. Also we present the first results on regression of the bi-coherent noise in the LIGO data. (paper)
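    A single-channel toy version of Wiener-filter noise regression can be written directly as a least-squares FIR fit; the coupling model and channel data below are made-up stand-ins for PEM and GW channels.

```python
# Estimate an FIR filter from a witness channel to the target channel by
# least squares, then subtract the prediction (Wiener regression sketch).
import numpy as np

rng = np.random.default_rng(0)
N, taps = 4096, 32
pem = rng.normal(size=N)                      # witness (environmental) channel
h_true = np.exp(-np.arange(8) / 3.0)          # unknown coupling to the target
gw = np.convolve(pem, h_true, mode="full")[:N] + 0.1 * rng.normal(size=N)

# Lagged design matrix: column k holds the witness delayed by k samples.
X = np.column_stack([np.roll(pem, k) for k in range(taps)])[taps:]
y = gw[taps:]                                 # drop wrap-around samples
h, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares (Wiener) FIR filter
residual = y - X @ h                          # regressed (cleaned) channel
print(np.var(y), np.var(residual))            # variance drops after regression
```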

  11. Thermoluminescence dating of chinese porcelain using a regression method of saturating exponential in pre-dose technique

    International Nuclear Information System (INIS)

    Wang Weida; Xia Junding; Zhou Zhixin; Leung, P.L.

    2001-01-01

    Thermoluminescence (TL) dating using a regression method of saturating exponential in the pre-dose technique is described. 23 porcelain samples from past dynasties of China were dated by this method. The results show that the TL ages are in reasonable agreement with archaeological dates within a standard deviation of 27%. Such an error is acceptable in porcelain dating.
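    A saturating-exponential regression of this kind can be fitted with a standard nonlinear least-squares routine; the functional form and data values below are illustrative assumptions, not the paper's calibration.

```python
# Fit a saturating-exponential growth curve S(D) = S_max * (1 - exp(-D/D0))
# of the kind used in the pre-dose technique (invented data).
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(D, S_max, D0):
    return S_max * (1.0 - np.exp(-D / D0))

dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])     # Gy (illustrative)
signal = np.array([0.9, 1.7, 2.9, 4.4, 5.6, 6.1])     # TL response (illustrative)

params, cov = curve_fit(saturating_exp, dose, signal, p0=[6.0, 4.0])
print("S_max = %.2f, D0 = %.2f" % tuple(params))
```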

  12. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Science.gov (United States)

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  13. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    Science.gov (United States)

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data-consistency assumption for hydrological computation. Both of these factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from the first order to the p-th order, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonableness of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficient. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in the hydrological process.
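    Schematically, the classification might look like the sketch below, where the AR(1) fitted part plays the role of the dependence component and the thresholds are arbitrary placeholders rather than the paper's values.

```python
# Grade dependence variability by the correlation between a series and
# its AR(1) dependence component (schematic, thresholds are placeholders).
import numpy as np

def classify_dependence(x, thresholds=(0.2, 0.4, 0.6, 0.8)):
    x = np.asarray(x, dtype=float)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]         # lag-1 autocorrelation
    dep = x.mean() + r1 * (x[:-1] - x.mean())     # AR(1) dependence component
    rho = abs(np.corrcoef(x[1:], dep)[0, 1])      # correlation coefficient
    labels = ["no", "weak", "mid", "strong", "drastic"]
    grade = sum(rho >= t for t in thresholds)
    return rho, labels[grade] + " variability"

rng = np.random.default_rng(3)
e = rng.normal(size=400)
ar = np.empty(400); ar[0] = e[0]
for t in range(1, 400):                           # synthetic AR(1) series
    ar[t] = 0.6 * ar[t - 1] + e[t]
print(classify_dependence(ar))
```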

  14. A novel relational regularization feature selection method for joint regression and classification in AD diagnosis.

    Science.gov (United States)

    Zhu, Xiaofeng; Suk, Heung-Il; Wang, Li; Lee, Seong-Whan; Shen, Dinggang

    2017-05-01

    In this paper, we focus on joint regression and classification for Alzheimer's disease diagnosis and propose a new feature selection method by embedding the relational information inherent in the observations into a sparse multi-task learning framework. Specifically, the relational information includes three kinds of relationships (such as feature-feature relation, response-response relation, and sample-sample relation), for preserving three kinds of the similarity, such as for the features, the response variables, and the samples, respectively. To conduct feature selection, we first formulate the objective function by imposing these three relational characteristics along with an ℓ2,1-norm regularization term, and further propose a computationally efficient algorithm to optimize the proposed objective function. With the dimension-reduced data, we train two support vector regression models to predict the clinical scores of ADAS-Cog and MMSE, respectively, and also a support vector classification model to determine the clinical label. We conducted extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to validate the effectiveness of the proposed method. Our experimental results showed the efficacy of the proposed method in enhancing the performances of both clinical scores prediction and disease status identification, compared to the state-of-the-art methods. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  16. Method of determining the position of an irradiated electron beam

    International Nuclear Information System (INIS)

    Fukuda, Wataru.

    1967-01-01

    The present invention relates to a method of determining the position of an irradiated electron beam, in particular a method of detecting the position of a p-n junction when irradiating an electron beam onto a semiconductor wafer and controlling the beam position from said junction. When the electron beam is irradiated onto a semiconductor wafer that possesses a p-n junction, the position of the p-n junction may be ascertained, and hence the position of the irradiated electron beam determined, by detecting the electromotive force resulting from said p-n junction with a metal disposed in the proximity of, but without mechanical contact with, said semiconductor wafer. Furthermore, for a semiconductor wafer having at least one p-n junction, the present invention allows said p-n junction to be used to determine the position of an irradiated electron beam. Thus, according to the present invention, the electromotive force resulting from the p-n junction may easily be detected by electrostatic coupling, enabling the position of the irradiated electron beam to be accurately determined. (Masui, R.)

  17. Multilayer electronic component systems and methods of manufacture

    Science.gov (United States)

    Thompson, Dane (Inventor); Wang, Guoan (Inventor); Kingsley, Nickolas D. (Inventor); Papapolymerou, Ioannis (Inventor); Tentzeris, Emmanouil M. (Inventor); Bairavasubramanian, Ramanan (Inventor); DeJean, Gerald (Inventor); Li, RongLin (Inventor)

    2010-01-01

    Multilayer electronic component systems and methods of manufacture are provided. In this regard, an exemplary system comprises a first layer of liquid crystal polymer (LCP), first electronic components supported by the first layer, and a second layer of LCP. The first layer is attached to the second layer by thermal bonds. Additionally, at least a portion of the first electronic components are located between the first layer and the second layer.

  18. 14 CFR 1274.931 - Electronic funds transfer payment methods.

    Science.gov (United States)

    2010-01-01

    Payments under a cooperative agreement will be made by the Government by electronic funds transfer through the Treasury Fedline system. (14 CFR Ch. V, Aeronautics and Space; Cooperative Agreements with Commercial Firms, Other Provisions and Special Conditions, § 1274.931 Electronic funds transfer payment methods.)

  19. Power system state estimation using an iteratively reweighted least squares method for sequential L{sub 1}-regression

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)

    2006-02-15

    This paper presents an implementation of the least absolute value (LAV) power system state estimator based on obtaining a sequence of solutions to the L{sub 1}-regression problem using an iteratively reweighted least squares (IRLS{sub L1}) method. The proposed implementation avoids reformulating the regression problem into standard linear programming (LP) form and consequently does not require the use of common methods of LP, such as those based on the simplex method or interior-point methods. It is shown that the IRLS{sub L1} method is equivalent to solving a sequence of linear weighted least squares (LS) problems. Thus, its implementation presents little additional effort since the sparse LS solver is common to existing LS state estimators. Studies on the termination criteria of the IRLS{sub L1} method have been carried out to determine a procedure for which the proposed estimator is more computationally efficient than a previously proposed non-linear iteratively reweighted least squares (IRLS) estimator. Indeed, it is revealed that the proposed method is a generalization of the previously reported IRLS estimator, but is based on more rigorous theory. (author)
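    The core IRLS{sub L1} idea, solving a sequence of weighted least squares problems whose weights are the reciprocals of the absolute residuals, can be sketched as follows; this is a generic statistical illustration, not the paper's power-system implementation.

```python
# IRLS for L1 (least absolute value) regression: repeatedly solve
# weighted normal equations with weights 1/|residual|.
import numpy as np

def irls_l1(X, y, iters=50, eps=1e-6):
    w = np.ones(len(y))
    for _ in range(iters):
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)    # weighted LS step
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)          # reweight toward the L1 solution
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=200)  # heavy-tailed noise
print(irls_l1(X, y))   # robust estimate of (1.0, 2.0)
```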

  20. Vector regression introduced

    Directory of Open Access Journals (Sweden)

    Mok Tik

    2014-06-01

    Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
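    The complex-number representation can be illustrated in a few lines: a 2-D vector regressor with a complex (i.e., vector-valued) coefficient recovered by complex least squares. The data are synthetic and the single-regressor setup is a simplification of the model described above.

```python
# Vector regression via complex least squares: the estimated coefficient
# encodes both a scaling and a rotation of the regressor vectors.
import numpy as np

rng = np.random.default_rng(7)
n = 100
u = rng.normal(size=n) + 1j * rng.normal(size=n)    # independent vector variable
b_true = 0.8 * np.exp(1j * np.pi / 6)               # vector coefficient (scale + rotation)
y = b_true * u + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

A = u[:, None]                                      # complex design matrix
b_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(abs(b_hat[0]), np.angle(b_hat[0]))            # recovered scale and rotation angle
```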

  1. A data driven method to measure electron charge mis-identification rate

    CERN Document Server

    Bakhshiansohi, Hamed

    2009-01-01

    Electron charge mis-measurement is an important challenge in analyses which depend on the charge of the electron. To estimate the probability of electron charge mis-measurement a data driven method is introduced and a good agreement with MC based methods is achieved. The third moment of the φ distribution of hits in the electron SuperCluster is studied. The correlation between this variable and the electron charge is also investigated. Using this 'new' variable and some other variables the electron charge measurement is improved by two different approaches.

  2. On Solving Lq-Penalized Regressions

    Directory of Open Access Journals (Sweden)

    Tracy Zhou Wu

    2007-01-01

    Full Text Available Lq-penalized regression arises in multidimensional statistical modelling where all or part of the regression coefficients are penalized to achieve both accuracy and parsimony of statistical models. There is often substantial computational difficulty except for the quadratic penalty case. The difficulty is partly due to the nonsmoothness of the objective function inherited from the use of the absolute value. We propose a new solution method for the general Lq-penalized regression problem based on space transformation and thus efficient optimization algorithms. The new method has immediate applications in statistics, notably in penalized spline smoothing problems. In particular, the LASSO problem is shown to be polynomial time solvable. Numerical studies show promise of our approach.

  3. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
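    A sketch of the bagged decision-tree setup with a predicted Kt:p value appended to the molecular descriptors; all data are synthetic and the feature counts and effect sizes are invented for illustration.

```python
# Bagged decision-tree regression of log(Vss) on descriptors + Kt:p.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 300
descriptors = rng.normal(size=(n, 20))            # stand-in molecular descriptors
kt_p_adipose = rng.lognormal(size=(n, 1))         # stand-in predicted adipose Kt:p
X = np.hstack([descriptors, kt_p_adipose])
log_vss = (0.5 * descriptors[:, 0]
           + 0.8 * np.log(kt_p_adipose[:, 0])
           + rng.normal(scale=0.3, size=n))

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0)
r2 = cross_val_score(model, X, log_vss, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.2f}")
```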

  4. Relationship between electronic structure and radioprotective activity of some indazoles

    International Nuclear Information System (INIS)

    Sokolov, Yu.A.

    2000-01-01

    A quantum-chemical study of the electronic structure of 29 indazoles, with complete geometry optimization and a search for quantitative links between the established characteristics and radioprotective activity (RPA), was carried out with the MNDO method, applying multiple linear and nonlinear regression analysis and the principal component method. Equations for the correlation relationship between RPA and electronic characteristics are presented. Ten indazole structures whose forecasted RPA values (survival rate, %) equal 50% or above are selected. The statistical significance of the obtained correlation equations and their regression coefficients makes it possible to conclude that the established relationships are not accidental and are promising for forecasting the RPA of other close compounds of the indazole series.

  5. Analysis of some methods for reduced rank Gaussian process regression

    DEFF Research Database (Denmark)

    Quinonero-Candela, J.; Rasmussen, Carl Edward

    2005-01-01

    While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training RRGPs consists both in learning...

  6. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    International Nuclear Information System (INIS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-01-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines which will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentration results are given in the form of a confidence interval of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness compared with methods based on partial least squares regression, artificial neural networks and standard support vector machines.
    Highlights:
    • Both training and testing samples are considered for analytical line selection.
    • The analytical lines are auto-selected based on the built-in characteristics of spectral lines.
    • The new method can achieve better prediction accuracy and modeling robustness.
    • Model predictions are given with a confidence interval of a probabilistic distribution.

  7. Thermodynamics, Gibbs Method and Statistical Physics of Electron Gases

    CERN Document Server

    Askerov, Bahram M

    2010-01-01

    This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and non-ideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi-Dirac and Bose-Einstein quantum statistics and its application to different quantum gases, and electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.

  8. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, including all calculation steps of principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity, and with SPSS it yields a simplified, faster and accurate statistical analysis.
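    The same workflow can be sketched outside SPSS, for example in Python, as PCA followed by ordinary least squares on the leading components; the collinear toy data below are an illustrative assumption.

```python
# Principal component regression: PCA for dimension reduction, then OLS
# on the retained components, which is robust to multicollinearity.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 200
z = rng.normal(size=n)
X = np.column_stack([z + 0.01 * rng.normal(size=n) for _ in range(4)])  # collinear
y = 2.0 * z + rng.normal(size=n)

pcr = make_pipeline(StandardScaler(), PCA(n_components=1), LinearRegression())
pcr.fit(X, y)
print(pcr.score(X, y))   # stable fit despite severe multicollinearity
```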

  9. Numerical methods in electron magnetic resonance

    International Nuclear Information System (INIS)

    Soernes, A.R.

    1998-01-01

    The focal point of the thesis is the development and use of numerical methods in the analysis, simulation and interpretation of Electron Magnetic Resonance experiments on free radicals in solids to uncover the structure, the dynamics and the environment of the system

  10. Numerical methods in electron magnetic resonance

    Energy Technology Data Exchange (ETDEWEB)

    Soernes, A.R

    1998-07-01

    The focal point of the thesis is the development and use of numerical methods in the analysis, simulation and interpretation of Electron Magnetic Resonance experiments on free radicals in solids to uncover the structure, the dynamics and the environment of the system.

  11. Differentiating regressed melanoma from regressed lichenoid keratosis.

    Science.gov (United States)

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    Science.gov (United States)

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others, selection of those wavelengths that contribute useful information, and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies for the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine various methods of each aspect together to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant

  13. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample sizes

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It results in the MLE estimators becoming non-convergent, so that they cannot be used in modeling. One of the efforts to resolve separation is using Firth's approach instead. This research has two aims. First, to identify the chance of separation occurrence in the binary probit regression model between the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are performed using a simulation method under different sample sizes. The results showed that the chance of separation occurrence with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreases and is relatively identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE's, especially for smaller sample sizes. For larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.

  14. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor

    2012-06-29

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.
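    For the replicate-proxies case, standard regression calibration can be sketched as follows: estimate the measurement-error variance from the replicates, shrink the surrogate mean toward its grand mean to approximate E[X | W], and substitute the calibrated values into the logistic fit. All parameter values are illustrative.

```python
# Standard regression calibration with two replicate error-prone proxies.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 1000
x = rng.normal(size=n)                          # true covariate (unobserved in practice)
w1 = x + rng.normal(scale=0.8, size=n)          # error-prone replicate 1
w2 = x + rng.normal(scale=0.8, size=n)          # error-prone replicate 2
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x)))
y = rng.binomial(1, p)

# Stage 1: moment-based estimate of E[X | W] from the replicates.
wbar = (w1 + w2) / 2.0
s2_u = np.var(w1 - w2, ddof=1) / 2.0            # measurement-error variance
s2_x = np.var(wbar, ddof=1) - s2_u / 2.0        # true-covariate variance
lam = s2_x / (s2_x + s2_u / 2.0)                # reliability (attenuation) factor
x_hat = wbar.mean() + lam * (wbar - wbar.mean())

# Stage 2: logistic regression on the calibrated covariate.
fit = sm.Logit(y, sm.add_constant(x_hat)).fit(disp=0)
print(fit.params)   # slope approximately corrected for attenuation
```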

  15. Regression calibration with more surrogates than mismeasured variables

    KAUST Repository

    Kipnis, Victor; Midthune, Douglas; Freedman, Laurence S.; Carroll, Raymond J.

    2012-01-01

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error using a regression calibration approximate model as if it were exact. One is the standard regression calibration approach consisting of substituting an estimated conditional expectation of the true covariate given observed data in the logistic regression. The other is a novel two-stage approach when the logistic regression is fitted to multiple surrogates, and then a linear combination of estimated slopes is formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set with some sensitivity analysis, the authors asserted superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations in the practically important parameter space where the regression calibration model provides a good approximation failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures.

  16. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    Science.gov (United States)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis method. This novel hallucination technique can perform directly from tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments with high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.

  17. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...

  18. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Science.gov (United States)

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  19. A method for nonlinear exponential regression analysis

    Science.gov (United States)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
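    A compact sketch of the procedure for a single-exponential decay model y = a·exp(-b·t): a crude log-linear fit supplies the nominal estimates, and the Taylor-series linearization is iterated as a linear least-squares correction step. The data and model are illustrative, not the original program.

```python
# Gauss-Newton iteration for nonlinear exponential regression.
import numpy as np

def gauss_newton_exp(t, y, a, b, iters=20):
    """Iterate the Taylor-series linearization for y = a * exp(-b t)."""
    for _ in range(iters):
        f = a * np.exp(-b * t)
        J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
        delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)   # linear correction step
        a, b = a + delta[0], b + delta[1]
    return a, b

rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 50)
y = 3.0 * np.exp(-1.2 * t) + 0.05 * rng.normal(size=50)
# Crude log-linear fit for the initial nominal estimates.
intercept, slope = np.polyfit(t, np.log(np.clip(y, 1e-6, None)), 1)[::-1]
print(gauss_newton_exp(t, y, a=np.exp(intercept), b=-slope))   # approx (3.0, 1.2)
```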

  20. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha

    2014-12-08

    Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.

  1. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2014-01-01

    Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.

  2. Numerical simulation methods for electron and ion optics

    International Nuclear Information System (INIS)

    Munro, Eric

    2011-01-01

    This paper summarizes currently used techniques for simulation and computer-aided design in electron and ion beam optics. Topics covered include: field computation, methods for computing optical properties (including Paraxial Rays and Aberration Integrals, Differential Algebra and Direct Ray Tracing), simulation of Coulomb interactions, space charge effects in electron and ion sources, tolerancing, wave optical simulations and optimization. Simulation examples are presented for multipole aberration correctors, Wien filter monochromators, imaging energy filters, magnetic prisms, general curved axis systems and electron mirrors.

  3. Statistical learning method in regression analysis of simulated positron spectral data

    International Nuclear Information System (INIS)

    Avdic, S. Dz.

    2005-01-01

    Positron lifetime spectroscopy is a non-destructive tool for detection of radiation induced defects in nuclear reactor materials. This work concerns the applicability of the support vector machines (SVM) method for the input data compression in the neural network analysis of positron lifetime spectra. It has been demonstrated that the SVM technique can be successfully applied to regression analysis of positron spectra. A substantial data compression, to about 50% and 8% of the whole training set with two and three spectral components respectively, has been achieved, along with a high accuracy of the spectra approximation. However, some parameters in the SVM approach, such as the insensitivity zone ε and the penalty parameter C, have to be chosen carefully to obtain a good performance. (author)
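    The role of the two parameters singled out above can be seen in a toy scikit-learn example (an analogous SVR, not the author's code): widening the insensitivity zone ε reduces the number of support vectors, i.e. increases the compression of the training set.

```python
# Effect of the insensitivity zone (epsilon) and penalty C on SVR
# support-vector count for a decay-like curve (synthetic data).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
t = np.linspace(0.0, 4.0, 120)[:, None]
y = np.exp(-t.ravel() / 1.5) + 0.02 * rng.normal(size=120)

for eps, C in [(0.001, 1.0), (0.05, 1.0), (0.001, 100.0)]:
    model = SVR(kernel="rbf", epsilon=eps, C=C).fit(t, y)
    # Fewer support vectors means stronger compression of the training set.
    print(f"epsilon={eps}, C={C}: {model.support_.size} support vectors")
```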

  4. The crux of the method: assumptions in ordinary least squares and logistic regression.

    Science.gov (United States)

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.

  5. Regression and direct methods do not give different estimates of digestible and metabolizable energy values of barley, sorghum, and wheat for pigs.

    Science.gov (United States)

    Bolarinwa, O A; Adeola, O

    2016-02-01

    Direct or indirect methods can be used to determine the DE and ME of feed ingredients for pigs. In situations when only the indirect approach is suitable, the regression method presents a robust indirect approach. Three experiments were conducted to compare the direct and regression methods for determining the DE and ME values of barley, sorghum, and wheat for pigs. In each experiment, 24 barrows with an average initial BW of 31, 32, and 33 kg were assigned to 4 diets in a randomized complete block design. The 4 diets consisted of 969 g barley, sorghum, or wheat/kg plus minerals and vitamins for the direct method; a corn-soybean meal reference diet (RD); the RD + 300 g barley, sorghum, or wheat/kg; and the RD + 600 g barley, sorghum, or wheat/kg. The 3 corn-soybean meal diets were used for the regression method. Each diet was fed to 6 barrows in individual metabolism crates for a 5-d acclimation followed by a 5-d period of total but separate collection of feces and urine in each experiment. Graded substitution of barley or wheat, but not sorghum, into the RD linearly reduced ( direct method-derived DE and ME for barley were 3,669 and 3,593 kcal/kg DM, respectively. The regressions of barley contribution to DE and ME in kilocalories against the quantity of barley DMI in kilograms generated 3,746 kcal DE/kg DM and 3,647 kcal ME/kg DM. The DE and ME for sorghum by the direct method were 4,097 and 4,042 kcal/kg DM, respectively; the corresponding regression-derived estimates were 4,145 and 4,066 kcal/kg DM. Using the direct method, energy values for wheat were 3,953 kcal DE/kg DM and 3,889 kcal ME/kg DM. The regressions of wheat contribution to DE and ME in kilocalories against the quantity of wheat DMI in kilograms generated 3,960 kcal DE/kg DM and 3,874 kcal ME/kg DM. The DE and ME of barley using the direct method were not different (0.3 direct method-derived DE and ME of sorghum were not different (0.5 direct method- and regression method-derived DE (3,953 and 3

  6. Learning a Nonnegative Sparse Graph for Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung

    2015-09-01

    Previous graph-based semisupervised learning (G-SSL) methods have the following drawbacks: 1) they usually predefine the graph structure and then use it to perform label prediction, which cannot guarantee an overall optimum and 2) they only focus on the label prediction or the graph structure construction but are not competent in handling new samples. To this end, a novel nonnegative sparse graph (NNSG) learning method was first proposed. Then, both the label prediction and projection learning were integrated into linear regression. Finally, the linear regression and graph structure learning were unified within the same framework to overcome these two drawbacks. Therefore, a novel method, named learning a NNSG for linear regression was presented, in which the linear regression and graph learning were simultaneously performed to guarantee an overall optimum. In the learning process, the label information can be accurately propagated via the graph structure so that the linear regression can learn a discriminative projection to better fit sample labels and accurately classify new samples. An effective algorithm was designed to solve the corresponding optimization problem with fast convergence. Furthermore, NNSG provides a unified perceptiveness for a number of graph-based learning methods and linear regression methods. The experimental results showed that NNSG can obtain very high classification accuracy and greatly outperforms conventional G-SSL methods, especially some conventional graph construction methods.

  7. Methods of Detecting Outliers in A Regression Analysis Model ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    Jun 1, 2013 ... especially true in observational studies ... Simple linear regression and multiple ... The simple linear ... Grubbs, F.E. (1950): Sample Criteria for Testing Outlying Observations. Annals of ... In experimental design, the Relative ...

  8. Assessing the reliability of the borderline regression method as a standard setting procedure for objective structured clinical examination

    Directory of Open Access Journals (Sweden)

    Sara Mortaz Hejri

    2013-01-01

    Full Text Available Background: One of the methods used for standard setting is the borderline regression method (BRM). This study aims to assess the reliability of the BRM when the pass-fail standard in an objective structured clinical examination (OSCE) was calculated by averaging the BRM standards obtained for each station separately. Materials and Methods: In nine OSCE stations with direct observation, the examiners gave each student a checklist score and a global score. Using a linear regression model for each station, we calculated the checklist cut-off score from the regression equation, with the global scale cut-off set at 2. The OSCE pass-fail standard was defined as the average of all stations' standards. To determine the reliability, the root mean square error (RMSE) was calculated. The R2 coefficient and the inter-grade discrimination were calculated to assess the quality of the OSCE. Results: The mean total test score was 60.78. The OSCE pass-fail standard and its RMSE were 47.37 and 0.55, respectively. The R2 coefficients ranged from 0.44 to 0.79. The inter-grade discrimination score varied greatly among stations. Conclusion: The RMSE of the standard was very small, indicating that the BRM is a reliable method of setting the standard for an OSCE, which has the advantage of providing data for quality assurance.
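
    A minimal sketch of the borderline regression method for a single station, using toy checklist and global ratings (all numbers hypothetical): checklist scores are regressed on global ratings and the station cut score is read off at the borderline global grade of 2 used in the study.

```python
import numpy as np

global_rating = np.array([1, 2, 2, 3, 3, 4, 4, 5])        # examiner global grades
checklist = np.array([35.0, 42, 47, 55, 58, 66, 70, 81])  # station checklist scores

slope, intercept = np.polyfit(global_rating, checklist, 1)
station_cut = slope * 2 + intercept  # checklist score at the borderline grade 2
print(f"station pass mark: {station_cut:.1f}")
# the OSCE-level standard is then the mean of the per-station cut scores
```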

  9. Fast electronic structure methods for strongly correlated molecular systems

    International Nuclear Information System (INIS)

    Head-Gordon, Martin; Beran, Gregory J O; Sodt, Alex; Jung, Yousung

    2005-01-01

    A short review is given of newly developed fast electronic structure methods that are designed to treat molecular systems with strong electron correlations, such as diradicaloid molecules, for which standard electronic structure methods such as density functional theory are inadequate. These new local correlation methods are based on coupled cluster theory within a perfect pairing active space, containing either a linear or quadratic number of pair correlation amplitudes, to yield the perfect pairing (PP) and imperfect pairing (IP) models. This reduces the scaling of the coupled cluster iterations to no worse than cubic, relative to the sixth power dependence of the usual (untruncated) coupled cluster doubles model. A second order perturbation correction, PP(2), to treat the neglected (weaker) correlations is formulated for the PP model. To ensure minimal prefactors, in addition to favorable size-scaling, highly efficient implementations of PP, IP and PP(2) have been completed, using auxiliary basis expansions. This yields speedups of almost an order of magnitude over the best alternatives using 4-center 2-electron integrals. A short discussion of the scope of accessible chemical applications is given.

  10. An analysis of main factors in electron beam flue gas purification

    International Nuclear Information System (INIS)

    Zhang Ming; Xu Guang

    2003-01-01

    The electron beam method for flue gas purification has developed rapidly in recent years. Based on the experimental setup for electron beam flue gas purification at the Institute of Nuclear Energy and Technology, Tsinghua University, this paper describes how process factors affect the desulphurization and denitrogenation ratios. Radiation dose (D), temperature (T), humidity (H), injected ammonia quantity (α), and the initial concentrations of SO2 (CSO2) and NOx (CNOx) are the main factors influencing flue gas purification. Using correlation and regression analysis, the primary factors are identified and regression equations are established to optimize the process, simplify the system structure, and forecast the experimental results. (authors)

  11. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  12. Regression Analysis by Example. 5th Edition

    Science.gov (United States)

    Chatterjee, Samprit; Hadi, Ali S.

    2012-01-01

    Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

  13. Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method

    Science.gov (United States)

    Prahutama, Alan; Sudarno

    2018-05-01

    The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country's economic development. A high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a discrete dependent variable Y and independent variables X is the Poisson regression model. Regression models used for discrete dependent variables include, among others, Poisson regression, negative binomial regression, and generalized Poisson regression. In this research, generalized Poisson regression modeling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the greatest influence on the infant mortality rate is average breastfeeding (X9).
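
    A sketch of the model comparison described here, on synthetic count data (all numbers invented), assuming a statsmodels version that provides GeneralizedPoisson: both models are fit and their AIC values compared, lower being better.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import GeneralizedPoisson, Poisson

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))        # stand-ins for predictors such as X1, X9
X = sm.add_constant(x)
y = rng.poisson(np.exp(0.3 + 0.5 * x[:, 0] - 0.4 * x[:, 1]))  # toy counts

poisson_fit = Poisson(y, X).fit(disp=0)
genpoisson_fit = GeneralizedPoisson(y, X).fit(disp=0)
print("Poisson AIC:            ", poisson_fit.aic)
print("Generalized Poisson AIC:", genpoisson_fit.aic)
```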

  14. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Science.gov (United States)

    Guns, M.; Vanacker, V.

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations into rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.
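
    A schematic sketch of the replication idea (not the authors' exact algorithm): the logistic regression is refit on many Monte Carlo resamples and the selection frequency of each candidate factor is tracked, so factors that survive across replications are deemed robust.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.normal(size=(n, p))                 # candidate controlling factors
logit = -3.0 + 1.2 * X[:, 0]                # only factor 0 truly matters; rare events
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

n_rep, selected = 200, np.zeros(p)
for _ in range(n_rep):
    idx = rng.choice(n, size=n, replace=True)           # one replication
    fit = sm.Logit(y[idx], sm.add_constant(X[idx])).fit(disp=0)
    selected += fit.pvalues[1:] < 0.05                  # skip the intercept
print("selection frequency per factor:", selected / n_rep)
```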

  15. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    Science.gov (United States)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost reproduce correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.

  16. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, one often needs to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of H.-G. Müller and U. Stadtmüller [Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337, doi: 10.1214/aos/1018031100].

  17. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

    2017-01-01

    In nonparametric regression, one often needs to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of H.-G. Müller and U. Stadtmüller [Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337, doi: 10.1214/aos/1018031100].

  18. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    Full Text Available In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using the bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSU). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSU), i.e., households, are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs, which are: stratified simple random sampling; stratified systematic sampling; stratified ranked set sampling; and stratified two-phase sampling. Ratio and regression methods were applied with two study variables, which are: income (y) and household size (x). Jackknife and bootstrap are used for variance replication. Simple random sampling with sample size (462 to 561) gave moderate variances both by jackknife and bootstrap. By applying systematic sampling, we received moderate variance with sample size (467). In jackknife with systematic sampling, we obtained a variance of the regression estimator greater than that of the ratio estimator for a sample size (467 to 631). At a sample size (952), the variance of the ratio estimator becomes greater than that of the regression estimator. The most efficient design turns out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. Multi-stage sampling applied by HIES gave large variances, especially if used with a single study variable.
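
    For reference, a small sketch of the classical ratio and regression estimators compared in the study, on synthetic income/household-size data (all numbers invented), assuming the population mean of the auxiliary variable is known.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(2, 10, 500)                  # auxiliary: household size
y = 300 * x + rng.normal(0, 200, 500)        # study variable: income
x_bar_pop = x.mean()                         # known population mean of x

idx = rng.choice(500, 60, replace=False)     # a simple random sample
xs, ys = x[idx], y[idx]
ratio_est = ys.mean() / xs.mean() * x_bar_pop
b = np.polyfit(xs, ys, 1)[0]                 # sample regression slope
regression_est = ys.mean() + b * (x_bar_pop - xs.mean())
print(f"ratio: {ratio_est:.0f}  regression: {regression_est:.0f}  truth: {y.mean():.0f}")
```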

  19. Logistic Regression and Path Analysis Method to Analyze Factors influencing Students’ Achievement

    Science.gov (United States)

    Noeryanti, N.; Suryowati, K.; Setyawan, Y.; Aulia, R. R.

    2018-04-01

    Students' academic achievement cannot be separated from the influence of two groups of factors, namely internal and external factors. The internal factors consist of intelligence (X1), health (X2), interest (X3), and student motivation (X4). The external factors consist of the family environment (X5), the school environment (X6), and the society environment (X7). The objects of this research are eighth grade students of the school year 2016/2017 at SMPN 1 Jiwan Madiun, sampled by using simple random sampling. Primary data were obtained by distributing questionnaires. The method used in this study is binary logistic regression analysis, which aims to identify the internal and external factors that affect students' achievement and their trends. Path analysis was used to determine which factors influence students' achievement directly, indirectly, or in total. Based on the results of binary logistic regression, the variables that affect students' achievement are interest and motivation. Based on the results obtained by path analysis, the factors that have a direct impact on students' achievement are students' interest (59%) and students' motivation (27%), while the factors that have indirect influences on students' achievement are the family environment (97%) and the school environment (37%).

  20. QSAR Study of Insecticides of Phthalamide Derivatives Using Multiple Linear Regression and Artificial Neural Network Methods

    Directory of Open Access Journals (Sweden)

    Adi Syahputra

    2014-03-01

    Full Text Available A quantitative structure-activity relationship (QSAR) for 21 insecticides of phthalamides containing hydrazone (PCH) was studied using multiple linear regression (MLR), principal component regression (PCR), and an artificial neural network (ANN). Five descriptors were included in the model for the MLR and ANN analyses, and five latent variables obtained from principal component analysis (PCA) were used in the PCR analysis. Calculation of descriptors was performed using the semi-empirical PM6 method. ANN analysis was found to be a superior statistical technique compared to the other methods and gave a good correlation between descriptors and activity (r2 = 0.84). Based on the obtained model, we have successfully designed some new insecticides with higher predicted activity than those of previously synthesized compounds, e.g., 2-(decalinecarbamoyl)-5-chloro-N'-((5-methylthiophen-2-yl)methylene)benzohydrazide, 2-(decalinecarbamoyl)-5-chloro-N'-((thiophen-2-yl)methylene)benzohydrazide and 2-(decalinecarbamoyl)-N'-(4-fluorobenzylidene)-5-chlorobenzohydrazide, with predicted log LC50 values of 1.640, 1.672, and 1.769, respectively.

  1. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Directory of Open Access Journals (Sweden)

    Xiaoyan Yang

    2018-04-01

    Full Text Available The Advanced Spaceborne Thermal-Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.

  2. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

    The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval

  3. A dynamic particle filter-support vector regression method for reliability prediction

    International Nuclear Information System (INIS)

    Wei, Zhao; Tao, Tao; ZhuoShu, Ding; Zio, Enrico

    2013-01-01

    Support vector regression (SVR) has been applied to time series prediction, and some works have demonstrated the feasibility of its use to forecast system reliability. For accurate reliability forecasting, the selection of SVR's parameters is important. Existing research on SVR parameter selection divides the example dataset into training and test subsets, and tunes the parameters on the training data. However, these fixed parameters can lead to poor prediction capabilities if the data of the test subset differ significantly from those of training. In contrast, the novel method proposed in this paper uses particle filtering to estimate the SVR model parameters according to the whole measurement sequence up to the last observation instance. By treating the SVR training model as the observation equation of a particle filter, our method allows updating the SVR model parameters dynamically when a new observation comes. Because of the adaptability of the parameters to dynamic data patterns, the new PF–SVR method has superior prediction performance over that of standard SVR. Four application results show that PF–SVR is more robust than SVR to a decrease in the number of training data and to changes in initial SVR parameter values. Also, even if there are trends in the test data different from those in the training data, the method can capture the changes, correct the SVR parameters and obtain good predictions. -- Highlights: •A dynamic PF–SVR method is proposed to predict the system reliability. •The method can adjust the SVR parameters according to the change of data. •The method is robust to the size of training data and initial parameter values. •Some cases based on both artificial and real data are studied. •PF–SVR shows superior prediction performance over standard SVR

  4. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis.

    Science.gov (United States)

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Unwanted pregnancy not intended by at least one of the parents has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by the stratified and cluster sampling; relevant variables were measured and for prediction of unwanted pregnancy, logistic regression, discriminant analysis, and probit regression models and SPSS software version 21 were used. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended.
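
    A compact sketch of the three-model comparison, on synthetic data (variable roles hypothetical): logistic and probit regressions fit with statsmodels, linear discriminant analysis with scikit-learn, all compared by the area under the ROC curve.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 4))     # stand-ins for parity, spacing, income, ...
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.8 * X[:, 1]))))
Xc = sm.add_constant(X)

logit = sm.Logit(y, Xc).fit(disp=0)
probit = sm.Probit(y, Xc).fit(disp=0)
lda = LinearDiscriminantAnalysis().fit(X, y)

print("logistic AUC:", roc_auc_score(y, logit.predict(Xc)))
print("probit AUC:  ", roc_auc_score(y, probit.predict(Xc)))
print("LDA AUC:     ", roc_auc_score(y, lda.predict_proba(X)[:, 1]))
```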

  5. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► A paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economy, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted by employing information from preceding days similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation testing method. Then, on the premise that forecasting the seasonal item and the trend item separately would improve the forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for seasonal and trend item forecasting, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve accuracy, even though eleven different models are applied to the trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests
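
    A schematic sketch of the separate-forecasting idea (a simple stand-in for the paper's SEAM, with invented data): weekly seasonal indices are estimated and removed, a regression model is fit to the deseasonalized trend, and the two parts are recombined for a 1-week-ahead forecast.

```python
import numpy as np

rng = np.random.default_rng(4)
days = np.arange(8 * 7)                      # eight weeks of daily load
load = 100 + 0.2 * days + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 1, days.size)

season = np.array([load[days % 7 == d].mean() for d in range(7)])
season -= season.mean()                      # day-of-week seasonal indices
trend = load - season[days % 7]              # deseasonalized series

slope, intercept = np.polyfit(days, trend, 1)   # regression model for the trend
future = np.arange(days[-1] + 1, days[-1] + 8)  # the next week
forecast = intercept + slope * future + season[future % 7]
print(np.round(forecast, 1))
```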

  6. Electron beam directed energy device and methods of using same

    Science.gov (United States)

    Retsky, Michael W.

    2007-10-16

    A method and apparatus are disclosed for an electron beam directed energy device. The device consists of an electron gun with one or more electron beams. The device includes one or more accelerating plates with holes aligned for beam passage. The plates may be flat or, preferably, shaped to direct each electron beam to exit the electron gun at a predetermined orientation. In one preferred application, the device is located in outer space with individual beams that are directed to focus at a distant target, to be used to impact and destroy missiles. The aiming of the separate beams is designed to overcome Coulomb repulsion. A method is also presented for directing the beams to a target considering the variable terrestrial magnetic field. In another preferred application, the electron beam is directed into the ground to produce a subsurface x-ray source to locate and/or destroy buried or otherwise hidden objects, including explosive devices.

  7. Multiple regression and beyond an introduction to multiple regression and structural equation modeling

    CERN Document Server

    Keith, Timothy Z

    2014-01-01

    Multiple Regression and Beyond offers a conceptually oriented introduction to multiple regression (MR) analysis and structural equation modeling (SEM), along with analyses that flow naturally from those methods. By focusing on the concepts and purposes of MR and related methods, rather than the derivation and calculation of formulae, this book introduces material to students more clearly, and in a less threatening way. In addition to illuminating content necessary for coursework, the accessibility of this approach means students are more likely to be able to conduct research using MR or SEM--and more likely to use the methods wisely. Covers both MR and SEM, while explaining their relevance to one another Also includes path analysis, confirmatory factor analysis, and latent growth modeling Figures and tables throughout provide examples and illustrate key concepts and techniques For additional resources, please visit: http://tzkeith.com/.

  8. Methods for fabrication of flexible hybrid electronics

    Science.gov (United States)

    Street, Robert A.; Mei, Ping; Krusor, Brent; Ready, Steve E.; Zhang, Yong; Schwartz, David E.; Pierre, Adrien; Doris, Sean E.; Russo, Beverly; Kor, Siv; Veres, Janos

    2017-08-01

    Printed and flexible hybrid electronics (FHE) is an emerging technology with potential applications in smart labels, wearable electronics, soft robotics, and prosthetics. Printed solution-based materials are compatible with plastic film substrates that are flexible, soft, and stretchable, thus enabling conformal integration with non-planar objects. In addition, manufacturing by printing is scalable to large areas and is amenable to low-cost sheet-fed and roll-to-roll processes. FHE includes display and sensory components to interface with users and environments. On the system level, devices also require electronic circuits for power, memory, signal conditioning, and communications. Those electronic components can be integrated onto a flexible substrate by either assembly or printing. PARC has developed systems and processes for realizing both approaches. This talk presents fabrication methods with an emphasis on techniques recently developed for the assembly of off-the-shelf chips. A few examples of systems fabricated with this approach are also described.

  9. An Efficient Method for Electron-Atom Scattering Using Ab-initio Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yuan; Yang, Yonggang; Xiao, Liantuan; Jia, Suotang [Shanxi University, Taiyuan (China)

    2017-02-15

    We present an efficient method based on ab-initio calculations to investigate electron-atom scattering. These calculations profit from methods implemented in standard quantum chemistry programs. The new approach is applied to electron-helium scattering. The results are compared with experimental and other theoretical references to demonstrate the efficiency of our method.

  10. Short-term load forecasting with increment regression tree

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jingfei; Stenzel, Juergen [Darmstadt University of Techonology, Darmstadt 64283 (Germany)

    2006-06-15

    This paper presents a new regression tree method for short-term load forecasting. Both increment and non-increment trees are built according to the historical data to provide the data space partition and input variable selection. A support vector machine is employed on the samples of the regression tree nodes for further fine regression. Results of different tree nodes are integrated through a weighted average method to obtain the comprehensive forecasting result. The effectiveness of the proposed method is demonstrated through its application to an actual system. (author)
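
    A simplified sketch of the node-wise idea (not the paper's exact algorithm, and without the increment tree or the weighted integration): a regression tree partitions the data and an SVR is fit to the samples in each leaf for fine regression.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
X = rng.uniform(0, 24, size=(300, 1))        # e.g. hour of day
y = 50 + 10 * np.sin(X[:, 0] / 24 * 2 * np.pi) + rng.normal(0, 1, 300)

tree = DecisionTreeRegressor(max_leaf_nodes=4).fit(X, y)   # data space partition
leaves = tree.apply(X)                                     # leaf id per sample
leaf_models = {leaf: SVR().fit(X[leaves == leaf], y[leaves == leaf])
               for leaf in np.unique(leaves)}              # fine regression per leaf

def predict(x_new):
    ids = tree.apply(x_new)
    return np.array([leaf_models[l].predict(x_new[i:i + 1])[0]
                     for i, l in enumerate(ids)])

print(np.round(predict(X[:5]), 1))
```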

  11. Selecting minimum dataset soil variables using PLSR as a regressive multivariate method

    Science.gov (United States)

    Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.

    2017-04-01

    Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management, especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has mainly been used; however, supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least squares regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean-centered and variance-scaled data of the predictor (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP) was considered.
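
    A sketch of PLSR-based variable ranking via VIP scores (the same quantity reported by the SAS PLS procedure; formula as commonly defined), with synthetic stand-ins for the soil variables and wheat yield.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in projection for a fitted PLSRegression."""
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    n_vars = w.shape[0]
    ssy = np.diag(t.T @ t) * (q ** 2).sum(axis=0)   # explained SS per component
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(n_vars * ((w_norm ** 2) @ ssy) / ssy.sum())

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 8))                 # stand-ins for TOC, TN, BD, Pmac, ...
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.5, 60)

pls = PLSRegression(n_components=3).fit(X, y)
print("VIP per variable:", np.round(vip_scores(pls), 2))  # VIP > 1 is a common cutoff
```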

  12. ''In situ'' electronic testing method of a neutron detector performance

    International Nuclear Information System (INIS)

    Gonzalez, J.M.; Levai, F.

    1987-01-01

    The method allows detection of any important change in the electrical characteristics of a neutron sensor channel. It checks the response signal produced by an electronic detector circuit when a pulse generator is connected as an input signal to the high voltage supply. The electronic circuit compares the detector capacitance value, previously measured, against a reference value, which is adjusted in a window-type comparator circuit to detect any important degradation of the capacitance value in a detector-cable system. The ''in-situ'' electronic testing method of neutron detector performance has been verified in a laboratory environment to be a potential method to detect any significant change in the capacitance value of a nuclear sensor and its connecting cable, also checking: detector disconnections, cable disconnections, length changes of the connecting cable, electrical short or open circuits in the sensor channel, and any electrical trouble in the detector-connector-cable system. The experiments were carried out by simulating several electrical changes in a nuclear sensor-cable system from a linear D.C. channel which measures reactor power during nuclear reactor operation, at the Training Reactor Electronic Laboratory. The results and conclusions obtained at the laboratory were satisfactorily confirmed on the electronic instrumentation of the Budapest Technical University Training Reactor, Hungary.

  13. Regression models of reactor diagnostic signals

    International Nuclear Information System (INIS)

    Vavrin, J.

    1989-01-01

    The application of an autoregression model, as the simplest regression model of diagnostic signals, is described for the experimental analysis of diagnostic systems and for in-service monitoring of normal and anomalous conditions and their diagnostics. The method of diagnostics using a regression-type diagnostic data base and regression spectral diagnostics is described, as is the diagnostics of neutron noise signals from anomalous modes in the experimental fuel assembly of a reactor. (author)

  14. Logistic regression applied to natural hazards: rare event logistic regression with replications

    Directory of Open Access Journals (Sweden)

    M. Guns

    2012-06-01

    Full Text Available Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulations into rare event logistic regression. This technique, so-called rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods, and allows overcoming some of the limitations of previous developments through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.

  15. Predicting Charging Time of Battery Electric Vehicles Based on Regression and Time-Series Methods: A Case Study of Beijing

    Directory of Open Access Journals (Sweden)

    Jun Bi

    2018-04-01

    Full Text Available Battery electric vehicles (BEVs) reduce energy consumption and air pollution as compared with conventional vehicles. However, the limited driving range and potentially long charging time of BEVs create new problems. Accurate charging time prediction for BEVs helps drivers determine travel plans and alleviates their range anxiety during trips. This study proposed a combined model for charging time prediction based on regression and time-series methods, using actual data from BEVs operating in Beijing, China. After data analysis, a regression model for charging time prediction was established based on the charged amount. Furthermore, a time-series method was adopted to calibrate the regression model, which significantly improved the fitting accuracy of the model. The parameters of the model were determined using the actual data. Verification results confirmed the accuracy of the model and showed that the model errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.

  16. Methods and apparatus for cooling electronics

    Science.gov (United States)

    Hall, Shawn Anthony; Kopcsay, Gerard Vincent

    2014-12-02

    Methods and apparatus are provided for choosing an energy-efficient coolant temperature for electronics by considering the temperature dependence of the electronics' power dissipation. This dependence is explicitly considered in selecting the coolant temperature T_0 that is sent to the equipment. To minimize power consumption P_Total for the entire system, where P_Total = P_0 + P_Cool is the sum of the electronic equipment's power consumption P_0 plus the cooling equipment's power consumption P_Cool, P_Total is obtained experimentally, by measuring P_0 and P_Cool, as a function of three parameters: coolant temperature T_0; weather-related temperature T_3 that affects the performance of free-cooling equipment; and computational state C of the electronic equipment, which affects the temperature dependence of its power consumption. This experiment provides, for each possible combination of T_3 and C, the value T_0* of T_0 that minimizes P_Total. During operation, for any combination of T_3 and C that occurs, the corresponding optimal coolant temperature T_0* is selected, and the cooling equipment is commanded to produce it.
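
    A toy sketch of the selection step the patent describes: given an experimentally measured map of total power versus candidate coolant temperature for each weather/workload combination (all numbers invented), the controller simply picks the temperature minimizing total power.

```python
# measured[(T3 weather bin, computational state C)] = {T0 candidate: P_Total in W}
measured = {
    ("cool", "idle"): {15: 900, 20: 870, 25: 880},
    ("cool", "busy"): {15: 1500, 20: 1480, 25: 1510},
    ("hot", "busy"):  {15: 1700, 20: 1650, 25: 1660},
}

# for each (T3, C) pick the coolant temperature T0* that minimizes P_Total
optimal_t0 = {key: min(powers, key=powers.get) for key, powers in measured.items()}
print(optimal_t0)   # during operation, command the cooling plant to produce T0*
```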

  17. Impact of regression methods on improved effects of soil structure on soil water retention estimates

    Science.gov (United States)

    Nguyen, Phuong Minh; De Pue, Jan; Le, Khoa Van; Cornelis, Wim

    2015-06-01

    Increasing the accuracy of pedotransfer functions (PTFs), an indirect method for predicting non-readily available soil features such as soil water retention characteristics (SWRC), is of crucial importance for large-scale agro-hydrological modeling. Adding significant predictors (i.e., soil structure) and implementing more flexible regression algorithms are among the main strategies for PTF improvement. The aim of this study was to investigate whether the improved effect of categorical soil structure information on estimating soil-water content at various matric potentials, which has been reported in the literature, could be consistently captured by regression techniques other than the usually applied linear regression. Two data mining techniques, i.e., Support Vector Machines (SVM) and k-Nearest Neighbors (kNN), which have recently been introduced as promising tools for PTF development, were utilized to test whether the incorporation of soil structure improves PTF accuracy in a context of rather limited training data. The results show that incorporating descriptive soil structure information, i.e., massive, structured and structureless, as a grouping criterion can improve the accuracy of PTFs derived by the SVM approach in the matric potential range of -6 to -33 kPa (average RMSE decreased by up to 0.005 m3 m-3 after grouping, depending on matric potential). The improvement was primarily attributed to the outperformance of SVM-PTFs calibrated on structureless soils. No improvement was obtained with the kNN technique, at least not in our study, in which the data set became limited in size after grouping. Since the regression technique has an impact on the improved effect of incorporating qualitative soil structure information, selecting a proper technique will help to maximize the combined influence of flexible regression algorithms and soil structure information on PTF accuracy.

  18. Mixed ionic-electronic conductor-based radiation detectors and methods of fabrication

    Science.gov (United States)

    Conway, Adam; Beck, Patrick R; Graff, Robert T; Nelson, Art; Nikolic, Rebecca J; Payne, Stephen A; Voss, Lars; Kim, Hadong

    2015-04-07

    A method of fabricating a mixed ionic-electronic conductor (MIEC; e.g., TlBr)-based radiation detector having halide-treated surfaces, and associated methods of fabrication, which control polarization of the MIEC material to improve stability and operational lifetime.

  19. Methods for recovering metals from electronic waste, and related systems

    Science.gov (United States)

    Lister, Tedd E; Parkman, Jacob A; Diaz Aldana, Luis A; Clark, Gemma; Dufek, Eric J; Keller, Philip

    2017-10-03

    A method of recovering metals from electronic waste comprises providing a powder comprising electronic waste in at least a first reactor and a second reactor and providing an electrolyte comprising at least ferric ions in an electrochemical cell in fluid communication with the first reactor and the second reactor. The method further includes contacting the powders within the first reactor and the second reactor with the electrolyte to dissolve at least one base metal from each reactor into the electrolyte and reduce at least some of the ferric ions to ferrous ions. The ferrous ions are oxidized at an anode of the electrochemical cell to regenerate the ferric ions. The powder within the second reactor comprises a higher weight percent of the at least one base metal than the powder in the first reactor. Additional methods of recovering metals from electronic waste are also described, as well as an apparatus of recovering metals from electronic waste.

  20. Logistic regression models

    CERN Document Server

    Hilbe, Joseph M

    2009-01-01

    This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect-great clarity.The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...

  1. Ab initio methods for electron-molecule collisions

    International Nuclear Information System (INIS)

    Collins, L.A.; Schneider, B.I.

    1987-01-01

    This review concentrates on recent advances in treating the electronic aspect of the electron-molecule interaction and leaves to other articles the description of the rotational and vibrational motions. The focus is on those methods that give the most complete treatment of the direct, exchange, and correlation effects. Such full treatments are generally necessary at energies below a few Rydbergs (≅ 60 eV). This choice unfortunately necessitates the omission of those active and vital areas devoted to the development of model potentials and approximate scattering formulations. The ab initio and model approaches complement each other and are both extremely important to the full explication of the electron-scattering process. Due to the rapid developments of recent years, the approaches that provide the fullest treatment are concentrated on. 81 refs

  2. Retro-regression--another important multivariate regression improvement.

    Science.gov (United States)

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.
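
    A small numerical illustration of the "nightmare of the first kind": with two strongly correlated descriptors, adding the second descriptor to the regression changes the coefficient of the first one unpredictably (synthetic data).

```python
import numpy as np

rng = np.random.default_rng(7)
x1 = rng.normal(size=100)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=100)    # descriptor correlated with x1
y = x1 + x2 + rng.normal(0, 0.1, 100)

b_alone = np.polyfit(x1, y, 1)[0]             # slope when x1 is used alone
X = np.column_stack([np.ones(100), x1, x2])
b_both = np.linalg.lstsq(X, y, rcond=None)[0] # intercept, x1, x2 coefficients

print(f"x1 coefficient alone: {b_alone:.2f}, with x2 included: {b_both[1]:.2f}")
```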

  3. Modified Regression Correlation Coefficient for Poisson Regression Model

    Science.gov (United States)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study focuses on indicators of the predictive power of the generalized linear model (GLM), which are widely used but often subject to some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables with multicollinearity among them. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient based on bias and the root mean square error (RMSE).
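
    A sketch of the underlying quantity (the paper's modification is not reproduced): the correlation between the response Y and the fitted means E(Y|X) from a Poisson regression, on synthetic data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
X = sm.add_constant(rng.normal(size=(200, 2)))
y = rng.poisson(np.exp(X @ np.array([0.2, 0.6, -0.3])))   # Poisson response

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
r = np.corrcoef(y, fit.fittedvalues)[0, 1]                # corr(Y, E(Y|X))
print(f"regression correlation coefficient: {r:.3f}")
```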

  4. Whole-genome regression and prediction methods applied to plant and animal breeding

    NARCIS (Netherlands)

    Los Campos, De G.; Hickey, J.M.; Pong-Wong, R.; Daetwyler, H.D.; Calus, M.P.L.

    2013-01-01

    Genomic-enabled prediction is becoming increasingly important in animal and plant breeding, and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers.

  5. Determination of the Electronic Charge--Electrolysis of Water Method.

    Science.gov (United States)

    Venkatachar, Arun C.

    1985-01-01

    Presents an alternative method for measuring the electronic charge using data from the electrolysis of acidified distilled water. The process (carried out in a commercially available electrolytic cell) has the advantage of short completion time so that students can determine electron charge and mass in one laboratory period. (DH)

  6. Semisupervised Clustering by Iterative Partition and Regression with Neuroscience Applications

    Directory of Open Access Journals (Sweden)

    Guoqi Qian

    2016-01-01

    Full Text Available Regression clustering is a mixture of unsupervised and supervised statistical learning and data mining methods, found in a wide range of applications including artificial intelligence and neuroscience. It performs unsupervised learning when it clusters the data according to their respective unobserved regression hyperplanes. The method also performs supervised learning when it fits regression hyperplanes to the corresponding data clusters. Applying regression clustering in practice requires means of determining the underlying number of clusters in the data, finding the cluster label of each data point, and estimating the regression coefficients of the model. In this paper, we review the estimation and selection issues in regression clustering with regard to least squares and robust statistical methods. We also provide a model selection based technique to determine the number of regression clusters underlying the data. We further develop a computing procedure for regression clustering estimation and selection. Finally, simulation studies are presented for assessing the procedure, together with an analysis of a real data set on RGB cell marking in neuroscience to illustrate and interpret the method.

  7. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.

  8. Introduction to the use of regression models in epidemiology.

    Science.gov (United States)

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.

  9. Statistics of electron multiplication in multiplier phototube: iterative method

    International Nuclear Information System (INIS)

    Grau Malonda, A.; Ortiz Sanchez, J.F.

    1985-01-01

    An iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following ways of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of stages between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (author)
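
    A Monte Carlo sketch of the cascade analyzed iteratively in the paper (simulation standing in for the iterative calculation): each electron striking a dynode releases a Poisson-distributed number of secondaries with mean g, repeated over k stages; the output moments can then be inspected.

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(9)

def cascade(n_start, gain, stages, n_trials=100_000):
    counts = np.full(n_trials, n_start)
    for _ in range(stages):                  # each stage multiplies Poisson-wise
        counts = rng.poisson(gain * counts)
    return counts

out = cascade(n_start=1, gain=2.5, stages=5)
print("mean:", out.mean(), " variance:", out.var())
print("skewness:", skew(out), " excess kurtosis:", kurtosis(out))
```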

  10. A comparative study on generating simulated Landsat NDVI images using data fusion and regression method-the case of the Korean Peninsula.

    Science.gov (United States)

    Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee

    2017-07-01

    Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor often degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and the multilinear regression analysis method were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date; careful regional and seasonal consideration is required in selecting input images. During the summer season, due to clouds, it is very difficult to get images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R2 values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
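
    A tiny sketch of the weighted average method that performed best, under the common assumption that the two nearest cloud-free images are weighted by inverse temporal distance to the target date (arrays and dates invented).

```python
import numpy as np

ndvi_before = np.array([[0.30, 0.42]])   # image acquired before the target date
ndvi_after = np.array([[0.38, 0.50]])    # image acquired after the target date
days_before, days_after = 16, 8          # temporal distances to the target date

w_before = (1 / days_before) / (1 / days_before + 1 / days_after)
ndvi_simulated = w_before * ndvi_before + (1 - w_before) * ndvi_after
print(ndvi_simulated)                    # the closer image receives more weight
```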

  11. Forecasting exchange rates: a robust regression approach

    OpenAIRE

    Preminger, Arie; Franck, Raphael

    2005-01-01

    The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. A robust linear autoregressive (RAR) and a robust neural network (RNN) model are estimated to study the predictability...

  12. Scanning probe methods applied to molecular electronics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlicek, Niko

    2013-08-01

    Scanning probe methods on insulating films offer a rich toolbox to study the electronic, structural and spin properties of individual molecules. This work discusses three issues in the field of molecular and organic electronics. An STM head to be operated in high magnetic fields has been designed and built. The STM head is very compact and rigid, relying on a robust coarse approach mechanism. This will facilitate investigations of the spin properties of individual molecules in the future. Combined STM/AFM studies revealed a reversible molecular switch based on two stable configurations of DBTH molecules on ultrathin NaCl films. AFM experiments visualize the molecular structure in both states. Our experiments allowed us to unambiguously determine the pathway of the switch. Finally, tunneling into and out of the frontier molecular orbitals of pentacene molecules has been investigated on different insulating films. These experiments show that the local symmetries of the initial and final electron wave functions are decisive for the ratio between elastic and vibration-assisted tunneling. The results can be generalized to electron transport in organic materials.

  13. Variable and subset selection in PLS regression

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2001-01-01

    The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than

  14. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2012-01-01

    Full Text Available We introduce an extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, we do not need to know the form of the heteroscedastic function and can therefore improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we focus on the comparison of parameters and reach an optimal fit. We also verify the asymptotic normality of the parameter estimates through numerical simulations. Finally, this approach is applied to an economics case, indicating that our method is effective in finite-sample situations.
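
    A simplified sketch of the two-stage idea, with lowess standing in for the paper's local polynomial fit: squared OLS residuals are smoothed to estimate the variance function, which then supplies the weights for a weighted (generalized) least squares refit.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
x = np.sort(rng.uniform(0, 10, 300))
y = 1.0 + 2.0 * x + rng.normal(0, 0.2 + 0.3 * x)     # heteroscedastic errors

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                             # stage 1: preliminary fit
var_hat = sm.nonparametric.lowess(ols.resid ** 2, x, return_sorted=False)
var_hat = np.clip(var_hat, 1e-6, None)               # keep the weights positive

wls = sm.WLS(y, X, weights=1.0 / var_hat).fit()      # stage 2: weighted refit
print("OLS coefficients:", np.round(ols.params, 2))
print("WLS coefficients:", np.round(wls.params, 2))
```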

  15. Estimating cavity tree and snag abundance using negative binomial regression models and nearest neighbor imputation methods

    Science.gov (United States)

    Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett

    2009-01-01

    Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
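
    A sketch of the plain negative binomial piece of the toolkit above (zero-inflated and zero-altered variants omitted), fit to overdispersed synthetic "snag count" data with many zeros.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
X = sm.add_constant(rng.normal(size=(300, 2)))   # stand-ins for cover-map predictors
mu = np.exp(X @ np.array([-0.5, 0.8, 0.3]))
size = 1.5                                       # NB dispersion parameter
y = rng.negative_binomial(size, size / (size + mu))  # zero-heavy counts with mean mu

nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1 / size)).fit()
print("coefficients:", np.round(nb.params, 2))
```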

  16. The effect of electron range on electron beam induced current collection and a simple method to extract an electron range for any generation function

    International Nuclear Information System (INIS)

    Lahreche, A.; Beggah, Y.; Corkish, R.

    2011-01-01

    The effect of electron range on electron beam induced current (EBIC) is demonstrated, and the problem of choosing the optimal electron ranges to use with simple uniform and point generation-function models is resolved by proposing a method to extract an electron range-energy relationship (ERER). The results show that the use of these extracted electron ranges removes the previous disagreement between EBIC curves computed with simple forms of the generation model and those based on a more realistic generation model. The impact of these extracted electron ranges on the extraction of diffusion length, surface recombination velocity and EBIC contrast of defects is discussed. It is also demonstrated that, for the case of uniform generation, the computed EBIC current is independent of the assumed shape of the generation volume. -- Highlights: → The effect of electron ranges on modeling electron beam induced current is shown. → A method to extract an electron range for simple forms of generation is proposed. → For uniform generation the EBIC current is independent of the choice of its shape. → Use of the extracted electron ranges removes some existing ambiguity in the literature.

  17. Nondestructive testing method for a new generation of electronics

    Directory of Open Access Journals (Sweden)

    Azin Anton

    2018-01-01

    The implementation of the Smart City system needs reliable and smoothly operating electronic equipment. This study is aimed at developing a nondestructive testing method for electronic equipment and its components. The method can be used to identify critical design defects of printed circuit boards (PCBs) and to predict their service life, taking into account the nature of probable operating loads. The study uses an acoustic emission method to identify and localize critical design defects of printed circuit boards; the geometric dimensions of detected critical defects can then be determined by X-ray tomography. Based on the results of the study, a method combining acoustic emission and X-ray tomography was developed for nondestructive testing of printed circuit boards. The stress-strain state of solder joints containing detected defects was analyzed. This paper gives an example of using the developed method to estimate the degree of damage to joints between PCB components and to predict the service life of the entire PCB.

  18. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  19. Robust mislabel logistic regression without modeling mislabel probabilities.

    Science.gov (United States)

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses, and fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into consideration. Another common method is to adopt a robust M-estimation by down-weighting suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) it does not need to model the mislabel probabilities, and (2) the minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias-correction term; that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.

  20. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia; Rue, Haavard

    2018-01-01

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite...

  1. Comparison of the exact, Efron and Breslow parameter approach methods on the hazard ratio and stratified Cox regression model

    Science.gov (United States)

    Fatekurohman, Mohamat; Nurmala, Nita; Anggraeni, Dian

    2018-04-01

    The lungs are among the most important organs of the respiratory system. Problems related to lung disorders are various, e.g. pneumonia, emphysema, tuberculosis and lung cancer; of these, lung cancer is the most harmful. Accordingly, this research applies survival analysis to the factors affecting the endurance of lung cancer patients, comparing the exact, Efron and Breslow parameter approach methods on the hazard ratio and the stratified Cox regression model. The data are based on the medical records of lung cancer patients at the Jember Paru-paru (lung) hospital in 2016, East Java, Indonesia. The factors affecting the endurance of the lung cancer patients can be classified into several criteria, i.e. sex, age, hemoglobin, leukocytes, erythrocytes, blood sedimentation rate, therapy status, general condition and body weight. The result shows that the exact method for the stratified Cox regression model is better than the others. Moreover, the endurance of the patients is affected by their age and general condition.
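
    For a hedged illustration of how tie-handling choices are compared in practice, the sketch below fits a stratified Cox model with the Breslow and Efron approximations using statsmodels' PHReg on synthetic survival data; PHReg does not implement the exact method, so only two of the three approaches from the abstract appear.

```python
# Stratified Cox regression with two tie-handling methods (illustrative data).
import numpy as np
from statsmodels.duration.hazard_regression import PHReg

rng = np.random.default_rng(2)
n = 300
age = rng.normal(60, 10, n)
sex = rng.integers(0, 2, n)                        # used as the stratum variable
# Rounding the times creates ties, which is what the tie methods handle.
time = (rng.exponential(scale=np.exp(-0.02 * (age - 60)), size=n) + 0.1).round(1)
event = (rng.uniform(size=n) < 0.8).astype(int)    # ~20% censoring
X = age.reshape(-1, 1)                             # PHReg expects no intercept

for ties in ("breslow", "efron"):
    fit = PHReg(time, X, status=event, strata=sex, ties=ties).fit()
    print(ties, "log hazard ratio per year of age:", fit.params.round(4))
```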

  2. A consensus successive projections algorithm--multiple linear regression method for analyzing near infrared spectra.

    Science.gov (United States)

    Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin

    2015-02-09

    The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not extract all the useful information from the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm; the SPA-MLR method therefore risks the loss of useful information. To make full use of the information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein, combining a consensus strategy with the SPA-MLR method. In C-SPA-MLR, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively, and a consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The C-SPA-MLR method showed better prediction performance than the SPA-MLR and full-spectrum PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and data from other spectroscopic techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
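
    A minimal sketch of the consensus step only (random variable subsets stand in for the SPA-selected subsets, which the paper derives iteratively from the remaining variables):

```python
# Consensus of MLR member models built on different variable subsets.
import numpy as np
from sklearn.linear_model import LinearRegression

def consensus_mlr_predict(X_train, y_train, X_test, n_members=10,
                          subset_size=5, seed=0):
    """Average the predictions of MLR member models, one per variable subset."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_members):
        cols = rng.choice(X_train.shape[1], size=subset_size, replace=False)
        model = LinearRegression().fit(X_train[:, cols], y_train)
        preds.append(model.predict(X_test[:, cols]))
    return np.mean(preds, axis=0)      # consensus = simple average of members

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 40))
y = X[:, 5] - 2 * X[:, 12] + rng.normal(size=100)
yhat = consensus_mlr_predict(X[:80], y[:80], X[80:])
print("test RMSE:", round(float(np.sqrt(((yhat - y[80:]) ** 2).mean())), 3))
```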

  3. Adaptive multiresolution method for MAP reconstruction in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)

    2016-11-15

    3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. Maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without additional knowledge about the imaging environment and the sample; the other is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than weighted back projection (WBP), the simultaneous iterative reconstruction technique (SIRT), and the sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.

  4. Data-driven method based on particle swarm optimization and k-nearest neighbor regression for estimating capacity of lithium-ion battery

    International Nuclear Information System (INIS)

    Hu, Chao; Jain, Gaurav; Zhang, Puqiang; Schmidt, Craig; Gomadam, Parthasarathy; Gorka, Tom

    2014-01-01

    Highlights: • We develop a data-driven method for the battery capacity estimation. • Five charge-related features that are indicative of the capacity are defined. • The kNN regression model captures the dependency of the capacity on the features. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure Li-ion batteries in these devices operate reliably, it is important to be able to assess the battery health condition by estimating the battery capacity over the life-time. This paper presents a data-driven method for estimating the capacity of Li-ion battery based on the charge voltage and current curves. The contributions of this paper are three-fold: (i) the definition of five characteristic features of the charge curves that are indicative of the capacity, (ii) the development of a non-linear kernel regression model, based on the k-nearest neighbor (kNN) regression, that captures the complex dependency of the capacity on the five features, and (iii) the adaptation of particle swarm optimization (PSO) to finding the optimal combination of feature weights for creating a kNN regression model that minimizes the cross validation (CV) error in the capacity estimation. Verification with 10 years’ continuous cycling data suggests that the proposed method is able to accurately estimate the capacity of Li-ion battery throughout the whole life-time
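
    As a hedged sketch of the PSO-kNN idea (a bare-bones PSO with illustrative hyperparameters; the paper's five charge-curve features are replaced by generic columns), feature weights rescale the inputs before kNN so they act on the distance metric, and PSO searches for the weight vector minimizing cross-validation error:

```python
# PSO-optimized feature weights for kNN regression (illustrative sketch).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

def cv_error(weights, X, y):
    scores = cross_val_score(KNeighborsRegressor(n_neighbors=5),
                             X * weights, y, cv=5,
                             scoring="neg_mean_squared_error")
    return -scores.mean()

def pso_weights(X, y, n_particles=15, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.uniform(0, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([cv_error(p, X, y) for p in pos])
    gbest = pbest[pbest_err.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        # inertia + cognitive + social terms (standard PSO update)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        err = np.array([cv_error(p, X, y) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        gbest = pbest[pbest_err.argmin()].copy()
    return gbest

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))                 # stand-ins for charge-curve features
y = 2 * X[:, 0] + rng.normal(size=80)        # mock capacity values
print("feature weights:", pso_weights(X, y, n_particles=8, n_iter=10).round(2))
```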

  5. A comparative study of different methods for calculating electronic transition rates

    Science.gov (United States)

    Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan

    2018-03-01

    We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.

  6. Evaluating the Performance of Polynomial Regression Method with Different Parameters during Color Characterization

    Directory of Open Access Journals (Sweden)

    Bangyong Sun

    2014-01-01

    The polynomial regression method is employed to calculate the relationship between device color space and CIE color space for color characterization, and the performance of different expressions with specific parameters is evaluated. First, the polynomial equation for color conversion is established and the computation of polynomial coefficients is analysed. Different forms of polynomial equations are then used to calculate the CIE color values of RGB and CMYK, and the corresponding color errors are compared. Finally, an optimal polynomial expression is obtained by analysing several related parameters during color conversion, including the number of polynomial terms, the degree of the polynomial terms, the selection of CIE visual spaces, and the linearization.
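
    A minimal sketch of the core fit on synthetic data (the 11-term polynomial and the assumed linear "device" are illustrative; a real characterization would use measured RGB/CIE pairs and compare several term sets):

```python
# Polynomial regression from device RGB to CIE XYZ (illustrative sketch).
import numpy as np

def poly_terms(rgb):
    r, g, b = rgb.T
    one = np.ones_like(r)
    # An 11-term polynomial; the term set is one of the evaluated parameters.
    return np.column_stack([one, r, g, b, r*g, r*b, g*b,
                            r**2, g**2, b**2, r*g*b])

rng = np.random.default_rng(3)
rgb = rng.uniform(size=(100, 3))
xyz = rgb @ np.array([[0.41, 0.21, 0.02],
                      [0.36, 0.72, 0.12],
                      [0.18, 0.07, 0.95]])        # assumed "true" device model

A = poly_terms(rgb)
coef, *_ = np.linalg.lstsq(A, xyz, rcond=None)    # 11x3 coefficient matrix
err = np.abs(A @ coef - xyz).mean()
print("mean absolute conversion error:", round(float(err), 6))
```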

  7. Innovative electron transport methods in EGS5

    International Nuclear Information System (INIS)

    Bielajew, A.F.; Wilderman, S.J.

    2000-01-01

    The initial formulation of a Monte Carlo scheme for the transport of high-energy (≳100 keV) electrons was established by Berger in 1963. Calling his method the 'condensed history theory', Berger combined the theoretical results of the previous generation of research into developing approximate solutions of the Boltzmann transport equation with numerical algorithms for exploiting the power of computers to permit iterative, piece-wise solution of the transport equation in a computationally intensive but much less approximate fashion. The methods devised by Berger, with comparatively little modification, provide the foundation of all present-day Monte Carlo electron transport simulation algorithms. Only in the last 15 years, beginning with the development and publication of the PRESTA algorithm, has there been a significant revisitation of the problem of simulating electron transport within the condensed history framework. Research in this area is ongoing, highly active, and far from complete. It presents an enormous challenge, demanding derivation of new analytical transport solutions based on underlying fundamental interaction mechanisms, intuitive insight in the development of computer algorithms, and state-of-the-art computer science skills in order to permit deployment of these techniques in an efficient manner. The EGS5 project, a modern ground-up rewrite of the EGS4 code, is now in the design phase. EGS5 will take modern photon and electron transport algorithms and deploy them in an easy-to-maintain, modern computer language, ANSI-standard C++. Moreover, the well-known difficulties of applying EGS4 to practical geometries (geometry code development, tally routine design) should be made easier and more intuitive through the use of a visual user interface being designed by Quantum Research, Inc., work that is presented elsewhere in this conference. This report commences with a historical review of electron transport models, culminating with the proposal of a...

  8. New physical methods used in the study of composition, electronic properties and surface phenomena of solid substances. I. Electronic spectroscopies

    International Nuclear Information System (INIS)

    Toderean, A; Ilonca, Gh.

    1981-01-01

    The discovery of different kinds of interactions between solids and photonic, electronic and ionic beams has led to the development of many new, very sensitive physical methods for the study of solids. This monograph tries to present some of these methods, useful in compositional analysis, in the study of electronic properties and of the surface processes of solid substances. This is done from the point of view both of the physical phenomena underlying them and of the information obtainable with such methods. The monograph is limited to methods based on the electronic properties of the elements present in the solid samples studied, and this paper presents only those in which the detected beam is an electronic one: ELS, DAPS, ILS, AES, AEAPS, INS, TSS, XPS and UPS. (authors)

  9. A method for the direct measurement of electronic site populations in a molecular aggregate using two-dimensional electronic-vibrational spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Nicholas H. C.; Dong, Hui; Oliver, Thomas A. A.; Fleming, Graham R., E-mail: grfleming@lbl.gov [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Physical Biosciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Kavli Energy Nanosciences Institute at Berkeley, Berkeley, California 94720 (United States)

    2015-09-28

    Two dimensional electronic spectroscopy has proved to be a valuable experimental technique to reveal electronic excitation dynamics in photosynthetic pigment-protein complexes, nanoscale semiconductors, organic photovoltaic materials, and many other types of systems. It does not, however, provide direct information concerning the spatial structure and dynamics of excitons. 2D infrared spectroscopy has become a widely used tool for studying structural dynamics but is incapable of directly providing information concerning electronic excited states. 2D electronic-vibrational (2DEV) spectroscopy provides a link between these domains, directly connecting the electronic excitation with the vibrational structure of the system under study. In this work, we derive response functions for the 2DEV spectrum of a molecular dimer and propose a method by which 2DEV spectra could be used to directly measure the electronic site populations as a function of time following the initial electronic excitation. We present results from the response function simulations which show that our proposed approach is substantially valid. This method provides, to our knowledge, the first direct experimental method for measuring the electronic excited state dynamics in the spatial domain, on the molecular scale.

  10. Dual Regression

    OpenAIRE

    Spady, Richard; Stouli, Sami

    2012-01-01

    We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution functions...

  11. Several cases of electronics and the measuring methods

    International Nuclear Information System (INIS)

    Supardiyono, Bb.; Kamadi, J.; Suparmono, M.; Indarto.

    1980-01-01

    Several cases of electronics and the measuring methods, covering electric conductivity and electric potential of analog systems, electric current, electric conductivity and electric potential of semiconductor diodes, and characteristics of transistors are described. (SMN)

  12. Influence diagnostics in meta-regression model.

    Science.gov (United States)

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case-deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficient and heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analysis based on the case-weights perturbation scheme, response perturbation scheme, covariate perturbation scheme, and within-variance perturbation scheme is explored. We introduce a method of simultaneously perturbing responses, covariates, and within-variance to obtain a local influence measure, which has the advantage of being able to compare the influence magnitudes of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Resting-state functional magnetic resonance imaging: the impact of regression analysis.

    Science.gov (United States)

    Yeh, Chia-Jung; Tseng, Yu-Sheng; Lin, Yi-Ru; Tsai, Shang-Yueh; Huang, Teng-Yi

    2015-01-01

    To investigate the impact of regression methods on resting-state functional magnetic resonance imaging (rsfMRI). During rsfMRI preprocessing, regression analysis is considered effective for reducing the interference of physiological noise on the signal time course. However, it is unclear whether the regression method benefits rsfMRI analysis. Twenty volunteers (10 men and 10 women; aged 23.4 ± 1.5 years) participated in the experiments. We used node analysis and functional connectivity mapping to assess the brain default mode network by using five combinations of regression methods. The results show that regressing the global mean plays a major role in the preprocessing steps. When a global regression method is applied, the values of functional connectivity are significantly lower (P ≤ .01) than those calculated without a global regression. This step increases inter-subject variation and produces anticorrelated brain areas. rsfMRI data processed using regression should be interpreted carefully. The significance of the anticorrelated brain areas produced by global signal removal is unclear. Copyright © 2014 by the American Society of Neuroimaging.

  14. Complex regression Doppler optical coherence tomography

    Science.gov (United States)

    Elahi, Sahar; Gu, Shi; Thrane, Lars; Rollins, Andrew M.; Jenkins, Michael W.

    2018-04-01

    We introduce a new method to measure Doppler shifts more accurately and extend the dynamic range of Doppler optical coherence tomography (OCT). The two-point estimate of the conventional Doppler method is replaced with a regression that is applied to high-density B-scans in polar coordinates. We built a high-speed OCT system using a 1.68-MHz Fourier domain mode locked laser to acquire high-density B-scans (16,000 A-lines) at high enough frame rates (~100 fps) to accurately capture the dynamics of the beating embryonic heart. Flow phantom experiments confirm that the complex regression lowers the minimum detectable velocity from 12.25 mm/s to 374 μm/s, whereas the maximum velocity of 400 mm/s is measured without phase wrapping. Complex regression Doppler OCT also demonstrates higher accuracy and precision compared with the conventional method, particularly when the signal-to-noise ratio is low. The extended dynamic range allows monitoring of blood flow over several stages of development in embryos without adjusting the imaging parameters. In addition, applying complex averaging recovers hidden features in structural images.

  15. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels...

  16. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject. • Expanded coverage of diagnostics and methods of model fitting. • Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models. • More than 200 problems throughout the book plus outline solutions for the exercises. • This revision has been extensively class-tested.

  17. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
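
    A minimal two-phase sketch of the idea on synthetic data (scipy's curve_fit stands in for whatever nonlinear least squares routine one prefers): the broken-stick term |x − c| is replaced by the hyperbola sqrt((x − c)² + g²), so the fitted g estimates the degree of curvature at the join point c, and the asymptotic segment slopes are b1 ± b2.

```python
# Smooth two-phase fit with a hyperbolic covariate (illustrative sketch).
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic_spline(x, b0, b1, b2, c, g):
    # As g -> 0 this reduces to a piecewise linear model with a break at c.
    return b0 + b1 * x + b2 * np.sqrt((x - c) ** 2 + g ** 2)

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 120)
y = hyperbolic_spline(x, 1.0, -0.25, 0.75, 4.0, 0.5) \
    + rng.normal(0, 0.1, x.size)

popt, _ = curve_fit(hyperbolic_spline, x, y, p0=[1, 0, 1, 5, 1])
print("estimated join point c and curvature g:", popt[3:].round(2))
```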

  18. A novel method to quantify the activity of alcohol acetyltransferase using a SnO2-based sensor of an electronic nose.

    Science.gov (United States)

    Hu, Zhongqiu; Li, Xiaojing; Wang, Huxuan; Niu, Chen; Yuan, Yahong; Yue, Tianli

    2016-07-15

    Alcohol acetyltransferase (AATFase) extensively catalyzes the reactions of alcohols to acetic esters in microorganisms and plants. In this work, a novel method is proposed to quantify the activity of AATFase using a SnO2-based sensor of an electronic nose, exploiting the sensor's higher sensitivity to the reducing alcohol than to the oxidizing ester. The maximum value of the first derivative of the signal from the SnO2-based sensor was found to be a characteristic value (eigenvalue) of the isoamyl alcohol concentration, and a quadratic polynomial regression closely fitted the correlation between this eigenvalue and the isoamyl alcohol concentration. The method determines the AATFase activity in this type of reaction by calculating the conversion rate of isoamyl alcohol, and has been successfully applied to determine the AATFase activity of a cider yeast strain. Compared with GC-MS, the method shows promise, with good recovery and low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Statistics of electron multiplication in a multiplier phototube; Iterative method

    International Nuclear Information System (INIS)

    Ortiz, J. F.; Grau, A.

    1985-01-01

    In the present paper an iterative method is applied to study the variation of the dynode response in a multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r electrons. The responses are given for a number of stages between 1 and 5, and for multiplication factors of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs.
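
    A hedged numerical sketch of such an iteration (assuming Poisson multiplication at each dynode, which is one common model and not necessarily the paper's): the single-electron response distribution is propagated stage by stage, and the moments the abstract mentions are read off at the end.

```python
# Iterative propagation of the dynode cascade distribution (illustrative).
import numpy as np
from scipy.stats import poisson

def cascade_distribution(m, stages, cutoff=600):
    """Electron-count distribution after `stages` dynodes of mean gain m,
    starting from exactly one electron on the first dynode."""
    k = np.arange(cutoff)
    # transition[i, j] = P(next count = i | current count = j) ~ Poisson(j*m)
    transition = poisson.pmf(k[:, None], mu=np.maximum(k[None, :] * m, 1e-12))
    p = np.zeros(cutoff)
    p[1] = 1.0
    for _ in range(stages):
        p = transition @ p
    return p

p = cascade_distribution(m=2.5, stages=4)
k = np.arange(p.size)
mean = (k * p).sum()
var = ((k - mean) ** 2 * p).sum()
skew = ((k - mean) ** 3 * p).sum() / var ** 1.5
exkurt = ((k - mean) ** 4 * p).sum() / var ** 2 - 3
print("mean %.2f, variance %.2f, skewness %.3f, excess kurtosis %.3f"
      % (mean, var, skew, exkurt))
```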

  20. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) a much smaller required sample size.

  1. Electron paramagnetic resonance: A new method of quaternary dating

    International Nuclear Information System (INIS)

    Poupeau, G.; Rossi, A.; Teles, M.M.; Danon, J.

    1984-01-01

    Significant progress has occurred in recent years in Quaternary geochronology. One such advance is the emergence of a new dating approach, the electron spin resonance (ESR) method. The aim of this paper is to briefly review the method and discuss some aspects of the work at CBPF. (Author) [pt

  3. Sparse Regression by Projection and Sparse Discriminant Analysis

    KAUST Repository

    Qi, Xin

    2015-04-03

    © 2015, © American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America. Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high-dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross-validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares, and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared with the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplementary materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided.

  4. Logistic regression for dichotomized counts.

    Science.gov (United States)

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  5. Multitask Quantile Regression under the Transnormal Model.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2016-01-01

    We consider estimating multi-task quantile regression under the transnormal model, with a focus on the high-dimensional setting. We derive a surprisingly simple closed-form solution through rank-based covariance regularization. In particular, we propose rank-based ℓ1 penalization with positive definite constraints for estimating sparse covariance matrices, and rank-based banded Cholesky decomposition regularization for estimating banded precision matrices. By taking advantage of the alternating direction method of multipliers, a nearest correlation matrix projection is introduced that inherits the sampling properties of the unprojected estimator. Our work combines the strengths of quantile regression and rank-based covariance regularization to simultaneously deal with nonlinearity and nonnormality in high-dimensional regression. Furthermore, the proposed method strikes a good balance between robustness and efficiency, achieves the "oracle"-like convergence rate, and provides a provable prediction interval in the high-dimensional setting. The finite-sample performance of the proposed rank-based method is also examined and demonstrated in a real application analyzing protein mass spectroscopy data.

  6. Electronic structure prediction via data-mining the empirical pseudopotential method

    Energy Technology Data Exchange (ETDEWEB)

    Zenasni, H; Aourag, H [LEPM, URMER, Departement of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Broderick, S R; Rajan, K [Department of Materials Science and Engineering, Iowa State University, Ames, Iowa 50011-2230 (United States)

    2010-01-15

    We introduce a new approach for accelerating the calculation of the electronic structure of new materials by utilizing the empirical pseudopotential method combined with data mining tools. Combining data mining with the empirical pseudopotential method allows us to convert an empirical approach into a predictive approach. Here we consider tetrahedrally bonded III-V Bi semiconductors, and through the prediction of form factors based on basic elemental properties we can model the band structure and charge density for these semiconductors, for which limited results exist. This work represents a unique approach to modeling the electronic structure of a material, which may be used to identify promising new semiconductors, and is one of the few efforts utilizing data mining at the electronic level. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  7. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    Science.gov (United States)

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
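
    A minimal sketch on simulated paired measurements (the data and the equal-error-variance assumption λ = 1 are illustrative): the Bland-Altman bias and limits of agreement quantify constant error directly, while the Deming slope and intercept separate proportional from constant error.

```python
# Bland-Altman and Deming regression for assay comparison (illustrative).
import numpy as np

rng = np.random.default_rng(5)
truth = rng.uniform(0.05, 0.60, 80)                 # e.g. variant allele fractions
x = truth + rng.normal(0, 0.02, 80)                 # validated assay
y = 0.02 + 1.05 * truth + rng.normal(0, 0.02, 80)   # assay under validation

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = y - x
bias, sd = diff.mean(), diff.std(ddof=1)
print("bias %.4f, LoA [%.4f, %.4f]" % (bias, bias - 1.96 * sd, bias + 1.96 * sd))

# Deming regression with error-variance ratio lambda = 1 (orthogonal case).
sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]
lam = 1.0
slope = (syy - lam * sxx
         + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
intercept = y.mean() - slope * x.mean()
print("Deming slope %.3f (proportional error), intercept %.4f (constant error)"
      % (slope, intercept))
```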

  8. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models.

    Science.gov (United States)

    Chen, Qingxia; Ibrahim, Joseph G

    2014-07-01

    Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.

  9. Predictors of course in obsessive-compulsive disorder: logistic regression versus Cox regression for recurrent events.

    Science.gov (United States)

    Kempe, P T; van Oppen, P; de Haan, E; Twisk, J W R; Sluis, A; Smit, J H; van Dyck, R; van Balkom, A J L M

    2007-09-01

    Two methods for predicting remissions in obsessive-compulsive disorder (OCD) treatment are evaluated. Y-BOCS measurements of 88 patients with a primary OCD (DSM-III-R) diagnosis were performed over a 16-week treatment period, and during three follow-ups. Remission at any measurement was defined as a Y-BOCS score lower than thirteen combined with a reduction of seven points when compared with baseline. Logistic regression models were compared with a Cox regression for recurrent events model. Logistic regression yielded different models at different evaluation times. The recurrent events model remained stable when fewer measurements were used. Higher baseline levels of neuroticism and more severe OCD symptoms were associated with a lower chance of remission, early age of onset and more depressive symptoms with a higher chance. Choice of outcome time affects logistic regression prediction models. Recurrent events analysis uses all information on remissions and relapses. Short- and long-term predictors for OCD remission show overlap.

  10. Improved methods for high resolution electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.

    1987-04-01

    Existing methods of making support films for high resolution transmission electron microscopy are investigated and novel methods are developed. Existing methods of fabricating fenestrated, metal reinforced specimen supports (microgrids) are evaluated for their potential to reduce beam induced movement of monolamellar crystals of C₄₄H₉₀ paraffin supported on thin carbon films. Improved methods of producing hydrophobic carbon films by vacuum evaporation, and improved methods of depositing well ordered monolamellar paraffin crystals on carbon films are developed. A novel technique for vacuum evaporation of metals is described which is used to reinforce microgrids. A technique is also developed to bond thin carbon films to microgrids with a polymer bonding agent. Unique biochemical methods are described to accomplish site specific covalent modification of membrane proteins. Protocols are given which covalently convert the carboxy terminus of papain cleaved bacteriorhodopsin to a free thiol. 53 refs., 19 figs., 1 tab.

  11. Subset selection in regression

    CERN Document Server

    Miller, Alan

    2002-01-01

    Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition: a separate chapter on Bayesian methods; complete revision of the chapter on estimation; a major example from the field of near infrared spectroscopy; more emphasis on cross-validation; greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

  12. Thick-Restart Lanczos Method for Electronic Structure Calculations

    International Nuclear Information System (INIS)

    Simon, Horst D.; Wang, L.-W.; Wu, Kesheng

    1999-01-01

    This paper describes two recent innovations related to the classic Lanczos method for eigenvalue problems, namely the thick-restart technique and dynamic restarting schemes. Combining these two new techniques we are able to implement an efficient eigenvalue problem solver. This paper will demonstrate its effectiveness on one particular class of problems for which this method is well suited: linear eigenvalue problems generated from non-self-consistent electronic structure calculations

  13. Electronic properties of antiferromagnetic UBi2 metal by exact exchange for correlated electrons method

    Directory of Open Access Journals (Sweden)

    E Ghasemikhah

    2012-03-01

    This study investigated the electronic properties of antiferromagnetic UBi2 metal using ab initio calculations based on density functional theory (DFT), employing the augmented plane waves plus local orbitals method. We used the exact exchange for correlated electrons (EECE) method to calculate the exchange-correlation energy under a variety of hybrid functionals. Electric field gradients (EFGs) at the uranium site in the UBi2 compound were calculated and compared with experiment; the EFGs at the U site are experimentally found to be very small in this compound, and the EFGs calculated with the EECE functional are in agreement with the experiment. The densities of states (DOSs) show that the U 5f orbital is hybridized with the other orbitals, and the plotted Fermi surfaces show that two kinds of charge carriers are present on the Fermi surface of this compound.

  14. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    International Nuclear Information System (INIS)

    Verdoolaege, Geert

    2015-01-01

    In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices

  15. 4D-QSAR Study of Some Pyrazole Pyridine Carboxylic Acid Derivatives by the Electron Conformational-Genetic Algorithm Method.

    Science.gov (United States)

    Tuzun, Burak; Yavuz, Sevtap Caglar; Sabanci, Nazmiye; Saripinar, Emin

    2018-05-13

    In the present work, pharmacophore identification and biological activity prediction for 86 pyrazole pyridine carboxylic acid derivatives were made using the electron conformational-genetic algorithm approach, which we introduced as a 4D-QSAR analysis in recent years. In the light of the data obtained from quantum chemical calculations at the HF/6-311G** level, the electron conformational matrices of congruity (ECMC) were constructed with the EMRE software. By comparing the matrices, the electron conformational submatrix of activity (ECSA, Pha) common to these compounds was revealed within a minimum tolerance. A parameter pool was generated considering the obtained pharmacophore. To determine the theoretical biological activity of the molecules and identify the best subset of variables affecting bioactivities, we used the nonlinear least squares regression method and a genetic algorithm. The results obtained in this study are in good agreement with the experimental data presented in the literature. The model attained for the training and test sets with the optimum 12 parameters gave highly satisfactory results: R²(training) = 0.889, q² = 0.839, SE(training) = 0.066, q²(ext1) = 0.770, q²(ext2) = 0.750, q²(ext3) = 0.824, ccc(tr) = 0.941, ccc(test) = 0.869 and ccc(all) = 0.927. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.

  16. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha

    2012-12-01

    The reduced-rank regression is an effective method in predicting multiple response variables from the same set of predictor variables. It reduces the number of model parameters and takes advantage of interrelations between the response variables and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group and show that this penalty satisfies certain desirable invariance properties. We develop two numerical algorithms to solve the penalized regression problem and establish the asymptotic consistency of the proposed method. In particular, the manifold structure of the reduced-rank regression coefficient matrix is considered and studied in our theoretical analysis. In our simulation study and real data analysis, the new method is compared with several existing variable selection methods for multivariate regression and exhibits competitive performance in prediction and variable selection. © 2012 American Statistical Association.

  17. A method for the selection of a functional form for a thermodynamic equation of state using weighted linear least squares stepwise regression

    Science.gov (United States)

    Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.

    1976-01-01

    A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.
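
    A hedged sketch of the selection loop only (generic polynomial terms stand in for the pressure-density-temperature terms of the actual equation of state): weighted forward stepwise regression adds, at each step, the candidate term giving the largest drop in the weighted residual sum of squares.

```python
# Forward stepwise weighted least squares term selection (illustrative).
import numpy as np

def forward_stepwise_wls(terms, y, w, max_terms=5):
    """terms: (n, p) candidate-term matrix; w: per-point weights."""
    sw = np.sqrt(w)
    selected, remaining = [], list(range(terms.shape[1]))
    for _ in range(max_terms):
        best_j, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            A = terms[:, cols] * sw[:, None]
            coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
            rss = ((y * sw - A @ coef) ** 2).sum()
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(9)
x = rng.uniform(1, 3, 100)
terms = np.column_stack([x ** i for i in range(6)])   # candidate terms
y = 2 + 0.5 * x ** 2 + rng.normal(0, 0.1, 100)
w = np.full(100, 1.0)                                 # e.g. 1/sigma_i^2
print("terms chosen (powers of x):", forward_stepwise_wls(terms, y, w, 3))
```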

  18. Remote-sensing data processing with the multivariate regression analysis method for iron mineral resource potential mapping: a case study in the Sarvian area, central Iran

    Science.gov (United States)

    Mansouri, Edris; Feizi, Faranak; Jafari Rad, Alireza; Arian, Mehran

    2018-03-01

    This paper uses multivariate regression to create a mathematical model for iron skarn exploration in the Sarvian area, central Iran, using multivariate regression for mineral prospectivity mapping (MPM). The main target of this paper is to apply multivariate regression analysis (as an MPM method) to map iron outcrops in the northeastern part of the study area in order to discover new iron deposits in other parts of the study area. Two types of multivariate regression models using two linear equations were employed to discover new mineral deposits. This method is one of the reliable methods for processing satellite images. ASTER satellite images (14 bands) were used as unique independent variables (UIVs), and iron outcrops were mapped as dependent variables for MPM. According to the results of the probability value (p value), coefficient of determination value (R2) and adjusted determination coefficient (Radj2), the second regression model (which consistent of multiple UIVs) fitted better than other models. The accuracy of the model was confirmed by iron outcrops map and geological observation. Based on field observation, iron mineralization occurs at the contact of limestone and intrusive rocks (skarn type).

  19. [Application of negative binomial regression and modified Poisson regression in the research of risk factors for injury frequency].

    Science.gov (United States)

    Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan

    2011-11-01

    To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to increased injury frequency. 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The count data on injury events were used to fit modified Poisson regression and negative binomial regression models, and the risk factors increasing the unintentional injury frequency of juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model exhibited over-dispersion (P < 0.05), while the modified Poisson regression and negative binomial regression models fitted better. Both showed that male gender, younger age, a father working outside the hometown, a guardian educated above junior high school, and smoking might be associated with higher injury frequencies. For clustered count data on injury events, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better, and this model could give a more accurate interpretation of the relevant factors affecting the frequency of injury.
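
    As a hedged illustration of the two competing models on simulated over-dispersed counts (the data and coefficients are invented for the example), statsmodels fits the negative binomial GLM directly, while "modified Poisson" is a Poisson GLM with a robust sandwich covariance:

```python
# Negative binomial vs. modified Poisson regression (illustrative sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
male = rng.integers(0, 2, n)
age = rng.uniform(7, 16, n)
mu = np.exp(-1.0 + 0.4 * male - 0.05 * age)
# NB(1, p) with p = 1/(1+mu) has mean mu and is over-dispersed.
y = rng.negative_binomial(n=1.0, p=1.0 / (1.0 + mu))

X = sm.add_constant(np.column_stack([male, age]))
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
mp = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print("negative binomial coefs:", nb.params.round(3))
print("modified Poisson coefs: ", mp.params.round(3),
      "robust SE:", mp.bse.round(3))
```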

  20. Regression: A Bibliography.

    Science.gov (United States)

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  1. The validity of the density scaling method in primary electron transport for photon and electron beams

    International Nuclear Information System (INIS)

    Woo, M.K.; Cunningham, J.R.

    1990-01-01

    In the convolution/superposition method of photon beam dose calculations, inhomogeneities are usually handled by using some form of scaling involving the relative electron densities of the inhomogeneities. In this paper the accuracy of density scaling as applied to primary electrons generated in photon interactions is examined. Monte Carlo calculations are compared with density scaling calculations for air and cork slab inhomogeneities. For individual primary photon kernels as well as for photon interactions restricted to a thin layer, the results can differ significantly, by up to 50%, between the two calculations. However, for realistic photon beams where interactions occur throughout the whole irradiated volume, the discrepancies are much less severe. The discrepancies for the kernel calculation are attributed to the scattering characteristics of the electrons and the consequent oversimplified modeling used in the density scaling method. A technique called the kernel integration technique is developed to analyze the general effects of air and cork inhomogeneities. It is shown that the discrepancies become significant only under rather extreme conditions, such as immediately beyond the surface after a large air gap. In electron beams all the primary electrons originate from the surface of the phantom and the errors caused by simple density scaling can be much more significant. Various aspects relating to the accuracy of density scaling for air and cork slab inhomogeneities are discussed

  2. Relativistic convergent close-coupling method applied to electron scattering from mercury

    International Nuclear Information System (INIS)

    Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor

    2010-01-01

    We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compare with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtain significantly better agreement with experiment. The RCCC method is able to resolve structure in the integrated cross sections in the energy regime in the vicinity of the excitation thresholds for the (6s6p) ^3P_{0,1,2} states. These cross sections are associated with the formation of negative-ion (Hg^-) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with experiment and other relativistic theories.

  3. A novel hybrid method of beta-turn identification in protein using binary logistic regression and neural network.

    Science.gov (United States)

    Asghari, Mehdi Poursheikhali; Hayatshahi, Sayyed Hamed Sadat; Abdolmaleki, Parviz

    2012-01-01

    From both the structural and functional points of view, β-turns play important biological roles in proteins. In the present study, a novel two-stage hybrid procedure has been developed to identify β-turns in proteins. Binary logistic regression was initially used, for the first time, to select significant sequence parameters for the identification of β-turns, via a re-substitution test procedure. The sequence parameters consisted of 80 amino acid positional occurrences and 20 amino acid percentages in the sequence. Among these parameters, the most significant ones selected by the binary logistic regression model were the percentages of Gly and Ser and the occurrence of Asn at position i+2. These significant parameters have the highest effect on the constitution of a β-turn sequence. A neural network model was then constructed and fed with the parameters selected by binary logistic regression to build a hybrid predictor. The networks were trained and tested on a non-homologous dataset of 565 protein chains. Applying a nine-fold cross-validation test on the dataset, the network reached an overall accuracy (Qtotal) of 74%, which is comparable with the results of other β-turn prediction methods. In conclusion, this study shows that the parameter selection ability of binary logistic regression together with the prediction capability of neural networks leads to the development of more precise models for identifying β-turns in proteins.
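
    The two-stage idea lends itself to a compact sketch. Below, an L1-penalised logistic regression stands in for the paper's significance-based screening (an assumption; the authors select via re-substitution testing), and the surviving features feed a small neural network evaluated by nine-fold cross-validation. The features are random stand-ins for the 100 sequence parameters.

```python
# Two-stage hybrid sketch: logistic-regression screening, then a neural network.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(565, 100))        # 80 positional + 20 percentage features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=565) > 0).astype(int)  # turn label

# Stage 1: keep features with non-zero L1-penalised logistic coefficients
logit = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
selected = np.flatnonzero(np.abs(logit.coef_[0]) > 1e-6)

# Stage 2: feed only the selected parameters to a neural network
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
print(cross_val_score(mlp, X[:, selected], y, cv=9).mean())  # nine-fold CV
```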

  4. Testing for marginal linear effects in quantile regression

    KAUST Repository

    Wang, Huixia Judy

    2017-10-23

    The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set.
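
    A bare-bones illustration of the marginal step on synthetic data (the calibrated resampling test of the paper is not reproduced): fit a quantile regression of the response on each predictor one at a time, then rank predictors by the absolute slope t-statistic.

```python
# Marginal quantile regression screening sketch; data are synthetic.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(2)
n, p, tau = 200, 50, 0.5
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 3] + rng.standard_t(3, size=n)        # heavy-tailed noise

tstats = []
for j in range(p):
    fit = QuantReg(y, sm.add_constant(X[:, j])).fit(q=tau)
    tstats.append(fit.tvalues[1])                    # slope t-statistic
best = int(np.argmax(np.abs(tstats)))
print(best, tstats[best])                            # predictor 3 should dominate
```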

  6. Stellar atmospheric parameter estimation using Gaussian process regression

    Science.gov (United States)

    Bu, Yude; Pan, Jingchang

    2015-02-01

    As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
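
    A hedged sketch of the PCA-plus-GPR pipeline on synthetic stand-ins for the spectra; the kernel choice and component count are assumptions, not the paper's settings.

```python
# PCA compression of "spectra" followed by Gaussian process regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n_stars, n_pix = 300, 1000
teff = rng.uniform(4000, 7000, n_stars)              # target: effective temperature
spectra = np.outer(teff, np.linspace(0, 1, n_pix)) / 7000
spectra += rng.normal(scale=0.01, size=spectra.shape)  # noisy flux values

model = make_pipeline(
    PCA(n_components=10),
    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
)
model.fit(spectra[:200], teff[:200])
print(np.abs(model.predict(spectra[200:]) - teff[200:]).mean())  # mean abs error
```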

  7. The Association between Electronic Bullying and School Absenteeism among High School Students in the United States

    Science.gov (United States)

    Grinshteyn, Erin; Yang, Y. T.

    2017-01-01

    Background: We examined the relationship between exposure to electronic bullying and absenteeism as a result of being afraid. Methods: This multivariate, multinomial regression analysis of the 2013 Youth Risk Behavior Survey data assessed the association between experiencing electronic bullying in the past year and how often students were absent…

  8. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Full Text Available Abstract Background Technological developments have increased the feasibility of large scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
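
    One classical route to such a test, sketched below with invented data: ridge coefficients are approximately normal with covariance s^2 W X'X W, where W = (X'X + kI)^(-1), which yields Wald-type p-values; plotting -log10 of these against increasing k gives a p-value trace. The paper's exact procedure and choice of shrinkage parameter may differ.

```python
# Wald-type significance test for ridge coefficients; invented collinear data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, p, k = 100, 10, 5.0                              # k: shrinkage parameter
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)       # two highly correlated "SNPs"
y = X[:, 0] + rng.normal(size=n)

W = np.linalg.inv(X.T @ X + k * np.eye(p))
b = W @ X.T @ y                                     # ridge coefficients
df = n - np.trace(X @ W @ X.T)                      # effective residual df
s2 = np.sum((y - X @ b) ** 2) / df
se = np.sqrt(np.diag(s2 * W @ X.T @ X @ W))         # approximate standard errors
pvals = 2 * stats.norm.sf(np.abs(b / se))
print(np.round(pvals, 4))
# A p-value trace plots -log10(pvals) as k increases over a grid.
```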

  9. Multivariate Linear Regression and CART Regression Analysis of TBM Performance at Abu Hamour Phase-I Tunnel

    Science.gov (United States)

    Jakubowski, J.; Stypulkowski, J. B.; Bernardeau, F. G.

    2017-12-01

    The first phase of the Abu Hamour drainage and storm tunnel was completed in early 2017. The 9.5 km long, 3.7 m diameter tunnel was excavated with two Earth Pressure Balance (EPB) Tunnel Boring Machines from Herrenknecht. TBM operation processes were monitored and recorded by a Data Acquisition and Evaluation System. The authors coupled the collected TBM drive data with available information on rock mass properties; the data were cleansed, completed with secondary variables, and aggregated by weeks and shifts. Correlations and descriptive statistics charts were examined. Multivariate Linear Regression and CART regression tree models linking TBM penetration rate (PR), penetration per revolution (PPR) and field penetration index (FPI) with TBM operational and geotechnical characteristics were built for the conditions of the weak/soft rock of Doha. Both regression methods are interpretable, and the data were screened with different computational approaches allowing enriched insight. The primary goal of the analysis was to investigate empirical relations between multiple explanatory and response variables, to search for the best subsets of explanatory variables, and to evaluate the strength of linear and non-linear relations. For each of the penetration indices, a predictive model coupling both regression methods was built and validated. The resultant models appeared to be stronger than their constituents and indicated an opportunity for more accurate and robust TBM performance predictions.
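
    In the same spirit, a toy version of the two interpretable models on invented TBM-like data, coupled here by simple prediction averaging (the paper's coupling scheme is not specified here; averaging is one simple choice).

```python
# Multivariate linear regression and a CART regression tree on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 400
X = rng.normal(size=(n, 3))             # e.g. thrust, cutterhead torque, UCS
pr = (2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 2]
      + 0.3 * np.maximum(X[:, 1], 0)    # a non-linear term the tree can catch
      + rng.normal(scale=0.2, size=n))  # penetration rate

Xtr, Xte, ytr, yte = train_test_split(X, pr, random_state=0)
mlr = LinearRegression().fit(Xtr, ytr)
cart = DecisionTreeRegressor(max_depth=4, random_state=0).fit(Xtr, ytr)

coupled = 0.5 * (mlr.predict(Xte) + cart.predict(Xte))  # one way to couple them
for name, pred in [("MLR", mlr.predict(Xte)), ("CART", cart.predict(Xte)),
                   ("coupled", coupled)]:
    print(name, np.sqrt(np.mean((pred - yte) ** 2)))    # RMSE per model
```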

  10. Non-invasive diagnostic methods for atherosclerosis and use in assessing progression and regression in hypercholesterolemia

    International Nuclear Information System (INIS)

    Tsushima, Motoo; Fujii, Shigeki; Yutani, Chikao; Yamamoto, Akira; Naitoh, Hiroaki.

    1990-01-01

    We evaluated the wall thickening and stenosis rate (ASI), the calcification rate (ACI), and the wall thickening and calcification stenosis rate (SCI) of the lower abdominal aorta calculated by the 12 sector method from simple or enhanced computed tomography. The intra-observer variation of the calculation of ASI was 5.7% and that of ACI was 2.4%. In 9 patients who underwent an autopsy examination, ACI was significantly correlated with the rate of the calcification dimension to the whole objective area of the abdominal aorta (r=0.856, p<0.01). However, there were no correlations between ASI and the surface involvement or the atherosclerotic index obtained by the point-counting method of the autopsy materials. In the analysis of 40 patients with atherosclerotic vascular diseases, ASI and ACI were also highly correlated with the percentage volume of the arterial wall in relation to the whole volume of the observed artery (r=0.852, p<0.0001) and also the percentage calcification volume (r=0.913, p<0.0001) calculated by the computed method, respectively. The percentage of atherosclerotic vascular diseases increased in the group of both high ASI (over 10%) and high ACI (over 20%). We used SCI as a reliable index when the progression and regression of atherosclerosis was considered. Among patients of hypercholesterolemia consisting of 15 with familial hypercholesterolemia (FH) and 6 non-FH patients, the change of SCI (d-SCI) was significantly correlated with the change of total cholesterol concentration (d-TC) after the treatment (r=0.466, p<0.05) and the change of the right Achilles' tendon thickening (d-ATT) was also correlated with d-TC (r=0.634, p<0.005). However, no correlation between d-SCI and d-ATT was observed. In conclusion, CT indices of atherosclerosis were useful as a noninvasive quantitative diagnostic method and we were able to use them to assess the progression and regression of atherosclerosis. (author)

  11. On the absorbed dose determination method in high energy electrons beams

    International Nuclear Information System (INIS)

    Scarlat, F.; Scarisoreanu, A.; Oane, M.; Mitru, E.; Avadanei, C.

    2008-01-01

    The absorbed dose determination method in water for electron beams with energies in the range from 1 MeV to 50 MeV is presented herein. The dosimetry equipment for the measurements consists of a UNIDOS PTW electrometer and different ionization chambers calibrated in terms of air kerma in a Co-60 beam. Starting from the code of practice for high energy electron beams, this paper describes the method adopted by the secondary standard dosimetry laboratory (SSDL) in NILPRP - Bucharest

  12. A comparison on parameter-estimation methods in multiple regression analysis with existence of multicollinearity among independent variables

    Directory of Open Access Journals (Sweden)

    Hukharnsusatrue, A.

    2005-11-01

    Full Text Available The objective of this research is to compare multiple regression coefficient estimation methods in the presence of multicollinearity among the independent variables. The estimation methods are the Ordinary Least Squares (OLS) method, the Restricted Least Squares (RLS) method, the Restricted Ridge Regression (RRR) method and the Restricted Liu (RL) method, when the restrictions are true and when they are not true. The study used the Monte Carlo simulation method; the experiment was repeated 1,000 times under each situation. The analyzed results are demonstrated as follows. CASE 1: The restrictions are true. In all cases, the RRR and RL methods have a smaller Average Mean Square Error (AMSE) than the OLS and RLS methods, respectively. The RRR method provides the smallest AMSE when the level of correlation is high, and also provides the smallest AMSE for all levels of correlation and all sample sizes when the standard deviation is equal to 5. However, the RL method provides the smallest AMSE when the level of correlation is low or middle, except in the case of standard deviation equal to 3 and small sample sizes, where the RRR method provides the smallest AMSE. The AMSE varies, from most to least, with the level of correlation, the standard deviation and the number of independent variables, and varies inversely with the sample size. CASE 2: The restrictions are not true. In all cases the RRR method provides the smallest AMSE, except in the case of standard deviation equal to 1 and error of restrictions equal to 5%, where the OLS method provides the smallest AMSE when the level of correlation is low or middle and the sample size is large, but for small sample sizes the RL method provides the smallest AMSE. In addition, when the error of the restrictions is increased, the OLS method provides the smallest AMSE for all levels of correlation and all sample sizes, except when the level of correlation is high and the sample size is small. Moreover, in the case where the OLS method provides the smallest AMSE, the RLS method mostly has a smaller AMSE than
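
    The Monte Carlo design can be miniaturised as below: repeatedly simulate a collinear design, fit OLS and (as a stand-in for the restricted ridge variants, which are omitted) ordinary ridge regression, and compare the average mean square errors of the coefficient estimates.

```python
# Tiny Monte Carlo AMSE comparison of OLS vs ridge under multicollinearity.
import numpy as np

rng = np.random.default_rng(6)
beta = np.array([1.0, 2.0, -1.0])
reps, n, k = 1000, 30, 1.0
mse_ols = mse_ridge = 0.0
for _ in range(reps):
    z = rng.normal(size=n)
    X = np.column_stack([z, z + 0.01 * rng.normal(size=n), rng.normal(size=n)])
    y = X @ beta + rng.normal(size=n)
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    b_ridge = np.linalg.solve(X.T @ X + k * np.eye(3), X.T @ y)
    mse_ols += np.sum((b_ols - beta) ** 2)
    mse_ridge += np.sum((b_ridge - beta) ** 2)
print("AMSE OLS:  ", mse_ols / reps)
print("AMSE ridge:", mse_ridge / reps)   # smaller under strong collinearity
```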

  13. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.

    2002-01-01

    An original method for obtaining programmed beam single shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. This method is applied to the electron linear accelerators ALIN-10, of 6.23 MeV and 82 W, and ALID-7, of 5.5 MeV and 670 W, built in NILPRP. In order to implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A and 4 ms) and the magnetron (45 kV, 100 A, and 4 ms). The accelerated electron beam existence is determined by the overlapping of the electron gun and magnetron pulses. The method consists in controlling the overlapping of the pulses in order to deliver the beam in the desired sequence. This control is implemented by a discrete pulse position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator 'beam start' command, the ATS controls the electron gun and magnetron pulse overlapping and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method

  14. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.

    Science.gov (United States)

    Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y

    2008-02-18

    The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structural and pharmacological diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.

  15. Application of Macro Response Monte Carlo method for electron spectrum simulation

    International Nuclear Information System (INIS)

    Perles, L.A.; Almeida, A. de

    2007-01-01

    During the past years several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the electron transport computation time for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase space input for other simulation programs. This technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the primary electron final state, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against Geant4 spectra. The results showed an agreement better than 6% in the spectral peak energies, and that the MRMC code is up to 12 times faster than Geant4 simulations

  16. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying; Carroll, Raymond J.

    2009-01-01

    . The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a

  17. Local regression type methods applied to the study of geophysics and high frequency financial data

    Science.gov (United States)

    Mariani, M. C.; Basu, K.

    2014-09-01

    In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is very accurate, up to a relative error of 0.01%. We also applied the same method to a high frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high frequency data, our models estimate the curve of best fit, where the data are dependent on time.
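
    For reference, a minimal LOWESS call on synthetic one-dimensional data; the record's spatial and high-frequency financial analyses are higher-dimensional analogues of this smoothing step.

```python
# Locally weighted scatterplot smoothing with statsmodels on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 10, 300))
y = np.sin(x) + rng.normal(scale=0.3, size=300)

smoothed = sm.nonparametric.lowess(y, x, frac=0.2)   # columns: x, fitted y
print(smoothed[:5])
```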

  18. New Combined Electron-Beam Methods of Wastewater Purification

    International Nuclear Information System (INIS)

    Pikaev, A.K.; Makarov, I.E.; Ponomarev, A.V.; Kartasheva, L.I.; Podzorova, E.A.; Chulkov, V.N.; Han, B.; Kim, D.K.

    1999-01-01

    The paper is a brief review of the results, obtained with the participation of the authors, from studies of combined electron-beam methods for the purification of some wastewaters. Data on the purification of wastewaters containing dyes or hydrogen peroxide, and of municipal wastewater in the aerosol flow, are considered

  19. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  20. Gaussian Process Regression for WDM System Performance Prediction

    DEFF Research Database (Denmark)

    Wass, Jesper; Thrane, Jakob; Piels, Molly

    2017-01-01

    Gaussian process regression is numerically and experimentally investigated to predict the bit error rate of a 24 x 28 GBd QPSK WDM system. The proposed method produces accurate predictions from multi-dimensional and sparse measurement data.

  1. Implementing and evaluating a fictitious electron dynamics method for the calculation of electronic structure: Application to the Si(100) surface

    International Nuclear Information System (INIS)

    Hoffman, M J H; Claassens, C H

    2006-01-01

    A density matrix based fictitious electron dynamics method for calculating electronic structure has been implemented within a semi-empirical quantum chemistry environment. This method uses an equation of motion that implicitly ensures the idempotency constraint on the density matrix. Test calculations showed that this method has the potential to be combined with simultaneous atomic dynamics, in analogy to the popular Car-Parrinello method. In addition, the sparsity of the density matrix and the sophisticated yet flexible way of ensuring idempotency conservation while integrating the equation of motion create the potential for developing a fast linear scaling method
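
    As an aside on the idempotency constraint P^2 = P mentioned above: a common numerical device for restoring idempotency of a density matrix is McWeeny purification, P <- 3P^2 - 2P^3. The sketch below illustrates that constraint only; it is not the paper's equation of motion.

```python
# McWeeny purification restoring idempotency of a perturbed density matrix.
import numpy as np

rng = np.random.default_rng(14)
H = rng.normal(size=(6, 6)); H = (H + H.T) / 2       # toy symmetric Hamiltonian
w, V = np.linalg.eigh(H)
P = V[:, :3] @ V[:, :3].T                            # 3 occupied orbitals: P^2 = P

noise = rng.normal(size=(6, 6)) * 1e-3
P_noisy = P + (noise + noise.T) / 2                  # drift during dynamics
for _ in range(5):
    P_noisy = 3 * P_noisy @ P_noisy - 2 * P_noisy @ P_noisy @ P_noisy
print(np.abs(P_noisy @ P_noisy - P_noisy).max())     # ~0: idempotency restored
```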

  2. Tutorial on Using Regression Models with Count Outcomes Using R

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2016-02-01

    Full Text Available Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers that study these variables use typical regression methods (i.e., ordinary least-squares) either with or without transforming the count variables. In either case, using typical regression for count data can produce parameter estimates that are biased, thus diminishing any inferences made from such data. As count-variable regression models are seldom taught in training programs, we present a tutorial to help educational researchers use such methods in their own research. We demonstrate analyzing and interpreting count data using Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial regression models. The count regression methods are introduced through an example using the number of times students skipped class. The data for this example are freely available and the R syntax used to run the example analyses are included in the Appendix.

  3. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron–electron interactions, application to graphene

    International Nuclear Information System (INIS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-01-01

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
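
    The Pauli-blocking idea underlying such schemes can be caricatured in a few lines: a proposed scattering event into final energy E' is accepted with probability 1 - f(E'), with f the current occupation, which keeps f at or below unity. The paper's modified treatment of e-e scattering in graphene is considerably more involved.

```python
# Toy Pauli-blocked Monte Carlo update for one electron; not the paper's scheme.
import numpy as np

rng = np.random.default_rng(8)
kT, mu = 0.025, 0.1                                  # eV
f = lambda E: 1.0 / (1.0 + np.exp((E - mu) / kT))    # Fermi-Dirac occupation

E = 0.3                                              # energy before scattering
for _ in range(10):
    E_new = max(E + rng.normal(scale=0.05), 0.0)     # proposed final state
    if rng.random() < 1.0 - f(E_new):                # blocked if state is occupied
        E = E_new                                    # scattering accepted
print(E)
```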

  5. Moving beyond regression techniques in cardiovascular risk prediction: applying machine learning to address analytic challenges.

    Science.gov (United States)

    Goldstein, Benjamin A; Navar, Ann Marie; Carter, Rickey E

    2017-06-14

    Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors which operate in the same way on everyone, and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for the development of risk prediction models. Typically presented as black box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider trying to predict mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to approach the diffuse field of machine learning. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Cardiology.

  6. Correlation and simple linear regression.

    Science.gov (United States)

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
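
    The two coefficients and the regression fit reviewed in this tutorial, computed with SciPy on synthetic data where the relationship is monotone but non-linear, so the two coefficients deliberately disagree:

```python
# Pearson vs Spearman and a simple linear regression with SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
x = rng.uniform(0, 5, 100)
y = np.exp(x) + rng.normal(scale=1.0, size=100)      # monotone but non-linear

print("Pearson r:   ", stats.pearsonr(x, y)[0])      # measures linear association
print("Spearman rho:", stats.spearmanr(x, y)[0])     # rank-based, close to 1 here

slope, intercept, r, pval, stderr = stats.linregress(x, y)
print(slope, intercept, pval)                        # simple linear regression fit
```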

  7. Detection of epistatic effects with logic regression and a classical linear regression model.

    Science.gov (United States)

    Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata

    2014-02-01

    To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, Cockerham's approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though with the logic regression approach a larger number of models has to be considered (requiring more stringent multiple testing correction), the efficient representation of higher order logic interactions in logic regression models leads to a significant increase of power to detect such interactions as compared to Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.

  8. Methods of organization of SCORM-compliant teaching materials in electronic format

    Directory of Open Access Journals (Sweden)

    Jacek Marciniak

    2012-06-01

    Full Text Available This paper presents a method of organizing electronic teaching materials based on their role in the teaching process rather than their technical structure. Our method allows SCORM materials stored as e-learning courses ("electronic books") to be subdivided and structured so that content can be used in multiple contexts. As a standard, SCORM defines rules for organizing content, but not how to divide and structure it. Our method uses UCTS nomenclature to divide content, define relationships between content entities, and aggregate those entities into courses. This allows content to be shared in different implementations of SCORM while guaranteeing that usability and consistency are maintained.

  9. TEMPERATURE PREDICTION IN 3013 CONTAINERS IN K AREA MATERIAL STORAGE (KAMS) FACILITY USING REGRESSION METHODS

    International Nuclear Information System (INIS)

    Gupta, N

    2008-01-01

    3013 containers are designed in accordance with DOE-STD-3013-2004. These containers are qualified to store plutonium (Pu) bearing materials such as PuO2 for 50 years. DOT shipping packages such as the 9975 are used to store the 3013 containers in the K-Area Material Storage (KAMS) facility at Savannah River Site (SRS). DOE-STD-3013-2004 requires that a comprehensive surveillance program be set up to ensure that the 3013 container design parameters are not violated during long term storage. To ensure the structural integrity of the 3013 containers, thermal analyses using finite element models were performed to predict the contents and component temperatures for different but well defined parameters such as storage ambient temperature, PuO2 density, fill heights, weights, and thermal loading. Interpolation is normally used to calculate temperatures if the actual parameter values are different from the analyzed values. A statistical analysis technique using regression methods is proposed to develop simple polynomial relations to predict temperatures for the actual parameter values found in the containers. The analysis shows that regression analysis is a powerful tool to develop simple relations to assess component temperatures
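
    A hedged sketch of the proposed regression surrogate: fit a low-order polynomial in the storage parameters to temperatures from (here, simulated) finite element runs, and evaluate it at untested parameter values instead of interpolating. The parameter ranges and the response function below are invented.

```python
# Polynomial regression surrogate for finite-element temperature predictions.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(10)
# Columns: ambient temperature (C), fill height (fraction), thermal load (W)
X = rng.uniform([20, 0.2, 5], [45, 0.9, 19], size=(60, 3))
T = 1.1 * X[:, 0] + 30 * X[:, 2] / (1 + X[:, 1]) + rng.normal(scale=0.5, size=60)

surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, T)
print(surrogate.predict([[35, 0.5, 12]]))   # temperature at untested parameters
```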

  10. Simple method for generating adjustable trains of picosecond electron bunches

    Directory of Open Access Journals (Sweden)

    P. Muggli

    2010-05-01

    Full Text Available A simple, passive method for producing an adjustable train of picosecond electron bunches is demonstrated. The key component of this method is an electron beam mask consisting of an array of parallel wires that selectively spoils the beam emittance. This mask is positioned in a high magnetic dispersion, low beta-function region of the beam line. The incoming electron beam striking the mask has a time/energy correlation that corresponds to a time/position correlation at the mask location. The mask pattern is transformed into a time pattern or train of bunches when the dispersion is brought back to zero downstream of the mask. Results are presented of a proof-of-principle experiment demonstrating this novel technique that was performed at the Brookhaven National Laboratory Accelerator Test Facility. This technique allows for easy tailoring of the bunch train for a particular application, including varying the bunch width and spacing, and enabling the generation of a trailing witness bunch.

  11. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  12. Kernel polynomial method for a nonorthogonal electronic-structure calculation of amorphous diamond

    International Nuclear Information System (INIS)

    Roeder, H.; Silver, R.N.; Drabold, D.A.; Dong, J.J.

    1997-01-01

    The Kernel polynomial method (KPM) has been successfully applied to tight-binding electronic-structure calculations as an O(N) method. Here we extend this method to nonorthogonal basis sets with a sparse overlap matrix S and a sparse Hamiltonian H. Since the KPM utilizes matrix-vector multiplications, it is necessary to apply S^-1 H to a vector. The multiplication by S^-1 is performed using a preconditioned conjugate-gradient method and does not involve the explicit inversion of S. Hence the method scales the same way as the original KPM, i.e., O(N), although there is an overhead due to the additional conjugate-gradient part. We apply this method to a large scale electronic-structure calculation of amorphous diamond. copyright 1997 The American Physical Society

  13. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose

    Directory of Open Access Journals (Sweden)

    Mohammad Mizanur Rahman

    2017-09-01

    Full Text Available Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms having open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors on irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that the GRNN has greater correct classification efficiency and false alarm reduction capability than the RBFNN. As the design of a GRNN and an RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on the minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple and was found to be fast and efficient in correctly classifying data from the training classes and correctly rejecting data from extraneous odors, thereby reducing false alarms.
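
    A small sketch of the MMM rule as the abstract describes it (details beyond the abstract are assumptions): each training class is summarised by per-feature minimum, maximum, and mean; a sample is assigned to the nearest class mean only if it lies inside that class's min-max box, and is otherwise rejected as an extraneous odor.

```python
# Minimum-maximum-mean (MMM) hyperboxes for classification with rejection.
import numpy as np

def fit_mmm(X_by_class):
    return [(X.min(0), X.max(0), X.mean(0)) for X in X_by_class]

def classify_mmm(model, x):
    best, best_d = None, np.inf
    for label, (lo, hi, mean) in enumerate(model):
        if np.all(x >= lo) and np.all(x <= hi):        # inside the class box
            d = np.linalg.norm(x - mean)
            if d < best_d:
                best, best_d = label, d
    return best                                        # None = rejected odor

rng = np.random.default_rng(11)
classes = [rng.normal(loc=c, scale=0.2, size=(50, 4)) for c in (0.0, 1.0, 2.0)]
model = fit_mmm(classes)
print(classify_mmm(model, classes[1][0]))     # known odor -> its class index
print(classify_mmm(model, np.full(4, 9.0)))   # extraneous odor -> None
```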

  14. Two Paradoxes in Linear Regression Analysis

    Science.gov (United States)

    FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong

    2016-01-01

    Summary Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214

  15. A Local Weighted Nearest Neighbor Algorithm and a Weighted and Constrained Least-Squared Method for Mixed Odor Analysis by Electronic Nose Systems

    Directory of Open Access Journals (Sweden)

    Jyuo-Min Shyu

    2010-11-01

    Full Text Available A great deal of work has been done to develop techniques for odor analysis by electronic nose systems. These analyses mostly focus on identifying a particular odor by comparing it with a known odor dataset. However, in many situations, it would be more practical if each individual odorant could be determined directly. This paper proposes two methods for such odor component analysis for electronic nose systems. First, a k-nearest neighbor (KNN)-based local weighted nearest neighbor (LWNN) algorithm is proposed to determine the components of an odor. According to the component analysis, the odor training data are first categorized into several groups, each of which is represented by its centroid. The examined odor is then classified as the class of the nearest centroid. The distance between the examined odor and the centroid is calculated based on a weighting scheme, which captures the local structure of each predefined group. To further determine the concentration of each component, odor models are built by regression. Then, a weighted and constrained least-squares (WCLS) method is proposed to estimate the component concentrations. Experiments were carried out to assess the effectiveness of the proposed methods. The LWNN algorithm is able to classify mixed odors with different mixing ratios, while the WCLS method can provide good estimates of component concentrations.
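
    The WCLS step can be sketched with a bounded, weighted least-squares solve; the sensor response matrix, weights, and bounds below are illustrative assumptions, not the paper's models.

```python
# Bounded, weighted least-squares estimate of component concentrations.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(15)
S = rng.uniform(0.1, 1.0, size=(8, 3))       # sensor response per unit concentration
c_true = np.array([0.5, 1.2, 0.0])
r = S @ c_true + rng.normal(scale=0.01, size=8)   # measured sensor responses

w = np.full(8, 100.0)                        # per-sensor weights (inverse noise)
res = lsq_linear(S * w[:, None], r * w, bounds=(0, 5))  # non-negativity constraint
print(np.round(res.x, 3))                    # estimated concentrations ~ c_true
```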

  16. On some methods to produce high-energy polarized electron beams by means of proton synchrotrons

    International Nuclear Information System (INIS)

    Bessonov, E.G.; Vazdik, Ya.A.

    1980-01-01

    Some methods for the production of high-energy polarized electron beams by means of proton synchrotrons are considered. These methods are based on the transfer by protons of a part of their energy to the polarized electrons of a thin target placed inside the working volume of the synchrotron. It is suggested to use as polarized electron targets magnetized crystalline iron in which proton channeling is realized, polarized atomic beams, and polarized plasma. It is shown that by this method one can produce polarized electron beams with an energy of approximately 100 GeV and an energy spread of +-5%, with an intensity of approximately 10^7 electron/s at a polarization of approximately 30%, or an intensity of approximately 10^4-10^5 electron/s at a polarization of approximately 100% [ru]

  17. Electronic-projecting Moire method applying CBR-technology

    Science.gov (United States)

    Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.

    2018-01-01

    An electronic-projecting method based on the Moire effect for examining surface topology is suggested. Conditions for forming Moire fringes and the dependence of their parameters on the reference parameters of the object and virtual grids are analyzed. The control system structure and decision-making subsystem are elaborated. The subsystem execution includes CBR technology, based on applying a case base. An approach is applied that analyses and forms a decision for each separate local area, with subsequent formation of a common topology map.

  18. A method for ultrashort electron pulse-shape measurement using coherent synchrotron radiation

    International Nuclear Information System (INIS)

    Geloni, G.; Yurkov, M.V.

    2003-03-01

    In this paper we discuss a method for nondestructive measurements of the longitudinal profile of sub-picosecond electron bunches for X-ray free electron lasers (XFELs). The method is based on the detection of the coherent synchrotron radiation (CSR) spectrum produced by a bunch passing a dipole magnet system. This work also contains a systematic treatment of synchrotron radiation theory which lies at the basis of CSR. Standard theory of synchrotron radiation uses several approximations whose applicability limits are often forgotten: here we present a systematic discussion about these assumptions. Properties of coherent synchrotron radiation from an electron moving along an arc of a circle are then derived and discussed. We describe also an effective and practical diagnostic technique based on the utilization of an electromagnetic undulator to record the energy of the coherent radiation pulse into the central cone. This measurement must be repeated many times with different undulator resonant frequencies in order to reconstruct the modulus of the bunch form-factor. The retrieval of the bunch profile function from these data is performed by means of deconvolution techniques: for the present work we take advantage of a constrained deconvolution method. We illustrate with numerical examples the potential of the proposed method for electron beam diagnostics at the TESLA test facility (TTF) accelerator. Here we choose, for emphasis, experiments aimed at the measure of the strongly non-Gaussian electron bunch profile in the TTF femtosecond-mode operation. We demonstrate that a tandem combination of a picosecond streak camera and a CSR spectrometer can be used to extract shape information from electron bunches with a narrow leading peak and a long tail. (orig.)

  19. Regularized principal covariates regression and its application to finding coupled patterns in climate fields

    Science.gov (United States)

    Fischer, M. J.

    2014-02-01

    There are many different methods for investigating the coupling between two climate fields, which are all based on the multivariate regression model. Each different method of solving the multivariate model has its own attractive characteristics, but often the suitability of a particular method for a particular problem is not clear. Continuum regression methods search the solution space between the conventional methods and thus can find regression model subspaces that mix the attractive characteristics of the end-member subspaces. Principal covariates regression is a continuum regression method that is easily applied to climate fields and makes use of two end-members: principal components regression and redundancy analysis. In this study, principal covariates regression is extended to additionally span a third end-member (partial least squares or maximum covariance analysis). The new method, regularized principal covariates regression, has several attractive features including the following: it easily applies to problems in which the response field has missing values or is temporally sparse, it explores a wide range of model spaces, and it seeks a model subspace that will, for a set number of components, have a predictive skill that is the same or better than conventional regression methods. The new method is illustrated by applying it to the problem of predicting the southern Australian winter rainfall anomaly field using the regional atmospheric pressure anomaly field. Regularized principal covariates regression identifies four major coupled patterns in these two fields. The two leading patterns, which explain over half the variance in the rainfall field, are related to the subtropical ridge and features of the zonally asymmetric circulation.

  20. A structured sparse regression method for estimating isoform expression level from multi-sample RNA-seq data.

    Science.gov (United States)

    Zhang, L; Liu, X J

    2016-06-03

    With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, the existing expression estimation methods usually deal with each single-RNA-seq sample, and ignore that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parameter model to capture the general tendency of non-uniformity read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples, and produced more accurate isoform expression estimations, and thus more meaningful biological interpretations.

  1. Development and application of advanced methods for electronic structure calculations

    DEFF Research Database (Denmark)

    Schmidt, Per Simmendefeldt

    This thesis relates to improvements and applications of beyond-DFT methods for electronic structure calculations that are applied in computational material science. The improvements are of both technical and principal character. The well-known GW approximation is optimized for accurate calculations of electronic excitations in two-dimensional materials by exploiting exact limits of the screened Coulomb potential. This approach reduces the computational time by an order of magnitude, enabling large scale applications. The GW method is further improved by including so-called vertex corrections. This turns... For this reason, part of this thesis relates to developing and applying a new method for constructing so-called norm-conserving PAW setups that are applicable to GW calculations, by using a genetic algorithm. The effect of applying the new setups significantly affects the absolute band positions, both for bulk...

  2. Evaluation of Linear Regression Simultaneous Myoelectric Control Using Intramuscular EMG.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2016-04-01

    The objective of this study was to evaluate the ability of linear regression models to decode patterns of muscle coactivation from intramuscular electromyogram (EMG) and provide simultaneous myoelectric control of a virtual 3-DOF wrist/hand system. Performance was compared to the simultaneous control of conventional myoelectric prosthesis methods using intramuscular EMG (parallel dual-site control)-an approach that requires users to independently modulate individual muscles in the residual limb, which can be challenging for amputees. Linear regression control was evaluated in eight able-bodied subjects during a virtual Fitts' law task and was compared to performance of eight subjects using parallel dual-site control. An offline analysis also evaluated how different types of training data affected prediction accuracy of linear regression control. The two control systems demonstrated similar overall performance; however, the linear regression method demonstrated improved performance for targets requiring use of all three DOFs, whereas parallel dual-site control demonstrated improved performance for targets that required use of only one DOF. Subjects using linear regression control could more easily activate multiple DOFs simultaneously, but often experienced unintended movements when trying to isolate individual DOFs. Offline analyses also suggested that the method used to train linear regression systems may influence controllability. Linear regression myoelectric control using intramuscular EMG provided an alternative to parallel dual-site control for 3-DOF simultaneous control at the wrist and hand. The two methods demonstrated different strengths in controllability, highlighting the tradeoff between providing simultaneous control and the ability to isolate individual DOFs when desired.
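
    The decoding model itself is compact: a multi-output linear regression from EMG features to simultaneous velocities of the three DOFs. The sketch below uses random stand-ins for the intramuscular EMG features and the wrist/hand kinematics.

```python
# Multi-output linear regression decoder; EMG features are random stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(16)
emg = np.abs(rng.normal(size=(1000, 8)))               # e.g. mean-absolute-value features
W = rng.normal(size=(8, 3))
dof_vel = emg @ W + 0.05 * rng.normal(size=(1000, 3))  # 3-DOF wrist/hand velocities

decoder = LinearRegression().fit(emg[:800], dof_vel[:800])
print(decoder.score(emg[800:], dof_vel[800:]))         # predicts all DOFs at once
```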

  3. Reduced Rank Regression

    DEFF Research Database (Denmark)

    Johansen, Søren

    2008-01-01

    The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...
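
    One standard way to compute the estimator, sketched in plain linear algebra: fit OLS, then project the coefficient matrix onto the leading right singular directions of the fitted values, which gives the rank-r solution under identity error weighting. (The canonical-correlation formulation mentioned in the record is closely related.)

```python
# Reduced rank regression via OLS followed by an SVD projection; synthetic data.
import numpy as np

rng = np.random.default_rng(12)
n, p, q, r = 200, 6, 5, 2
X = rng.normal(size=(n, p))
A = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))   # true rank-2 coefficients
Y = X @ A + 0.1 * rng.normal(size=(n, q))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]             # unrestricted fit
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
B_rrr = B_ols @ Vt[:r].T @ Vt[:r]                        # project onto top-r directions
print(np.linalg.matrix_rank(B_rrr), np.abs(B_rrr - A).max())
```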

  4. Modeling Governance KB with CATPCA to Overcome Multicollinearity in the Logistic Regression

    Science.gov (United States)

    Khikmah, L.; Wijayanto, H.; Syafitri, U. D.

    2017-04-01

    A problem often encountered in logistic regression modeling is multicollinearity. Multicollinearity between explanatory variables results in biased parameter estimates, and it can also lead to errors in classification. In general, stepwise regression is used to overcome multicollinearity in regression. Another method to overcome multicollinearity, which involves all variables in the prediction, is Principal Component Analysis (PCA). However, classical PCA is only for numeric data; when the data are categorical, one method to solve the problem is Categorical Principal Component Analysis (CATPCA). The data used in this research were part of the Demographic and Population Survey Indonesia (IDHS) 2012. This research focuses on the characteristics of women using contraceptive methods. Classification results were evaluated using Area Under Curve (AUC) values; the higher the AUC value, the better. Based on AUC values, the classification of the contraceptive method using the stepwise method (58.66%) is better than the logistic regression model (57.39%) and CATPCA (57.39%). Evaluation of the results of logistic regression using sensitivity shows the opposite: the CATPCA method (99.79%) is better than the logistic regression method (92.43%) and stepwise (92.05%). Therefore, since this study focuses on the major class classification (using a contraceptive method), the selected model is CATPCA because it raises the accuracy for the major class.

  5. Projection-reduction method applied to deriving non-linear optical conductivity for an electron-impurity system

    Directory of Open Access Journals (Sweden)

    Nam Lyong Kang

    2013-07-01

    Full Text Available The projection-reduction method introduced by the present authors is known to give a validated theory for optical transitions in systems of electrons interacting with phonons. In this work, using this method, we derive the linear and first order nonlinear optical conductivities for an electron-impurity system and examine whether the expressions faithfully satisfy the quantum mechanical philosophy, in the same way as for the electron-phonon systems. The result shows that the Fermi distribution function for electrons, the energy denominators, and the electron-impurity coupling factors are contained properly, in an organized manner, along with the absorption of photons for each electron transition process in the final expressions. Furthermore, the result is shown to be represented properly by schematic diagrams, as in the formulation of the electron-phonon interaction. Therefore, in conclusion, we claim that this method can be applied in modeling optical transitions of electrons interacting with both impurities and phonons.

  6. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimate of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates, and two different error analyses of the model weight prediction, are also discussed in the appendices of the paper.
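
    A heavily simplified sketch of the two-fit idea described above, under assumed toy force and moment equations (the real balance equations are more involved; every name, sign convention and value here is illustrative, not the paper's method):

```python
import numpy as np

# Hypothetical "wind-off" data points: model pitch angles (rad) and the
# corresponding balance readings (normal force, axial force, pitching moment)
rng = np.random.default_rng(1)
theta = np.deg2rad(np.linspace(-20, 20, 9))
W_true, x_true, z_true = 50.0, 0.12, 0.03           # weight [N], CG offsets [m]
N = W_true * np.cos(theta) + rng.normal(0, 0.05, theta.size)
A = -W_true * np.sin(theta) + rng.normal(0, 0.05, theta.size)
m = W_true * (x_true * np.cos(theta) + z_true * np.sin(theta)) \
    + rng.normal(0, 0.005, theta.size)

# Fit 1: least squares estimate of the model weight from the force readings;
# stacking N and A gives an overdetermined system  [cos; -sin] * W = [N; A]
design_w = np.concatenate([np.cos(theta), -np.sin(theta)])[:, None]
W_hat, *_ = np.linalg.lstsq(design_w, np.concatenate([N, A]), rcond=None)
W_hat = W_hat[0]

# Fit 2: least squares estimate of the CG coordinates, using W_hat as input
design_cg = np.column_stack([np.cos(theta), np.sin(theta)])
xz_hat, *_ = np.linalg.lstsq(design_cg, m / W_hat, rcond=None)

print(f"weight: {W_hat:.2f} N, CG: x = {xz_hat[0]:.4f} m, z = {xz_hat[1]:.4f} m")
```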

  7. Regression filter for signal resolution

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-01-01

    The problem considered is that of resolving a measured pulse-height spectrum of a material mixture, e.g. a gamma-ray or Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to a multiple linear regression, for which a stepwise linear regression procedure was constructed. The efficiency of this method was then tested by implementing the procedure as a computer programme, which was used to unfold test spectra obtained by mixing spectra from a library of arbitrarily chosen spectra and adding a noise component. (U.K.)
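
    A small sketch of spectrum unfolding as linear regression; here non-negative least squares stands in for the record's stepwise procedure (a deliberate substitution, since constituent weights cannot be negative), and the spectral library is synthetic:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical library of three constituent spectra (columns), e.g. gamma-ray
# responses of individual nuclides, sampled over 128 pulse-height channels
channels = np.arange(128)
def peak(mu, sigma):                      # toy Gaussian "spectrum"
    return np.exp(-0.5 * ((channels - mu) / sigma) ** 2)

library = np.column_stack([peak(30, 5), peak(60, 8), peak(95, 6)])

# Measured mixture spectrum = weighted sum of library spectra plus noise
true_w = np.array([2.0, 0.5, 1.2])
rng = np.random.default_rng(7)
measured = library @ true_w + rng.normal(0, 0.02, channels.size)

# Resolve the mixture by (non-negative) linear regression: weights must be >= 0
weights, residual = nnls(library, measured)
print("estimated weights:", np.round(weights, 3), "residual:", round(residual, 3))
```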

  8. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK and geographically weighted regression Kriging (GWRK methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM, normalized difference vegetation index (NDVI, solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.
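
    A minimal regression-Kriging sketch in the spirit of MLRK, assuming the third-party pykrige package is installed and using one synthetic auxiliary predictor (DEM) instead of the paper's full predictor set; coordinates, variogram model and data are illustrative:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging   # assumes pykrige is installed
from sklearn.linear_model import LinearRegression

# Hypothetical station data: coordinates, elevation (DEM) and annual precipitation
rng = np.random.default_rng(3)
x, y = rng.uniform(0, 100, 60), rng.uniform(0, 100, 60)
dem = 500 + 10 * x + rng.normal(0, 50, 60)
precip = 300 + 0.4 * dem + 30 * np.sin(x / 15) + rng.normal(0, 10, 60)

# Step 1: linear regression of precipitation on the auxiliary predictor(s)
trend = LinearRegression().fit(dem[:, None], precip)
residuals = precip - trend.predict(dem[:, None])

# Step 2: ordinary Kriging of the regression residuals
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
res_pred, _ = ok.execute("points", np.array([50.0]), np.array([50.0]))

# Final regression-Kriging prediction = regression trend + kriged residual
dem_at_target = np.array([[1000.0]])      # DEM value at the prediction location
print(trend.predict(dem_at_target)[0] + res_pred[0])
```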

  9. Method of electron emission control in RF guns

    International Nuclear Information System (INIS)

    Khodak, I.V.; Kushnir, V.A.

    2001-01-01

    A method of controlling electron emission in an RF gun is considered. According to the main idea of the method, an additional resonance system is created in the cathode region, where the RF field strength can be varied using external pulse equipment. The additional resonance system is composed of a coaxial cavity coupled with the cylindrical cavity of the RF gun via an axial hole. Computed radiofrequency and electrodynamic performances of such a two-cavity system and results of a pilot study of the RF gun model are presented. Results of particle dynamics simulation are also described

  10. Method of electron emission control in RF guns

    CERN Document Server

    Khodak, I V

    2001-01-01

    A method of controlling electron emission in an RF gun is considered. According to the main idea of the method, an additional resonance system is created in the cathode region, where the RF field strength can be varied using external pulse equipment. The additional resonance system is composed of a coaxial cavity coupled with the cylindrical cavity of the RF gun via an axial hole. Computed radiofrequency and electrodynamic performances of such a two-cavity system and results of a pilot study of the RF gun model are presented. Results of particle dynamics simulation are also described.

  11. Application of NIRS coupled with PLS regression as a rapid, non-destructive alternative method for quantification of KBA in Boswellia sacra

    Science.gov (United States)

    Al-Harrasi, Ahmed; Rehman, Najeeb Ur; Mabood, Fazal; Albroumi, Muhammaed; Ali, Liaqat; Hussain, Javid; Hussain, Hidayat; Csuk, René; Khan, Abdul Latif; Alam, Tanveer; Alameri, Saif

    2017-09-01

    In the present study, for the first time, NIR spectroscopy coupled with PLS regression was developed as a rapid alternative method to quantify the amount of Keto-β-Boswellic Acid (KBA) in different plant parts of Boswellia sacra and in the resin exudates of the trunk. NIR spectroscopy was used to measure KBA standards and B. sacra samples in absorption mode in the wavelength range 700-2500 nm. A PLS regression model was built from the obtained spectral data using 70% of the KBA standards (training set) in the range from 0.1 ppm to 100 ppm. The PLS regression model had an R-squared value of 98% with a correlation of 0.99, and good predictive ability, with an RMSEP of 3.2 and a prediction correlation of 0.99. It was then used to quantify the amount of KBA in the samples of B. sacra. The results indicated that the MeOH extract of the resin has the highest concentration of KBA (0.6%), followed by the essential oil (0.1%); no KBA was found in the aqueous extract. The MeOH extract of the resin was subjected to column chromatography to obtain sub-fractions at different polarities of organic solvents. The sub-fraction at 4% MeOH/CHCl3 (4.1% KBA) contained the highest percentage of KBA, followed by the sub-fraction at 2% MeOH/CHCl3 (2.2% KBA). The present results also indicated that KBA is present only in the gum-resin of the trunk and not in the other parts of the plant. These results were further confirmed through HPLC analysis, and it is therefore concluded that NIRS coupled with PLS regression is a rapid, alternative method for quantification of KBA in Boswellia sacra. It is non-destructive, rapid, sensitive and uses simple methods of sample preparation.
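
    A sketch of a PLS calibration of this kind with scikit-learn, on synthetic stand-in spectra (the component count, noise level and band shapes are assumptions, not the paper's data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical NIR-like data: 50 standards, 200 wavelength channels, and a
# reference concentration that drives a few broad spectral features
rng = np.random.default_rng(5)
conc = rng.uniform(0.1, 100, 50)                      # analyte in ppm
wl = np.linspace(0, 1, 200)
spectra = (conc[:, None] * np.exp(-((wl - 0.3) / 0.05) ** 2)
           + 0.5 * conc[:, None] * np.exp(-((wl - 0.7) / 0.1) ** 2)
           + rng.normal(0, 0.5, (50, 200)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, train_size=0.7,
                                          random_state=5)

pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
y_pred = pls.predict(X_te).ravel()

rmsep = np.sqrt(mean_squared_error(y_te, y_pred))     # prediction error
print(f"R^2 = {pls.score(X_te, y_te):.3f}, RMSEP = {rmsep:.2f} ppm")
```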

  12. Online and Batch Supervised Background Estimation via L1 Regression

    KAUST Repository

    Dutta, Aritra

    2017-11-23

    We propose a surprisingly simple model for supervised video background estimation. Our model is based on $\ell_1$ regression. As existing methods for $\ell_1$ regression do not scale to high-resolution videos, we propose several simple and scalable methods for solving the problem, including iteratively reweighted least squares, a homotopy method, and stochastic gradient descent. We show through extensive experiments that our model and methods match or outperform the state-of-the-art online and batch methods in virtually all quantitative and qualitative measures.

  13. Online and Batch Supervised Background Estimation via L1 Regression

    KAUST Repository

    Dutta, Aritra; Richtarik, Peter

    2017-01-01

    We propose a surprisingly simple model for supervised video background estimation. Our model is based on $\ell_1$ regression. As existing methods for $\ell_1$ regression do not scale to high-resolution videos, we propose several simple and scalable methods for solving the problem, including iteratively reweighted least squares, a homotopy method, and stochastic gradient descent. We show through extensive experiments that our model and methods match or outperform the state-of-the-art online and batch methods in virtually all quantitative and qualitative measures.
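
    One of the solvers these records list, iteratively reweighted least squares, can be sketched in a few lines; the toy "video" below is a constant background with sparse foreground outliers (illustrative only, not the authors' implementation):

```python
import numpy as np

def l1_regression_irls(X, y, n_iter=50, eps=1e-6):
    """Minimise ||y - X b||_1 by iteratively reweighted least squares."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]          # L2 start
    for _ in range(n_iter):
        r = y - X @ b
        w = 1.0 / np.maximum(np.abs(r), eps)          # small residual -> big weight
        Xw = X * w[:, None]
        b = np.linalg.solve(X.T @ Xw, Xw.T @ y)       # weighted normal equations
    return b

# Toy "background estimation": y is a static scene with sparse outliers
rng = np.random.default_rng(9)
X = np.ones((200, 1))                                 # constant background model
y = 5.0 + rng.normal(0, 0.1, 200)
y[:20] += 10.0                                        # foreground corrupts 10% of pixels

print("L2 estimate:", y.mean())                       # pulled up by the outliers
print("L1 estimate:", l1_regression_irls(X, y)[0])    # close to the true background 5.0
```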

  14. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take into account the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K nearest neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.

  15. Survival analysis II: Cox regression

    NARCIS (Netherlands)

    Stel, Vianda S.; Dekker, Friedo W.; Tripepi, Giovanni; Zoccali, Carmine; Jager, Kitty J.

    2011-01-01

    In contrast to the Kaplan-Meier method, Cox proportional hazards regression can provide an effect estimate by quantifying the difference in survival between patient groups and can adjust for confounding effects of other variables. The purpose of this article is to explain the basic concepts of the Cox regression model.
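
    A minimal Cox regression sketch, assuming the third-party lifelines package is installed and using a hypothetical toy data set (column names and values are illustrative):

```python
import pandas as pd
from lifelines import CoxPHFitter   # assumes lifelines is installed

# Hypothetical patient data: follow-up time, event indicator and two covariates
df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 14, 7, 11, 2, 10],
    "event": [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],        # 1 = event observed
    "age":   [60, 55, 70, 65, 50, 72, 68, 58, 75, 62],
    "group": [0, 1, 0, 0, 1, 1, 0, 1, 0, 1],        # e.g. treatment arm
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios adjusted for the other covariates
```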

  16. On a method for high-energy electron beam production in proton synchrotrons

    International Nuclear Information System (INIS)

    Bessonov, E.G.; Vazdik, Ya.A.

    1979-01-01

    It is suggested to produce high-energy electron beams in such a way that ultrarelativistic protons transfer part of their kinetic energy to the electrons of a thin target placed inside the working volume of a proton synchrotron. The kinematics of the elastic scattering of relativistic protons on electrons at rest is treated. An estimate of the number of electrons elastically scattered by 1000 GeV and 3000 GeV proton beams is presented. The method under consideration is of practical interest and may prove preferable in a certain energy range of protons and electrons.

  17. Testing hypotheses for differences between linear regression lines

    Science.gov (United States)

    Stanley J. Zarnoch

    2009-01-01

    Five hypotheses are identified for testing differences between simple linear regression lines. The distinctions between these hypotheses are based on a priori assumptions and illustrated with full and reduced models. The contrast approach is presented as an easy and complete method for testing for overall differences between the regressions and for making pairwise...
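
    The full-versus-reduced-model idea can be sketched as a nested-model F-test (one common line versus separate intercepts and slopes); the data and package choice below are illustrative, and contrasts are represented here simply by the nested-model comparison:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Two hypothetical groups whose regression lines may differ
rng = np.random.default_rng(2)
n = 40
df = pd.DataFrame({
    "x": np.tile(np.linspace(0, 10, n // 2), 2),
    "group": np.repeat(["A", "B"], n // 2),
})
df["y"] = np.where(df.group == "A", 1.0 + 0.5 * df.x, 2.0 + 0.8 * df.x) \
          + rng.normal(0, 0.5, n)

reduced = smf.ols("y ~ x", df).fit()                 # one common line
full = smf.ols("y ~ x * group", df).fit()            # separate intercepts & slopes

# F-test of the reduced against the full model: is one line enough?
print(anova_lm(reduced, full))
```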

  18. Methods of measurements on incidental X-radiation from electron tubes

    International Nuclear Information System (INIS)

    1977-01-01

    The standard describes the method for detection of x-radiation and the method for the direct and indirect measurement of field pattern and exposure rate of random incidental radiation emanating from high voltage electron tubes. Required apparatus and calibration procedure for the exposure rate meter or film mount are described. (M.G.B.)

  19. SU-F-T-71: A Practical Method for Evaluation of Electron Virtual Source Position

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Z; Jiang, W; Stuart, B; Leu, S; Feng, Y [East Carolina University, Greenville, North Carolina (United States); Liu, T [Houston Methodist Hospital, Sugar Land, TX (United States)

    2016-06-15

    Purpose: Since electrons are easily scattered, the virtual source position for electrons is expected to be located below the x-ray target of medical linacs. However, the effective SSD method yields an electron virtual source position above the x-ray target for some applicators and energies in Siemens linacs. In this study, we propose to use the IC Profiler (Sun Nuclear) for evaluating the electron virtual source position for the standard electron applicators at various electron energies. Methods: Profile measurements at nominal source-to-detector distances (SDDs) of 100–115 cm were carried out for electron beam energies of 6–18 MeV. Two methods were used: one used a 0.125 cc ion chamber (PTW, Type 31010) with buildup mounted in a PTW water tank without water; the other used the IC Profiler with buildup to achieve charged particle equilibrium. The full width at half maximum (FWHM) method was used to determine the field sizes of the measured profiles. Back-projecting (by a straight line) the distance between the 50% points on the beam profiles for the various SDDs yielded the virtual source position for each applicator. Results: The profiles were obtained and the field sizes were determined by FWHM. The virtual source positions were determined through back-projection of profiles for the applicators (5, 10, 15, 20, 25). For instance, they were 96.415 cm (IC Profiler) vs 95.844 cm (scanning ion chamber) for 9 MeV electrons with the 10×10 cm applicator, and 97.160 cm vs 97.161 cm for 12 MeV electrons with the 10×10 cm applicator. The differences in the virtual source positions between the IC Profiler and the scanning ion chamber were within 1.5%. Conclusion: The IC Profiler provides a practical method for determining the electron virtual source position, and its results are consistent with those obtained from scanning ion chamber profiles with buildup.

  20. SU-F-T-71: A Practical Method for Evaluation of Electron Virtual Source Position

    International Nuclear Information System (INIS)

    Huang, Z; Jiang, W; Stuart, B; Leu, S; Feng, Y; Liu, T

    2016-01-01

    Purpose: Since electrons are easily scattered, the virtual source position for electrons is expected to be located below the x-ray target of medical linacs. However, the effective SSD method yields an electron virtual source position above the x-ray target for some applicators and energies in Siemens linacs. In this study, we propose to use the IC Profiler (Sun Nuclear) for evaluating the electron virtual source position for the standard electron applicators at various electron energies. Methods: Profile measurements at nominal source-to-detector distances (SDDs) of 100–115 cm were carried out for electron beam energies of 6–18 MeV. Two methods were used: one used a 0.125 cc ion chamber (PTW, Type 31010) with buildup mounted in a PTW water tank without water; the other used the IC Profiler with buildup to achieve charged particle equilibrium. The full width at half maximum (FWHM) method was used to determine the field sizes of the measured profiles. Back-projecting (by a straight line) the distance between the 50% points on the beam profiles for the various SDDs yielded the virtual source position for each applicator. Results: The profiles were obtained and the field sizes were determined by FWHM. The virtual source positions were determined through back-projection of profiles for the applicators (5, 10, 15, 20, 25). For instance, they were 96.415 cm (IC Profiler) vs 95.844 cm (scanning ion chamber) for 9 MeV electrons with the 10×10 cm applicator, and 97.160 cm vs 97.161 cm for 12 MeV electrons with the 10×10 cm applicator. The differences in the virtual source positions between the IC Profiler and the scanning ion chamber were within 1.5%. Conclusion: The IC Profiler provides a practical method for determining the electron virtual source position, and its results are consistent with those obtained from scanning ion chamber profiles with buildup.

  1. A Comparison of Advanced Regression Algorithms for Quantifying Urban Land Cover

    Directory of Open Access Journals (Sweden)

    Akpona Okujeni

    2014-07-01

    Full Text Available Quantitative methods for mapping sub-pixel land cover fractions are gaining increasing attention, particularly with regard to upcoming hyperspectral satellite missions. We evaluated five advanced regression algorithms combined with synthetically mixed training data for quantifying urban land cover from HyMap data at 3.6 and 9 m spatial resolution. Methods included support vector regression (SVR), kernel ridge regression (KRR), artificial neural networks (NN), random forest regression (RFR) and partial least squares regression (PLSR). Our experiments demonstrate that both kernel methods, SVR and KRR, yield high accuracies for mapping complex urban surface types, i.e., rooftops, pavements, grass- and tree-covered areas. SVR and KRR models proved to be stable with regard to the spatial and spectral differences between both images and effectively utilized the higher complexity of the synthetic training mixtures for improving estimates for coarser resolution data. Observed deficiencies mainly relate to known problems arising from spectral similarities or shadowing. The remaining regressors revealed either erratic (NN) or limited (RFR and PLSR) performance when comprehensively mapping urban land cover. Our findings suggest that the combination of kernel-based regression methods, such as SVR and KRR, with synthetically mixed training data is well suited for quantifying urban land cover from imaging spectrometer data at multiple scales.

  2. Depth-weighted robust multivariate regression with application to sparse data

    KAUST Repository

    Dutta, Subhajit; Genton, Marc G.

    2017-01-01

    A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.

  3. Depth-weighted robust multivariate regression with application to sparse data

    KAUST Repository

    Dutta, Subhajit

    2017-04-05

    A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.

  4. Limitations of the condensed history method for low-energy electrons

    International Nuclear Information System (INIS)

    Martin, W.R.; Ballinger, C.T.; Rathkopf, J.A.

    1991-01-01

    A systematic evaluation of the conventional, condensed history electron transport methodology has been performed through comparisons with more accurate single-scatter Monte Carlo calculations. These comparisons highlight the inaccuracies associated with the condensed history method and indicate its range of validity. The condensed history method is used in codes such as MCNP4, SANDYL, ETRAN, ITS, and EGS and requires a number of restrictive assumptions about the scattering characteristics to make tractable the analytical solution to the infinite-medium transport equation. Distributions describing electron characteristics after multiple collisions (multiscatter distributions) are constructed from such solutions and serve as the heart of the condensed history codes. A two-level approach is taken to quantify the errors inherent in condensed history. First, conventional condensed history multiscattering distributions in energy and angle are compared directly with analogous distributions generated with a single-scatter Monte Carlo code. This recently developed code directly simulates individual electron interactions. Second, the conventional distributions are replaced in the condensed history code by distributions constructed via a single-scatter Monte Carlo simulation

  5. The mass angular scattering power method for determining the kinetic energies of clinical electron beams

    International Nuclear Information System (INIS)

    Blais, N.; Podgorsak, E.B.

    1992-01-01

    A method for determining the kinetic energy of clinical electron beams is described, based on the measurement in air of the spatial spread of a pencil electron beam which is produced from the broad clinical electron beam. As predicted by the Fermi-Eyges theory, the dose distribution measured in air on a plane, perpendicular to the incident direction of the initial pencil electron beam, is Gaussian. The square of its spatial spread is related to the mass angular scattering power which in turn is related to the kinetic energy of the electron beam. The measured spatial spread may thus be used to determine the mass angular scattering power, which is then used to determine the kinetic energy of the electron beam from the known relationship between mass angular scattering power and kinetic energy. Energies obtained with the mass angular scattering power method agree with those obtained with the electron range method. (author)

  6. Interval ridge regression (iRR) as a fast and robust method for quantitative prediction and variable selection applied to edible oil adulteration.

    Science.gov (United States)

    Jović, Ozren; Smrečki, Neven; Popović, Zora

    2016-04-01

    A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is evaluated on six FTIR data sets, two UV-vis data sets and one DSC data set. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS); iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in 1 out of 9 cases). The adulteration of hempseed (H) oil, a well-known health-beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that, using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (R(2)>0.99 in all 8 cases). Copyright © 2015 Elsevier B.V. All rights reserved.
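
    One plausible reading of the interval-selection idea, sketched as ridge regression scored per contiguous wavelength window; the published iRR algorithm may differ in detail, and the data here are synthetic:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical spectral data: 60 samples x 300 wavelength variables, where
# only one contiguous band (variables 100-140) carries the analyte signal
rng = np.random.default_rng(4)
X = rng.normal(size=(60, 300))
y = X[:, 100:140].sum(axis=1) + rng.normal(0, 0.5, 60)

# Score a ridge model on each contiguous interval and keep the best one
width = 40
best_score, best_start = -np.inf, 0
for start in range(0, 300 - width + 1, 10):
    score = cross_val_score(Ridge(alpha=1.0), X[:, start:start + width], y,
                            cv=5).mean()
    if score > best_score:
        best_score, best_start = score, start

print(f"best interval: [{best_start}, {best_start + width}), "
      f"CV R^2 = {best_score:.3f}")
```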

  7. Synthesis method for use in the design of an electron gun for a gyrotron

    International Nuclear Information System (INIS)

    Silva, C.A.B.

    1987-09-01

    In this work a synthesis method is applied to the design of an electron gun for a 94 GHz gyrotron. Using the synthesis method, the shape of the electrodes compatible with laminar flow is found, which minimizes the effect of space charge on the electron velocity dispersion. A systematic procedure is presented to find the parameters of the synthesis method, which, in turn, are closely related to the characteristics of the electron-optical system. (author) [pt

  8. Color electron microprobe cathodoluminescence of Bishunpur meteorite compared with the traditional optical microscopy method

    Directory of Open Access Journals (Sweden)

    Amanda Araujo Tosi

    Full Text Available Abstract Cathodoluminescence (CL imaging is an outstanding method for the sub-classification of Unequilibrated Ordinary Chondrites (UOC - petrological type 3. CL can be obtained with several electron beam apparatuses. The traditional method uses an electron gun coupled to an optical microscope (OM. Although many scanning electron microscopes (SEM and electron microprobes (EPMA have been equipped with cathodoluminescence detectors, this technique has not been fully explored. Images obtained by the two methods differ because of the different kinds of signal acquisition: while CL-OM optical photography yields true colors, CL-EPMA produces grayscale monochromatic electronic signals. L-RGB filters were therefore used in the CL-EPMA analysis in order to obtain color data. The aim of this work is to compare cathodoluminescence data obtained by the two techniques, optical microscope and electron microprobe, on the Bishunpur meteorite, classified as an LL3.1 chondrite. The present study concludes that 20 keV and 7 nA are the best EPMA analytical conditions for testing the equivalence between CL-EPMA and CL-OM color results. Moreover, the color index proved to be an aid in studying thermal metamorphism, but it is not definitive for meteorite classification.

  9. METHOD OF ELECTRON BEAM PROCESSING

    DEFF Research Database (Denmark)

    2003-01-01

    As a rule, electron beam welding takes place in a vacuum. However, this means that the workpieces in question have to be placed in a vacuum chamber and have to be removed therefrom after welding. This is time-consuming and a serious limitation of a process whose greatest advantage is the option of welding workpieces of large thicknesses. Therefore the idea is to guide the electron beam (2) to the workpiece via a hollow wire, said wire thereby acting as a prolongation of the vacuum chamber (4) down to the workpiece. Thus, a workpiece need not be placed inside the vacuum chamber, thereby exploiting the potential of electron beam processing to a greater degree than previously possible, for example by means of electron beam welding.

  10. A new method of testing space-based high-energy electron detectors with radioactive electron sources

    Science.gov (United States)

    Zhang, S. Y.; Shen, G. H.; Sun, Y.; Zhou, D. Z.; Zhang, X. X.; Li, J. W.; Huang, C.; Zhang, X. G.; Dong, Y. J.; Zhang, W. J.; Zhang, B. Q.; Shi, C. Y.

    2016-05-01

    Space-based electron detectors are commonly tested using radioactive β-sources, which emit a continuous spectrum without spectral lines; the tests are therefore often considered only qualitative. This paper introduces a method which yields more than a qualitative test even when using a β-source. The basic idea is to use the simulated response function of the instrument to invert the measured spectrum and compare this inverted spectrum with a reference spectrum obtained from the same source. Here we have used Geant4 to simulate the instrument response function (IRF) and a 3.5 mm thick Li-drifted Si detector to obtain the reference 90Sr/90Y source spectrum, in order to test and verify the geometric factors of the Omni-Direction Particle Detector (ODPD) on the Tiangong-1 (TG-1) and Tiangong-2 (TG-2) spacecraft. The TG spacecraft are experimental space laboratories and prototypes of the Chinese space station. The excellent agreement between the measured and reference spectra demonstrates that this test method can be used to quantitatively assess the quality of the instrument. Due to its simplicity, the method is faster and therefore more efficient than traditional full calibrations using an electron accelerator.

  11. Reflexion on linear regression trip production modelling method for ensuring good model quality

    Science.gov (United States)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, for which having a good trip production model is crucial. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. These principles are not yet well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The results are as follows. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value, and that the sample composition can significantly change the model. Hence, a good R2 value does not, in fact, always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must use random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be developed and tested further.
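
    The confidence interval of the predicted value mentioned above can be computed directly for ordinary least squares; a sketch with statsmodels on hypothetical trip data (variable names and values are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trip production data: household size vs. trips per day
rng = np.random.default_rng(6)
df = pd.DataFrame({"hh_size": rng.integers(1, 7, 50)})
df["trips"] = 1.2 + 0.9 * df.hh_size + rng.normal(0, 0.8, 50)

model = smf.ols("trips ~ hh_size", df).fit()
print(f"R^2 = {model.rsquared:.3f}")

# Confidence interval of the predicted value at the 95% confidence level:
# the 'obs_ci_*' columns bound the span a new observation should fall in
new = pd.DataFrame({"hh_size": [4]})
pred = model.get_prediction(new).summary_frame(alpha=0.05)
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])
```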

  12. Regression Phalanxes

    OpenAIRE

    Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

    2017-01-01

    Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

  13. Simultaneous chemometric determination of pyridoxine hydrochloride and isoniazid in tablets by multivariate regression methods.

    Science.gov (United States)

    Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru

    2010-08-01

    The sole use of pyridoxine hydrochloride during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage forms for tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising binary mixtures of PYR and ISO in 20 different combinations was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data set (concentration data matrix) and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were between 100.0 and 100.7%. Satisfactory results were obtained by applying the PLS and PCR methods to both artificial and commercial samples. These results strongly encourage the use of the methods for quality control and routine analysis of marketed tablets containing the PYR and ISO drugs. Copyright © 2010 John Wiley & Sons, Ltd.

  14. Method for the determination of the three-dimensional structure of ultrashort relativistic electron bunches

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca; Ilinski, Petr; Saldin, Evgeni; Schneidmiller, Evgeni; Yurkov, Mikhail

    2009-05-15

    We describe a novel technique to characterize ultrashort electron bunches in X-ray free-electron lasers. Namely, we propose to use coherent optical transition radiation to measure three-dimensional (3D) electron density distributions. Our method relies on the combination of two known diagnostic setups, an Optical Replica Synthesizer (ORS) and an Optical Transition Radiation (OTR) imager. Electron bunches are modulated at optical wavelengths in the ORS setup. When these electron bunches pass through a metal foil target, coherent radiation pulses with tens of MW of power are generated. It is thereafter possible to exploit the advantages of coherent imaging techniques, such as direct imaging, diffractive imaging, Fourier holography and their combinations. The proposed method opens up the possibility of real-time, wavelength-limited, single-shot 3D imaging of an ultrashort electron bunch. (orig.)

  15. A simplified spherical harmonic method for coupled electron-photon transport calculations

    International Nuclear Information System (INIS)

    Josef, J.A.

    1996-12-01

    In this thesis we have developed a simplified spherical harmonic method (the SP_N method) and associated efficient solution techniques for 2-D multigroup electron-photon transport calculations. The SP_N method has never before been applied to charged-particle transport. We have performed a first-ever Fourier analysis of the source iteration scheme and the P_1 diffusion synthetic acceleration (DSA) scheme applied to the 2-D SP_N equations. Our theoretical analyses indicate that the source iteration and P_1 DSA schemes are as effective for the 2-D SP_N equations as for the 1-D S_N equations. Previous analyses have indicated that the P_1 DSA scheme is unstable (with sufficiently forward-peaked scattering and sufficiently small absorption) for the 2-D S_N equations, yet is very effective for the 1-D S_N equations. In addition, we have applied an angular multigrid acceleration scheme, and computationally demonstrated that it performs as well for the 2-D SP_N equations as for the 1-D S_N equations. It has previously been shown for 1-D S_N calculations that this scheme is much more effective than the DSA scheme when scattering is highly forward-peaked. We have investigated the applicability of the SP_N approximation to two different physical classes of problems: satellite electronics shielding from geomagnetically trapped electrons, and electron beam problems. In the space shielding study, the SP_N method produced solutions that are accurate within 10% of the benchmark Monte Carlo solutions, and often orders of magnitude faster than Monte Carlo. We have successfully modeled quasi-void problems and have obtained excellent agreement with Monte Carlo. We have observed that the SP_N method appears to be too diffusive an approximation for beam problems. This result, however, is in agreement with theoretical expectations.

  16. Alternative regression models to assess increase in childhood BMI

    OpenAIRE

    Beyerlein, Andreas; Fahrmeir, Ludwig; Mansmann, Ulrich; Toschke, André M

    2008-01-01

    Abstract Background Body mass index (BMI) data usually have skewed distributions, for which common statistical modeling approaches such as simple linear or logistic regression have limitations. Methods Different regression approaches to predict childhood BMI were compared by goodness-of-fit measures and means of interpretation, including generalized linear models (GLMs), quantile regression and Generalized Additive Models for Location, Scale and Shape (GAMLSS). We analyzed data of 4967 childre...
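
    A quantile regression sketch for skewed outcomes of this kind, using statsmodels on synthetic BMI-like data (the model form and data are assumptions, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical skewed BMI-like data: the spread grows with age
rng = np.random.default_rng(8)
age = rng.uniform(2, 18, 300)
bmi = 15 + 0.3 * age + rng.gamma(shape=2, scale=0.5 + 0.1 * age)
df = pd.DataFrame({"age": age, "bmi": bmi})

# Median and upper-tail regression: quantile regression needs no normality
for q in (0.5, 0.9):
    fit = smf.quantreg("bmi ~ age", df).fit(q=q)
    print(f"q = {q}: BMI = {fit.params['Intercept']:.2f} "
          f"+ {fit.params['age']:.3f} * age")
```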

  17. The Use of Alternative Regression Methods in Social Sciences and the Comparison of Least Squares and M Estimation Methods in Terms of the Determination of Coefficient

    Science.gov (United States)

    Coskuntuncel, Orkun

    2013-01-01

    The purpose of this study is two-fold; the first aim is to show the effect of outliers on the widely used least squares regression estimator in the social sciences. The second aim is to compare the classical method of least squares with the robust M-estimator using the coefficient of determination (R²). For this purpose,…

  18. Application of the CTOF method to detect secondary charged particles from 2 GeV electrons

    International Nuclear Information System (INIS)

    Takahashi, Kazutoshi; Sanami, Toshiya; Ban, Syuichi; Lee, Hee-Seok; Sato, Tatsuhiko

    2002-01-01

    To design shielding and evaluate leakage radiation at high-energy electron accelerators, energy and angular data on the secondary particles produced by reactions of electrons with structural materials are required. Secondary neutron spectra from structural materials have been measured using the electron accelerator at PAL (Pohang Accelerator Laboratory). In the neutron measurements, electronics with a multi-hit TDC (MHTDC) were adopted to measure the time of flight (TOF) of every particle emitted from the reactions induced by each single electron bunch. The measurements were then extended to secondary charged particles. For charged-particle measurements, pulse-height data for every particle are indispensable to distinguish the charged particles by the ΔE-E method, so a new system that can measure the pulse height of every particle was required instead of the MHTDC system. To meet this requirement, a method that records the output current from the detectors with a digital storage oscilloscope was developed; it is named the ''current time of flight'' (CTOF) method. The CTOF method can measure the pulse height and TOF of every particle produced by a single electron bunch. Electrons were accelerated to 2.04 GeV at a repetition rate of 10 Hz and bombarded thin disk samples of Cu (1 mm), Al (4 mm) and W (0.5 mm). Secondary charged particles, protons and deuterons, were produced in the samples by photonuclear reactions. The two-dimensional ΔE-E spectrum measured by CTOF for each sample shows perfect separation between protons and deuterons; proton and deuteron spectra were thus obtained from these data. (M. Suetake)

  19. Method of synthesizing small-diameter carbon nanotubes with electron field emission properties

    Science.gov (United States)

    Liu, Jie (Inventor); Du, Chunsheng (Inventor); Qian, Cheng (Inventor); Gao, Bo (Inventor); Qiu, Qi (Inventor); Zhou, Otto Z. (Inventor)

    2009-01-01

    Carbon nanotube material having an outer diameter less than 10 nm and fewer than ten walls is disclosed. Also disclosed is an electron field emission device including a substrate, an optional adhesion-promoting layer, and a layer of electron field emission material. The electron field emission material includes a carbon nanotube having from two to ten concentric graphene shells per tube, an outer diameter from 2 to 8 nm, and a nanotube length greater than 0.1 microns. One method to fabricate carbon nanotubes includes the steps of (a) producing a catalyst containing Fe and Mo supported on MgO powder, (b) using a mixture of hydrogen and carbon-containing gas as precursors, and (c) heating the catalyst to a temperature above 950 °C to produce a carbon nanotube. Another method, of fabricating an electron field emission cathode, includes the steps of (a) synthesizing electron field emission materials containing carbon nanotubes with from two to ten concentric graphene shells per tube, an outer diameter of from 2 to 8 nm, and a length greater than 0.1 microns, (b) dispersing the electron field emission material in a suitable solvent, (c) depositing the electron field emission materials onto a substrate, and (d) annealing the substrate.

  20. Empirical estimation of the grades of hearing impairment among industrial workers based on new artificial neural networks and classical regression methods.

    Science.gov (United States)

    Farhadian, Maryam; Aliabadi, Mohsen; Darvishi, Ebrahim

    2015-01-01

    Prediction models are used in a variety of medical domains, and they are frequently built from experience, i.e. data acquired from actual cases. This study aimed to analyze the potential of artificial neural networks and logistic regression techniques for estimation of hearing impairment among industrial workers. A total of 210 workers employed in a steel factory (in western Iran) were selected, and their occupational exposure histories were analyzed. The hearing loss thresholds of the studied workers were determined using a calibrated audiometer. The personal noise exposures were also measured using a noise dosimeter in the workstations. Data on five variables that can influence hearing loss were used as input features, and the hearing loss thresholds were considered the target feature of the prediction methods. Multilayer feedforward neural networks and logistic regression were developed using MATLAB R2011a software. Based on the World Health Organization classification for the grades of hearing loss, 74.2% of the studied workers had normal hearing thresholds, 23.4% had slight hearing loss, and 2.4% had moderate hearing loss. The accuracy and kappa coefficient of the best developed neural network for prediction of the grades of hearing loss were 88.6 and 66.30, respectively; the accuracy and kappa coefficient of the logistic regression were 84.28 and 51.30, respectively. Neural networks could provide more accurate predictions of hearing loss than logistic regression. The prediction method can provide reliable and comprehensible information for occupational health and medicine experts.
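
    A sketch of the neural-network-versus-logistic-regression comparison on synthetic stand-in data (the study's real exposure variables and grade labels are not reproduced; here the task is reduced to a binary normal/impaired label, and all settings are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for five exposure-history predictors and a binary label
X, y = make_classification(n_samples=210, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(10,),
                                                  max_iter=2000,
                                                  random_state=0)),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy = {accuracy_score(y_te, pred):.3f}, "
          f"kappa = {cohen_kappa_score(y_te, pred):.3f}")
```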

  1. Apparatus and method for generating high density pulses of electrons

    International Nuclear Information System (INIS)

    Lee, C.; Oettinger, P.E.

    1981-01-01

    An apparatus and method are described for the production of high density pulses of electrons using a laser-energized emitter. Caesium atoms from a low pressure vapour atmosphere are adsorbed on, and migrate from, a metallic target rapidly heated by a laser to a high temperature. Because this heating time is short compared with the residence time of the caesium atoms adsorbed on the target surface, copious electrons are emitted, forming a high current density pulse. (U.K.)

  2. Model-based Quantile Regression for Discrete Data

    KAUST Repository

    Padellini, Tullia

    2018-04-10

    Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework, quantile regression has typically been carried out exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is affected by an unidentifiable parameter, hence any inferential procedure besides point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We extend it also to the case of discrete responses, where there is no 1-to-1 relationship between quantiles and the distribution's parameter, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.

  3. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
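
    A multiple linear regression sketch with an interaction term and a variance inflation factor check for multicollinearity, two of the concepts discussed above (data, variable names and effect sizes are synthetic assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical outcome with two correlated predictors and one interaction
rng = np.random.default_rng(10)
n = 120
age = rng.uniform(20, 80, n)
weight = 50 + 0.4 * age + rng.normal(0, 8, n)        # correlated with age
dose = rng.uniform(0, 10, n)
outcome = 5 + 0.2 * age + 0.1 * weight + 0.5 * dose \
          + 0.05 * dose * age + rng.normal(0, 2, n)
df = pd.DataFrame({"age": age, "weight": weight, "dose": dose, "y": outcome})

# Multiple regression with an interaction term
fit = smf.ols("y ~ age + weight + dose + dose:age", df).fit()
print(fit.summary())

# Multicollinearity check: variance inflation factors for the main effects
X = df[["age", "weight", "dose"]].assign(const=1.0).values
for i, name in enumerate(["age", "weight", "dose"]):
    print(name, "VIF =", round(variance_inflation_factor(X, i), 2))
```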

  4. Score Normalization using Logistic Regression with Expected Parameters

    NARCIS (Netherlands)

    Aly, Robin

    State-of-the-art score normalization methods use generative models that rely on sometimes unrealistic assumptions. We propose a novel parameter estimation method for score normalization based on logistic regression. Experiments on the Gov2 and CluewebA collection indicate that our method is

  5. Electron backscatter diffraction as a useful method for alloys microstructure characterization

    Energy Technology Data Exchange (ETDEWEB)

    Klimek, Leszek; Pietrzyk, Bozena

    2004-11-17

    Microstructure examination of a cast Co-Cr-Mo alloy is presented in this paper. The surface morphology and chemical composition of the alloy were investigated by means of scanning electron microscopy (SEM) and energy dispersive X-ray microanalysis (EDX). Identification of the alloy phases was carried out using electron backscatter diffraction (EBSD). Two different kinds of precipitates were found in the metallic matrix; they were identified as MC and M23C6 type carbides in a Co-lattice solid solution. The advantages and limits of the EBSD method are described. EBSD is shown to be an excellent tool for phase identification and a valuable supplementary method for materials research.

  6. Multiple Response Regression for Gaussian Mixture Models with Known Labels.

    Science.gov (United States)

    Lee, Wonyul; Du, Ying; Sun, Wei; Hayes, D Neil; Liu, Yufeng

    2012-12-01

    Multiple response regression is a useful regression technique to model multiple response variables using the same set of predictor variables. Most existing methods for multiple response regression are designed for modeling homogeneous data. In many applications, however, one may have heterogeneous data where the samples are divided into multiple groups. Our motivating example is a cancer dataset where the samples belong to multiple cancer subtypes. In this paper, we consider modeling the data coming from a mixture of several Gaussian distributions with known group labels. A naive approach is to split the data into several groups according to the labels and model each group separately. Although it is simple, this approach ignores potential common structures across different groups. We propose new penalized methods to model all groups jointly in which the common and unique structures can be identified. The proposed methods estimate the regression coefficient matrix, as well as the conditional inverse covariance matrix of response variables. Asymptotic properties of the proposed methods are explored. Through numerical examples, we demonstrate that both estimation and prediction can be improved by modeling all groups jointly using the proposed methods. An application to a glioblastoma cancer dataset reveals some interesting common and unique gene relationships across different cancer subtypes.

  7. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

    Science.gov (United States)

    Li, Jiangtong; Luo, Yongdao; Dai, Honglin

    2018-01-01

    Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution is becoming more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) is becoming the predominant technology; however, in some special cases PLSR produces considerable errors. In order to solve this problem, the traditional principal component regression (PCR) analysis method was improved in this paper using the principle of PLSR. The experimental results show that for some special experimental data sets the improved PCR analysis method performs better than PLSR. PCR and PLSR are the focus of this paper. Firstly, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data information, are extracted using the principle of PLSR. Secondly, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral data set is calculated by both PLSR and improved PCR and the two results are analyzed and compared: improved PCR and PLSR are similar for most data, but improved PCR is better than PLSR for data near the detection limit. Both PLSR and improved PCR can be used in UV spectral analysis of water, but for data near the detection limit the improved PCR results are better than those of PLSR.
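
    A compact principal component regression sketch (PCA for dimensionality reduction, then linear regression on the retained components) on synthetic UV-like spectra; this shows the generic PCR idea, not the paper's improved variant:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical UV absorbance spectra (80 samples x 150 wavelengths) whose
# signal is driven by the concentration of a single pollutant
rng = np.random.default_rng(11)
conc = rng.uniform(0, 5, 80)
wl = np.linspace(200, 350, 150)
spectra = conc[:, None] * np.exp(-((wl - 260) / 20) ** 2) \
          + rng.normal(0, 0.05, (80, 150))

# Principal component regression: PCA first, then OLS on the components
pcr = make_pipeline(PCA(n_components=4), LinearRegression()).fit(spectra, conc)
print(f"calibration R^2 = {pcr.score(spectra, conc):.4f}")
```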

  8. Regression to Causality : Regression-style presentation influences causal attribution

    DEFF Research Database (Denmark)

    Bordacconi, Mats Joe; Larsen, Martin Vinæs

    2014-01-01

    Our experiment implies that scholars using regression models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally.

  9. Determination of benzo(a)pyrene content in PM10 using regression methods

    Directory of Open Access Journals (Sweden)

    Jacek Gębicki

    2015-12-01

    Full Text Available The paper presents an application of multidimensional linear regression to the estimation of an empirical model describing the factors influencing B(a)P content in suspended dust PM10 in the Olsztyn and Elbląg city regions between 2010 and 2013. During this period the annual average concentration of B(a)P in PM10 exceeded the admissible level 1.5-3 times. The investigations confirm that the reasons for the increase in B(a)P concentration are low-efficiency individual home heating stations or low-temperature heat sources, which are responsible for so-called low emission during the heating period. Dependences between the following quantities were analysed: concentration of PM10 dust in air, air temperature, wind velocity, and air humidity. The measure of model fit to the actual B(a)P concentration in PM10 was the coefficient of determination of the model. Application of multidimensional linear regression yielded equations characterized by high values of the coefficient of determination, especially during the heating season. This parameter ranged from 0.54 to 0.80 during the analyzed period.

  10. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
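
    The EMD step aside, the autoregressive SVR core of such a model can be sketched as SVR on lagged load values; all data, lag counts and hyperparameters below are illustrative assumptions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical half-hourly load series with a daily cycle plus noise
rng = np.random.default_rng(12)
t = np.arange(1000)
load = 100 + 20 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 2, 1000)

# Autoregressive feature construction: predict load[t] from the last 48 lags
lags = 48
X = np.array([load[i:i + lags] for i in range(len(load) - lags)])
y = load[lags:]
X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]

svr = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)).fit(X_tr, y_tr)
print(f"test R^2 = {svr.score(X_te, y_te):.3f}")
```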

  11. Length-weight regressions of the microcrustacean species from a tropical floodplain

    Directory of Open Access Journals (Sweden)

    Fábio de Azevedo

    2012-03-01

    Full Text Available AIM: This study presents length-weight regressions adjusted for the most representative microcrustacean species and young stages of copepods from tropical lakes, together with a comparison of these results with estimates from the literature for tropical and temperate regions; METHODS: Samples were taken from six isolated lakes, in summer and winter, using a motorized pump and plankton net. The dry weight of each size class (for cladocerans) or developmental stage (for copepods) was measured using an electronic microbalance; RESULTS: The adjusted regressions were significant. We observed a trend of under-estimating the weights of smaller species and over-estimating those of larger species when using regressions obtained from temperate regions; CONCLUSION: We must be cautious about using pooled regressions from the literature, preferring models of similar species, or weighing the organisms and building new models.
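
    Length-weight regressions such as these are conventionally fitted on the log-log scale, where the power law W = a·L^b becomes an ordinary linear regression; a minimal sketch on synthetic measurements (coefficients are illustrative):

```python
import numpy as np

# Hypothetical cladoceran measurements: body length (mm) and dry weight (ug)
rng = np.random.default_rng(13)
length = rng.uniform(0.3, 1.5, 50)
a_true, b_true = 6.0, 2.7                       # W = a * L^b
weight = a_true * length ** b_true * np.exp(rng.normal(0, 0.1, 50))

# ln W = ln a + b ln L  is fitted as an ordinary linear regression
b, ln_a = np.polyfit(np.log(length), np.log(weight), deg=1)
print(f"W = {np.exp(ln_a):.2f} * L^{b:.2f}")
```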

  12. A new method of testing space-based high-energy electron detectors with radioactive electron sources

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, S.Y. [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Shen, G.H., E-mail: shgh@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Sun, Y., E-mail: sunying@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Zhou, D.Z., E-mail: dazhuang.zhou@gmail.com [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Zhang, X.X., E-mail: xxzhang@cma.gov.cn [National Center for Space Weather, Beijing (China); Li, J.W., E-mail: lijw@cma.gov.cn [National Center for Space Weather, Beijing (China); Huang, C., E-mail: huangc@cma.gov.cn [National Center for Space Weather, Beijing (China); Zhang, X.G., E-mail: zhangxg@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Dong, Y.J., E-mail: dyj@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Zhang, W.J., E-mail: zhangreatest@163.com [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Zhang, B.Q., E-mail: zhangbinquan@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Shi, C.Y., E-mail: scy@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China)

    2016-05-01

    Space-based electron detectors are commonly tested using radioactive β-sources, which emit a continuous spectrum without spectral lines; such tests are therefore often considered only qualitative. This paper introduces a method that yields more than a qualitative test even when using a β-source. The basic idea is to use the simulated response function of the instrument to invert the measured spectrum and compare this inverted spectrum with a reference spectrum obtained from the same source. Here we have used Geant4 to simulate the instrument response function (IRF) and a 3.5 mm thick Li-drifted Si detector to obtain the reference ⁹⁰Sr/⁹⁰Y source spectrum to test and verify the geometric factors of the Omni-Direction Particle Detector (ODPD) on the Tiangong-1 (TG-1) and Tiangong-2 (TG-2) spacecraft. The TG spacecraft are experimental space laboratories and prototypes of the Chinese space station. The excellent agreement between the measured and reference spectra demonstrates that this test method can be used to quantitatively assess the quality of the instrument. Due to its simplicity, the method is faster and therefore more efficient than traditional full calibrations using an electron accelerator.
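
    The sketch below illustrates the core idea with invented numbers: build a toy Gaussian response matrix in place of the Geant4-computed IRF, fold a beta-like continuum through it, and unfold the measured spectrum by non-negative least squares for comparison with the reference.

    ```python
    # Hedged illustration of response-matrix inversion (not the paper's IRF).
    import numpy as np
    from scipy.optimize import nnls

    n = 60                                    # energy bins
    E = np.linspace(0.1, 2.5, n)              # MeV
    # Toy response: Gaussian smearing with energy-dependent width
    R = np.array([np.exp(-0.5 * ((E - Ej) / (0.03 + 0.05 * Ej)) ** 2) for Ej in E]).T
    R /= R.sum(axis=0)                        # normalise each incident-energy column

    s_true = np.exp(-E / 0.8) * E             # beta-like continuum (toy 90Sr/90Y shape)
    m = R @ s_true + np.random.default_rng(2).normal(0, 1e-3, n)  # "measured" spectrum

    s_unfolded, _ = nnls(R, m)                # non-negative least-squares inversion
    print("max relative deviation:", np.max(np.abs(s_unfolded - s_true)) / s_true.max())
    ```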

  13. A new method of testing space-based high-energy electron detectors with radioactive electron sources

    International Nuclear Information System (INIS)

    Zhang, S.Y.; Shen, G.H.; Sun, Y.; Zhou, D.Z.; Zhang, X.X.; Li, J.W.; Huang, C.; Zhang, X.G.; Dong, Y.J.; Zhang, W.J.; Zhang, B.Q.; Shi, C.Y.

    2016-01-01

    Space-based electron detectors are commonly tested using radioactive β-sources, which emit a continuous spectrum without spectral lines; such tests are therefore often considered only qualitative. This paper introduces a method that yields more than a qualitative test even when using a β-source. The basic idea is to use the simulated response function of the instrument to invert the measured spectrum and compare this inverted spectrum with a reference spectrum obtained from the same source. Here we have used Geant4 to simulate the instrument response function (IRF) and a 3.5 mm thick Li-drifted Si detector to obtain the reference ⁹⁰Sr/⁹⁰Y source spectrum to test and verify the geometric factors of the Omni-Direction Particle Detector (ODPD) on the Tiangong-1 (TG-1) and Tiangong-2 (TG-2) spacecraft. The TG spacecraft are experimental space laboratories and prototypes of the Chinese space station. The excellent agreement between the measured and reference spectra demonstrates that this test method can be used to quantitatively assess the quality of the instrument. Due to its simplicity, the method is faster and therefore more efficient than traditional full calibrations using an electron accelerator.

  14. Estimation of the Seemingly Unrelated Regression (SUR) Model with the Generalized Least Squares (GLS) Method

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2015-04-01

    Full Text Available Regression analysis is a statistical tool used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A common way to obtain estimates in regression analysis is the ordinary least squares (OLS) method; however, OLS estimates each equation separately and does not allow for correlation among the errors across equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which parameters are estimated using Generalized Least Squares (GLS). In this study, the authors apply the SUR model with GLS estimation to world gasoline demand data and find that SUR with GLS outperforms OLS, producing smaller errors.
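
    A minimal feasible-GLS sketch of SUR estimation is given below on a synthetic two-equation system (not the gasoline demand data): equation-by-equation OLS residuals provide the cross-equation error covariance, which then enters the GLS solve on the stacked system.

    ```python
    # Feasible GLS for a two-equation SUR system (synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
    X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
    # Correlated errors across the two equations (the case SUR is built for)
    errs = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=n)
    y1 = X1 @ np.array([1.0, 2.0]) + errs[:, 0]
    y2 = X2 @ np.array([0.5, -1.0]) + errs[:, 1]

    # Stage 1: equation-by-equation OLS residuals -> estimated Sigma
    b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
    b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
    Sigma = np.cov(np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2]), rowvar=False)

    # Stage 2: GLS on the stacked system with Omega = Sigma (x) I_n
    X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
    y = np.concatenate([y1, y2])
    W = np.kron(np.linalg.inv(Sigma), np.eye(n))   # Omega^{-1}
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    print("SUR-GLS estimates:", beta)               # ~ [1, 2, 0.5, -1]
    ```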

  15. Estimation of the Seemingly Unrelated Regression (SUR) Model with the Generalized Least Squares (GLS) Method

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2014-06-01

    Full Text Available Regression analysis is a statistical tool used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A common way to obtain estimates in regression analysis is the ordinary least squares (OLS) method; however, OLS estimates each equation separately and does not allow for correlation among the errors across equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which parameters are estimated using Generalized Least Squares (GLS). In this study, the authors apply the SUR model with GLS estimation to world gasoline demand data and find that SUR with GLS outperforms OLS, producing smaller errors.

  16. Krylov subspace method for evaluating the self-energy matrices in electron transport calculations

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg; Hansen, Per Christian; Petersen, D. E.

    2008-01-01

    We present a Krylov subspace method for evaluating the self-energy matrices used in the Green's function formulation of electron transport in nanoscale devices. A procedure based on the Arnoldi method is employed to obtain solutions of the quadratic eigenvalue problem associated with the infinite […] calculations. Numerical tests within a density functional theory framework are provided to validate the accuracy and robustness of the proposed method, which in most cases is an order of magnitude faster than conventional methods.

  17. Quantum chemistry the development of ab initio methods in molecular electronic structure theory

    CERN Document Server

    Schaefer III, Henry F

    2004-01-01

    This guide is guaranteed to prove of keen interest to the broad spectrum of experimental chemists who use electronic structure theory to assist in the interpretation of their laboratory findings. A list of 150 landmark papers in ab initio molecular electronic structure methods, it features the first page of each paper (which usually encompasses the abstract and introduction). Its primary focus is methodology, rather than the examination of particular chemical problems, and the selected papers either present new and important methods or illustrate the effectiveness of existing methods in predi

  18. Identification of Influential Points in a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jan Grosz

    2011-03-01

    Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods of detecting outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. The paper also briefly describes theoretical aspects of several robust methods; robust statistics is a powerful tool for increasing the reliability and accuracy of statistical modelling and data analysis. A simulation model of simple linear regression is presented.
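
    As a brief illustration of such diagnostics (assuming the statsmodels API), the sketch below computes hat-matrix leverages and Cook's distances for a simple linear regression with one planted influential point.

    ```python
    # Leverage and Cook's distance diagnostics on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    x = rng.uniform(0, 10, 50)
    y = 2.0 + 0.5 * x + rng.normal(0, 0.5, 50)
    x[0], y[0] = 25.0, 5.0          # plant one influential point

    results = sm.OLS(y, sm.add_constant(x)).fit()
    infl = results.get_influence()
    leverage = infl.hat_matrix_diag           # diagonal of the hat matrix
    cooks_d = infl.cooks_distance[0]
    print("suspect points:", np.where(cooks_d > 4 / len(x))[0])  # common cutoff
    ```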

  19. Towards seamlessly-integrated textile electronics: methods to coat fabrics and fibers with conducting polymers for electronic applications.

    Science.gov (United States)

    Allison, Linden; Hoxie, Steven; Andrew, Trisha L

    2017-06-29

    Traditional textile materials can be transformed into functional electronic components upon being dyed or coated with films of intrinsically conducting polymers, such as poly(aniline), poly(pyrrole) and poly(3,4-ethylenedioxythiophene). A variety of textile electronic devices are built from the conductive fibers and fabrics thus obtained, including physiochemical sensors, thermoelectric fibers/fabrics, heated garments, artificial muscles and textile supercapacitors. In all these cases, electrical performance and device ruggedness are determined by the morphology of the conducting polymer active layer on the fiber or fabric substrate. Tremendous variation in active layer morphology can be observed with different coating or dyeing conditions. Here, we summarize various methods used to create fiber- and fabric-based devices and highlight the influence of the coating method on active layer morphology and device stability.

  20. Comparison of optimization methods for electronic-structure calculations

    International Nuclear Information System (INIS)

    Garner, J.; Das, S.G.; Min, B.I.; Woodward, C.; Benedek, R.

    1989-01-01

    The performance of several local-optimization methods for calculating electronic structure is compared. The fictitious first-order equation of motion proposed by Williams and Soler is integrated numerically by three procedures: simple finite-difference integration, approximate analytical integration (the Williams-Soler algorithm), and the Born perturbation series. These techniques are applied to a model problem for which exact solutions are known, the Mathieu equation. The Williams-Soler algorithm and the second Born approximation converge equally rapidly, but the former involves considerably less computational effort and gives a more accurate converged solution. Application of the method of conjugate gradients to the Mathieu equation is discussed

  1. A modified method of calculating the lateral build-up ratio for small electron fields

    International Nuclear Information System (INIS)

    Tyner, E; McCavana, P; McClean, B

    2006-01-01

    This note outlines an improved method of calculating dose per monitor unit values for small electron fields using Khan's lateral build-up ratio (LBR). The modified method obtains the LBR directly from the ratio of measured, surface-normalized, electron beam percentage depth dose curves, and thus more accurately accounts for the change in lateral scatter with decreasing field size. The LBR is used along with Khan's dose per monitor unit formula to calculate dose per monitor unit values for a set of small fields. These calculated values agree with measured values to within 3.5% for all circular fields and electron energies examined. The modified method was further tested using a small triangular field, where a maximum difference of 4.8% was found. (note)
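
    The computational core of the modified method, as described, reduces to a ratio of surface-normalised PDD curves; the sketch below uses synthetic curves purely for illustration.

    ```python
    # Toy LBR calculation from surface-normalised PDD curves.
    import numpy as np

    depth = np.linspace(0, 5, 51)                            # depth [cm]
    pdd_broad = 100 * np.exp(-((depth - 2.0) / 1.5) ** 2)    # toy broad-field PDD
    pdd_small = 85 * np.exp(-((depth - 1.7) / 1.4) ** 2)     # toy small-field PDD

    # Surface-normalise each curve to its own surface (depth = 0) value
    pdd_broad_n = pdd_broad / pdd_broad[0]
    pdd_small_n = pdd_small / pdd_small[0]

    lbr = pdd_small_n / pdd_broad_n    # LBR(d) for this field size
    print("LBR at small-field dmax:", lbr[np.argmax(pdd_small)])
    ```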

  2. Steganalysis using logistic regression

    Science.gov (United States)

    Lubenko, Ivans; Ker, Andrew D.

    2011-02-01

    We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods (it estimates class probabilities as well as providing a simple classification) and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study comparing the accuracy and speed of SVM and LR classifiers in the detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, on three image sets.
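
    A toy comparison in the spirit of the study is sketched below with random stand-ins for the 686-dimensional SPAM features; it shows the practical point that LR returns class probabilities where a standard SVM returns only labels.

    ```python
    # LR vs SVM on synthetic cover/stego features (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    X = rng.normal(size=(400, 686))   # stand-in for SPAM features
    y = rng.integers(0, 2, 400)       # 0 = cover, 1 = stego
    X[y == 1] += 0.05                 # faint embedding signature

    lr = LogisticRegression(max_iter=2000).fit(X, y)
    svm = SVC().fit(X, y)
    print("LR P(stego):", lr.predict_proba(X[:3])[:, 1])  # probabilities, not just labels
    print("SVM labels :", svm.predict(X[:3]))
    ```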

  3. Methods for coupling radiation, ion, and electron energies in grey Implicit Monte Carlo

    International Nuclear Information System (INIS)

    Evans, T.M.; Densmore, J.D.

    2007-01-01

    We present three methods for extending the Implicit Monte Carlo (IMC) method to treat the time-evolution of coupled radiation, electron, and ion energies. The first method splits the ion and electron coupling and conduction from the standard IMC radiation-transport process. The second method recasts the IMC equations such that part of the coupling is treated during the Monte Carlo calculation. The third method treats all of the coupling and conduction in the Monte Carlo simulation. We apply modified equation analysis (MEA) to simplified forms of each method that neglect the errors in the conduction terms. Through MEA we show that the third method is theoretically the most accurate. We demonstrate the effectiveness of each method on a series of 0-dimensional, nonlinear benchmark problems, where the accuracy of the third method is shown to be up to ten times greater than that of the other coupling methods for selected calculations

  4. Inventory of electronic money as method of its control: process approach

    Directory of Open Access Journals (Sweden)

    A.Р. Semenets

    2016-09-01

    Full Text Available The extent of legal regulation of the inventory of electronic money in a company is considered. The lack of developed techniques for the valuation of electronic money and for its reflection in the accounts is identified, which results in distorted indicators in the financial statements. The author develops organizational and methodical provisions for the inventory of electronic money, staged so as to avoid misstatements in the financial statements and to provide users with more reliable information about the amount and balances of electronic money held by the company at the balance sheet date. The effects of accounting policies, provisions for the organization of accounting, and job descriptions on the control system for transactions with electronic money, including their inventory, are determined. The author identifies typical violations that occur when transactions with electronic money are reflected in accounting; their early detection will enable appropriate adjustments and help avoid misstatements in the information provided in the company's financial statements.

  5. Monte Carlo methods in electron transport problems. Pt. 1

    International Nuclear Information System (INIS)

    Cleri, F.

    1989-01-01

    The condensed-history Monte Carlo method for charged-particle transport is reviewed and discussed, starting from a general form of the Boltzmann equation (Part I). The physics of the electronic interactions, together with some pedagogic examples, will be introduced in Part II. The lecture is directed at potential users of the method, for whom it can serve as a useful introduction to the subject matter, and aims to establish the basis of the work on the computer code RECORD, which is at present in a developing stage

  6. Robust Regression and its Application in Financial Data Analysis

    OpenAIRE

    Mansoor Momeni; Mahmoud Dehghan Nayeri; Ali Faal Ghayoumi; Hoda Ghorbani

    2010-01-01

    This research aims to describe the application of robust regression and its advantages over the least squares regression method in analyzing financial data. To do this, the relationship between earnings per share, book value of equity per share and share price (the price model), and between earnings per share, annual change of earnings per share and stock return (the return model), is examined using both robust and least squares regressions, and the outcomes are compared. Comparing the results from th...

  7. Coordinate transformation based cryo-correlative methods for electron tomography and focused ion beam milling

    International Nuclear Information System (INIS)

    Fukuda, Yoshiyuki; Schrod, Nikolas; Schaffer, Miroslava; Feng, Li Rebekah; Baumeister, Wolfgang; Lucic, Vladan

    2014-01-01

    Correlative microscopy allows imaging of the same feature over multiple length scales, combining light microscopy with high resolution information provided by electron microscopy. We demonstrate two procedures for coordinate transformation based correlative microscopy of vitrified biological samples applicable to different imaging modes. The first procedure aims at navigating cryo-electron tomography to cellular regions identified by fluorescent labels. The second procedure, allowing navigation of focused ion beam milling to fluorescently labeled molecules, is based on the introduction of an intermediate scanning electron microscopy imaging step to overcome the large difference between cryo-light microscopy and focused ion beam imaging modes. These methods make it possible to image fluorescently labeled macromolecular complexes in their natural environments by cryo-electron tomography, while minimizing exposure to the electron beam during the search for features of interest. - Highlights: • Correlative light microscopy and focused ion beam milling of vitrified samples. • Coordinate transformation based cryo-correlative method. • Improved correlative light microscopy and cryo-electron tomography
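
    The coordinate-transformation step at the heart of both procedures can be illustrated as a least-squares affine fit between fiducial positions seen in the two imaging modes; the marker coordinates below are invented.

    ```python
    # Estimate a 2-D affine transform from fiducial markers visible in both
    # frames (light microscope -> FIB/SEM), then map a labelled feature.
    import numpy as np

    lm  = np.array([[10.0, 12.0], [45.0, 18.0], [30.0, 52.0], [60.0, 60.0]])   # LM frame
    fib = np.array([[112.0, 208.0], [388.0, 241.0], [262.0, 512.0], [501.0, 566.0]])

    # Solve fib ~= [x, y, 1] @ A in the least-squares sense
    H = np.column_stack([lm, np.ones(len(lm))])
    A, *_ = np.linalg.lstsq(H, fib, rcond=None)      # 3x2 affine parameters

    spot_lm = np.array([33.0, 40.0])                 # fluorescently labelled feature
    spot_fib = np.append(spot_lm, 1.0) @ A
    print("milling target in FIB coordinates:", spot_fib)
    ```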

  8. Stochastic search, optimization and regression with energy applications

    Science.gov (United States)

    Hannah, Lauren A.

    Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem, avoiding the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values (which depend on the selected portfolio) to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression

  9. Electron-phonon thermalization in a scalable method for real-time quantum dynamics

    Science.gov (United States)

    Rizzi, Valerio; Todorov, Tchavdar N.; Kohanoff, Jorge J.; Correa, Alfredo A.

    2016-01-01

    We present a quantum simulation method that follows the dynamics of out-of-equilibrium many-body systems of electrons and oscillators in real time. Its cost is linear in the number of oscillators and it can probe time scales from attoseconds to hundreds of picoseconds. Contrary to Ehrenfest dynamics, it can thermalize starting from a variety of initial conditions, including electronic population inversion. While an electronic temperature can be defined in terms of a nonequilibrium entropy, a Fermi-Dirac distribution in general emerges only after thermalization. These results can be used to construct a kinetic model of electron-phonon equilibration based on the explicit quantum dynamics.

  10. A meta-regression analysis of 41 Australian problem gambling prevalence estimates and their relationship to total spending on electronic gaming machines

    Directory of Open Access Journals (Sweden)

    Francis Markham

    2017-05-01

    Full Text Available Abstract Background Many jurisdictions regularly conduct surveys to estimate the prevalence of problem gambling in their adult populations. However, comparison of such estimates is problematic due to methodological variations between studies. Total consumption theory suggests that an association between mean electronic gaming machine (EGM) and casino gambling losses and problem gambling prevalence estimates may exist. If this is the case, then changes in EGM losses may be used as a proxy indicator for changes in problem gambling prevalence. To test for this association, this study examines the relationship between aggregate losses on electronic gaming machines (EGMs) and problem gambling prevalence estimates for Australian states and territories between 1994 and 2016. Methods A Bayesian meta-regression analysis of 41 cross-sectional problem gambling prevalence estimates was undertaken using EGM gambling losses, year of survey and methodological variations as predictor variables. General population studies of adults in Australian states and territories published before 1 July 2016 were considered in scope; 41 studies were identified, with a total of 267,367 participants. Problem gambling prevalence, moderate-risk problem gambling prevalence, problem gambling screen, administration mode and frequency threshold were extracted from surveys. Administrative data on EGM and casino gambling losses were extracted from government reports and expressed as the proportion of household disposable income lost. Results Money lost on EGMs is correlated with problem gambling prevalence. An increase of 1% of household disposable income lost on EGMs and in casinos was associated with problem gambling prevalence estimates that were 1.33 times higher [95% credible interval 1.04, 1.71]. There was no clear association between EGM losses and moderate-risk problem gambling prevalence estimates. Moderate-risk problem gambling prevalence estimates were not explained by

  11. Advanced colorectal neoplasia risk stratification by penalized logistic regression.

    Science.gov (United States)

    Lin, Yunzhi; Yu, Menggang; Wang, Sijian; Chappell, Richard; Imperiale, Thomas F

    2016-08-01

    Colorectal cancer is the second leading cause of death from cancer in the United States. To improve the efficiency of colorectal cancer screening, there is a need to stratify risk for colorectal cancer among the 90% of US residents who are considered "average risk." In this article, we investigate such risk stratification rules for advanced colorectal neoplasia (colorectal cancer and advanced, precancerous polyps). We use a recently completed large cohort study of subjects who underwent a first screening colonoscopy. Logistic regression models have been used in the literature to estimate the risk of advanced colorectal neoplasia based on quantifiable risk factors. However, logistic regression may be prone to overfitting and instability in variable selection. Since most of the risk factors in our study have several categories, it was tempting to collapse these categories into fewer risk groups. We propose a penalized logistic regression method that automatically and simultaneously selects variables, groups categories, and estimates their coefficients by penalizing the [Formula: see text]-norm of both the coefficients and their differences. Hence, it encourages sparsity in the categories, i.e., grouping of the categories, and sparsity in the variables, i.e., variable selection. We apply the penalized logistic regression method to our data. The important variables are selected, with close categories simultaneously grouped, by penalized regression models with and without the interaction terms. The models are validated with 10-fold cross-validation. The receiver operating characteristic curves of the penalized regression models dominate the receiver operating characteristic curve of naive logistic regression, indicating a superior discriminative performance.
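
    A plausible rendering of the penalised objective (the record's markup hides the norm behind a placeholder; an L1/fused-lasso-type penalty is implied by the stated sparsity and grouping behaviour) is:

    ```latex
    % Assumed form; the exact norm is obscured by "[Formula: see text]" above.
    \min_{\beta}\; -\ell(\beta)
      \;+\; \lambda_1 \sum_{j}\sum_{k} \lvert \beta_{jk} \rvert
      \;+\; \lambda_2 \sum_{j}\sum_{k} \lvert \beta_{j,k+1} - \beta_{j,k} \rvert
    ```

    where ℓ(β) is the logistic log-likelihood and β_{jk} is the coefficient of the k-th category of risk factor j; the first penalty drives variable selection and the second fuses adjacent categories into risk groups.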

  12. Standardless quantification methods in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Trincavelli, Jorge, E-mail: trincavelli@famaf.unc.edu.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Limandri, Silvina, E-mail: s.limandri@conicet.gov.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Bonetto, Rita, E-mail: bonetto@quimica.unlp.edu.ar [Centro de Investigación y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Facultad de Ciencias Exactas, de la Universidad Nacional de La Plata, Calle 47 N° 257, 1900 La Plata (Argentina)

    2014-11-01

    The elemental composition of a solid sample can be determined by electron probe microanalysis with or without the use of standards. Standardless algorithms are considerably faster than methods that require standards; they are useful when a suitable set of standards is not available or for rough samples, and they also help to solve the problem of beam current variation, for example, in instruments with a cold field-emission gun. Owing to the significant advances in accuracy achieved during recent years, a product of successive efforts to improve the description of the generation, absorption and detection of X-rays, standardless methods have increasingly become an interesting option for the user. Nevertheless, up to now, algorithms that use standards are still more precise than standardless methods. It is important to remark that care must be taken with results provided by standardless methods that normalize the calculated concentration values to 100%, unless an estimate of the errors is reported. In this work, a comprehensive discussion of the key features of the main standardless quantification methods, as well as the level of accuracy achieved by them, is presented. - Highlights: • Standardless methods are a good alternative when no suitable standards are available. • Their accuracy reaches 10% for 95% of the analyses when traces are excluded. • Some of them are suitable for the analysis of rough samples.

  13. Reducing false-positive incidental findings with ensemble genotyping and logistic regression based variant filtering methods.

    Science.gov (United States)

    Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won

    2014-08-01

    As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR- or ensemble-genotyping-based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.

  14. Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.

    Science.gov (United States)

    Zhang, Jianguang; Jiang, Jianmin

    2018-02-01

    While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that, in comparison to both traditional tensor-based methods and vector-based regression methods, our proposed solution achieves better performance for matrix data classification.

  15. A comparative study of frequency ratio, weights of evidence and logistic regression methods for landslide susceptibility mapping: Sultan Mountains, SW Turkey

    Science.gov (United States)

    Ozdemir, Adnan; Altural, Tolga

    2013-03-01

    This study evaluated and compared landslide susceptibility maps produced with three different methods (frequency ratio, weights of evidence, and logistic regression) using validation datasets. The field surveys performed as part of this investigation mapped the locations of 90 landslides that had been identified in the Sultan Mountains of south-western Turkey. The landslide influence parameters used for this study are geology, relative permeability, land use/land cover, precipitation, elevation, slope, aspect, total curvature, plan curvature, profile curvature, wetness index, stream power index, sediment transportation capacity index, distance to drainage, distance to fault, drainage density, fault density, and spring density maps. The relationships between landslide distributions and these parameters were analysed using the three methods, and the results were then used to calculate the landslide susceptibility of the entire study area. The accuracy of the final landslide susceptibility maps was evaluated based on the landslides observed during the fieldwork, and the accuracy of the models was evaluated by calculating each model's relative operating characteristic curve. The predictive capability of each model was determined from the area under the relative operating characteristic curve; the areas under the curves obtained using the frequency ratio, logistic regression, and weights of evidence methods are 0.976, 0.952, and 0.937, respectively. These results indicate that the frequency ratio and weights of evidence models are relatively good estimators of landslide susceptibility in the study area. Specifically, the results of the correlation analysis show a high correlation between the frequency ratio and weights of evidence results, and the frequency ratio and logistic regression methods exhibit correlation coefficients of 0.771 and 0.727, respectively. The frequency ratio model is simple, and its input, calculation and output processes are
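
    For orientation, the frequency-ratio model underlying the best-performing map reduces to a simple per-class computation; the counts below are invented.

    ```python
    # Frequency ratio (FR) per class of one causative factor:
    # FR = (share of landslide cells in class) / (share of area in class);
    # class FRs are summed across factor maps to give the susceptibility index.
    import numpy as np

    landslide_cells = np.array([12, 40, 30, 8])      # landslides per slope class
    area_cells      = np.array([5000, 8000, 4000, 3000])

    fr = (landslide_cells / landslide_cells.sum()) / (area_cells / area_cells.sum())
    print("frequency ratios per class:", fr.round(2))  # >1 means landslide-prone
    ```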

  16. Face Alignment via Regressing Local Binary Features.

    Science.gov (United States)

    Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian

    2016-03-01

    This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation on how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.

  17. Application of the method of continued fractions for electron scattering by linear molecules

    International Nuclear Information System (INIS)

    Lee, M.-T.; Iga, I.; Fujimoto, M.M.; Lara, O.; Brasilia Univ., DF

    1995-01-01

    The method of continued fractions (MCF) of Horacek and Sasakawa is adapted for the first time to study low-energy electron scattering by linear molecules. In particular, we have calculated the reactance K-matrices for an electron scattered by the hydrogen molecule and the hydrogen molecular ion, as well as by the polar LiH molecule, at the static-exchange level. For all the applications studied herein, the calculated physical quantities converge rapidly to the correct values, even for a strongly polar molecule such as LiH, and in most cases the convergence is monotonic. Our study suggests that the MCF could be an efficient method for studying electron-molecule scattering and also the photoionization of molecules. (Author)

  18. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    International Nuclear Information System (INIS)

    Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-01-01

    Highlights: • We propose three parallel orbital-updating based plane-wave basis methods for electronic structure calculations. • These new methods avoid generating large-scale eigenvalue problems and thus reduce the computational cost. • They allow for two-level parallelization, which is particularly interesting for large-scale parallelization. • Numerical experiments show that they are reliable and efficient for large-scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in the real-space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.

  19. Generalized Hartree-Fock method for electron-atom scattering

    International Nuclear Information System (INIS)

    Rosenberg, L.

    1997-01-01

    In the widely used Hartree-Fock procedure for atomic structure calculations, trial functions in the form of linear combinations of Slater determinants are constructed and the Rayleigh-Ritz minimum principle is applied to determine the best in that class. A generalization of this approach, applicable to low-energy electron-atom scattering, is developed here. The method is based on a unique decomposition of the scattering wave function into open- and closed-channel components, so chosen that an approximation to the closed-channel component may be obtained by adopting it as a trial function in a minimum principle, whose rigor can be maintained even when the target wave functions are imprecisely known. Given a closed-channel trial function, the full scattering function may be determined from the solution of an effective one-body Schroedinger equation. Alternatively, in a generalized Hartree-Fock approach, the minimum principle leads to coupled integrodifferential equations to be satisfied by the basis functions appearing in a Slater-determinant representation of the closed-channel wave function; it also provides a procedure for optimizing the choice of nonlinear parameters in a variational determination of these basis functions. Inclusion of additional Slater determinants in the closed-channel trial function allows for systematic improvement of that function, as well as the calculated scattering parameters, with the possibility of spurious singularities avoided. Electron-electron correlations can be important in accounting for long-range forces and resonances. These correlation effects can be included explicitly by suitable choice of one component of the closed-channel wave function; the remaining component may then be determined by the generalized Hartree-Fock procedure. As a simple test, the method is applied to s-wave scattering of positrons by hydrogen. copyright 1997 The American Physical Society

  20. Uncertainty of pesticide residue concentration determined from ordinary and weighted linear regression curve.

    Science.gov (United States)

    Yolci Omeroglu, Perihan; Ambrus, Árpad; Boyacioglu, Dilek

    2018-03-28

    Determination of pesticide residues is based on calibration curves constructed for each batch of analysis. Calibration standard solutions are prepared from a known amount of reference material at different concentration levels covering the concentration range of the analyte in the analysed samples. Within the scope of this study, the applicability of both ordinary linear regression (OLR) and weighted linear regression (WLR) to pesticide residue analysis was investigated. We used 782 multipoint calibration curves obtained for 72 different analytical batches with high-pressure liquid chromatography equipped with an ultraviolet detector, and gas chromatography with electron-capture, nitrogen-phosphorus or mass-spectrometric detectors. Quality criteria of the linear curves, including the regression coefficient, the standard deviation of relative residuals and the deviation of back-calculated concentrations, were calculated for both the WLR and OLR methods. Moreover, the relative uncertainty of the predicted analyte concentration was estimated for both methods. It was concluded that calibration curves based on WLR comply with all the quality criteria set by international guidelines, compared to those calculated with OLR; that is, the data fit well with WLR for pesticide residue analysis. It was estimated that, regardless of the actual concentration range of the calibration, the relative uncertainty at the lowest calibrated level ranged between 0.3% and 113.7% for OLR and between 0.2% and 22.1% for WLR. At or above 1/3 of the calibrated range, the uncertainty of the calibration curve ranged between 0.1% and 16.3% for OLR and between 0% and 12.2% for WLR, and therefore the two methods gave comparable results.
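
    The sketch below illustrates the comparison on an invented five-point calibration: an ordinary and a weighted least squares line (weights 1/x², a common choice when the response scatter grows with concentration) are fitted and the lowest calibrated level is back-calculated from each.

    ```python
    # OLS vs weighted least squares calibration (synthetic standards).
    import numpy as np
    import statsmodels.api as sm

    conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0])     # standard levels [mg/kg]
    resp = np.array([0.9, 5.3, 10.4, 51.0, 103.0])    # detector response
    X = sm.add_constant(conc)

    ols = sm.OLS(resp, X).fit()
    wls = sm.WLS(resp, X, weights=1.0 / conc**2).fit()

    for name, fit in [("OLS", ols), ("WLS", wls)]:
        b0, b1 = fit.params
        back = (resp[0] - b0) / b1                    # back-calculate lowest level
        print(f"{name}: back-calculated {back:.4f} vs true {conc[0]}")
    ```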

  1. Estimating traffic volume on Wyoming low volume roads using linear and logistic regression methods

    Directory of Open Access Journals (Sweden)

    Dick Apronti

    2016-12-01

    Full Text Available Traffic volume is an important parameter in most transportation planning applications. Low volume roads make up about 69% of road miles in the United States. Estimating traffic on low volume roads is a cost-effective alternative to taking traffic counts, because traditional traffic counts are expensive and impractical for low priority roads. The purpose of this paper is to present the development of two alternative means of cost-effectively estimating traffic volumes for low volume roads in Wyoming and to make recommendations for their implementation. The study methodology involves reviewing existing studies, identifying data sources, and carrying out the model development. The utility of the models developed was then verified by comparing actual traffic volumes to those predicted by the models. The study resulted in two regression models that are inexpensive and easy to implement. The first was a linear regression model that utilized pavement type, access to highways, predominant land use types, and population to estimate traffic volume. In verifying the model, an R2 value of 0.64 and a root mean square error of 73.4% were obtained. The second was a logistic regression model that identified the level of traffic on roads using five thresholds or levels. The logistic regression model was verified by estimating traffic volume thresholds and determining the percentage of roads that were accurately classified as belonging to the given thresholds. For the five thresholds, the percentage of roads classified correctly ranged from 79% to 88%. In conclusion, the verification indicated both model types to be useful for accurate and cost-effective estimation of traffic volumes for low volume Wyoming roads. The models were recommended for use in traffic volume estimation for low volume roads in pavement management and environmental impact assessment studies.

  2. Independent contrasts and PGLS regression estimators are equivalent.

    Science.gov (United States)

    Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary

    2012-05-01

    We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of generalized least squares (GLS) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression, and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not, and how, to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.
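
    The proved identity can be stated compactly (for the through-origin case, with V the Brownian-motion phylogenetic covariance matrix and u the standardized contrasts):

    ```latex
    \hat{\beta}_{\mathrm{GLS}}
      = \big( X^{\top} V^{-1} X \big)^{-1} X^{\top} V^{-1} y
      \;=\; \frac{\sum_i u_{x,i}\, u_{y,i}}{\sum_i u_{x,i}^{2}}
      \;=\; \hat{\beta}_{\mathrm{PIC}}
    ```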

  3. Finite Algorithms for Robust Linear Regression

    DEFF Research Database (Denmark)

    Madsen, Kaj; Nielsen, Hans Bruun

    1990-01-01

    The Huber M-estimator for robust linear regression is analyzed. Newton-type methods for solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may...
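
    The paper's finite Newton-type algorithms are not reproduced here, but the same Huber M-estimation problem is often solved by iteratively reweighted least squares, sketched below for orientation (a production version would also rescale residuals by a robust scale estimate).

    ```python
    # IRLS for the Huber M-estimator (illustrative, not the paper's algorithm).
    import numpy as np

    def huber_irls(X, y, delta=1.345, iters=50):
        """Huber M-estimation via iteratively reweighted least squares."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting point
        for _ in range(iters):
            r = y - X @ beta
            absr = np.maximum(np.abs(r), 1e-12)              # guard zero residuals
            w = np.where(absr <= delta, 1.0, delta / absr)   # Huber weights
            WX = X * w[:, None]
            beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))  # weighted normal equations
        return beta

    rng = np.random.default_rng(6)
    X = np.column_stack([np.ones(100), rng.normal(size=100)])
    y = X @ np.array([1.0, 3.0]) + rng.normal(0, 0.3, 100)
    y[:5] += 15.0                                            # plant gross outliers
    print("Huber fit:", huber_irls(X, y))                    # close to [1, 3]
    ```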

  4. Polylinear regression analysis in radiochemistry

    International Nuclear Information System (INIS)

    Kopyrin, A.A.; Terent'eva, T.N.; Khramov, N.N.

    1995-01-01

    A number of radiochemical problems have been formulated in the framework of polylinear regression analysis, which permits the use of conventional mathematical methods for their solution. The authors have considered features of the use of polylinear regression analysis for estimating the contributions of various sources to atmospheric pollution, for studying irradiated nuclear fuel, for estimating concentrations from spectral data, for measuring neutron fields of a nuclear reactor, for estimating crystal lattice parameters from X-ray diffraction patterns, for interpreting data of X-ray fluorescence analysis, for estimating complex formation constants, and for analyzing results of radiometric measurements. Estimation of the target parameters can become an ill-posed problem for certain properties of the system under study. The authors showed the possibility of regularization by adding a fictitious set of data "obtained" from an orthogonal design. To estimate only a part of the parameters under consideration, the authors used incomplete-rank models. In this case, it is necessary to take into account the possibility of confounding estimates. An algorithm for evaluating the degree of confounding is presented, which is realized using standard regression analysis software

  5. Iteratively-coupled propagating exterior complex scaling method for electron-hydrogen collisions

    International Nuclear Information System (INIS)

    Bartlett, Philip L; Stelbovics, Andris T; Bray, Igor

    2004-01-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schroedinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources. (letter to the editor)

  6. Direction of Effects in Multiple Linear Regression Models.

    Science.gov (United States)

    Wiedermann, Wolfgang; von Eye, Alexander

    2015-01-01

    Previous studies analyzed asymmetric properties of the Pearson correlation coefficient using higher than second order moments. These asymmetric properties can be used to determine the direction of dependence in a linear regression setting (i.e., establish which of two variables is more likely to be on the outcome side) within the framework of cross-sectional observational data. Extant approaches are restricted to the bivariate regression case. The present contribution extends the direction of dependence methodology to a multiple linear regression setting by analyzing distributional properties of residuals of competing multiple regression models. It is shown that, under certain conditions, the third central moments of estimated regression residuals can be used to decide upon direction of effects. In addition, three different approaches for statistical inference are discussed: a combined D'Agostino normality test, a skewness difference test, and a bootstrap difference test. Type I error and power of the procedures are assessed using Monte Carlo simulations, and an empirical example is provided for illustrative purposes. In the discussion, issues concerning the quality of psychological data, possible extensions of the proposed methods to the fourth central moment of regression residuals, and potential applications are addressed.
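
    A toy illustration of the residual-moment idea follows: with a skewed true predictor and normal errors, the residuals of the correctly directed model are symmetric while those of the reversed model inherit skewness. The simulation settings are invented.

    ```python
    # Direction of dependence via third moments of regression residuals.
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(7)
    x = rng.exponential(1.0, 5000)            # skewed "cause"
    y = 0.8 * x + rng.normal(0, 1.0, 5000)    # normal error

    def resid(a, b):
        """Residuals from regressing a on b (with intercept)."""
        A = np.column_stack([np.ones_like(b), b])
        return a - A @ np.linalg.lstsq(A, a, rcond=None)[0]

    print("skew of residuals, y on x:", skew(resid(y, x)))  # near 0 (correct direction)
    print("skew of residuals, x on y:", skew(resid(x, y)))  # clearly nonzero
    ```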

  7. Comparing spatial regression to random forests for large ...

    Science.gov (United States)

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and poor)

  8. Modelling Status Food Security Households Disease Sufferers Pulmonary Tuberculosis Uses the Method Regression Logistics Binary

    Science.gov (United States)

    Wulandari, S. P.; Salamah, M.; Rositawati, A. F. D.

    2018-04-01

    Food security is the condition where the food fulfilment is managed well for the country till the individual. Indonesia is one of the country which has the commitment to create the food security becomes main priority. However, the food necessity becomes common thing means that it doesn’t care about nutrient standard and the health condition of family member, so in the fulfilment of food necessity also has to consider the disease suffered by the family member, one of them is pulmonary tuberculosa. From that reasons, this research is conducted to know the factors which influence on household food security status which suffered from pulmonary tuberculosis in the coastal area of Surabaya by using binary logistic regression method. The analysis result by using binary logistic regression shows that the variables wife latest education, house density and spacious house ventilation significantly affect on household food security status which suffered from pulmonary tuberculosis in the coastal area of Surabaya, where the wife education level is University/equivalent, the house density is eligible or 8 m2/person and spacious house ventilation 10% of the floor area has the opportunity to become food secure households amounted to 0.911089. While the chance of becoming food insecure households amounted to 0.088911. The model household food security status which suffered from pulmonary tuberculosis in the coastal area of Surabaya has been conformable, and the overall percentages of those classifications are at 71.8%.

  9. Calculation of dynamic and electronic properties of perfect and defect crystals by semiempirical quantum mechanical methods

    International Nuclear Information System (INIS)

    Zunger, A.

    1975-07-01

    Semiempirical all-valence-electron LCAO methods, previously used to study the electronic structure of molecules, are applied to three problems in solid state physics: the electronic band structure of covalent crystals, point defect problems in solids, and the lattice dynamics of molecular crystals. Calculation methods for the electronic band structure of regular solids are introduced and problems regarding the computation of the density matrix in solids are discussed. Three models for treating the electronic eigenvalue problem in the solid, within the proposed calculation schemes, are discussed, and the proposed models and calculation schemes are applied to the calculation of the electronic structure of several solids belonging to different crystal types. The calculation models also describe the electronic properties of deep defects in covalent insulating crystals. The possible usefulness of the semiempirical LCAO methods in determining the first-order intermolecular interaction potential in solids is shown, and an improved model for treating the lattice dynamics and related thermodynamic properties of molecular solids is presented. The improved lattice dynamical model is used to compute phonon dispersion curves, the phonon density of states, the stable unit cell structure, the lattice heat capacity and thermal crystal parameters in α- and γ-N2 crystals, using the N2-N2 intermolecular interaction potential computed from the semiempirical LCAO methods. (B.G.)

  10. Advanced cluster methods for correlated-electron systems

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Andre

    2015-04-27

    In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies on the determination of the ground state properties of a 3/4-filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not yet established. The driving force of this transition may be not only phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems on a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4-filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult

  11. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression.

    Science.gov (United States)

    Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris

    2016-09-01

    similar performances, reaching AUC values of 0.783 and 0.779 for traditional Lasso and Tree-Lasso, respectively. However, the information loss of the Lasso model is 0.35 bits higher compared to the Tree-Lasso model. We propose a method for building predictive models applicable to the detection of readmission risk based on Electronic Health Records. Integration of domain knowledge (in the form of the ICD-9-CM taxonomy) and a data-driven, sparse predictive algorithm (Tree-Lasso logistic regression) resulted in an increase in the interpretability of the resulting model. The models are interpreted for the readmission prediction problem in the general pediatric population in California, as well as in several important subpopulations, and the interpretations of the models comply with existing medical understanding of pediatric readmission. Finally, a quantitative assessment of the interpretability of the models is given that goes beyond simple counts of selected low-level features. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. [SciELO: method for electronic publishing].

    Science.gov (United States)

    Laerte Packer, A; Rocha Biojone, M; Antonio, I; Mayumi Takemaka, R; Pedroso García, A; Costa da Silva, A; Toshiyuki Murasaki, R; Mylek, C; Carvalho Reisl, O; Rocha F Delbucio, H C

    2001-01-01

    It describes the SciELO (Scientific Electronic Library Online) Methodology for electronic publishing of scientific periodicals, examining issues such as the transition from traditional printed publication to electronic publishing, the scientific communication process, the principles on which the methodology's development was founded, its application in building the SciELO site, its modules and components, the tools used for its construction, etc. The article also discusses the potentialities and trends for the area in Brazil and Latin America, pointing out questions and proposals which should be investigated and solved by the methodology. It concludes that the SciELO Methodology is an efficient, flexible and comprehensive solution for scientific electronic publishing.

  13. Free Electron Laser Induced Forward Transfer Method of Biomaterial for Marking

    Science.gov (United States)

    Suzuki, Kaoru

    Biomaterials such as chitosan and poly(lactic acid) containing a fluorescent agent were deposited onto hard biological tissue, such as teeth or the fingernails of dogs and cats, or onto sapphire substrates, by a free electron laser induced forward transfer method for direct-write marking. A spin-coated target of biomaterial with the fluorescent agent rhodamine 6G or zinc phthalocyanine on a sapphire plate was ablated by a free electron laser (resonance absorption wavelength of the biomaterial: 3380 nm). The influence of the spin-coating film-forming temperature on the hardness and adhesion strength of the biomaterial is studied in particular. The effect of resonance excitation of the biomaterial target by tuning the free electron laser is discussed with respect to damage to the biomaterial and to the rhodamine 6G or zinc phthalocyanine used for direct-write marking.

  14. Method for calculating ionic and electronic defect concentrations in Y-stabilised zirconia

    Energy Technology Data Exchange (ETDEWEB)

    Poulsen, F W [Risoe National Lab., Materials Research Dept., Roskilde (Denmark)

    1997-10-01

    A numerical (trial-and-error) method is presented for calculating the concentrations of ions, vacancies, and ionic and electronic defects in solids (Brouwer-type diagrams). No approximations or truncations of the set of equations describing the defect chemistry of the various regions are used. Doped zirconia and doped thoria with the simultaneous presence of protonic and electronic defects are taken as examples: seven concentrations are determined as a function of oxygen partial pressure and/or water vapour partial pressure. Realistic values for the equilibrium constants for equilibration with oxygen gas and water vapour, as well as for the internal equilibrium between holes and electrons, were taken from the literature. The present mathematical method is versatile: it has also been employed by the author to treat more complex systems, such as perovskite-structure oxides with over- and under-stoichiometry in oxygen, cation vacancies and the simultaneous presence of protons. (au) 6 refs.

  15. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.

    Science.gov (United States)

    Lian, Yao; Ge, Meng; Pan, Xian-Ming

    2014-12-19

    B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server, EPMLR, has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.
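
    As a rough illustration of the regression step (not the EPMLR pipeline itself), the following Python sketch scores simulated sequence windows with a multiple linear regression and evaluates it by 10-fold cross-validated AUC; the window features and labels below are invented placeholders.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      # Toy data: 500 sequence windows encoded as 20 amino-acid fractions;
      # label 1 marks an epitope window, 0 a non-epitope window.
      X = rng.random((500, 20))
      y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500) > 0.9).astype(float)

      # MLR treats the binary label as a continuous response; the predicted
      # score is ranked (for AUC) or thresholded to call epitopes.
      aucs = []
      for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
          model = LinearRegression().fit(X[tr], y[tr])
          aucs.append(roc_auc_score(y[te], model.predict(X[te])))
      print(f"10-fold CV AUC: {np.mean(aucs):.3f}")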

  16. Discrimination of chicken seasonings and beef seasonings using electronic nose and sensory evaluation.

    Science.gov (United States)

    Tian, Huaixiang; Li, Fenghua; Qin, Lan; Yu, Haiyan; Ma, Xia

    2014-11-01

    This study examines the feasibility of the electronic nose as a method to discriminate chicken and beef seasonings and to predict sensory attributes. Sensory evaluation showed that 8 chicken seasonings and 4 beef seasonings could be well discriminated and classified based on 8 sensory attributes. The sensory attributes, including chicken/beef, gamey, garlic, spicy, onion, soy sauce, retention, and overall aroma intensity, were generated by a trained evaluation panel. Principal component analysis (PCA), discriminant factor analysis (DFA), and cluster analysis (CA) combined with the electronic nose were used to discriminate seasoning samples based on the differences in the sensor response signals of chicken and beef seasonings. The correlation between sensory attributes and electronic nose sensor signals was established using the partial least squares regression (PLSR) method. The results showed that the seasoning samples were all correctly classified by the electronic nose combined with PCA, DFA, and CA. The electronic nose gave good prediction results for all the sensory attributes, with correlation coefficients (r) higher than 0.8. The work indicated that the electronic nose is an effective method for discriminating different seasonings and predicting sensory attributes. © 2014 Institute of Food Technologists®
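
    The PLSR step can be sketched as follows in Python; the sensor count, attribute count and data are illustrative stand-ins for the e-nose signals and panel scores, not the study's data.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(1)
      X = rng.random((60, 10))                          # 60 samples x 10 sensor signals
      W = rng.random((10, 8))
      Y = X @ W + 0.05 * rng.standard_normal((60, 8))   # 8 sensory attribute scores

      # Latent-variable regression from sensor space to attribute space
      pls = PLSRegression(n_components=4)
      Y_hat = cross_val_predict(pls, X, Y, cv=5)
      for j in range(Y.shape[1]):                       # per-attribute r, cf. r > 0.8
          r = np.corrcoef(Y[:, j], Y_hat[:, j])[0, 1]
          print(f"attribute {j}: r = {r:.2f}")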

  17. Measuring decision weights in recognition experiments with multiple response alternatives: comparing the correlation and multinomial-logistic-regression methods.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2012-11-01

    Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.

  18. Magnitude and Distance Determination from the First Few Seconds of a Single Three-Component Seismological Station Signal Using Support Vector Machine Regression Methods

    Science.gov (United States)

    Ochoa Gutierrez, L. H.; Vargas Jimenez, C. A.; Niño Vasquez, L. F.

    2011-12-01

    The "Sabana de Bogota" (Bogota Savannah) is the most important social and economical center of Colombia. Almost the third of population is concentrated in this region and generates about the 40% of Colombia's Internal Brute Product (IBP). According to this, the zone presents an elevated vulnerability in case that a high destructive seismic event occurs. Historical evidences show that high magnitude events took place in the past with a huge damage caused to the city and indicate that is probable that such events can occur in the next years. This is the reason why we are working in an early warning generation system, using the first few seconds of a seismic signal registered by three components and wide band seismometers. Such system can be implemented using Computational Intelligence tools, designed and calibrated to the particular Geological, Structural and environmental conditions present in the region. The methods developed are expected to work on real time, thus suitable software and electronic tools need to be developed. We used Support Vector Machines Regression (SVMR) methods trained and tested with historic seismic events registered by "EL ROSAL" Station, located near Bogotá, calculating descriptors or attributes as the input of the model, from the first 6 seconds of signal. With this algorithm, we obtained less than 10% of mean absolute error and correlation coefficients greater than 85% in hypocentral distance and Magnitude estimation. With this results we consider that we can improve the method trying to have better accuracy with less signal time and that this can be a very useful model to be implemented directly in the seismological stations to generate a fast characterization of the event, broadcasting not only raw signal but pre-processed information that can be very useful for accurate Early Warning Generation.

  19. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    Science.gov (United States)

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
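
    A sketch of the approach in Python with statsmodels, on simulated clustered data: a log-link Poisson model for a binary outcome, clustering handled by GEE with an exchangeable working correlation, and robust (sandwich) standard errors, so exp(beta) estimates the relative risk. The data-generating numbers are arbitrary.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n_clusters, m = 100, 6                           # 100 clusters of 6 subjects
      groups = np.repeat(np.arange(n_clusters), m)
      treat = rng.integers(0, 2, n_clusters)[groups]   # cluster-level exposure
      cluster_eff = rng.normal(0.0, 0.2, n_clusters)[groups]
      p = np.clip(0.2 * np.exp(0.5 * treat + cluster_eff), 0, 1)
      y = rng.binomial(1, p)                           # binary outcome

      X = sm.add_constant(treat)
      model = sm.GEE(y, X, groups=groups, family=sm.families.Poisson(),
                     cov_struct=sm.cov_struct.Exchangeable())
      res = model.fit()                                # robust SEs are the default
      print("estimated relative risk:", np.exp(res.params[1]))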

  20. Statistics of electron multiplication in a multiplier phototube; iterative methods

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz, J F; Grau, A

    1985-07-01

    In the present paper an iterative method is applied to study the variation of the dynode response in a multiplier phototube. Three different situations are considered, corresponding to the following ways in which electrons strike the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r electrons. The responses are given for a number of stages between 1 and 5, and for values of the multiplication factor of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs.
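
    The iteration itself is easy to reproduce numerically. The sketch below is an assumption-laden toy (Poisson secondary emission at every dynode, truncated support) that propagates the full count distribution through the stages and reads off the moments the paper discusses:

      import numpy as np
      from scipy.stats import poisson

      def cascade(g, stages, start=1, nmax=1500):
          """Electron-count distribution after `stages` dynodes, starting from
          exactly `start` electrons; each electron yields Poisson(g) secondaries."""
          counts = np.arange(nmax)
          p = np.zeros(nmax)
          p[start] = 1.0
          for _ in range(stages):
              nxt = np.zeros(nmax)
              for n, pn in enumerate(p):
                  if n > 0 and pn > 1e-12:
                      nxt += pn * poisson.pmf(counts, n * g)
              nxt[0] += p[0]                   # zero electrons stay zero
              p = nxt / nxt.sum()
          return counts, p

      k, p = cascade(g=2.5, stages=4)
      mean = (k * p).sum()
      var = ((k - mean) ** 2 * p).sum()
      skew = ((k - mean) ** 3 * p).sum() / var ** 1.5
      print(f"mean gain {mean:.1f}, variance {var:.1f}, skewness {skew:.2f}")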

  1. Study and Handling Methods of Power IGBT Module Failures in Power Electronic Converter Systems

    DEFF Research Database (Denmark)

    Choi, Uimin; Blaabjerg, Frede; Lee, Kyo-Beum

    2015-01-01

    Power electronics plays an important role in a wide range of applications in order to achieve high efficiency and performance. Increasing efforts are being made to improve the reliability of power electronics systems to ensure compliance with more stringent constraints on cost, safety, and availability in different applications. This paper presents an overview of the major failure mechanisms of IGBT modules and their handling methods in power converter systems to improve reliability. The major failure mechanisms of IGBT modules are presented first, and methods for predicting lifetime and estimating the junction temperature of IGBT modules are then discussed. Subsequently, different methods for detecting open- and short-circuit faults are presented. Finally, fault-tolerant strategies for improving the reliability of power electronic systems under field operation are explained and compared.

  2. Functional data analysis of generalized regression quantiles

    KAUST Repository

    Guo, Mengmeng; Zhou, Lan; Huang, Jianhua Z.; Härdle, Wolfgang Karl

    2013-01-01

    Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.

  4. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Science.gov (United States)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model which combines a multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 topsoil variables, analyzed prior to planting, and paddy yields at standard fertilizer rates. The data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model alone and in combination with the fuzzy c-means method. Analyses of normality and multicollinearity indicate that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yields into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
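
    A compact sketch of the hybrid on simulated data: a plain fuzzy c-means implementation clusters the observations, after which a separate multiple linear regression is fitted within each cluster. The soil covariates and yield response here are invented, not the MARDI data.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
          """Standard fuzzy c-means returning the membership matrix U (n x c)."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(iters):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
              U = 1.0 / d ** (2.0 / (m - 1.0))
              U /= U.sum(axis=1, keepdims=True)
          return U

      rng = np.random.default_rng(4)
      X = rng.random((300, 5))                         # stand-in soil covariates
      y = np.where(X[:, 0] > 0.5, 3 * X[:, 1], -2 * X[:, 2]) \
          + 0.1 * rng.standard_normal(300)

      U = fuzzy_c_means(np.column_stack([X, y]), c=2)
      labels = U.argmax(axis=1)                        # hard assignment for fitting
      for k in (0, 1):
          mask = labels == k
          r2 = LinearRegression().fit(X[mask], y[mask]).score(X[mask], y[mask])
          print(f"cluster {k}: n = {mask.sum()}, R^2 = {r2:.2f}")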

  5. Using a binary logistic regression method and GIS for evaluating and mapping the groundwater spring potential in the Sultan Mountains (Aksehir, Turkey)

    Science.gov (United States)

    Ozdemir, Adnan

    2011-07-01

    The purpose of this study is to produce a groundwater spring potential map of the Sultan Mountains in central Turkey, based on a logistic regression method within a Geographic Information System (GIS) environment. Using field surveys, the locations of the springs (440 springs) were determined in the study area. In this study, 17 spring-related factors were used in the analysis: geology, relative permeability, land use/land cover, precipitation, elevation, slope, aspect, total curvature, plan curvature, profile curvature, wetness index, stream power index, sediment transport capacity index, distance to drainage, distance to fault, drainage density, and fault density. The coefficients of the predictor variables were estimated using binary logistic regression analysis and were used to calculate the groundwater spring potential for the entire study area. The accuracy of the final spring potential map was evaluated against the observed springs by calculating the relative operating characteristic (ROC); the area under the ROC curve was found to be 0.82. These results indicate that the model is a good estimator of the spring potential in the study area. The spring potential map shows that the areas of the very low, low, moderate and high groundwater spring potential classes are 105.586 km² (28.99%), 74.271 km² (19.906%), 101.203 km² (27.14%), and 90.05 km² (24.671%), respectively. The interpretations of the potential map showed that stream power index, relative permeability of lithologies, geology, elevation, aspect, wetness index, plan curvature, and drainage density play major roles in spring occurrence and distribution in the Sultan Mountains. The logistic regression approach had not previously been used to delineate groundwater potential zones; in this study, it was used to locate potential zones for groundwater springs in the Sultan Mountains. The evolved model
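
    In code, the statistical core of this workflow is small. The sketch below fits a binary logistic regression to simulated map cells and scores it by the ROC area; the factor columns are invented stand-ins for the seventeen real ones.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      # Rows are map cells; columns stand in for factors such as slope,
      # wetness index and drainage density.
      X = rng.random((1000, 5))
      logit = -2.0 + 3.0 * X[:, 1] + 2.0 * X[:, 4]
      spring = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

      clf = LogisticRegression()
      auc = cross_val_score(clf, X, spring, cv=5, scoring="roc_auc")
      print(f"ROC area: {auc.mean():.2f}")             # cf. 0.82 reported above

      # The fitted probabilities can then be written back to the raster
      # to produce the spring-potential map.
      potential = clf.fit(X, spring).predict_proba(X)[:, 1]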

  6. Organizational-methodical provisions for the audit of operations with electronic money

    Directory of Open Access Journals (Sweden)

    Semenetz A.P.

    2017-06-01

    To obtain objective and unbiased information about the accuracy and completeness of electronic money transactions at an enterprise, an audit must be conducted. The results of an external audit of electronic money transactions help the company's management personnel to assess the efficiency and rationality of using such a modern means of payment as electronic money, and to verify the proper functioning of the internal control service. The paper substantiates the organizational and methodical provisions of the process of conducting an external audit of electronic money transactions, clarifying its purpose, tasks, subjects, objects and sources of information. Accordingly, the purpose of the audit of electronic money transactions is to provide the auditor's unbiased opinion on the reliability of the enterprise's financial statements in terms of electronic money transactions. Within the scope of this dissertation, the object of the external audit is transactions with electronic money, since electronic money is a new and contemporary object of accounting, which makes the development of a scientifically grounded procedure for its external audit necessary. The subject of the external audit of electronic money transactions is the set of business transactions settled in electronic money, that is, transactions involving their acquisition and redemption, together with the accuracy of the information about them displayed in the financial statements. In the course of the study, the sequence of external audit procedures during the stages of the electronic money audit at the enterprise was determined, which allowed the correctness of the accounting for this new and modern means of payment to be confirmed. These proposals are aimed

  7. Pretreatment of Cellulose By Electron Beam Irradiation Method

    Science.gov (United States)

    Jusri, N. A. A.; Azizan, A.; Ibrahim, N.; Salleh, R. Mohd; Rahman, M. F. Abd

    2018-05-01

    Pretreatment of lignocellulosic biomass (LCB) to produce biofuel has been conducted using various methods, including physical, chemical, physicochemical and biological approaches. The bioethanol conversion process typically involves several steps, consisting of pretreatment, hydrolysis, fermentation and separation. In this project, microcrystalline cellulose (MCC) was used in place of LCB, since cellulose is the largest component of LCB, for the purpose of investigating the effectiveness of a new pretreatment method using radiation technology. Irradiation with different doses (100 kGy to 1000 kGy) was conducted using the electron beam accelerator at Agensi Nuklear Malaysia. Fourier Transform Infrared Spectroscopy (FTIR) and X-Ray Diffraction (XRD) analyses were performed to further understand the effect of the suggested pretreatment step on the MCC. Through this method, namely IRR-LCB, an ideal and optimal condition for pretreatment prior to the production of biofuel from LCB may be introduced.

  8. The process and utility of classification and regression tree methodology in nursing research.

    Science.gov (United States)

    Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda

    2014-06-01

    This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Discussion paper. English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984-2013. Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. © 2013 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.
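
    To make the 'inverted tree diagram' concrete, here is a minimal Python sketch on invented health variables; the printed rules are the kind of hierarchy the paper describes.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(6)
      age = rng.integers(20, 90, 400)
      sbp = rng.normal(130, 20, 400)                   # systolic blood pressure
      smoker = rng.integers(0, 2, 400)
      # Outcome with interactions that a single linear model would obscure
      risk = ((age > 65) & (sbp > 140)) | ((smoker == 1) & (age > 50))

      X = np.column_stack([age, sbp, smoker])
      tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20)
      tree.fit(X, risk)
      print(export_text(tree, feature_names=["age", "sbp", "smoker"]))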

  9. Simulation of therapeutic electron beam tracking through a non-uniform magnetic field using finite element method.

    Science.gov (United States)

    Tahmasebibirgani, Mohammad Javad; Maskani, Reza; Behrooz, Mohammad Ali; Zabihzadeh, Mansour; Shahbazian, Hojatollah; Fatahiasl, Jafar; Chegeni, Nahid

    2017-04-01

    In radiotherapy, megaelectron-volt (MeV) electrons are employed for the treatment of superficial cancers. Magnetic fields can be used to deflect and deform the electron flow; here the magnetic field was produced by non-uniform permanent magnets. Because the primary electrons are neither mono-energetic nor completely parallel, calculation of the electron beam deflection requires complex mathematical methods. In this study, a device was made to apply a magnetic field to an electron beam, and the path of the electrons in the magnetic field was simulated using the finite element method. A mini-applicator equipped with two neodymium permanent magnets was designed that enables tuning of the distance between the magnets. This device was placed in a standard applicator of a Varian 2100 CD linear accelerator. The mini-applicator was simulated in the CST Studio finite element software, and the deflection angle and displacement of the electron beam after passing through the magnetic field were calculated. By setting the distance between the two poles to between 2 and 5 cm, various intensities of transverse magnetic field were created. The accelerator head was turned so that the deflected electrons became perpendicular to the water surface. To measure the displacement of the electron beam, EBT2 GafChromic films were employed. After being exposed, the films were scanned using an HP G3010 reflection scanner and their optical density was extracted using a program written in the MATLAB environment. The measured displacement of the electron beam was compared with the simulation results after applying the magnetic field. Simulation results for the magnetic field showed good agreement with measured values. The maximum deflection angle, 32.9°, occurred for a 12 MeV beam, and the minimum deflection, 12.1°, for 15 MeV. Measurement with the film confirmed the precision of the simulation in predicting the displacement of the electron beam. A magnetic mini-applicator was thus made and simulated using the finite element method, and the deflection angle and displacement of the electron beam were calculated. With

  10. Electronic Structure Calculation of Permanent Magnets using the KKR Green's Function Method

    Science.gov (United States)

    Doi, Shotaro; Akai, Hisazumi

    2014-03-01

    The electronic structure and magnetic properties of permanent magnet materials, especially Nd2Fe14B, are investigated theoretically using the KKR Green's function method. Important physical quantities in magnetism, such as the magnetic moment, Curie temperature, and anisotropy constant, obtained from electronic structure calculations in both the atomic-sphere approximation and a full-potential treatment, are compared with past band structure calculations and experiments. The site preference of heavy rare-earth impurities is also evaluated through calculation of the formation energy with the use of the coherent potential approximation. Further, the development of an electronic structure calculation code using the screened KKR method for large supercells, which is aimed at studying the electronic structure of realistic microstructures (e.g., grain boundary phases), is introduced with some test calculations.

  11. Moving gantry method for electron beam dose profile measurement at extended source-to-surface distances.

    Science.gov (United States)

    Fekete, Gábor; Fodor, Emese; Pesznyák, Csilla

    2015-03-08

    A novel method has been put forward for measuring very large electron beam profiles. With this method, absorbed dose profiles can be measured at any depth in a solid phantom for total skin electron therapy. Electron beam dose profiles were collected with two different methods. Profile measurements were performed at 0.2 and 1.2 cm depths with a parallel plate and a thimble chamber, respectively. Electron beams of 108 cm × 108 cm and 45 cm × 45 cm projected size were scanned by vertically moving the phantom and detector at 300 cm source-to-surface distance with 90° and 270° gantry angles; the profiles collected this way were used as the reference. Afterwards, the phantom was fixed on the central axis and the gantry was rotated in certain angular steps. After applying corrections for the different source-to-detector distances and angles of incidence, the profiles measured in the two setups were compared, and a correction formalism was developed. The agreement between the cross profiles taken at the depth of maximum dose with the 'classical' scanning and with the new moving-gantry method was better than 0.5% in the measuring range from zero to 71.9 cm; inverse-square and attenuation corrections had to be applied. The profiles measured with the parallel plate chamber agree to better than 1%, except for the penumbra region, where the maximum difference is 1.5%. With the moving-gantry method, very large electron field profiles can be measured at any depth in a solid phantom with high accuracy and reproducibility, and with much less time per step. No special instrumentation is needed. The method can be used for commissioning of very large electron beams for computer-assisted treatment planning, for designing beam modifiers to improve dose uniformity, and for verification of computed dose profiles.
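
    One plausible form of the geometry behind those corrections can be sketched as follows, under the simplifying assumptions of a point source and a detector fixed on the original central axis (obliquity and attenuation terms omitted): rotating the gantry by theta probes the off-axis position SDD·tan(theta), and the reading is scaled by 1/cos²(theta) to undo the inverse-square fall-off along the longer path SDD/cos(theta). This is an illustrative assumption, not the paper's exact formalism.

      import numpy as np

      SDD = 300.0                                      # source-detector distance, cm

      def off_axis_position(theta_deg):
          """Profile position probed when the gantry is rotated by theta."""
          return SDD * np.tan(np.radians(theta_deg))

      def inverse_square_correction(theta_deg):
          """Factor applied to the reading; path length grows to SDD/cos(theta)."""
          return 1.0 / np.cos(np.radians(theta_deg)) ** 2

      for theta in (0, 3, 6, 9, 12):
          print(f"theta = {theta:2d} deg -> x = {off_axis_position(theta):5.1f} cm, "
                f"correction = {inverse_square_correction(theta):.3f}")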

  12. Comparing electron tomography and HRTEM slicing methods as tools to measure the thickness of nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Alloyeau, D., E-mail: alloyeau.damien@gmail.com [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Laboratoire d'Etude des Microstructures - ONERA/CNRS, UMR 104, B.P. 72, 92322 Chatillon (France); Ricolleau, C. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Oikawa, T. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); JEOL (Europe) SAS, Espace Claude Monet, 1 Allee de Giverny, 78290 Croissy-sur-Seine (France); Langlois, C. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Le Bouar, Y.; Loiseau, A. [Laboratoire d'Etude des Microstructures - ONERA/CNRS, UMR 104, B.P. 72, 92322 Chatillon (France)

    2009-06-15

    Nanoparticle morphology is a key parameter in understanding their thermodynamical, optical, magnetic and catalytic properties. In general, nanoparticles observed in transmission electron microscopy (TEM) are viewed in projection, so the determination of their thickness (along the projection direction) with respect to their projected lateral size is highly questionable. To date, the widely used methods to measure nanoparticle thickness in a transmission electron microscope are cross-section imaging and focal series in high-resolution transmission electron microscopy (HRTEM 'slicing'). In this paper, we compare the focal series method with the electron tomography method and show that both techniques yield similar particle thicknesses over a size range from 1 to 5 nm, but electron tomography provides better statistics, since more particles can be analyzed at one time. For this purpose, we have compared, on the same samples, the nanoparticle thickness measurements obtained from focal series with those determined from cross-section profiles of tomograms (tomogram slicing) perpendicular to the plane of the substrate supporting the nanoparticles. The methodology is finally applied to the comparison of CoPt nanoparticles annealed ex situ at two different temperatures, illustrating the accuracy of the techniques in detecting small changes in particle thickness.

  13. Mixed Frequency Data Sampling Regression Models: The R Package midasr

    Directory of Open Access Journals (Sweden)

    Eric Ghysels

    2016-08-01

    When modeling economic relationships it is increasingly common to encounter data sampled at different frequencies. We introduce the R package midasr, which enables estimating regression models with variables sampled at different frequencies within the MIDAS regression framework put forward by Ghysels, Santa-Clara, and Valkanov (2002). In this article we define a general autoregressive MIDAS regression model with multiple variables of different frequencies and show how it can be specified using the familiar R formula interface and estimated using various optimization methods chosen by the researcher. We discuss how to check the validity of the estimated model both in terms of numerical convergence and statistical adequacy of a chosen regression specification, how to perform model selection based on an information criterion, how to assess the forecasting accuracy of a MIDAS regression model, and how to obtain a forecast aggregation of different MIDAS regression models. We illustrate the capabilities of the package with a simulated MIDAS regression model and give two empirical examples of the application of MIDAS regression.
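
    The package itself is in R, but the core MIDAS idea fits in a few lines of Python: high-frequency lags enter a low-frequency regression through a parsimonious weight function such as the exponential Almon lag. In this sketch the weights are held fixed, so the model reduces to OLS; midasr instead estimates the weight parameters jointly by nonlinear least squares. All data are simulated.

      import numpy as np

      def exp_almon(theta1, theta2, n_lags):
          """Two-parameter exponential Almon weights, normalized to sum to one."""
          j = np.arange(1, n_lags + 1)
          w = np.exp(theta1 * j + theta2 * j ** 2)
          return w / w.sum()

      rng = np.random.default_rng(7)
      n_q, n_lags = 120, 12                            # 120 quarters, 12 monthly lags
      x_monthly = rng.standard_normal(n_q * 3 + n_lags)
      w = exp_almon(0.1, -0.05, n_lags)

      # Each quarter's regressor is a weighted sum of the preceding months
      lags = np.array([x_monthly[t - n_lags:t][::-1]
                       for t in range(n_lags, n_lags + n_q * 3, 3)])
      y = 1.0 + 2.0 * lags @ w + 0.1 * rng.standard_normal(n_q)

      design = np.column_stack([np.ones(n_q), lags @ w])
      beta, *_ = np.linalg.lstsq(design, y, rcond=None)
      print("intercept, slope:", beta)                 # close to (1.0, 2.0)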

  14. PARAMETRIC AND NON PARAMETRIC (MARS: MULTIVARIATE ADDITIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    Science.gov (United States)

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  15. Weighted SGD for ℓp Regression with Randomized Preconditioning*

    Science.gov (United States)

    Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W.

    2018-01-01

    In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers, including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that only depend on the lower dimension of the linear system, while maintaining low computational complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression of size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in 𝒪(log n·nnz(A) + poly(d)/ε²) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints, which in general is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and the solution vector measured in
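
    A minimal sketch of the preconditioning step alone, the part that removes the dependence on the condition number, assuming a plain Gaussian sketch in place of the faster transforms the paper would use: after a QR of the sketched matrix, A·R⁻¹ is nearly perfectly conditioned, which is what lets the subsequent weighted SGD iterations converge quickly.

      import numpy as np

      rng = np.random.default_rng(8)
      n, d = 5000, 20
      A = rng.standard_normal((n, d)) * rng.uniform(0.1, 10.0, d)  # ill-conditioned

      S = rng.standard_normal((8 * d, n)) / np.sqrt(8 * d)  # Gaussian sketch
      _, R = np.linalg.qr(S @ A)                            # small QR: (8d x d)
      AR = np.linalg.solve(R.T, A.T).T                      # A @ inv(R)

      print("cond(A)          =", np.linalg.cond(A))
      print("cond(A @ inv(R)) =", np.linalg.cond(AR))       # near 1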

  16. A rapid method of reprocessing paraffin-embedded histological sections for electron microscopy

    International Nuclear Information System (INIS)

    Hernandez Chavarri, F.; Vargas Montero, M.; Rivera, P.; Carranza, A.

    2000-01-01

    A simple and rapid method is described for reprocessing light-microscopy paraffin sections so that they can be observed under transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The paraffin-embedded tissue is sectioned and deparaffinized in toluene, then exposed to osmium vapor under microwave irradiation using a domestic microwave oven. The tissues are then embedded in epoxy resin, polymerized and ultrathin-sectioned. The method requires a relatively short time (about 30 minutes for TEM and 15 for SEM) and produces a reasonable quality of ultrastructure for diagnostic purposes. (Author)

  17. Study on the Filament Yarns Spreading Techniques and Assessment Methods of the Electronic Fiberglass Fabric

    Science.gov (United States)

    Wang, Xi; Chen, Shouhui; Zheng, Tianyong; Ning, Xiangchun; Dai, Yifei

    2018-03-01

    Filament yarn spreading techniques for electronic fiberglass fabric have been developed over the past few years to meet the requirements of the developing electronics industry. Copper clad laminate (CCL) requires that the warp and weft yarns of the fabric be spread apart and formed flat. The filament yarn spreading techniques improve the resin penetration performance of the electronic fiberglass fabric, as well as the peeling strength of CCL and the drilling performance of printed circuit boards (PCB). This paper presents the filament yarn spreading techniques for electronic fiberglass fabric from several aspects, such as methods and functions, together with the methods for assessing their effects.

  18. Determinants of LSIL Regression in Women from a Colombian Cohort

    International Nuclear Information System (INIS)

    Molano, Monica; Gonzalez, Mauricio; Gamboa, Oscar; Ortiz, Natasha; Luna, Joaquin; Hernandez, Gustavo; Posso, Hector; Murillo, Raul; Munoz, Nubia

    2010-01-01

    Objective: To analyze the role of Human Papillomavirus (HPV) and other risk factors in the regression of cervical lesions in women from the Bogota Cohort. Methods: 200 HPV-positive women with abnormal cytology were included in the regression analysis. The time to lesion regression was modeled using methods for interval-censored survival time data. Median duration of total follow-up was 9 years. Results: 80 (40%) women were diagnosed with Atypical Squamous Cells of Undetermined Significance (ASCUS) or Atypical Glandular Cells of Undetermined Significance (AGUS), while 120 (60%) were diagnosed with Low Grade Squamous Intra-epithelial Lesions (LSIL). Globally, 40% of the lesions were still present at the first year of follow-up, while 1.5% were still present at the 5-year check-up. The multivariate model showed similar regression rates for lesions in women with ASCUS/AGUS and women with LSIL (HR=0.82, 95% CI 0.59-1.12). Women infected with high-risk (HR) HPV types and those with mixed infections had lower regression rates for lesions than did women infected with low-risk (LR) types (HR=0.526, 95% CI 0.33-0.84, for HR types and HR=0.378, 95% CI 0.20-0.69, for mixed infections). Furthermore, women over 30 years had a higher lesion regression rate than did women under 30 years (HR=1.53, 95% CI 1.03-2.27). The study showed that the median time for lesion regression was 9 months, while the median time for HPV clearance was 12 months. Conclusions: In the studied population, the type of infection and the age of the women are critical factors for the regression of cervical lesions.

  19. Electron mobilities of n-type organic semiconductors from time-dependent wavepacket diffusion method: pentacenequinone derivatives.

    Science.gov (United States)

    Zhang, WeiWei; Zhong, XinXin; Zhao, Yi

    2012-11-26

    The electron mobilities of two n-type pentacenequinone-derivative organic semiconductors, 5,7,12,14-tetraaza-6,13-pentacenequinone (TAPQ5) and 1,4,8,11-tetraaza-6,13-pentacenequinone (TAPQ7), are investigated with electronic structure and quantum dynamics methods. The electronic structure calculations reveal that the two key parameters controlling electron transfer, the reorganization energy and the electronic coupling, are similar for these two isomeric systems, and that the charge carriers essentially display one-dimensional transport properties. The mobilities are then calculated using the time-dependent wavepacket diffusion approach, in which the dynamic fluctuations of the electronic couplings are incorporated via their correlation functions obtained from molecular dynamics simulations. The predicted mobility of the TAPQ7 crystal is about six times larger than that of the TAPQ5 crystal. Most interestingly, Fermi's golden rule predicts mobilities very close to those from the time-dependent wavepacket diffusion method, even though the electronic couplings are clearly large enough to make perturbation theory invalid. The possible reason is analyzed in terms of the dynamic fluctuations.

  20. Introduction to First-Principles Electronic Structure Methods: Application to Actinide Materials

    International Nuclear Information System (INIS)

    Klepeis, J E

    2005-01-01

    The purpose of this paper is to provide an introduction for non-experts to first-principles electronic structure methods that are widely used in the field of condensed-matter physics, including applications to actinide materials. The methods I describe are based on density functional theory (DFT) within the local density approximation (LDA) and the generalized gradient approximation (GGA). In addition to explaining the meaning of this terminology I also describe the underlying theory itself in some detail in order to enable a better understanding of the relative strengths and weaknesses of the methods. I briefly mention some particular numerical implementations of DFT, including the linear muffin-tin orbital (LMTO), linear augmented plane wave (LAPW), and pseudopotential methods, as well as general methodologies that go beyond DFT and specifically address some of the weaknesses of the theory. The last third of the paper is devoted to a few selected applications that illustrate the ideas discussed in the first two-thirds. In particular, I conclude by addressing the current controversy regarding magnetic DFT calculations for actinide materials. Throughout this paper particular emphasis is placed on providing the appropriate background to enable the non-expert to gain a better appreciation of the application of first-principles electronic structure methods to the study of actinide and other materials

  1. Dynamical electron diffraction simulation for non-orthogonal crystal system by a revised real space method.

    Science.gov (United States)

    Lv, C L; Liu, Q B; Cai, C Y; Huang, J; Zhou, G W; Wang, Y G

    2015-01-01

    In transmission electron microscopy, a revised real space (RRS) method has been confirmed to be a more accurate dynamical electron diffraction simulation method for low-energy electron diffraction than the conventional multislice method (CMS). However, the RRS method could previously only be used to calculate the dynamical electron diffraction of orthogonal crystal systems. In this work, the expression of the RRS method for non-orthogonal crystal systems is derived. Taking Na2Ti3O7 and Si as examples, the correctness of the derived RRS formula for non-orthogonal crystal systems is confirmed by testing the coincidence of the numerical results for both sides of the Schrödinger equation; moreover, the difference between the RRS method and the CMS for non-orthogonal crystal systems is compared over the accelerating-voltage range from 40 kV down to 10 kV. Our results show that the CMS method is almost the same as the RRS method for accelerating voltages above 40 kV. However, when the accelerating voltage is lowered to 20 kV or below, the CMS method introduces significant errors, not only for the higher-order Laue zone diffractions but also for the zero-order Laue zone. These results indicate that the RRS method for non-orthogonal crystal systems is necessary for more accurate dynamical simulation when the accelerating voltage is low. Furthermore, the reason why the differences between the diffraction patterns calculated by the RRS method and the CMS method increase as the accelerating voltage decreases is discussed. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  2. Deep ensemble learning of sparse regression models for brain disease diagnosis.

    Science.gov (United States)

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2017-04-01

    Recent studies on brain imaging analysis have witnessed the core role of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Among the various machine learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have achieved great success by outperforming state-of-the-art performances in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with a different value of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set, and thereby have different power to predict the response values, i.e., the clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we thus call the 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared them with previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
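
    Schematically, the pipeline resembles the Python sketch below, with a logistic regression standing in for the paper's deep convolutional network and simulated features in place of the ADNI data: several Lasso models with different regularization strengths each emit a target-level prediction, and a second-stage learner combines those channels.

      import numpy as np
      from sklearn.linear_model import Lasso, LogisticRegression
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(9)
      X = rng.standard_normal((300, 100))              # high-dimensional features
      y = (X[:, :5].sum(axis=1) > 0).astype(int)       # stand-in diagnosis label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      alphas = [0.01, 0.03, 0.1, 0.3]                  # different sparsity levels
      models = [Lasso(alpha=a).fit(X_tr, y_tr) for a in alphas]

      # Each sparse model's prediction becomes one channel of the representation
      Z_tr = np.column_stack([m.predict(X_tr) for m in models])
      Z_te = np.column_stack([m.predict(X_te) for m in models])

      meta = LogisticRegression().fit(Z_tr, y_tr)
      print("accuracy:", accuracy_score(y_te, meta.predict(Z_te)))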

  3. Systematic review of treatment modalities for gingival depigmentation: a random-effects poisson regression analysis.

    Science.gov (United States)

    Lin, Yi Hung; Tu, Yu Kang; Lu, Chun Tai; Chung, Wen Chen; Huang, Chiung Fang; Huang, Mao Suan; Lu, Hsein Kun

    2014-01-01

    Repigmentation variably occurs with different treatment methods in patients with gingival pigmentation. A systematic review was conducted of the various treatment modalities for eliminating melanin pigmentation of the gingiva, comprising bur abrasion, scalpel surgery, cryosurgery, electrosurgery, gingival grafts, and laser techniques, to compare the recurrence rates (Rrs) of these treatment procedures. Electronic databases, including PubMed, Web of Science, Google, and Medline, were comprehensively searched, and manual searches were conducted for studies published from January 1951 to June 2013. After applying inclusion and exclusion criteria, the final list of articles was reviewed in depth to achieve the objectives of this review. A Poisson regression was used to analyze the outcome of depigmentation using the various treatment methods. The systematic review was based mainly on case reports. In total, 61 eligible publications met the defined criteria. The various therapeutic procedures showed variable clinical results with a wide range of Rrs. A random-effects Poisson regression showed that cryosurgery (Rr = 0.32%), electrosurgery (Rr = 0.74%), and laser depigmentation (Rr = 1.16%) yielded superior results, whereas bur abrasion yielded the highest Rr (8.89%). Within the limits of the sampling, the present evidence-based results show that cryosurgery exhibits the best predictability for depigmentation of the gingiva among all procedures examined, followed by electrosurgery and laser techniques. It is possible to treat melanin pigmentation of the gingiva with various methods and prevent repigmentation. Among these treatment modalities, cryosurgery, electrosurgery, and laser surgery appear to be the best choices for treating gingival pigmentation. © 2014 Wiley Periodicals, Inc.

  4. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    Science.gov (United States)

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
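
    The selection loop can be sketched as follows; scikit-learn has no MCP penalty, so an l1 penalty stands in for it here, and the data are simulated in the p >> n regime typical of microarrays.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(10)
      X = rng.standard_normal((200, 500))              # n = 200 samples, p = 500 genes
      y = (X[:, 0] - X[:, 1] + 0.5 * rng.standard_normal(200) > 0).astype(int)

      best = None
      for C in (0.01, 0.03, 0.1, 0.3, 1.0):            # tuning-parameter grid
          clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
          auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
          if best is None or auc > best[1]:
              best = (C, auc)
      print(f"selected C = {best[0]} with CV-AUC = {best[1]:.3f}")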

  5. Image superresolution using support vector regression.

    Science.gov (United States)

    Ni, Karl S; Nguyen, Truong Q

    2007-06-01

    A thorough investigation of the application of support vector regression (SVR) to the superresolution problem is conducted through various frameworks. Prior to the study, the SVR problem is enhanced by finding the optimal kernel. This is done by formulating the kernel learning problem in SVR form as a convex optimization problem, specifically a semi-definite programming (SDP) problem. An additional constraint is added to reduce the SDP to a quadratically constrained quadratic programming (QCQP) problem. After this optimization, investigation of the relevance of SVR to superresolution proceeds with the possibility of using a single, general support vector regression for all image content, and the results are impressive for small training sets. This idea is improved upon by observing structural properties in the discrete cosine transform (DCT) domain to aid in learning the regression. Further improvement involves a combination of classification and SVR-based techniques, extending work in resolution synthesis. This method, termed kernel resolution synthesis, uses specific regressors for isolated image content to describe the domain through a partitioned view of the vector space, thereby yielding good results.

  6. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high
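
    Two of the compared selectors are easy to sketch side by side on a shared predictor table; the basin characteristics and flow response below are invented, and only correlation ranking and random-forest importance (methods 2 and 3 above) are shown.

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(12)
      df = pd.DataFrame({
          "mean_elev": rng.random(500),
          "forest_pct": rng.random(500),
          "geology_idx": rng.random(500),
          "slope": rng.random(500),
      })
      q50 = 2.0 * df["geology_idx"] + df["forest_pct"] \
            + 0.2 * rng.standard_normal(500)           # stand-in median flow

      corr_rank = df.corrwith(q50).abs().sort_values(ascending=False)
      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(df, q50)
      rf_rank = pd.Series(rf.feature_importances_,
                          index=df.columns).sort_values(ascending=False)
      print("correlation ranking:", list(corr_rank.index))
      print("random-forest ranking:", list(rf_rank.index))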

  7. Hyperspectral Unmixing with Robust Collaborative Sparse Regression

    Directory of Open Access Journals (Sweden)

    Chang Li

    2016-07-01

    Recently, sparse unmixing (SU) of hyperspectral data has received particular attention for analyzing remote sensing images. However, most SU methods are based on the commonly adopted linear mixing model (LMM), which ignores possible nonlinear effects (i.e., nonlinearity). In this paper, we propose a new method named robust collaborative sparse regression (RCSR), based on the robust LMM (rLMM), for hyperspectral unmixing. The rLMM takes the nonlinearity into consideration, and the nonlinearity is treated simply as an outlier term, which has an underlying sparse property. The RCSR simultaneously takes into account the collaborative sparse property of the abundances and the sparsely distributed additive property of the outliers, which can be formulated as a robust joint sparse regression problem. The inexact augmented Lagrangian method (IALM) is used to optimize the proposed RCSR. Qualitative and quantitative experiments on synthetic datasets and real hyperspectral images demonstrate that the proposed RCSR is efficient for solving the hyperspectral SU problem compared with four other state-of-the-art algorithms.

  8. Collaborative regression-based anatomical landmark detection

    International Nuclear Information System (INIS)

    Gao, Yaozong; Shen, Dinggang

    2015-01-01

    Anatomical landmark detection plays an important role in medical image analysis, e.g. for registration, segmentation and quantitative analysis. Among the various existing methods for landmark detection, regression-based methods have recently attracted much attention due to their robustness and efficiency. In these methods, landmarks are localised through voting from all image voxels, which is completely different from the classification-based methods that use voxel-wise classification to detect landmarks. Despite their robustness, the accuracy of regression-based landmark detection methods is often limited due to (1) the inclusion of uninformative image voxels in the voting procedure, and (2) the lack of effective ways to incorporate inter-landmark spatial dependency into the detection step. In this paper, we propose a collaborative landmark detection framework to address these limitations. The concept of collaboration is reflected in two aspects. (1) Multi-resolution collaboration. A multi-resolution strategy is proposed to hierarchically localise landmarks by gradually excluding uninformative votes from faraway voxels. Moreover, for informative voxels near the landmark, a spherical sampling strategy is also designed at the training stage to improve their prediction accuracy. (2) Inter-landmark collaboration. A confidence-based landmark detection strategy is proposed to improve the detection accuracy of ‘difficult-to-detect’ landmarks by using spatial guidance from ‘easy-to-detect’ landmarks. To evaluate our method, we conducted experiments extensively on three datasets for detecting prostate landmarks and head and neck landmarks in computed tomography images, and also dental landmarks in cone beam computed tomography images. The results show the effectiveness of our collaborative landmark detection framework in improving landmark detection accuracy, compared to other state-of-the-art methods. (paper)

  9. Research on the multiple linear regression in non-invasive blood glucose measurement.

    Science.gov (United States)

    Zhu, Jianming; Chen, Zhencheng

    2015-01-01

    A non-invasive blood glucose measurement sensor and a data processing algorithm based on the metabolic energy conservation (MEC) method are presented in this paper. The physiological parameters of the human fingertip can be measured by various sensing modalities, and the blood glucose value can be evaluated from these physiological parameters by multiple linear regression analysis. Five variable-selection methods in multiple linear regression (enter, remove, forward, backward and stepwise) were compared, and the backward method had the best performance. The best correlation coefficient was 0.876 with a standard error of the estimate of 0.534, and the significance was 0.012 (sig. < 0.05), indicating that the regression equation was valid. Clarke error grid analysis was performed to compare the MEC method with the hexokinase method, using 200 data points. The correlation coefficient R was 0.867 and all of the points were located in Zone A and Zone B, which shows that the MEC method provides a feasible and valid way for non-invasive blood glucose measurement.
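
    A minimal sketch of the backward-elimination procedure reported to perform best above, using statsmodels; the physiological predictors are synthetic placeholders and the 0.05 removal threshold is an assumption.

```python
# Backward elimination: drop the least significant predictor until all
# remaining p-values fall below a chosen threshold (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))                 # stand-ins for fingertip parameters
y = 1.0 + 0.8 * X[:, 0] - 0.6 * X[:, 2] + rng.normal(scale=0.3, size=60)

cols = list(range(X.shape[1]))
while cols:
    fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
    pvals = fit.pvalues[1:]                  # skip the intercept
    worst = int(pvals.argmax())
    if pvals[worst] < 0.05:                  # every remaining term significant
        break
    cols.pop(worst)                          # remove the weakest predictor
print("retained predictors:", cols)
```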

  10. A comparison between univariate probabilistic and multivariate (logistic regression) methods for landslide susceptibility analysis: the example of the Febbraro valley (Northern Alps, Italy)

    Science.gov (United States)

    Rossi, M.; Apuani, T.; Felletti, F.

    2009-04-01

    The aim of this paper is to compare the results of two statistical methods for landslide susceptibility analysis: 1) a univariate probabilistic method based on a landslide susceptibility index, and 2) a multivariate method (logistic regression). The study area is the Febbraro valley, located in the central Italian Alps, where different types of metamorphic rocks crop out. On the eastern part of the studied basin a Quaternary cover of colluvial and, secondarily, glacial deposits is dominant. In this study 110 earth flows, mainly located toward the NE portion of the catchment, were analyzed. They involve only the colluvial deposits and their extension mainly ranges from 36 to 3173 m2. Both statistical methods require a spatial database, constructed using a Geographical Information System (GIS), in which each landslide is described by several parameters whose values are assigned at the central point of the landslide's main scarp. Based on a bibliographic review, a total of 15 predisposing factors were utilized. The width of the intervals into which the maps of the predisposing factors were reclassified was defined assuming constant intervals for: elevation (100 m), slope (5 °), solar radiation (0.1 MJ/cm2/year), profile curvature (1.2 1/m), tangential curvature (2.2 1/m), drainage density (0.5), lineament density (0.00126). For the other parameters, the results of probability-probability plot analysis and the statistical indexes of landslide sites were used. In particular: slope length (0 ÷ 2, 2 ÷ 5, 5 ÷ 10, 10 ÷ 20, 20 ÷ 35, 35 ÷ 260), accumulation flow (0 ÷ 1, 1 ÷ 2, 2 ÷ 5, 5 ÷ 12, 12 ÷ 60, 60 ÷ 27265), Topographic Wetness Index (0 ÷ 0.74, 0.74 ÷ 1.94, 1.94 ÷ 2.62, 2.62 ÷ 3.48, 3.48 ÷ 6.00, 6.00 ÷ 9.44), Stream Power Index (0 ÷ 0.64, 0.64 ÷ 1.28, 1.28 ÷ 1.81, 1.81 ÷ 4.20, 4.20 ÷ 9

  11. Analysis of γ spectra in airborne radioactivity measurements using multiple linear regressions

    International Nuclear Information System (INIS)

    Bao Min; Shi Quanlin; Zhang Jiamei

    2004-01-01

    This paper describes the calculation of the net peak counts of the nuclide 137Cs at 662 keV in γ spectra from airborne radioactivity measurements using multiple linear regression. A mathematical model is established by analyzing every factor that contributes to the Cs peak counts in the spectra, and a multiple linear regression function is constructed. The calculation adopts stepwise regression, and insignificant factors are eliminated by an F-test. The regression results and their uncertainties are calculated using least squares estimation, from which the Cs net peak counts and their uncertainty are obtained. The analysis results for an experimental spectrum are displayed. The influence of energy shift and energy resolution on the results is discussed. In comparison with the stripping spectra method, the multiple linear regression method needs no stripping ratios, the result depends only on the counts in the Cs peak, and the calculated uncertainty is reduced. (authors)
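
    The sketch below illustrates the least-squares step with a propagated uncertainty on the fitted peak amplitude; the design-matrix columns (constant background, linear continuum, Gaussian peak shape) are hypothetical simplifications of the factors described above.

```python
# Least-squares fit of a spectrum region and uncertainty of the net
# peak counts from the parameter covariance (synthetic spectrum).
import numpy as np

rng = np.random.default_rng(3)
ch = np.arange(50, dtype=float)
peak = np.exp(-0.5 * ((ch - 25.0) / 3.0) ** 2)       # assumed peak shape
A = np.column_stack([np.ones_like(ch), ch, peak])    # background + continuum + peak
counts = A @ np.array([100.0, -0.5, 400.0]) + rng.normal(scale=5.0, size=50)

coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
resid = counts - A @ coef
sigma2 = resid @ resid / (len(counts) - A.shape[1])  # residual variance
cov = sigma2 * np.linalg.inv(A.T @ A)                # parameter covariance
net, net_unc = coef[2] * peak.sum(), np.sqrt(cov[2, 2]) * peak.sum()
print(f"net peak counts: {net:.0f} +/- {net_unc:.0f}")
```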

  12. Sample preparation methods for scanning electron microscopy of homogenized Al-Mg-Si billets: A comparative study

    International Nuclear Information System (INIS)

    Österreicher, Johannes Albert; Kumar, Manoj; Schiffl, Andreas; Schwarz, Sabine; Hillebrand, Daniel; Bourret, Gilles Remi

    2016-01-01

    Characterization of Mg-Si precipitates is crucial for optimizing the homogenization heat treatment of Al-Mg-Si alloys. Although sample preparation is key for high quality scanning electron microscopy imaging, most common methods lead to dealloying of Mg-Si precipitates. In this article we systematically evaluate different sample preparation methods: mechanical polishing, etching with various reagents, and electropolishing using different electrolytes. We demonstrate that the use of a nitric acid and methanol electrolyte for electropolishing a homogenized Al-Mg-Si alloy prevents the dissolution of Mg-Si precipitates, resulting in micrographs of higher quality. This preparation method is investigated in depth and the obtained scanning electron microscopy images are compared with transmission electron micrographs: the shape and size of Mg-Si precipitates appear very similar in either method. The scanning electron micrographs allow proper identification and measurement of the Mg-Si phases including needles with lengths of roughly 200 nm. These needles are β″ precipitates as confirmed by high resolution transmission electron microscopy. - Highlights: •Secondary precipitation in homogenized 6xxx Al alloys is crucial for extrudability. •Existing sample preparation methods for SEM are improvable. •Electropolishing with nitric acid/methanol yields superior quality in SEM. •The obtained micrographs are compared to TEM micrographs.

  13. Sample preparation methods for scanning electron microscopy of homogenized Al-Mg-Si billets: A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Österreicher, Johannes Albert; Kumar, Manoj [LKR Light Metals Technologies Ranshofen, Austrian Institute of Technology, Postfach 26, 5282 Ranshofen (Austria); Schiffl, Andreas [Hammerer Aluminium Industries Extrusion GmbH, Lamprechtshausener Straße 69, 5282 Ranshofen (Austria); Schwarz, Sabine [University Service Centre for Transmission Electron Microscopy, Vienna University of Technology, Wiedner Hauptstr. 8-10, 1040 Wien (Austria); Hillebrand, Daniel [Hammerer Aluminium Industries Extrusion GmbH, Lamprechtshausener Straße 69, 5282 Ranshofen (Austria); Bourret, Gilles Remi, E-mail: gilles.bourret@sbg.ac.at [Department of Materials Science and Physics, University of Salzburg, Hellbrunner Straße 34, 5020 Salzburg (Austria)

    2016-12-15

    Characterization of Mg-Si precipitates is crucial for optimizing the homogenization heat treatment of Al-Mg-Si alloys. Although sample preparation is key for high quality scanning electron microscopy imaging, most common methods lead to dealloying of Mg-Si precipitates. In this article we systematically evaluate different sample preparation methods: mechanical polishing, etching with various reagents, and electropolishing using different electrolytes. We demonstrate that the use of a nitric acid and methanol electrolyte for electropolishing a homogenized Al-Mg-Si alloy prevents the dissolution of Mg-Si precipitates, resulting in micrographs of higher quality. This preparation method is investigated in depth and the obtained scanning electron microscopy images are compared with transmission electron micrographs: the shape and size of Mg-Si precipitates appear very similar in either method. The scanning electron micrographs allow proper identification and measurement of the Mg-Si phases including needles with lengths of roughly 200 nm. These needles are β″ precipitates as confirmed by high resolution transmission electron microscopy. - Highlights: •Secondary precipitation in homogenized 6xxx Al alloys is crucial for extrudability. •Existing sample preparation methods for SEM are improvable. •Electropolishing with nitric acid/methanol yields superior quality in SEM. •The obtained micrographs are compared to TEM micrographs.

  14. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may also depend on a function of this covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, researchers often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work because the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
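
    A minimal sketch of isotonic regression fitted by the PAVA, as implemented in scikit-learn, follows; the monotone trend and noise are synthetic.

```python
# Isotonic regression via the pool-adjacent-violators algorithm (PAVA).
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 100)
y = np.log1p(x) + rng.normal(scale=0.2, size=100)    # monotone signal + noise

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)                      # PAVA under the hood
print("fit is monotone:", bool(np.all(np.diff(y_fit) >= -1e-12)))
```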

  15. Matrix-product-state method with local basis optimization for nonequilibrium electron-phonon systems

    Science.gov (United States)

    Heidrich-Meisner, Fabian; Brockt, Christoph; Dorfner, Florian; Vidmar, Lev; Jeckelmann, Eric

    We present a method for simulating the time evolution of quasi-one-dimensional correlated systems with strongly fluctuating bosonic degrees of freedom (e.g., phonons) using matrix product states. For this purpose we combine the time-evolving block decimation (TEBD) algorithm with a local basis optimization (LBO) approach. We discuss the performance of our approach in comparison to TEBD with a bare boson basis, exact diagonalization, and diagonalization in a limited functional space. TEBD with LBO can reduce the computational cost by orders of magnitude when boson fluctuations are large and thus it allows one to investigate problems that are out of reach of other approaches. First, we test our method on the non-equilibrium dynamics of a Holstein polaron and show that it allows us to study the regime of strong electron-phonon coupling. Second, the method is applied to the scattering of an electronic wave packet off a region with electron-phonon coupling. Our study reveals a rich physics including transient self-trapping and dissipation. Supported by Deutsche Forschungsgemeinschaft (DFG) via FOR 1807.

  16. A method for measuring the electron drift velocity in working gas using a Frisch-grid ionization chamber

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Huaiyong; Wang, Zhimin; Zhang, Luyu; Chen, Jinxiang; Zhang, Guohui, E-mail: guohuizhang@pku.edu.cn

    2016-12-21

    A method for measuring the electron drift velocity in a working gas is proposed. Based on the cathode and anode signal waveforms of a Frisch-grid ionization chamber, the electron drift velocity is extracted. With this method, the electron drift velocities in Ar + 10% CH₄, Ar + 3.5% CO₂ and Kr + 2.7% CO₂ gases have been measured, and the results are compared with existing measurements and simulation results. Using this method, the electron drift velocity can be monitored throughout a charged-particle experiment without disturbing the measurement of other parameters, such as energy and orientation.

  17. Establishment of regression dependences. Linear and nonlinear dependences

    International Nuclear Information System (INIS)

    Onishchenko, A.M.

    1994-01-01

    The main problems in determining linear and 19 types of nonlinear regression dependences are discussed in full. It is taken into account that the total dispersions are the sum of the measurement dispersions and the dispersions of the parameter variations themselves. Approaches to determining all of these dispersions are described. It is shown that the least-squares fit gives inconsistent estimates for industrial objects and processes. Correction methods that take into account comparable measurement errors in both variables make it possible to obtain consistent estimates of the regression equation parameters. The condition under which applying the correction technique is expedient is given. A technique for determining nonlinear regression dependences that takes into account the form of the dependence and comparable errors in both variables is described. 6 refs., 1 tab
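
    As one concrete instance of correcting for comparable errors in both variables, the sketch below implements Deming regression (not necessarily the technique of the record above, whose details are not given); delta is the assumed ratio of the two error variances, and the data are synthetic.

```python
# Deming regression: consistent slope when both x and y carry
# measurement error, unlike ordinary least squares (synthetic data).
import numpy as np

def deming(x, y, delta=1.0):
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
             ) / (2.0 * sxy)
    return slope, my - slope * mx

rng = np.random.default_rng(5)
t = rng.uniform(0, 10, 200)                  # latent error-free values
x = t + rng.normal(scale=0.5, size=200)      # both variables observed with error
y = 2.0 * t + 1.0 + rng.normal(scale=0.5, size=200)
print("slope, intercept:", deming(x, y))
```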

  18. Determination of electron clinical spectra from percentage depth dose (PDD) curves by classical simulated annealing method

    International Nuclear Information System (INIS)

    Visbal, Jorge H. Wilches; Costa, Alessandro M.

    2016-01-01

    The percentage depth dose (PDD) of electron beams represents an important item of data in radiation therapy, since it describes their dosimetric properties. Using an accurate transport theory, or the Monte Carlo method, obvious differences have been shown between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution, in water, of monoenergetic electrons at the accelerator's nominal energy. In radiotherapy, the electron spectrum should be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on the way radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. There are three principal approaches to obtaining electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement and inverse reconstruction. In this work, the simulated annealing method is presented as a practical, reliable and simple approach to inverse reconstruction, and as a good alternative to the other options. (author)
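
    A minimal sketch of classical simulated annealing for an inverse reconstruction of this kind follows; the forward model A, the measured PDD and the cooling schedule are hypothetical placeholders, not the authors' implementation.

```python
# Simulated annealing: recover spectral weights w so that the forward
# model A @ w reproduces a measured depth-dose curve (all synthetic).
import numpy as np

rng = np.random.default_rng(6)
A = rng.random((40, 8))                       # depth bins x energy bins
w_true = rng.dirichlet(np.ones(8))            # "true" spectrum
pdd_meas = A @ w_true

def cost(w):
    return np.sum((A @ w - pdd_meas) ** 2)

w, T = np.full(8, 1.0 / 8.0), 1.0             # uniform start, initial temperature
for _ in range(20000):
    cand = np.abs(w + rng.normal(scale=0.02, size=8))
    cand /= cand.sum()                        # keep a normalized spectrum
    dE = cost(cand) - cost(w)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        w = cand                              # accept downhill, sometimes uphill
    T *= 0.9995                               # geometric cooling
print("max spectral error:", float(np.abs(w - w_true).max()))
```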

  19. Improved coating and fixation methods for scanning electron microscope autoradiography

    International Nuclear Information System (INIS)

    Weiss, R.L.

    1984-01-01

    A simple apparatus for emulsion coating is described. The apparatus is inexpensive and easily assembled in a standard glass shop. Emulsion coating for scanning electron microscope autoradiography with this apparatus consistently yields uniform layers. When used in conjunction with newly described fixation methods, this new approach produces reliable autoradiographs of undamaged specimens

  20. Evaluation of Wet Digestion Methods for Quantification of Metal Content in Electronic Scrap Material

    Directory of Open Access Journals (Sweden)

    Subhabrata Das

    2017-11-01

    Full Text Available Recent advances in the electronics sector and the short life-span of electronic products have triggered an exponential increase in the generation of electronic waste (E-waste). Effective recycling of E-waste has thus become a serious solid waste management challenge. E-waste management technologies include pyrometallurgy, hydrometallurgy, and bioleaching. Determining the metal content of an E-waste sample is critical in evaluating the efficiency of a metal recovery method in E-waste recycling. However, E-waste is complex and of diverse origins. The lack of a standard digestion method for E-waste has resulted in difficulty in comparing the efficiencies of different metal recovery processes. In this study, several solid digestion protocols including American Society for Testing and Materials (ASTM) D6357-11, United States Environment Protection Agency Solid Waste (US EPA SW 846) Method 3050b, ultrasound-assisted, and microwave digestion methods were compared to determine the metal content (Ag, Al, Au, Cu, Fe, Ni, Pb, Pd, Sn, and Zn) of electronic scrap materials (ESM) obtained from two different sources. The highest metal recovery (mg/g of ESM) was obtained using ASTM D6357-11 for most of the metals, which remained mainly bound to silicate fractions, while a microwave-assisted digestion protocol (MWD-2) was more effective in solubilizing Al, Pb, and Sn. The study highlights the need for a judicious selection of digestion protocol, and proposes steps for selecting an effective acid digestion method for ESM.

  1. Tracking time-varying parameters with local regression

    DEFF Research Database (Denmark)

    Joensen, Alfred Karsten; Nielsen, Henrik Aalborg; Nielsen, Torben Skov

    2000-01-01

    This paper shows that the recursive least-squares (RLS) algorithm with forgetting factor is a special case of a varying-coefficient model, and a model which can easily be estimated via simple local regression. This observation allows us to formulate a new method which retains the RLS algorithm, but extends it by including polynomial approximations. Simulation results are provided, which indicate that this new method is superior to the classical RLS method if the parameter variations are smooth.
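
    A minimal sketch of the classical RLS algorithm with forgetting factor discussed above (without the paper's polynomial extension); the drifting-slope data are synthetic.

```python
# Recursive least squares with forgetting factor lam, tracking a slowly
# varying slope (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
lam, theta, P = 0.98, np.zeros(2), 1e3 * np.eye(2)
for t in range(500):
    x = np.array([1.0, rng.normal()])         # regressor: intercept + input
    y = (1.0 + 0.002 * t) * x[1] + rng.normal(scale=0.1)
    k = P @ x / (lam + x @ P @ x)             # gain vector
    theta = theta + k * (y - x @ theta)       # parameter update
    P = (P - np.outer(k, x @ P)) / lam        # covariance update with forgetting
print("final slope estimate:", round(float(theta[1]), 3))  # true value ~2.0
```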

  2. Applied logistic regression

    CERN Document Server

    Hosmer, David W; Sturdivant, Rodney X

    2013-01-01

     A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

  3. Electronic correlation studies. III. Self-correlated field method. Application to 2S ground state and 2P excited state of three-electron atomic systems

    International Nuclear Information System (INIS)

    Lissillour, R.; Guerillot, C.R.

    1975-01-01

    The self-correlated field method is based on the insertion in the group product wave function of pair functions built upon a set of correlated ''local'' functions and of ''nonlocal'' functions. This work is an application to three-electron systems. The effects of the outer electron on the inner pair are studied. The total electronic energy and some intermediary results such as pair energies, Coulomb and exchange ''correlated'' integrals, are given. The results are always better than those given by conventional SCF computations and reach the same level of accuracy as those given by more laborious methods used in correlation studies. (auth)

  4. Thick-target method in the measurement of inner-shell ionization cross-sections by low-energy electron impact

    International Nuclear Information System (INIS)

    An, Z.; Wu, Y.; Liu, M.T.; Duan, Y.M.; Tang, C.H.

    2006-01-01

    In this paper, we have studied the thick-target method for the measurement of atomic inner-shell ionization cross-sections or X-ray production cross-sections by keV electron impact. We find that in the processes of electron impact on thick targets, the ratios of the characteristic X-ray yields from photoionization by bremsstrahlung to the total characteristic X-ray yields are Z-dependent and shell-dependent, and the ratios also show a weak energy dependence. In addition, in the lower incident energy region (i.e. U < 5-6), the contribution from the rediffusion effect and from secondary electrons is negligible. In general, the thick-target method can be appropriately applied to the measurement of atomic inner-shell ionization cross-sections or X-ray production cross-sections by electron impact for low- and medium-Z elements at lower incident electron energies (i.e. U < 5-6). The experimental accuracy of the thick-target method can reach a level equivalent or superior to that of experimental data based on the thin-target method. In this paper, the thick-target method has been applied to the measurement of the K-shell ionization cross-sections of Ni by electron impact.

  5. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
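
    The sketch below illustrates the multicollinearity problem the record describes: with nearly collinear predictors, OLS weights can be wildly unstable, while ridge shrinks them toward stable values; the data are synthetic.

```python
# OLS vs ridge on two nearly collinear predictors (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(8)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)   # almost a copy of x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(scale=0.1, size=100)

print("OLS coefficients:  ", LinearRegression().fit(X, y).coef_)
print("ridge coefficients:", Ridge(alpha=1.0).fit(X, y).coef_)
# OLS may return large, opposite-signed weights; ridge keeps both near 1.
```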

  6. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation, and to demonstrate the variability between estimated stature and actual stature, using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements (hand length, hand breadth, foot length and foot breadth), taken on the left side of each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The stature estimated from the multiplication factors and from the regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in stature estimation by the regression analysis method is smaller than that of the multiplication factor method, confirming that regression analysis is better than multiplication factor analysis for stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  7. Arcuate Fasciculus in Autism Spectrum Disorder Toddlers with Language Regression

    Directory of Open Access Journals (Sweden)

    Zhang Lin

    2018-03-01

    Full Text Available Language regression is observed in a subset of toddlers with autism spectrum disorder (ASD) as the initial symptom. However, such a phenomenon has not been fully explored, partly due to the lack of definite diagnostic evaluation methods and criteria. Materials and Methods: Fifteen toddlers with ASD exhibiting language regression and fourteen age-matched typically developing (TD) controls underwent diffusion tensor imaging (DTI). DTI parameters including fractional anisotropy (FA), average fiber length (AFL), tract volume (TV) and number of voxels (NV) were analyzed by Neuro 3D in the Siemens syngo workstation. Subsequently, the data were analyzed using IBM SPSS Statistics 22. Results: Compared with TD children, a significant reduction of FA along with an increase in TV and NV was observed in ASD children with language regression. Note that there were no significant differences between ASD and TD children in AFL of the arcuate fasciculus (AF). Conclusions: These DTI changes in the AF suggest that microstructural anomalies of the AF white matter may be associated with language deficits in ASD children exhibiting language regression starting from an early age.

  8. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computations are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by the multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed

  9. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition ""...this is an excellent book which could easily be used as a course text...""-International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  10. Regression: The Apple Does Not Fall Far From the Tree.

    Science.gov (United States)

    Vetter, Thomas R; Schober, Patrick

    2018-05-15

    Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.

  11. Evaluation on Electronic Securities Settlements Systems by AHP Methods

    Science.gov (United States)

    Fukaya, Kiyoyuki; Komoda, Norihisa

    Accompanying the spread of the Internet and changing business models, electronic commerce is expanding into new business areas. Electronic financial commerce has become popular, and online securities trading in particular has become very popular in this area. Online securities trading has some advantages, such as fewer mistakes than telephone orders. In order to expand online securities trading, the transfer of paper securities is one of the largest problems to be solved, because it takes a few days to transfer a security certificate from a seller to a buyer. The dematerialization of paper securities is one of the solutions, and dematerialization requires information systems for settling securities. Some countries, such as France, Germany, the United Kingdom and the U.S.A., have been starting dematerialization projects. Legacy assessments of these projects have focused on the legal schemes only; there has been no assessment of the system architectures. This paper focuses on the information system schemes and evaluates these dematerialization projects by AHP methods from the viewpoints of “dematerialization of security papers”, “speed of transfer”, “usefulness of the system” and “accumulation of risks”. This is the first case of an evaluation of securities settlement systems by AHP methods, especially of four countries’ systems.

  12. Electronic method for autofluorography of macromolecules on two-d matrices

    International Nuclear Information System (INIS)

    Case, A.L.; Davidson, J.B.

    1983-01-01

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position-sensitive detection system. A molecule-containing matrix may be prepared by conventional means to produce spots of light at the molecule locations, which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image intensifier and a secondary electron conduction camera capable of light-integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. The intensity of any point on the image may be determined from the number at the memory address of that point. The entire image may be displayed on a television monitor for inspection and photographing, or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100-1000 times.

  13. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these

  14. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  15. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  16. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    Science.gov (United States)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    From direct observations of facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. In these models, linear approaches generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly require complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, named higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain’s motivational circuits. Thus, the proposed method can serve as a novel way of efficiently estimating human affective states.
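
    A minimal sketch of a higher-order multivariable polynomial regression via a polynomial basis expansion follows; the two inputs are synthetic stand-ins for the skin-conductance features, not the study's data.

```python
# Degree-3 multivariable polynomial regression (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(9)
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.0 + X[:, 0] * X[:, 1] + X[:, 1] ** 3 + rng.normal(scale=0.05, size=200)

model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```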

  17. Meta-analytical synthesis of regression coefficients under different categorization scheme of continuous covariates.

    Science.gov (United States)

    Yoneoka, Daisuke; Henmi, Masayuki

    2017-11-30

    Recently, the number of clinical prediction models sharing the same regression task has increased in the medical literature. However, evidence synthesis methodologies that use the results of these regression models have not been sufficiently studied, particularly in meta-analysis settings where only regression coefficients are available. One of the difficulties lies in the differences between the categorization schemes of continuous covariates across different studies. In general, categorization methods using cutoff values are study specific across available models, even if they focus on the same covariates of interest. Differences in the categorization of covariates could lead to serious bias in the estimated regression coefficients and thus in subsequent syntheses. To tackle this issue, we developed synthesis methods for linear regression models with different categorization schemes of covariates. A 2-step approach to aggregate the regression coefficient estimates is proposed. The first step is to estimate the joint distribution of covariates by introducing a latent sampling distribution, which uses one set of individual participant data to estimate the marginal distribution of covariates with categorization. The second step is to use a nonlinear mixed-effects model with correction terms for the bias due to categorization to estimate the overall regression coefficients. Especially in terms of precision, numerical simulations show that our approach outperforms conventional methods, which only use studies with common covariates or ignore the differences between categorization schemes. The method developed in this study is also applied to a series of WHO epidemiologic studies on white blood cell counts. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Research and analyze of physical health using multiple regression analysis

    Directory of Open Access Journals (Sweden)

    T. S. Kyi

    2014-01-01

    Full Text Available This paper presents research that attempts to create a mathematical model of the "healthy person" using regression analysis. The factors are the physical parameters of the person (such as heart rate, lung capacity, blood pressure, breath holding, weight-height coefficient, flexibility of the spine, muscles of the shoulder belt, abdominal muscles, squatting, etc.), and the response variable is an indicator of physical working capacity. After performing multiple regression analysis, useful multiple regression models were obtained that can predict the physical performance of boys aged fourteen to seventeen. This paper presents the development of the regression model for sixteen-year-old boys and analyzes the results.

  19. Composite marginal quantile regression analysis for longitudinal adolescent body mass index data.

    Science.gov (United States)

    Yang, Chi-Chuan; Chen, Yi-Hau; Chang, Hsing-Yi

    2017-09-20

    Childhood and adolescenthood overweight or obesity, which may be quantified through the body mass index (BMI), is strongly associated with adult obesity and other health problems. Motivated by the child and adolescent behaviors in long-term evolution (CABLE) study, we are interested in individual, family, and school factors associated with marginal quantiles of longitudinal adolescent BMI values. We propose a new method for composite marginal quantile regression analysis for longitudinal outcome data, which performs marginal quantile regressions at multiple quantile levels simultaneously. The proposed method extends the quantile regression coefficient modeling method introduced by Frumento and Bottai (Biometrics 2016; 72:74-84) to longitudinal data accounting suitably for the correlation structure in longitudinal observations. A goodness-of-fit test for the proposed modeling is also developed. Simulation results show that the proposed method can be much more efficient than the analysis without taking correlation into account and the analysis performing separate quantile regressions at different quantile levels. The application to the longitudinal adolescent BMI data from the CABLE study demonstrates the practical utility of our proposal. Copyright © 2017 John Wiley & Sons, Ltd.
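
    A minimal sketch of marginal quantile regressions at several levels with statsmodels follows; it fits each level separately and ignores the longitudinal correlation that the proposed composite method accounts for, and the age/BMI data are synthetic.

```python
# Separate marginal quantile regressions of BMI on age (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
df = pd.DataFrame({"age": rng.uniform(10, 18, 300)})
df["bmi"] = 14.0 + 0.5 * df["age"] + rng.gamma(2.0, 1.0, 300)  # skewed errors

for q in (0.25, 0.50, 0.75):
    fit = smf.quantreg("bmi ~ age", df).fit(q=q)
    print(f"quantile {q}: slope = {fit.params['age']:.3f}")
```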

  20. LINEAR REGRESSION MODEL ESTİMATİON FOR RIGHT CENSORED DATA

    Directory of Open Access Journals (Sweden)

    Ersin Yılmaz

    2016-05-01

    Full Text Available In this study, we first define right-censored data. Briefly, right censoring occurs when values above a known limit are not observed exactly; this can arise, for example, from the range limit of a measuring device. We then work with a response variable obtained from right-censored observations and estimate the linear regression model. Because the data are censored, Kaplan-Meier weights are used in the estimation of the model; with these weights the regression estimator is consistent and unbiased. A semiparametric regression method for censored data is also considered, which likewise gives useful results. This study may also be useful for health research, since censored data arise frequently in medical studies.

  1. MULTICRITERIA METHODS IN PERFORMING COMPANIES’ RESULTS USING ELECTRONIC RECRUITING, CORPORATE COMMUNICATION AND FINANCIAL RATIOS

    Directory of Open Access Journals (Sweden)

    Ivana Bilić

    2011-02-01

    Full Text Available Human resources represent one of the most important resources of a company, responsible for the creation of its competitive advantage. In the search for the most valuable people, companies use different methods. Lately, one of the growing methods is electronic recruiting, not only as a recruitment tool, but also as a means of external communication. Additionally, in the process of corporate communication, companies nowadays use electronic corporate communication as the easiest, cheapest and simplest form of business communication. The aim of this paper is to investigate the relationship between three groups of criteria: the main characteristics of the performed electronic recruiting, corporate communication, and selected financial performance measures. The selected companies were ranked separately on each group of criteria using the multicriteria decision-making method PROMETHEE II. The main idea is to examine whether companies that are the highest performers on a certain group of criteria obtain similar results on the other groups of criteria or in their performance results.

  2. Supremum Norm Posterior Contraction and Credible Sets for Nonparametric Multivariate Regression

    NARCIS (Netherlands)

    Yoo, W.W.; Ghosal, S

    2016-01-01

    In the setting of nonparametric multivariate regression with unknown error variance, we study asymptotic properties of a Bayesian method for estimating a regression function f and its mixed partial derivatives. We use a random series of tensor product of B-splines with normal basis coefficients as a

  3. SDE based regression for random PDEs

    KAUST Repository

    Bayer, Christian

    2016-01-01

    A simulation based method for the numerical solution of PDE with random coefficients is presented. By the Feynman-Kac formula, the solution can be represented as conditional expectation of a functional of a corresponding stochastic differential equation driven by independent noise. A time discretization of the SDE for a set of points in the domain and a subsequent Monte Carlo regression lead to an approximation of the global solution of the random PDE. We provide an initial error and complexity analysis of the proposed method along with numerical examples illustrating its behaviour.

  4. SDE based regression for random PDEs

    KAUST Repository

    Bayer, Christian

    2016-01-06

    A simulation based method for the numerical solution of PDE with random coefficients is presented. By the Feynman-Kac formula, the solution can be represented as conditional expectation of a functional of a corresponding stochastic differential equation driven by independent noise. A time discretization of the SDE for a set of points in the domain and a subsequent Monte Carlo regression lead to an approximation of the global solution of the random PDE. We provide an initial error and complexity analysis of the proposed method along with numerical examples illustrating its behaviour.
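
    The sketch below illustrates the idea on the backward heat equation u_t + ½ u_xx = 0 with terminal condition u(T, x) = g(x), whose Feynman-Kac representation is u(t, x) = E[g(W_T) | W_t = x]: Euler-Maruyama paths are launched from scattered starting points and the terminal values are regressed on a polynomial basis to obtain a global approximation; the terminal condition and discretization are hypothetical choices.

```python
# Monte Carlo + regression solution of a PDE via its Feynman-Kac
# representation (synthetic setup; driving SDE is dX = dW).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
T, n_steps, n_paths = 1.0, 50, 50000
dt = T / n_steps
g = lambda x: np.maximum(x, 0.0)              # terminal condition

x0 = rng.uniform(-2.0, 2.0, n_paths)          # scattered starting points
x = x0.copy()
for _ in range(n_steps):                      # Euler-Maruyama for dX = dW
    x += np.sqrt(dt) * rng.normal(size=n_paths)

coeffs = np.polyfit(x0, g(x), deg=6)          # Monte Carlo regression step
approx = np.polyval(coeffs, 0.5)              # u(0, 0.5) from the global fit
exact = 0.5 * norm.cdf(0.5) + norm.pdf(0.5)   # closed form for this g
print(f"approx {approx:.4f} vs exact {exact:.4f}")
```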

  5. Prediction, Regression and Critical Realism

    DEFF Research Database (Denmark)

    Næss, Petter

    2004-01-01

    This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic to public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

  6. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  7. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications and, in general, cannot be assumed to be independent. The generalized estimating equation (GEE) approach is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small-sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained when the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both bias-adjusted methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of the correlation structure, and impact of cluster size on bias correction. Both methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.
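
    A minimal sketch of GEE estimation of marginal regression parameters for clustered data with statsmodels follows (the plain GEE, without the paper's bias corrections); the clustered data are synthetic.

```python
# GEE with an exchangeable working correlation on synthetic clusters.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n_clusters, m = 20, 5                          # few subjects, small clusters
groups = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
b = np.repeat(rng.normal(scale=0.5, size=n_clusters), m)   # cluster effect
y = 1.0 + 2.0 * x + b + rng.normal(scale=0.5, size=n_clusters * m)

model = sm.GEE(y, sm.add_constant(x), groups=groups,
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().params)                      # marginal intercept and slope
```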

  8. Discrimination and characterization of strawberry juice based on electronic nose and tongue: comparison of different juice processing approaches by LDA, PLSR, RF, and SVM.

    Science.gov (United States)

    Qiu, Shanshan; Wang, Jun; Gao, Liping

    2014-07-09

    An electronic nose (E-nose) and an electronic tongue (E-tongue) have been used to characterize five types of strawberry juice based on processing approach (i.e., microwave pasteurization, steam blanching, high temperature short time pasteurization, frozen-thawed, and freshly squeezed). Juice quality parameters (vitamin C, pH, total soluble solids, total acid, and sugar/acid ratio) were determined by traditional measurement methods. Multivariate statistical methods (linear discriminant analysis (LDA) and partial least squares regression (PLSR)) and machine learning methods (Random Forest (RF) and Support Vector Machines (SVM)) were employed for qualitative classification and quantitative regression. The E-tongue system reached higher accuracy rates than the E-nose, and their simultaneous utilization did have an advantage in LDA classification and PLSR regression. According to cross-validation, RF showed outstanding and indisputable performance in the qualitative and quantitative analyses. This work indicates that the simultaneous utilization of E-nose and E-tongue can successfully discriminate processed fruit juices and predict quality parameters for the beverage industry.

  9. Introduction to regression graphics

    CERN Document Server

    Cook, R Dennis

    2009-01-01

    Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using Xlisp-Stat language called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

  10. Beta-binomial regression and bimodal utilization.

    Science.gov (United States)

    Liu, Chuan-Fen; Burgess, James F; Manning, Willard G; Maciejewski, Matthew L

    2013-10-01

    To illustrate how the analysis of bimodal U-shaped distributed utilization can be modeled with beta-binomial regression, which is rarely used in health services research. Veterans Affairs (VA) administrative data and Medicare claims in 2001-2004 for 11,123 Medicare-eligible VA primary care users in 2000. We compared means and distributions of VA reliance (the proportion of all VA/Medicare primary care visits occurring in VA) predicted from beta-binomial, binomial, and ordinary least-squares (OLS) models. Beta-binomial model fits the bimodal distribution of VA reliance better than binomial and OLS models due to the nondependence on normality and the greater flexibility in shape parameters. Increased awareness of beta-binomial regression may help analysts apply appropriate methods to outcomes with bimodal or U-shaped distributions. © Health Research and Educational Trust.
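
    A minimal sketch of fitting a beta-binomial model by maximum likelihood with scipy follows; the visit counts are synthetic, generated with U-shaped shape parameters (a, b < 1) to mimic bimodal reliance.

```python
# Beta-binomial MLE for U-shaped utilization proportions (synthetic data).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

rng = np.random.default_rng(13)
n = rng.integers(5, 30, size=1000)             # total primary care visits
y = betabinom.rvs(n, 0.4, 0.6, random_state=rng)   # counts of "VA" visits

def nll(log_ab):
    a, b = np.exp(log_ab)                      # keep shape parameters positive
    return -betabinom.logpmf(y, n, a, b).sum()

res = minimize(nll, x0=np.zeros(2), method="Nelder-Mead")
print("fitted (a, b):", np.exp(res.x))         # should be near (0.4, 0.6)
```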

  11. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study
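
    A minimal sketch of kernel ridge regression with the Gaussian (RBF) kernel, selecting the tuning parameters from a small grid by cross-validation as the record suggests; the data and grid values are synthetic choices.

```python
# Kernel ridge regression with grid-searched tuning parameters.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(14)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=200)

grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0], "gamma": [0.1, 1.0, 10.0]}
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5).fit(X, y)
print("selected tuning parameters:", search.best_params_)
```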

  12. N-representability-driven reconstruction of the two-electron reduced-density matrix for a real-time time-dependent electronic structure method

    International Nuclear Information System (INIS)

    Jeffcoat, David B.; DePrince, A. Eugene

    2014-01-01

    Propagating the equations of motion (EOM) for the one-electron reduced-density matrix (1-RDM) requires knowledge of the corresponding two-electron RDM (2-RDM). We show that the indeterminacy of this expression can be removed through a constrained optimization that resembles the variational optimization of the ground-state 2-RDM subject to a set of known N-representability conditions. Electronic excitation energies can then be obtained by propagating the EOM for the 1-RDM and following the dipole moment after the system interacts with an oscillating external electric field. For simple systems with well-separated excited states whose symmetry differs from that of the ground state, excitation energies obtained from this method are comparable to those obtained from full configuration interaction computations. Although the optimized 2-RDM satisfies necessary N-representability conditions, the procedure cannot guarantee a unique mapping from the 1-RDM to the 2-RDM. This deficiency is evident in the mean-field-quality description of transitions to states of the same symmetry as the ground state, as well as in the inability of the method to describe Rabi oscillations

  13. N-representability-driven reconstruction of the two-electron reduced-density matrix for a real-time time-dependent electronic structure method

    Science.gov (United States)

    Jeffcoat, David B.; DePrince, A. Eugene

    2014-12-01

    Propagating the equations of motion (EOM) for the one-electron reduced-density matrix (1-RDM) requires knowledge of the corresponding two-electron RDM (2-RDM). We show that the indeterminacy of this expression can be removed through a constrained optimization that resembles the variational optimization of the ground-state 2-RDM subject to a set of known N-representability conditions. Electronic excitation energies can then be obtained by propagating the EOM for the 1-RDM and following the dipole moment after the system interacts with an oscillating external electric field. For simple systems with well-separated excited states whose symmetry differs from that of the ground state, excitation energies obtained from this method are comparable to those obtained from full configuration interaction computations. Although the optimized 2-RDM satisfies necessary N-representability conditions, the procedure cannot guarantee a unique mapping from the 1-RDM to the 2-RDM. This deficiency is evident in the mean-field-quality description of transitions to states of the same symmetry as the ground state, as well as in the inability of the method to describe Rabi oscillations.

  14. A comparison of three methods of assessing differential item functioning (DIF) in the Hospital Anxiety Depression Scale: ordinal logistic regression, Rasch analysis and the Mantel chi-square procedure.

    Science.gov (United States)

    Cameron, Isobel M; Scott, Neil W; Adler, Mats; Reid, Ian C

    2014-12-01

    It is important for clinical practice and research that measurement scales of well-being and quality of life exhibit only minimal differential item functioning (DIF). DIF occurs where different groups of people endorse items in a scale to different extents after being matched by the intended scale attribute. We investigate the equivalence, or otherwise, of common methods of assessing DIF. Three methods of measuring age- and sex-related DIF (ordinal logistic regression, Rasch analysis and the Mantel χ² procedure) were applied to Hospital Anxiety Depression Scale (HADS) data pertaining to a sample of 1,068 patients consulting primary care practitioners. Three items were flagged by all three approaches as having either age- or sex-related DIF with a consistent direction of effect; a further three flagged items did not meet stricter criteria for important DIF under at least one method. When applying strict criteria for significant DIF, ordinal logistic regression was slightly less sensitive. Ordinal logistic regression, Rasch analysis and contingency table methods yielded consistent results when identifying DIF in the HADS depression and HADS anxiety scales. Regardless of the methods applied, investigators should use a combination of statistical significance, magnitude of the DIF effect and investigator judgement when interpreting the results.
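
    The ordinal logistic regression approach to DIF can be sketched as: regress each item on the matching total score, then test whether adding the group indicator improves the fit. The simulated responses, sample size, and cut-off below are illustrative assumptions, and statsmodels' OrderedModel (version 0.12 or later) is assumed available.

```python
# A minimal sketch of a uniform-DIF test via ordinal logistic regression,
# on simulated data (not the HADS dataset).
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 1000
total = rng.integers(0, 22, n)             # matching variable (scale total)
group = rng.integers(0, 2, n)              # 0/1 group indicator (e.g., sex)
latent = 0.3 * total + 0.4 * group + rng.logistic(size=n)  # built-in DIF
item = np.digitize(latent, np.quantile(latent, [0.25, 0.5, 0.75]))  # 0..3

base = OrderedModel(item, pd.DataFrame({"total": total}),
                    distr="logit").fit(method="bfgs", disp=False)
full = OrderedModel(item, pd.DataFrame({"total": total, "group": group}),
                    distr="logit").fit(method="bfgs", disp=False)

# Likelihood-ratio test: does group membership add information beyond the
# matching score? A significant result flags uniform DIF for this item.
lr = 2 * (full.llf - base.llf)
print("LR = %.2f, p = %.4f" % (lr, stats.chi2.sf(lr, df=1)))
```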

  15. New method for characterizing paper coating structures using argon ion beam milling and field emission scanning electron microscopy.

    Science.gov (United States)

    Dahlström, C; Allem, R; Uesaka, T

    2011-02-01

    We have developed a new method for characterizing microstructures of paper coating using the argon ion beam milling technique and field emission scanning electron microscopy. The combination of these two techniques produces extremely high-quality images with very few artefacts, which are particularly suited for quantitative analyses of coating structures. A new evaluation method has been developed using marker-controlled watershed segmentation of the secondary electron images. The high-quality secondary electron images with well-defined pores make it possible to use this semi-automatic segmentation method. One advantage of using secondary electron images instead of backscattered electron images is the ability to avoid overestimating the porosity because of the signal depth. A comparison was made between the new method and the conventional method of greyscale histogram thresholding of backscattered electron images. The results showed that the conventional method overestimated the pore area by 20% and detected around 5% more pores than the new method. As examples of the application of the new method, we have investigated the distributions of coating binders and the relationship between local coating porosity and base sheet structures. The technique revealed, for the first time with direct evidence, the long-suspected coating non-uniformity, i.e. binder migration, and the correlation between coating porosity and base sheet mass density, in a straightforward way.
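
    The marker-controlled watershed step can be sketched with scikit-image: threshold the secondary electron image to find pores, compute a distance transform, seed markers at its local maxima, and flood. The synthetic image and parameter values below are illustrative assumptions, not the authors' pipeline.

```python
# A minimal sketch of marker-controlled watershed segmentation of pores,
# assuming scikit-image; the "SE image" here is synthetic.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

rng = np.random.default_rng(2)
image = rng.normal(0.7, 0.05, (256, 256))     # bright coating background
yy, xx = np.mgrid[:256, :256]
for cy, cx in rng.integers(10, 246, (40, 2)): # dark circular "pores"
    image[(yy - cy) ** 2 + (xx - cx) ** 2 < 36] = 0.2

pores = image < threshold_otsu(image)          # segment dark pores
distance = ndi.distance_transform_edt(pores)
coords = peak_local_max(distance, min_distance=5, labels=pores)
markers = np.zeros(distance.shape, dtype=int)  # one marker per pore centre
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=pores)

print("pores detected:", labels.max())
print("pore area fraction: %.3f" % pores.mean())
```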

  16. Retrieving relevant factors with exploratory SEM and principal-covariate regression: A comparison.

    Science.gov (United States)

    Vervloet, Marlies; Van den Noortgate, Wim; Ceulemans, Eva

    2018-02-12

    Behavioral researchers often linearly regress a criterion on multiple predictors, aiming to gain insight into the relations between the criterion and predictors. Obtaining this insight from the ordinary least squares (OLS) regression solution may be troublesome, because OLS regression weights show only the effect of a predictor on top of the effects of other predictors. Moreover, when the number of predictors grows larger, it becomes likely that the predictors will be highly collinear, which makes the regression weight estimates unstable (i.e., the "bouncing beta" problem). Among other procedures, dimension-reduction-based methods have been proposed for dealing with these problems. These methods yield insight into the data by reducing the predictors to a smaller number of summarizing variables and regressing the criterion on these summarizing variables. Two promising methods are principal-covariate regression (PCovR) and exploratory structural equation modeling (ESEM). Both simultaneously optimize reduction and prediction, but they are based on different frameworks. The resulting solutions have not yet been compared; it is thus unclear what the strengths and weaknesses of both methods are. In this article, we focus on the extent to which PCovR and ESEM are able to extract the factors that truly underlie the predictor scores and to predict a single criterion. The results of two simulation studies showed that for a typical behavioral dataset, ESEM (using the BIC for model selection) in this regard is successful more often than PCovR. Yet, in 93% of the datasets PCovR performed equally well, and in the case of 48 predictors, 100 observations, and large differences in the strengths of the factors, PCovR even outperformed ESEM.
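
    A compact sketch of the PCovR solution, following the closed form of De Jong and Kiers (1992) as used in the PCovR literature, is given below on simulated data of the kind studied here: 100 observations and 48 predictors driven by three factors. The weighting parameter alpha and all data-generating values are illustrative assumptions; this is a sketch, not the authors' implementation.

```python
# A minimal sketch of principal-covariate regression (PCovR): components
# simultaneously summarize X and predict y, with alpha balancing the two.
import numpy as np

def pcovr(X, y, n_comp, alpha):
    """PCovR via the eigendecomposition closed form (De Jong & Kiers, 1992)."""
    yhat = X @ np.linalg.pinv(X) @ y                      # OLS projection of y
    G = (alpha * X @ X.T / np.sum(X ** 2)
         + (1 - alpha) * np.outer(yhat, yhat) / np.sum(y ** 2))
    _, vecs = np.linalg.eigh(G)
    T = vecs[:, -n_comp:]                                 # top components
    W = np.linalg.pinv(X) @ T                             # X-weights: T = X W
    py = T.T @ y                                          # criterion loadings
    return W, py

# Illustrative data: three true factors underlie 48 predictors.
rng = np.random.default_rng(3)
F = rng.standard_normal((100, 3))
X = F @ rng.standard_normal((3, 48)) + 0.3 * rng.standard_normal((100, 48))
y = F @ np.array([1.0, 0.5, 0.2]) + 0.3 * rng.standard_normal(100)

W, py = pcovr(X, y, n_comp=3, alpha=0.5)
yhat = X @ W @ py
print("R^2: %.3f" % (1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)))
```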

  17. E-learning platform for automated testing of electronic circuits using signature analysis method

    Science.gov (United States)

    Gherghina, Cătălina; Bacivarov, Angelica; Bacivarov, Ioan C.; Petrică, Gabriel

    2016-12-01

    Dependability of electronic circuits can be ensured only through testing of circuit modules, which is done by generating test vectors and applying them to the circuit. Testability should be viewed as a concerted effort to ensure maximum efficiency throughout the product life cycle, from the conception and design stages, through production, to repairs during operation. This paper presents the platform developed by the authors for training in testability in electronics in general, and in the signature analysis method in particular. The platform highlights the two approaches in the field, namely analog and digital circuit signatures. As part of this e-learning platform, a database of signatures of different electronic components has been developed, intended to spotlight various fault-detection techniques and, building on these, self-repair techniques for systems containing such components. An approach for realizing self-testing circuits based on the MATLAB environment and the signature analysis method is proposed. The paper also analyses the benefits of the signature analysis method and simulates signature analyzer performance based on pseudo-random sequences.
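
    A signature analyzer compresses a long output bit stream of a circuit under test into a short signature with a linear-feedback shift register (LFSR) and compares it against the known-good value. The Python sketch below illustrates the principle; the feedback polynomial, stimulus, and toy circuit are illustrative assumptions, not taken from the platform described above.

```python
# A minimal sketch of single-input signature analysis with a 16-bit LFSR.
def lfsr_signature(bits, taps=(16, 12, 3, 1), width=16):
    """Fold a bit stream into a signature register (serial-input LFSR)."""
    reg = 0
    for bit in bits:
        feedback = bit
        for t in taps:                       # XOR the tapped register bits
            feedback ^= (reg >> (t - 1)) & 1
        reg = ((reg << 1) | feedback) & ((1 << width) - 1)
    return reg

def circuit_under_test(stimulus, faulty=False):
    # Toy combinational block; a stuck-at fault changes its behaviour.
    return [(a ^ (a >> 1)) & 1 if not faulty else 0 for a in stimulus]

stimulus = list(range(256))                  # pseudo-exhaustive test vectors
golden = lfsr_signature(circuit_under_test(stimulus))
measured = lfsr_signature(circuit_under_test(stimulus, faulty=True))
print("golden=%04X measured=%04X -> %s"
      % (golden, measured, "PASS" if measured == golden else "FAULT"))
```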

  18. The intermediate endpoint effect in logistic and probit regression

    Science.gov (United States)

    MacKinnon, DP; Lockwood, CM; Brown, CH; Wang, W; Hoffman, JM

    2010-01-01

    Background An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable, and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and, through this intermediate change, influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used. Purpose The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results. Methods The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods. Results Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression. Limitations More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models. Conclusions Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. A common method based on the difference in coefficients can lead to distorted estimates of the intermediate endpoint effect.
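
    The recommended product-of-coefficients estimator, and the distortion in the difference-in-coefficients estimator for binary outcomes, can be seen in a small simulation. The Python sketch below uses statsmodels; the effect sizes and sample size are illustrative assumptions, not values from the study.

```python
# A minimal sketch contrasting the two mediation estimators for a binary
# outcome: product of coefficients a*b vs. difference in coefficients c - c'.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000
x = rng.integers(0, 2, n).astype(float)        # treatment
m = 0.5 * x + rng.standard_normal(n)           # intermediate endpoint
y = (0.7 * m + 0.3 * x + rng.logistic(size=n) > 0).astype(float)  # binary

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]              # x -> m
fit_xm = sm.Logit(y, sm.add_constant(np.column_stack([x, m]))).fit(disp=0)
b = fit_xm.params[2]                                           # m -> y given x
c = sm.Logit(y, sm.add_constant(x)).fit(disp=0).params[1]      # total effect
c_prime = fit_xm.params[1]                                     # direct effect

print("product of coefficients a*b    : %.3f" % (a * b))
print("difference in coefficients c-c': %.3f" % (c - c_prime))
# The two disagree because c and c' come from logistic models with different
# scales; the paper recommends the product a*b (after rescaling).
```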

  19. Outlier detection algorithms for least squares time series regression

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Bent

    We review recent asymptotic results on some robust methods for multiple regression. The regressors include stationary and non-stationary time series as well as polynomial terms. The methods include the Huber-skip M-estimator and 1-step Huber-skip M-estimators, in particular the Impulse Indicator Saturation...
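
    The flavour of a 1-step Huber-skip estimator is easy to convey: fit OLS, scale the residuals robustly, drop ("skip") observations beyond a cut-off, and re-estimate once. The Python sketch below applies this to a trend regression with two additive outliers; the cut-off and data are illustrative assumptions, and the sketch omits the iterated and saturation variants analysed in the paper.

```python
# A minimal sketch of a 1-step Huber-skip estimator for a time series
# regression with a constant and a trend.
import numpy as np

rng = np.random.default_rng(5)
n = 200
t = np.arange(n)
X = np.column_stack([np.ones(n), t])            # constant + trend regressors
y = 2.0 + 0.05 * t + rng.standard_normal(n)
y[[40, 120]] += 8.0                             # two additive outliers

beta0, *_ = np.linalg.lstsq(X, y, rcond=None)   # initial OLS fit
resid = y - X @ beta0
sigma = np.median(np.abs(resid - np.median(resid))) / 0.6745  # MAD scale
keep = np.abs(resid) <= 2.576 * sigma           # skip beyond the 1% cut-off

beta1, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)  # 1-step update
print("flagged outliers at t =", t[~keep])
print("OLS slope %.4f vs Huber-skip slope %.4f" % (beta0[1], beta1[1]))
```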

  20. Improving electronic structure methods to predict nano-optoelectronics and nano-catalyst functions.

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Ida Marie B.; Marzari, Nicola (Massachusetts Institute of Technology); Shelnutt, John Allen; Kulik, Heather J. (Massachusetts Institute of Technology); Medforth, Craig John (University of New Mexico, Albuquerque, NM); Leung, Kevin

    2009-10-01

    This report focuses on quantum chemistry and ab initio molecular dynamics (AIMD) calculations applied to elucidate the mechanism of the multi-step, 2-electron electrochemical reduction of the greenhouse gas carbon dioxide (CO{sub 2}) to carbon monoxide (CO) in aqueous media. When combined with H{sub 2} gas to form synthesis gas ('syngas'), CO becomes a key precursor to methane, methanol, and other useful hydrocarbon products. To elucidate the mechanism of this reaction, we apply computational electrochemistry, a fledgling but important area of basic science critical to energy storage. This report highlights several approaches, including the calculation of redox potentials, the explicit depiction of liquid water environments using AIMD, and free energy methods. While costly, these pioneering calculations reveal the key role of hydration and protonation in stabilizing reaction intermediates, and may inform the design of CO{sub 2}-capture materials as well as its electrochemical reduction. In the course of this work, we have also dealt with the challenge of identifying and applying electronic structure methods that are sufficiently accurate for transition metal ion complex-based catalysts. Such electronic structure methods are also pertinent to the accurate modeling of actinide materials and therefore to nuclear energy research. Our multi-pronged effort towards achieving this titular goal of the LDRD is discussed.
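
    For orientation, the redox-potential step mentioned above reduces, at its simplest, to converting a computed reaction free energy into a potential via E = -ΔG/(nF) and referencing it to an electrode scale. The ΔG value in the sketch below is hypothetical, not a result from the report; the absolute SHE potential is the standard IUPAC estimate.

```python
# A back-of-the-envelope sketch: computed free energy -> redox potential,
# referenced to the standard hydrogen electrode (SHE).
F_CONST = 96485.332          # Faraday constant, C/mol
ABS_SHE = 4.44               # absolute potential of the SHE, V (IUPAC estimate)

delta_g = -830e3             # hypothetical computed delta-G, J/mol
n_electrons = 2              # a 2-electron reduction, as for CO2 -> CO

e_abs = -delta_g / (n_electrons * F_CONST)   # absolute potential, V
e_she = e_abs - ABS_SHE                      # potential vs. SHE, V
print("E = %.2f V (absolute), %.2f V vs SHE" % (e_abs, e_she))
```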