Alternative Methods of Regression
Birkes, David
2011-01-01
Of related interest: Nonlinear Regression Analysis and Its Applications, Douglas M. Bates and Donald G. Watts. "…an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models…highly recommend[ed]…for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics. This book provides a balance between theory and practice, supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s…
DEFF Research Database (Denmark)
Fitzenberger, Bernd; Wilke, Ralf Andreas
2015-01-01
Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter focuses on only one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression, which includes an illustrative application from empirical labor market research. This is followed by a brief sketch of the underlying statistical model for linear quantile regression…
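The record above turns on the check (pinball) loss that defines quantile regression. As an illustration, not taken from the record itself, a minimal numpy sketch shows that minimizing the pinball loss over an intercept-only model recovers the requested quantile; the data and search grid are invented for the demo:

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Average check-function loss; minimized when q is the tau-quantile of y."""
    u = y - q
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

y = np.arange(1.0, 10.0)            # toy sample 1..9, median = 5
grid = np.linspace(0, 10, 1001)     # brute-force search over candidate quantiles
best_med = grid[np.argmin([pinball_loss(y, q, 0.5) for q in grid])]
best_q90 = grid[np.argmin([pinball_loss(y, q, 0.9) for q in grid])]
```

With tau = 0.5 the minimizer is the sample median; with tau = 0.9 it moves to the upper tail, which is exactly the mechanism that lets quantile regression describe more of the conditional distribution than the mean.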
Regression methods for medical research
Tai, Bee Choo
2013-01-01
Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the…
International Nuclear Information System (INIS)
Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.
1976-01-01
Analysing the histogram of anode pulse amplitudes allows a discussion of the hypothesis that has been proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution composed of two Poisson distributions with distinct mean values. This first approximation led to a search for a method that could give the weights of several Poisson distributions with distinct mean values. Three methods are briefly outlined: classical linear regression, constrained regression (d'Esopo's method), and regression on variables subject to error. These methods yield an approximation to the frequency function that represents the dispersion of the pointwise mean gain around the overall first-dynode mean gain. Comparison between this function and the one employed in the Polya distribution shows that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: does the frequency function represent the dynode structure and the interdynode collection process? And is the model (in which the multiplication process of all dynodes but the first is Poissonian) valid whatever the photomultiplier and the operating conditions? (Auth.)
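The weight-estimation problem described above can be sketched as a linear regression: with candidate Poisson mean values fixed, the pmf columns form a design matrix and the mixture weights are the regression coefficients. A minimal illustration follows; the two mean values and weights are invented, the spectrum is noiseless, and plain least squares stands in for the constrained (d'Esopo) variant:

```python
import numpy as np
from math import exp, factorial

def poisson_pmf(k, mu):
    return exp(-mu) * mu**k / factorial(k)

ks = np.arange(0, 30)
means = [2.0, 8.0]                         # assumed distinct first-dynode mean gains
A = np.column_stack([[poisson_pmf(k, m) for k in ks] for m in means])
true_w = np.array([0.3, 0.7])              # invented mixture weights
spectrum = A @ true_w                      # synthetic pulse-height spectrum

# least-squares recovery of the weights from the observed spectrum
w, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
```

On noisy experimental spectra a nonnegativity or sum-to-one constraint, as in d'Esopo's method, would be needed to keep the weights physically meaningful.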
Regression modeling methods, theory, and computation with SAS
Panik, Michael
2009-01-01
Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression, …
Stochastic development regression using method of moments
DEFF Research Database (Denmark)
Kühnel, Line; Sommer, Stefan Horst
2017-01-01
This paper considers the estimation problem arising when inferring parameters in the stochastic development regression model for manifold-valued non-linear data. Stochastic development regression captures the relation between manifold-valued response and Euclidean covariate variables using the stochastic development construction. It is thereby able to incorporate several covariate variables and random effects. The model is intrinsically defined using the connection of the manifold, and the use of stochastic development avoids linearizing the geometry. We propose to infer parameters using the Method of Moments procedure that matches known constraints on moments of the observations conditional on the latent variables. The performance of the model is investigated in a simulation example using data on finite dimensional landmark manifolds.
Method for nonlinear exponential regression analysis
Junkin, B. G.
1972-01-01
Two computer programs, developed according to two general types of exponential models for conducting nonlinear exponential regression analysis, are described. A least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The program is written in FORTRAN 5 for the Univac 1108 computer.
A method for nonlinear exponential regression analysis
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
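The Taylor-linearization cycle described in the two records above is the Gauss-Newton iteration. A minimal sketch for a one-component decay model a·exp(-b·t) follows; the data, starting values, and fixed iteration budget are invented for the demo (the original programs also include a nominal-estimate procedure and a stopping criterion not reproduced here):

```python
import numpy as np

def gauss_newton_exp(t, y, a0, b0, iters=25):
    """Fit y ~ a*exp(-b*t) by repeatedly linearizing in a Taylor series."""
    a, b = a0, b0
    for _ in range(iters):
        f = a * np.exp(-b * t)
        # Jacobian of the model with respect to (a, b)
        J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
        # correction vector from the linearized least squares problem
        da, db = np.linalg.lstsq(J, y - f, rcond=None)[0]
        a, b = a + da, b + db
    return a, b

t = np.linspace(0.0, 4.0, 50)
y = 3.0 * np.exp(-0.7 * t)                 # noiseless synthetic decay data
a, b = gauss_newton_exp(t, y, a0=2.0, b0=0.5)
```

Each pass solves a linear least squares problem for the correction to the current parameter estimates, mirroring the "correction matrix applied to the nominal estimate" step in the abstract.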
Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation
Directory of Open Access Journals (Sweden)
Sharad Damodar Gore
2009-10-01
Statistical literature offers several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived, and results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
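The ORR estimator referenced above has the closed form β̂(k) = (X'X + kI)⁻¹X'y. A small numpy sketch, on data simulated for the demo (MUR itself is not reproduced here), shows the shrinkage relative to OLS under near-collinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 1] + 0.01 * rng.normal(size=50)   # two nearly collinear columns
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + 0.1 * rng.normal(size=50)

def ridge(X, y, k):
    """Ordinary ridge regression; k = 0 reduces to OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)
b_orr = ridge(X, y, 1.0)
```

For any k > 0 the ridge coefficient vector has smaller norm than the OLS one, which is the shrinkage that ridge-type estimators such as URR and MUR trade against bias.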
A multiple regression method for genomewide association studies ...
Indian Academy of Sciences (India)
Bujun Mei
2018-06-07
Jun 7, 2018 … Similar to the typical genomewide association tests using LD … the new approach performed validly when the multiple regression based on linkage method was employed. … the model, two groups of scenarios were simulated.
BOX-COX REGRESSION METHOD IN TIME SCALING
Directory of Open Access Journals (Sweden)
ATİLLA GÖKTAŞ
2013-06-01
The Box-Cox regression method with power transformations λj, for j = 1, 2, ..., k, can be used when the dependent variable and the error term of the linear regression model do not satisfy the continuity and normality assumptions. The choice of the optimum power transformation λj of Y that yields the smallest mean square error is discussed. The Box-Cox regression method is especially appropriate for adjusting skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of using the Box-Cox regression method are discussed for differentiation and differential analysis of the time scale concept.
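A hedged sketch of the Box-Cox idea discussed above: choose λ by maximizing the profile log-likelihood of the transformed variable. The lognormal sample and the search grid are invented for the demo; a lognormal response should push λ̂ toward 0 (the log transform):

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox power transform; lam = 0 is the log transform."""
    return np.log(y) if lam == 0 else (y**lam - 1) / lam

def boxcox_loglik(y, lam):
    """Profile log-likelihood of lam under normality of the transformed data."""
    z = boxcox(y, lam)
    n = len(y)
    return -n / 2 * np.log(np.var(z)) + (lam - 1) * np.sum(np.log(y))

rng = np.random.default_rng(1)
y = np.exp(rng.normal(size=500))          # lognormal data: log scale is "right"
grid = np.linspace(-1, 1, 81)
lam_hat = grid[np.argmax([boxcox_loglik(y, l) for l in grid])]
```

The Jacobian term (λ - 1)Σ log y is what makes likelihoods comparable across different values of λ.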
On two flexible methods of 2-dimensional regression analysis
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2012-01-01
Roč. 18, č. 4 (2012), s. 154-164 ISSN 1803-9782 Grant - others:GA ČR(CZ) GAP209/10/2045 Institutional support: RVO:67985556 Keywords : regression analysis * Gordon surface * prediction error * projection pursuit Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/SI/volf-on two flexible methods of 2-dimensional regression analysis.pdf
Directory of Open Access Journals (Sweden)
Paul Robert Martin Werfette
2010-06-01
An analysis of the quantitative structure-activity relationship (QSAR) for a series of antimalarial artemisinin derivatives has been carried out using principal component regression. The descriptors for the QSAR study were representations of the electronic structure, i.e. atomic net charges of the artemisinin skeleton calculated by the AM1 semi-empirical method. The antimalarial activity of each compound was expressed as log 1/IC50, an experimental quantity. The main purpose of the principal component analysis approach is to transform a large data set of atomic net charges into a simpler data set of latent variables. The best QSAR equation for log 1/IC50 can be obtained from the regression method as a linear function of several latent variables, i.e. x1, x2, x3, x4 and x5. Keywords: QSAR, antimalarial, artemisinin, principal component regression
Thermal Efficiency Degradation Diagnosis Method Using Regression Model
International Nuclear Information System (INIS)
Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol
2011-01-01
This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression model can then inversely map a superficial state observed in a power plant back to the associated intrinsic state. The proposed diagnosis method comprises three processes: 1) simulations of degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state from the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs covering various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix for the given superficial states, that is, the component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant.
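The what-if / inverse what-if pairing described above can be illustrated with a toy linear sensitivity matrix; all numbers are invented, and the Moore-Penrose pseudoinverse stands in for the paper's inverse-matrix calculation:

```python
import numpy as np

rng = np.random.default_rng(2)
# assumed sensitivity matrix: superficial (measured) = A @ intrinsic (degradation)
A = rng.normal(size=(4, 3))

def what_if(x):
    """Forward model: measured states produced by degradation inputs."""
    return A @ x

def inverse_what_if(s):
    """Recover degradation inputs from observed measured states."""
    return np.linalg.pinv(A) @ s

x_true = np.array([0.5, -1.0, 2.0])        # hypothetical degradation magnitudes
x_hat = inverse_what_if(what_if(x_true))
```

With more measured states than degradation modes (here 4 vs. 3), the pseudoinverse gives the least squares solution, which is what makes the inverse diagnosis well posed.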
Linear regression methods according to objective functions
Yasemin Sisman; Sebahattin Bektas
2012-01-01
The aim of the study is to explain parameter estimation methods and regression analysis. Simple linear regression methods are grouped according to their objective functions. Numerical solutions are obtained for the simple linear regression methods under the Least Squares and Least Absolute Value objective functions. The success of the applied methods is analyzed using their objective function values.
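To contrast the two objective functions named above, here is a sketch comparing an OLS fit with a Least Absolute Value fit obtained by iteratively reweighted least squares (one common way to approximate the L1 solution; the data and the single gross outlier are invented):

```python
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def lad(X, y, iters=50, eps=1e-8):
    """Least Absolute Value fit via iteratively reweighted least squares."""
    b = ols(X, y)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ b), eps)   # L1 weights
        W = np.sqrt(w)
        b = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)[0]
    return b

x = np.arange(10.0)
y = 2.0 * x + 1.0
y[9] += 50.0                                # one gross outlier
X = np.column_stack([np.ones_like(x), x])
b_ols, b_lad = ols(X, y), lad(X, y)
```

The L1 objective barely moves for the outlier, so the LAD slope stays near the true value 2 while the OLS slope is pulled upward, which is the practical difference between the two objective functions.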
Comparing parametric and nonparametric regression methods for panel data
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard; Henningsen, Arne
We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test…
Quantitative electron microscope autoradiography: application of multiple linear regression analysis
International Nuclear Information System (INIS)
Markov, D.V.
1986-01-01
A new method for the analysis of high resolution EM autoradiographs is described. It identifies labelled cell organelle profiles in sections on a strictly statistical basis and provides accurate estimates for their radioactivity without the need to make any assumptions about their size, shape and spatial arrangement. (author)
FATAL, General Experiment Fitting Program by Nonlinear Regression Method
International Nuclear Information System (INIS)
Salmon, L.; Budd, T.; Marshall, M.
1982-01-01
1 - Description of problem or function: A generalized fitting program with a free-format keyword interface to the user. It permits experimental data to be fitted by non-linear regression methods to any function describable by the user. The user requires the minimum of computer experience but needs to provide a subroutine to define his function. Some statistical output is included as well as 'best' estimates of the function's parameters. 2 - Method of solution: The regression method used is based on a minimization technique devised by Powell (Harwell Subroutine Library VA05A, 1972) which does not require the use of analytical derivatives. The method employs a quasi-Newton procedure balanced with a steepest descent correction. Experience shows this to be efficient for a very wide range of application. 3 - Restrictions on the complexity of the problem: The current version of the program permits functions to be defined with up to 20 parameters. The function may be fitted to a maximum of 400 points, preferably with estimated values of weight given
Mapping urban environmental noise: a land use regression method.
Xie, Dan; Liu, Yi; Chen, Jining
2011-09-01
Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, have limited capacity to explain the association between urban growth and the corresponding change in noise. These conventional methods can therefore hardly forecast urban noise for a given development layout. This paper, for the first time, introduces a land use regression method, which has been applied to simulating urban air quality for a decade, to construct an urban noise model (LUNOS) in Dalian Municipality, Northeast China. The LUNOS model describes noise as a dependent variable of the surrounding land areas via a regression function. The results suggest that a linear model performs better in fitting the monitoring data, and that there is no significant difference in the LUNOS outputs when applied at different spatial scales. As LUNOS facilitates a better understanding of the association between land use and urban environmental noise than conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and as an aid to smart decision-making.
Dimension Reduction and Discretization in Stochastic Problems by Regression Method
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
1996-01-01
The chapter mainly deals with dimension reduction and field discretizations based directly on the concept of linear regression. Several examples of interesting applications in stochastic mechanics are also given. Keywords: Random fields discretization, Linear regression, Stochastic interpolation, …
Analyzing Big Data with the Hybrid Interval Regression Methods
Directory of Open Access Journals (Sweden)
Chia-Hui Huang
2014-01-01
Big data is a new trend at present, with significant impacts on information technologies. In big data applications, one of the most pressing issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. The SSVM was recently proposed as an alternative to the standard SVM and has been shown to be more efficient than the traditional SVM in processing large-scale data. In addition, a soft margin method is proposed to adjust the separation margin so as to remain effective in the gray zone, where the distribution of the data is hard to describe and the separation margin between classes is unclear.
DEFF Research Database (Denmark)
Sharifzadeh, Sara; Skytte, Jacob Lercke; Nielsen, Otto Højager Attermann
2012-01-01
Statistical solutions find widespread use in food and medicine quality control. We investigate the effect of different regression and sparse regression methods on a viscosity estimation problem using the spectro-temporal features from a new Sub-Surface Laser Scattering (SLS) vision system. … with sparse LAR, lasso and Elastic Net (EN) sparse regression methods. Due to the inconsistent measurement conditions, Locally Weighted Scatterplot Smoothing (Loess) has been employed to alleviate the undesired variation in the estimated viscosity. The experimental results of applying the different methods show…
Methods of Detecting Outliers in A Regression Analysis Model ...
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
2013-06-01
Jun 1, 2013 … especially true in observational studies … Simple linear regression and multiple … The simple linear … Grubbs, F.E. (1950): Sample Criteria for Testing Outlying Observations. Annals of … In experimental design, the Relative…
Analysis of some methods for reduced rank Gaussian process regression
DEFF Research Database (Denmark)
Quinonero-Candela, J.; Rasmussen, Carl Edward
2005-01-01
While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training RRGPs consists both in learning…
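The equivalence between reduced-rank GPs and finite sparse linear models can be illustrated by regressing on a small set of basis functions centered at inducing points. This sketch uses invented data and an RBF basis with regularized weights; it is a stand-in for the idea, not the RRGP training procedure from the paper:

```python
import numpy as np

def rbf(x, centers, ell=0.5):
    """RBF basis features: one column per inducing-point center."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / ell) ** 2)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-3, 3, 100))
y = np.sin(x) + 0.05 * rng.normal(size=100)

centers = np.linspace(-3, 3, 15)           # m = 15 inducing points, m << n
Phi = rbf(x, centers)                      # n x m feature matrix

# ridge-regularized weights of the finite linear model
alpha = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(15), Phi.T @ y)
y_hat = Phi @ alpha
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

Training and prediction cost scale with m rather than n, which is the computational point of reduced-rank approximations; the degeneracy issue the paper discusses arises because this finite model assigns zero prior variance outside the span of the basis.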
Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula
2011-01-01
Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.
DEFF Research Database (Denmark)
Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin
2017-01-01
…the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine the transmission rate and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most tested scenarios these new methods perform similarly to or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models.
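For the Poisson regression baseline mentioned above, the transmission rate in a frequency-dependent SIR-type model can be estimated as total new cases over total exposure; all parameter values below are invented for the demo, and the susceptible/infectious counts are held fixed for simplicity:

```python
import numpy as np

rng = np.random.default_rng(4)
beta_true, N = 0.3, 1000                   # assumed transmission rate, population
S, I, dt = 900.0, 50.0, 1.0                # fixed counts per sampling interval

exposure, cases = [], []
for _ in range(200):                       # repeated sampling intervals
    lam = beta_true * S * I * dt / N       # expected new cases in the interval
    cases.append(rng.poisson(lam))
    exposure.append(S * I * dt / N)

# Poisson MLE of the rate: total cases over total exposure
beta_hat = np.sum(cases) / np.sum(exposure)
```

With long sampling intervals the fixed-counts assumption breaks down (S and I change within the interval), which is one source of the bias the record reports.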
Helmreich, James E.; Krog, K. Peter
2018-01-01
We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…
Finding-equal regression method and its application in prediction of U resources
International Nuclear Information System (INIS)
Cao Huimo
1995-03-01
The deposit model method commonly adopted in mineral resource prediction has two main parts: model data that capture the geological mineralization law of the deposit, and a statistical prediction method suited to the character of those data, namely a regression method. The regression method used here may be called finding-equal regression, which combines linear regression with a distribution finding-equal method. The distribution finding-equal method is a data pretreatment that satisfies a mathematical precondition of linear regression, namely the equal distribution theory, and this pretreatment is practically realizable. Finding-equal regression can therefore overcome the nonlinear limitations commonly encountered in traditional linear regression and other regression methods, which often have no solution, and it can also identify outliers and eliminate their influence, as would usually arise when robust regression faces outliers in the independent variables. Thus this new finding-equal regression compares favourably with other regression methods. Finally, two worked examples of quantitative prediction of U resources are provided.
An Application of Robust Method in Multiple Linear Regression Model toward Credit Card Debt
Amira Azmi, Nur; Saifullah Rusiman, Mohd; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Salleh, Rohayu Mohd; Hamzah, Nur Shamsidah Amir
2018-04-01
Credit cards are a convenient alternative to cash or cheques and an essential component of electronic and internet commerce. In this study, the researchers attempt to determine the relationship, and the significant variables, between credit card debt and demographic variables such as age, household income, education level, years with current employer, years at current address, debt-to-income ratio and other debt. The data cover information on 850 customers. Three methods are applied to the credit card debt data: multiple linear regression (MLR) models, MLR models with the least quartile difference (LQD) method, and MLR models with the mean absolute deviation method. Comparing the three methods, the MLR model with the LQD method is found to be the best model, with the lowest mean square error (MSE). According to the final model, years with current employer, years at current address, household income in thousands, and debt-to-income ratio are positively associated with the amount of credit card debt, while age, level of education and other debt are negatively associated with it. This study may serve as a reference for banks using robust methods, so that they can better understand their options and choose the one best aligned with their goals for inference regarding credit card debt.
Directory of Open Access Journals (Sweden)
Hailun Wang
2017-01-01
The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose a mixed kernel function as the kernel of the support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined into the state vector, so that the model selection problem is transformed into a nonlinear state estimation problem. We use a 5th-degree cubature Kalman filter to estimate these parameters, thereby realizing the adaptive selection of the mixed kernel weighting coefficients, the kernel parameters and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression, and genetic algorithms, the regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
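A reduced sketch of the mixed-kernel idea above, here in kernel ridge regression rather than SVR, and with fixed rather than Kalman-filter-adapted coefficients; the weighting, kernel settings and data are all invented:

```python
import numpy as np

def mixed_kernel(X1, X2, w=0.7, ell=1.0, deg=2):
    """Convex combination of an RBF kernel and a polynomial kernel."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    k_rbf = np.exp(-0.5 * d2 / ell**2)
    k_poly = (1.0 + X1[:, None] * X2[None, :]) ** deg
    return w * k_rbf + (1 - w) * k_poly

rng = np.random.default_rng(5)
x = np.linspace(-2, 2, 60)
y = x**2 + np.sin(3 * x)                    # global trend + local oscillation

K = mixed_kernel(x, x)
alpha = np.linalg.solve(K + 1e-3 * np.eye(60), y)   # kernel ridge weights
y_hat = mixed_kernel(x, x) @ alpha
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

The polynomial term captures the global trend and the RBF term the local structure; the paper's contribution is estimating the fusion weight w and the kernel/regression parameters jointly as a state vector, which this fixed-weight sketch does not attempt.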
The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard
This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression… and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric…
Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel
Directory of Open Access Journals (Sweden)
Roland Pfister
2013-10-01
Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.
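In the same spirit as the article, per-subject slopes can also be extracted in Python with `numpy.polyfit`; the three-participant toy data below are invented for the demo:

```python
import numpy as np

x = np.arange(5.0)                          # within-subject predictor values
# toy data: participants s1..s3 with true slopes 1, 2, 3 and intercept 0.5
data = {f"s{i}": i * x + 0.5 for i in (1, 2, 3)}

# fit a degree-1 polynomial per participant and keep the slope coefficient
slopes = {sid: np.polyfit(x, y, 1)[0] for sid, y in data.items()}
```

The resulting dictionary of individual slopes can then be fed into a second-level analysis (e.g. a one-sample test against zero), which is the "subsequent regression coefficient analysis" the record describes.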
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Gusriani, N.; Firdaniza
2018-03-01
The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nevertheless applied to such data, it produces a model that cannot represent most of the data. A regression method robust against outliers is therefore needed. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on phytoplankton productivity, which contain outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
Energy Technology Data Exchange (ETDEWEB)
Lopez Fontan, J.L.; Costa, J.; Ruso, J.M.; Prieto, G. [Dept. of Applied Physics, Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Sarmiento, F. [Dept. of Mathematics, Faculty of Informatics, Univ. of A Coruna, A Coruna (Spain)
2004-02-01
The application of a statistical method, the local polynomial regression method (LPRM), based on a nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the underlying structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and other methods were found. (orig.)
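A minimal local polynomial (local linear) smoother in the spirit of the LPRM, with a Gaussian kernel; the kinked toy curve mimics an abrupt change of a measured property at the cmc, and all values, including the bandwidth, are invented:

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear fit at x0 with Gaussian kernel weights (bandwidth h)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.sqrt(w)
    b = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)[0]
    return b[0]                             # fitted value at x0

x = np.linspace(0, 1, 200)                  # concentration axis (arbitrary units)
y = np.abs(x - 0.4)                         # kink at 0.4 mimics the cmc break
y_hat = np.array([local_linear(x0, x, y, h=0.05) for x0 in x])
```

Because no parametric form is imposed, the smoother tracks both linear branches; the cmc would then be read off as the location where the estimated slope changes abruptly, which is easy when the break is sharp and ambiguous when it is slow, matching the record's findings.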
Fuzzy Linear Regression for the Time Series Data which is Fuzzified with SMRGT Method
Directory of Open Access Journals (Sweden)
Seçil YALAZ
2016-10-01
Our work on regression and classification provides a new contribution to the analysis of time series, which have been used in many areas for years. When convergence cannot be obtained with the methods used to correct autocorrelation in time series regression, the analysis either fails or one is forced to change the degree of the model, which may not be desirable in every situation. In our study, recommended for such situations, the time series data were fuzzified using the simple membership function and fuzzy rule generation technique (SMRGT), and an equation for forecasting was created by applying the fuzzy least squares regression (FLSR) method, a simple linear regression method, to these data. Although SMRGT has been successful in determining flow discharge in open channels, and can be used confidently for flow discharge modeling in open canals, as well as in pipe flow with some modifications, there has been no evidence that the technique is successful in fuzzy linear regression modeling. Therefore, in order to address the lack of such a model, a new hybrid model is described in this study. In conclusion, to demonstrate our method's efficiency, classical linear regression for time series data and linear regression for fuzzy time series data were applied to two different data sets, and the performance of the two approaches was compared using different measures.
An NCME Instructional Module on Data Mining Methods for Classification and Regression
Sinharay, Sandip
2016-01-01
Data mining methods for classification and regression are becoming increasingly popular in various scientific fields. However, these methods have not been explored much in educational measurement. This module first provides a review, which should be accessible to a wide audience in educational measurement, of some of these methods. The module then…
Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti
2010-01-01
In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
Statistical approach for selection of regression model during validation of bioanalytical method
Directory of Open Access Journals (Sweden)
Natalija Nakov
2014-06-01
Full Text Available The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration range frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models because only slight differences between the errors (presented through the relative residuals, RR) were obtained. Estimation of the significance of the differences in the RR was achieved using the Wilcoxon signed rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple non-parametric statistical test provides statistical confirmation of the choice of an adequate regression model.
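The model-comparison step described above can be sketched as follows. The 1/x² weighting, the synthetic heteroscedastic calibration data, and the use of absolute relative residuals as the paired quantity are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
conc = np.logspace(0, 3, 12)                        # wide calibration range
resp = 2.0 * conc + rng.normal(0.0, 0.05 * conc)    # heteroscedastic response

# Weighted fits: np.polyfit minimizes sum(w_i * residual_i)^2,
# so w = 1/x implements 1/x^2 variance weighting.
w = 1.0 / conc
lin = np.polyfit(conc, resp, 1, w=w)
quad = np.polyfit(conc, resp, 2, w=w)

rr_lin = 100.0 * (np.polyval(lin, conc) - resp) / resp     # relative residuals, %
rr_quad = 100.0 * (np.polyval(quad, conc) - resp) / resp

# Paired non-parametric comparison of the two models' relative residuals
stat, p = stats.wilcoxon(np.abs(rr_lin), np.abs(rr_quad))
```

A small p-value here would indicate that the residual magnitudes of the two weighted fits differ systematically, which is the statistical justification the abstract describes.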
METHOD OF ELECTRON BEAM PROCESSING
DEFF Research Database (Denmark)
2003-01-01
As a rule, electron beam welding takes place in a vacuum. However, this means that the workpieces in question have to be placed in a vacuum chamber and have to be removed therefrom after welding. This is time-consuming and a serious limitation of a process whose greatest advantage is the option of welding workpieces of large thicknesses. Therefore the idea is to guide the electron beam (2) to the workpiece via a hollow wire, said wire thereby acting as a prolongation of the vacuum chamber (4) down to the workpiece. Thus, a workpiece need not be placed inside the vacuum chamber, thereby exploiting the potential of electron beam processing to a greater degree than previously possible, for example by means of electron beam welding…
Directory of Open Access Journals (Sweden)
ELİF BULUT
2013-06-01
Full Text Available Partial Least Squares Regression (PLSR) is a multivariate statistical method that consists of partial least squares and multiple linear regression analysis. Explanatory variables, X, having multicollinearity are reduced to components which explain a great amount of the covariance between the explanatory and response variables. These components are few in number and do not have a multicollinearity problem. Then multiple linear regression analysis is applied to those components to model the response variable Y. There are various PLSR algorithms. In this study the NIPALS and PLS-Kernel algorithms will be studied and illustrated on a real data set.
The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies
O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.
2011-01-01
The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R² = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…
Sparling, D.W.; Barzen, J.A.; Lovvorn, J.R.; Serie, J.R.
1992-01-01
Regression equations that use mensural data to estimate body condition have been developed for several water birds. These equations often have been based on data that represent different sexes, age classes, or seasons, without being adequately tested for intergroup differences. We used proximate carcass analysis of 538 adult and juvenile canvasbacks (Aythya valisineria ) collected during fall migration, winter, and spring migrations in 1975-76 and 1982-85 to test regression methods for estimating body condition.
Treating experimental data of inverse kinetic method by unitary linear regression analysis
International Nuclear Information System (INIS)
Zhao Yusen; Chen Xiaoliang
2009-01-01
The theory of treating experimental data of the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity, but also the effective neutron source intensity can be calculated by this method. A computer code was compiled based on the inverse kinetic method and unitary linear regression analysis. The data of the zero power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data of the inverse kinetic method using unitary linear regression analysis, and that the precision of the reactivity measurement is improved. The central element efficiency can be calculated by using the reactivity. The results also show that the effect on the reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)
Regression Methods for Virtual Metrology of Layer Thickness in Chemical Vapor Deposition
DEFF Research Database (Denmark)
Purwins, Hendrik; Barak, Bernd; Nagi, Ahmed
2014-01-01
The quality of wafer production in semiconductor manufacturing cannot always be monitored by a costly physical measurement. Instead of measuring a quantity directly, it can be predicted by a regression method (Virtual Metrology). In this paper, a survey on regression methods is given to predict average Silicon Nitride cap layer thickness for the Plasma Enhanced Chemical Vapor Deposition (PECVD) dual-layer metal passivation stack process. Process and production equipment Fault Detection and Classification (FDC) data are used as predictor variables. Various variable sets are compared: one most… algorithm, and Support Vector Regression (SVR). On a test set, SVR outperforms the other methods by a large margin, being more robust towards changes in the production conditions. The method performs better on high-dimensional multivariate input data than on the most predictive variables alone. Process…
Statistical methods in regression and calibration analysis of chromosome aberration data
International Nuclear Information System (INIS)
Merkle, W.
1983-01-01
The method of iteratively reweighted least squares for the regression analysis of Poisson distributed chromosome aberration data is reviewed in the context of other fit procedures used in the cytogenetic literature. As an application of the resulting regression curves methods for calculating confidence intervals on dose from aberration yield are described and compared, and, for the linear quadratic model a confidence interval is given. Emphasis is placed on the rational interpretation and the limitations of various methods from a statistical point of view. (orig./MG)
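Iteratively reweighted least squares for a Poisson regression, as reviewed above, can be sketched in a few lines. The log link and the synthetic dose-response counts are assumptions for illustration; aberration yields are often fitted with a linear-quadratic form instead, which this sketch does not reproduce.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Log-linear Poisson regression fitted by iteratively reweighted least squares."""
    # crude starting values from a log-scale ordinary least-squares fit
    beta, *_ = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # mean under the log link
        z = eta + (y - mu) / mu          # working response
        WX = X * mu[:, None]             # IRLS weights for Poisson are mu
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

# Synthetic Poisson counts with mean exp(0.3 + 0.6 * dose)
rng = np.random.default_rng(0)
dose = np.linspace(0.5, 4.0, 200)
X = np.column_stack([np.ones_like(dose), dose])
y = rng.poisson(np.exp(0.3 + 0.6 * dose))
beta = poisson_irls(X, y)
```

Each pass solves a weighted least-squares problem on a linearized "working response", which is why the procedure sits naturally alongside the other fit procedures the review compares.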
Thompson, Russel L.
Homoscedasticity is an important assumption of linear regression. This paper explains what it is and why it is important to the researcher. Graphical and mathematical methods for testing the homoscedasticity assumption are demonstrated. Sources and types of heteroscedasticity are discussed, and methods for correction are…
Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method
International Nuclear Information System (INIS)
Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui
1991-01-01
A multiple linear regression method was used to compute γ spectra of uranium ore samples and to calculate the contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed for obtaining the response coefficients.
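The multiple linear regression step can be illustrated as an overdetermined linear system: window count rates are regressed on per-nuclide response shapes. The response matrix and contents below are invented stand-ins, not calibration data from the paper.

```python
import numpy as np

# Hypothetical response matrix: counts per unit content of U, Ra, Th, K
# in six energy windows (invented calibration shapes, for illustration only).
R = np.array([[40.0,  5.0,  2.0,  1.0],
              [10.0, 30.0,  4.0,  2.0],
              [ 3.0, 12.0, 25.0,  2.0],
              [ 1.0,  4.0, 18.0,  3.0],
              [ 0.5,  2.0,  6.0, 20.0],
              [ 0.2,  1.0,  2.0, 12.0]])
true_contents = np.array([1.2, 0.8, 2.0, 1.5])    # U, Ra, Th, K
spectrum = R @ true_contents                       # idealized measured spectrum

# Multiple linear regression: least-squares solution of R c = spectrum
contents, *_ = np.linalg.lstsq(R, spectrum, rcond=None)
```

With more windows than nuclides the system is overdetermined, so least squares averages out counting noise rather than inverting a square matrix, matching the abstract's contrast with the inverse matrix method.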
Martens, Edwin P; de Boer, Anthonius; Pestman, Wiebe R; Belitser, Svetlana V; Stricker, Bruno H Ch; Klungel, Olaf H
PURPOSE: To compare adjusted effects of drug treatment for hypertension on the risk of stroke from propensity score (PS) methods with a multivariable Cox proportional hazards (Cox PH) regression in an observational study with censored data. METHODS: From two prospective population-based cohort
Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding
de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.
2013-01-01
Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228
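For the large-p, small-n setting mentioned above, ridge regression (one basic parametric WGR method) can be computed through the n×n dual system instead of the p×p normal equations. A sketch with simulated marker data; the dimensions, penalty, and effect sizes are arbitrary illustrative choices:

```python
import numpy as np

def ridge_dual(X, y, lam):
    """Ridge coefficients via the n x n dual system: beta = X'(XX' + lam*I)^-1 y.
    Cheap when the number of markers p far exceeds the number of records n."""
    n = X.shape[0]
    alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
    return X.T @ alpha

# Simulated marker data: n = 50 phenotyped individuals, p = 5000 markers
rng = np.random.default_rng(42)
n, p = 50, 5000
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:10] = 0.5                       # a handful of causal markers
y = X @ beta_true + rng.standard_normal(n)
beta_hat = ridge_dual(X, y, lam=10.0)
```

The dual identity makes the cost depend on the number of individuals rather than the number of markers, which is what makes these "large-p with small-n regressions" tractable in practice.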
An improved partial least-squares regression method for Raman spectroscopy
Momenpour Tehran Monfared, Ali; Anis, Hanan
2017-10-01
It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve the BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, and then the importance of each variable in the sorted list is evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our Improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and Genetic Algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either a similar or a better performance compared to the genetic algorithm.
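The backward-elimination-by-RMSEP idea can be sketched with ordinary least squares standing in for PLS. The selection loop, the leave-one-out RMSEP, and the synthetic data are illustrative assumptions; the paper's IBVSPLS additionally sorts weighted regression coefficients, which this sketch omits.

```python
import numpy as np

def rmsep(X, y, cols):
    """Leave-one-out root mean square error of prediction on selected columns."""
    errs = []
    for i in range(len(y)):
        keep = np.ones(len(y), dtype=bool)
        keep[i] = False
        b, *_ = np.linalg.lstsq(X[keep][:, cols], y[keep], rcond=None)
        errs.append(y[i] - X[i, cols] @ b)
    return float(np.sqrt(np.mean(np.square(errs))))

def backward_select(X, y):
    """Greedily drop the variable whose removal lowers RMSEP, until none helps."""
    cols = list(range(X.shape[1]))
    best = rmsep(X, y, cols)
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for c in list(cols):
            trial = [k for k in cols if k != c]
            e = rmsep(X, y, trial)
            if e < best:
                best, cols, improved = e, trial, True
    return cols, best

# Two informative predictors (columns 0 and 1) buried among noise columns
rng = np.random.default_rng(5)
X = rng.standard_normal((40, 6))
y = X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.standard_normal(40)
cols, err = backward_select(X, y)
```

Evaluating each candidate removal by prediction error rather than by fit keeps the selection honest about generalization, which is the role RMSEP plays in the abstract.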
Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui
2018-01-01
Monthly electricity sales forecasting is basic work for ensuring the safety of the power system. This paper presents a monthly electricity sales forecasting method that comprehensively considers the coupled multi-factors of temperature, economic growth, electric power replacement and business expansion. The mathematical model is constructed using a regression method. The simulation results show that the proposed method is accurate and effective.
International Nuclear Information System (INIS)
Shuke, Noriyuki
1991-01-01
In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters such as hepatic extraction or excretion rate, has been performed for quantitative evaluation of liver function. In this analysis, unknown model parameters are usually determined using the nonlinear least squares regression method (NLS method), where iterative calculation and initial estimates for the unknown parameters are required. As a simple alternative to the NLS method, the direct integral linear least squares regression method (DILS method), which can determine model parameters by a simple calculation without an initial estimate, is proposed, and its applicability to the analysis of hepatobiliary scintigraphy is tested. In order to see whether the DILS method could determine model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a 1-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values which were used for data generation. The effect of various weights on the error of the parameter estimates was examined, and the inverse of time was found to be the best weight to minimize the error. When using this weight, the DILS method could give parameter values close to those obtained by the NLS method, and both sets of parameter values were very close to the prefixed values. With appropriate weighting, the DILS method could provide reliable parameter estimates which are relatively insensitive to data noise. In conclusion, the DILS method could be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)
A different approach to estimate nonlinear regression model using numerical methods
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper concerns the computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis, for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient-algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
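Of the methods listed, the Gauss-Newton iteration is easy to state compactly. A sketch on a hypothetical exponential-decay model; undamped steps and a starting value reasonably near the solution are assumed (a production implementation would add damping or a line search):

```python
import numpy as np

def gauss_newton(f, jac, theta, x, y, n_iter=50):
    """Undamped Gauss-Newton iteration for nonlinear least squares."""
    for _ in range(n_iter):
        r = y - f(x, theta)              # current residuals
        J = jac(x, theta)                # Jacobian of the model in theta
        theta = theta + np.linalg.solve(J.T @ J, J.T @ r)
    return theta

# Hypothetical model y = a * exp(-b * x), noiseless data for clarity
f = lambda x, th: th[0] * np.exp(-th[1] * x)
jac = lambda x, th: np.column_stack([np.exp(-th[1] * x),
                                     -th[0] * x * np.exp(-th[1] * x)])
x = np.linspace(0.0, 4.0, 40)
y = 3.0 * np.exp(-0.7 * x)
theta = gauss_newton(f, jac, np.array([2.0, 0.5]), x, y)   # start near solution
```

Each step solves the linearized least-squares problem J'J Δθ = J'r, which is what distinguishes Gauss-Newton from the pure gradient methods the paper groups alongside it.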
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
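The slope correction described above amounts to dividing the naive slope by a reliability ratio estimated from repeated measurements. A simulated sketch; the error variances, sample sizes, and the two-replicate design are illustrative assumptions, not the article's clinical data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
x_true = rng.normal(0.0, 1.0, n)                    # true risk factor
y = 1.0 + 2.0 * x_true + rng.normal(0.0, 1.0, n)    # outcome, true slope = 2
x_obs = x_true + rng.normal(0.0, 0.5, n)            # error-prone measurement

# The naive slope is attenuated towards zero by measurement error
slope_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Reliability study: a second independent measurement of the same risk factor
x_rep = x_true + rng.normal(0.0, 0.5, n)
reliability = np.cov(x_obs, x_rep)[0, 1] / np.var(x_obs, ddof=1)
slope_corrected = slope_naive / reliability         # dilution-corrected slope
```

Here the population attenuation factor is var(x)/var(x_obs) = 0.8, so the naive slope sits near 1.6 and the correction recovers a value near the true slope of 2.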
Directory of Open Access Journals (Sweden)
Yi-Ming Kuo
2011-06-01
Full Text Available Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005–2007.
Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming
2011-06-01
Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.
A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants
Cooper, Paul D.
2010-01-01
A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
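The LINEST-style calculation can be reproduced outside Excel with an ordinary least-squares solve: transition wavenumbers are regressed on (v'+1/2) and (v'+1/2)². The constants below are round illustrative numbers, not measured iodine values.

```python
import numpy as np

# Round, illustrative upper-state constants in cm^-1 (not measured values):
# transition wavenumber nu(v') = nu_el + we*(v'+1/2) - wexe*(v'+1/2)^2
nu_el, we, wexe = 15700.0, 125.0, 0.75
v = np.arange(15, 40)
nu = nu_el + we * (v + 0.5) - wexe * (v + 0.5) ** 2

# LINEST-style multiple linear regression of nu on (v+1/2) and (v+1/2)^2
A = np.column_stack([np.ones(v.size), v + 0.5, (v + 0.5) ** 2])
coef, *_ = np.linalg.lstsq(A, nu, rcond=None)     # [nu_el, we, -wexe]
```

Although the model is quadratic in (v'+1/2), it is linear in the three unknown constants, which is why a multiple linear regression (whether in Excel or numpy) suffices.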
Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2009-01-01
Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
Cox regression with missing covariate data using a modified partial likelihood method
DEFF Research Database (Denmark)
Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.
2016-01-01
Missing covariate values is a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits that the observed hazard function is multiplicative in the baseline hazard...
Convert a low-cost sensor to a colorimeter using an improved regression method
Wu, Yifeng
2008-01-01
Closed loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed loop color calibration, a pre-designed color target should be printed and automatically measured by a color measuring instrument. A low cost sensor has been embedded in the printer to perform the color measurement. A series of sensor calibration and color conversion methods have been developed. The purpose is to get accurate colorimetric measurements from the data measured by the low cost sensor. In order to get high accuracy colorimetric measurements, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression based color conversion method has been selected. Regression is a powerful method to estimate the color conversion functions, but the main difficulty in using this method is to find an appropriate function to describe the relationship between the input and the output data. In this paper, we propose to use 1D pre-linearization tables to improve the linearity between the input sensor measuring data and the output colorimetric data. Using this method, we can increase the accuracy of the regression method, so as to improve the accuracy of the color conversion.
Sidik, S. M.
1975-01-01
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
Anderson, Carl A.; McRae, Allan F.; Visscher, Peter M.
2006-01-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using...
A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.
Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem
2018-06-12
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
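The closed-form building block shared by the methods unified above is plain kernel ridge regression, whose dual coefficients solve an n×n linear system. A minimal sketch; the RBF kernel, its width, and the regularization strength are arbitrary illustrative choices:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    """Dual kernel ridge coefficients: alpha = (K + lam*I)^-1 y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

rng = np.random.default_rng(3)
X = rng.uniform(-2.0, 2.0, (60, 1))
y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.standard_normal(60)
alpha = kernel_ridge_fit(X, y)
y_hat = rbf_kernel(X, X) @ alpha      # in-sample predictions
```

The pairwise-learning variants discussed in the paper replace the single kernel with Kronecker products of per-object kernels; the closed-form solve and the implicit squared loss carry over unchanged.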
Drafting method of electricity and electron design
International Nuclear Information System (INIS)
Gungbon, Junchun
1989-11-01
This book concentrates on the drafting of electrical and electronic designs. It covers the meaning of electrical and electronic drafting; the JIS standard regulations; the types of drafting, lines, and lettering; the basics of drafting with projection methods, plan projection, and development elevation; the drafting of shop drawings; practical methods of design and drafting; the design and drafting of technical illustrations; connection diagrams; the drafting of wiring diagrams for light and illumination; the drafting of development connection diagrams for sequence control; the drafting of logic circuit signs for flow charts and manuals; the drafting of electronic circuit diagrams; and the drawing of PC boards.
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
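The two estimators compared above can be contrasted directly for a Gaussian predictive distribution, for which the CRPS has a closed form. On well-specified synthetic data both should land near the same optimum; the optimizer, starting values, and sample are incidental choices:

```python
import numpy as np
from scipy import optimize, stats

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) for outcome y."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * stats.norm.cdf(z) - 1.0)
                    + 2.0 * stats.norm.pdf(z) - 1.0 / np.sqrt(np.pi))

rng = np.random.default_rng(0)
y = rng.normal(10.0, 2.0, 5000)          # well-specified Gaussian sample

# Maximum likelihood: simply the sample moments
mu_ml, sigma_ml = y.mean(), y.std()

# Minimum CRPS: minimize the mean CRPS over (mu, log sigma)
res = optimize.minimize(
    lambda p: crps_gaussian(y, p[0], np.exp(p[1])).mean(),
    x0=[8.0, 0.0], method="Nelder-Mead")
mu_crps, sigma_crps = res.x[0], np.exp(res.x[1])
```

Optimizing over log sigma keeps the scale parameter positive; discrepancies between the two fits only become systematic when the distributional assumption is wrong, which is the study's point.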
Survey of electronic payment methods and systems
Havinga, Paul J.M.; Smit, Gerardus Johannes Maria; Helme, A.; Verbraeck, A.
1996-01-01
In this paper an overview of electronic payment methods and systems is given. This survey is done as part of the Moby Dick project. Electronic payment systems can be grouped into three broad classes: traditional money transactions, digital currency and credit-debit payments. Such payment systems have…
A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary
Gillis, Nicolas; Luce, Robert
2018-01-01
A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K
2011-10-01
To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In a second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategy, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparison of the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques over classical approaches for survival data. We compare svm-based survival models based on ranking constraints, based on regression constraints and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints above models only based on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints. Our experiments show a comparable performance for methods…
Directory of Open Access Journals (Sweden)
Giuliano de Oliveira Freitas
2013-10-01
Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ=-0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) x 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
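For reference, the Thibos power-vector quantities and the reported regression line can be computed as follows. The refractions are hypothetical, and taking (J0, J45) from negative-cylinder notation is one common convention, not necessarily the study's exact pipeline:

```python
import numpy as np

def power_vector(cyl, axis_deg):
    """Thibos astigmatic power vector (J0, J45) for a cylinder at a given axis."""
    a = np.deg2rad(axis_deg)
    return np.array([-cyl / 2.0 * np.cos(2.0 * a),
                     -cyl / 2.0 * np.sin(2.0 * a)])

def apv_magnitude(cyl, axis_deg):
    return float(np.linalg.norm(power_vector(cyl, axis_deg)))

# Hypothetical pre- and postoperative refractive astigmatism
pre = apv_magnitude(-2.0, 90.0)
post = apv_magnitude(-0.5, 85.0)
apv_ratio = post / pre
success_pct = (-apv_ratio + 1.00) * 100.0    # the reported regression line
```

An APVratio of 0 (all astigmatism eliminated) maps to 100% success and a ratio of 1 (no change) to 0%, which is what makes the fitted line interpretable.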
Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings
International Nuclear Information System (INIS)
Chung, William
2012-01-01
Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.
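Fuzzy linear regression in the Tanaka style can be posed as a linear program: fuzzy coefficients with centers a and non-negative spreads c are chosen so that the fuzzy outputs cover every observation while the total spread is minimized. A hedged sketch under that formulation, with made-up benchmarking data (the abstract does not specify the exact fuzzy model used):

```python
import numpy as np
from scipy.optimize import linprog

def fuzzy_linear_regression(X, y, h=0.0):
    """Tanaka-style possibilistic regression: fuzzy coefficients (centers a,
    non-negative spreads c) whose h-level intervals cover every observation,
    minimising the total spread. X should include an intercept column."""
    n, p = X.shape
    absX = np.abs(X)
    # Decision vector: [a_1..a_p, c_1..c_p]; only the spreads enter the cost
    cost = np.concatenate([np.zeros(p), absX.sum(axis=0)])
    # Coverage constraints rewritten as A_ub @ z <= b_ub
    A_ub = np.vstack([
        np.hstack([-X, -(1 - h) * absX]),   # a.x + (1-h) c.|x| >= y
        np.hstack([ X, -(1 - h) * absX]),   # a.x - (1-h) c.|x| <= y
    ])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.x[p:]            # centers, spreads

# Toy benchmarking example: energy use vs. floor area (made-up numbers)
X = np.column_stack([np.ones(6), [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3, 11.9])
a, c = fuzzy_linear_regression(X, y)
```

The fitted fuzzy band (center line plus/minus spread) is guaranteed by the constraints to contain all observations, which is the property that makes the model usable as a benchmarking envelope.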
Directory of Open Access Journals (Sweden)
Massoud Tabesh
2011-07-01
Full Text Available Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the issues of increasing efficiency and decreasing water losses. One of the key subjects in the optimum operational management of water distribution systems is preparing rehabilitation and replacement schemes, predicting pipe break rates and evaluating their reliability. Several approaches have been presented in recent years regarding the prediction of pipe failure rates, each of which requires special data sets. Deterministic models based on age, deterministic multivariate models and stochastic group modeling are examples of the solutions which relate pipe break rates to parameters like age, material and diameter. In this paper, besides the mentioned parameters, further factors such as pipe depth and hydraulic pressure are considered as well. Then, pipe burst rates are predicted using the multivariable regression method, intelligent approaches (artificial neural network and neuro-fuzzy models) and the evolutionary polynomial regression (EPR) method. To evaluate the results of the different approaches, a case study is carried out in a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods in predicting pipe break rates, in comparison with the neuro-fuzzy and multivariable regression methods.
Kaneko, Hiromasa
2018-02-26
To develop a new ensemble learning method and construct highly predictive regression models in chemoinformatics and chemometrics, applicability domains (ADs) are introduced into the ensemble learning process of prediction. When estimating values of an objective variable using subregression models, only the submodels with ADs that cover a query sample, i.e., the sample is inside the model's AD, are used. By constructing submodels and changing a list of selected explanatory variables, the union of the submodels' ADs, which defines the overall AD, becomes large, and the prediction performance is enhanced for diverse compounds. By analyzing a quantitative structure-activity relationship data set and a quantitative structure-property relationship data set, it is confirmed that the ADs can be enlarged and the estimation performance of regression models is improved compared with traditional methods.
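The mechanism described, averaging only those submodels whose applicability domain (AD) covers the query, can be sketched as follows. The AD rule used here (mean distance to the 3 nearest training samples, thresholded at a percentile) and the data are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: the response depends on 2 of 5 descriptors (hypothetical)
X = rng.normal(size=(80, 5))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 80)

submodels = []
for _ in range(20):
    vars_ = rng.choice(5, size=3, replace=False)   # random descriptor subset
    Xs = X[:, vars_]
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(80), Xs]), y, rcond=None)
    # AD threshold: 95th percentile of the training samples' mean 3-NN distance
    d = np.linalg.norm(Xs[:, None] - Xs[None], axis=2)
    knn = np.sort(d, axis=1)[:, 1:4].mean(axis=1)
    submodels.append((vars_, coef, Xs, np.percentile(knn, 95)))

def predict(x):
    preds = []
    for vars_, coef, Xs, thr in submodels:
        xd = x[vars_]
        d3 = np.sort(np.linalg.norm(Xs - xd, axis=1))[:3].mean()
        if d3 <= thr:                               # query inside this submodel's AD
            preds.append(coef[0] + coef[1:] @ xd)
    return (float(np.mean(preds)), len(preds)) if preds else (None, 0)

yhat, n_used = predict(np.zeros(5))
```

Because the submodels draw different variable subsets, the union of their ADs is larger than any single model's AD, which is the enlargement effect the abstract describes.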
Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method
Energy Technology Data Exchange (ETDEWEB)
Lee, Myoung Keon [Agency for Defense Development, Daejeon (Korea, Republic of); Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)
2016-10-15
This paper provides the compressive failure strength value of composite laminate developed by using the regression analysis method. The composite material in this document is a Carbon/Epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature is –60°F to +200°F (–55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, –45° and 90°). The ASTM-D-6484 standard was used as the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being two ply orientations (0° and ±45°)
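The described regression, ultimate strength on the two ply-orientation percentages, is an ordinary multiple linear regression. A sketch with hypothetical laminate numbers (not the report's test data):

```python
import numpy as np

# Hypothetical laminate data: percentage of 0-degree plies, percentage of
# +/-45-degree plies, and ultimate compressive strength; illustrative only.
pct_0  = np.array([50, 40, 30, 25, 20, 10, 10,  0], dtype=float)
pct_45 = np.array([40, 40, 40, 50, 60, 80, 40, 50], dtype=float)
strength = 40 + 0.9 * pct_0 + 0.3 * pct_45 + np.random.default_rng(2).normal(0, 1, 8)

# Response: ultimate strength; regressors: the two ply-orientation percentages
X = np.column_stack([np.ones(8), pct_0, pct_45])
beta, *_ = np.linalg.lstsq(X, strength, rcond=None)
# beta = [intercept, effect of %0-deg plies, effect of %+/-45-deg plies]
```

With such a fit, the strength of any layup in the tested family can be interpolated from its two ply percentages.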
Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method
International Nuclear Information System (INIS)
Lee, Myoung Keon; Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon
2016-01-01
This paper provides the compressive failure strength value of composite laminate developed by using the regression analysis method. The composite material in this document is a Carbon/Epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature is –60°F to +200°F (–55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, –45° and 90°). The ASTM-D-6484 standard was used as the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being two ply orientations (0° and ±45°)
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
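The core of regression calibration is to replace the error-prone covariate with an estimate of E[X | W] and then fit the outcome model as usual. A sketch for a linear outcome model with two replicate proxies, assuming normality (synthetic data; the paper treats the general GLM case):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(0, 1, n)                     # true covariate (unobserved)
w1 = x + rng.normal(0, 0.7, n)              # two error-prone replicate measurements
w2 = x + rng.normal(0, 0.7, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)   # outcome model, true slope = 2

wbar = (w1 + w2) / 2
# Measurement-error variance estimated from the replicate differences
sigma_u2 = np.var(w1 - w2, ddof=1) / 2
sigma_x2 = np.var(wbar, ddof=1) - sigma_u2 / 2   # var(wbar) = var(x) + sigma_u2/2
# Regression-calibration substitute: E[X | wbar] under joint normality
x_hat = wbar.mean() + (sigma_x2 / (sigma_x2 + sigma_u2 / 2)) * (wbar - wbar.mean())

naive = np.polyfit(wbar, y, 1)[0]           # attenuated slope
calib = np.polyfit(x_hat, y, 1)[0]          # approximately unbiased slope
```

The naive regression on the noisy average is biased toward zero, while regressing on the calibrated covariate recovers a slope close to the true value of 2.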
Assessing the performance of variational methods for mixed logistic regression models
Czech Academy of Sciences Publication Activity Database
Rijmen, F.; Vomlel, Jiří
2008-01-01
Roč. 78, č. 8 (2008), s. 765-779 ISSN 0094-9655 R&D Projects: GA MŠk 1M0572 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Mixed models * Logistic regression * Variational methods * Lower bound approximation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.353, year: 2008
Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting
Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD
2018-01-01
Heavy rainfall can cause disasters; therefore, a forecast is needed to predict rainfall intensity. The main factor that causes flooding is high rainfall intensity, which pushes the river beyond its capacity and floods the surrounding area. Rainfall is a dynamic factor, so it is very interesting to study. To support rainfall forecasting, methods ranging from Artificial Intelligence (AI) to statistics can be used. In this research, we used Adaline as the AI method and regression as the statistical method. The method that produces the more accurate forecast is better suited to forecasting rainfall; through this comparison, we determine which method performs best here.
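Adaline is a single linear unit trained with the LMS (Widrow-Hoff) rule. A sketch of using it for one-step-ahead forecasting on a synthetic series (the paper's rainfall data and network settings are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic "rainfall" series; the task is to predict the next value from
# the last 3 observations (illustrative setup only).
series = np.sin(np.arange(200) * 0.3) + rng.normal(0, 0.05, 200)
X = np.array([series[i:i + 3] for i in range(197)])
y = series[3:]

# Adaline: linear activation, weights updated online with the LMS rule
w = np.zeros(3)
b = 0.0
lr = 0.01
for epoch in range(50):
    for xi, ti in zip(X, y):
        err = ti - (w @ xi + b)   # error before the update (delta rule)
        w += lr * err * xi
        b += lr * err

rmse = np.sqrt(np.mean((X @ w + b - y) ** 2))
```

On this series the trained unit reaches an in-sample RMSE close to the noise level; comparing such an error against that of a fitted regression model is the kind of comparison the abstract describes.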
Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.
2008-04-01
Data of seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. The meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, the temperature-dependent variables were highly correlated with each other, but all were negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values using the meteorological variables as predictors. A variable selection method based on high loadings on varimax-rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model of the ozone values. In 1999, 2001 and 2002 the ozone concentrations were influenced predominantly by the meteorological variables. However, the model indicates that the ozone concentrations for the year 2000 were not influenced predominantly by the meteorological variables, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
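The pipeline sketched in the abstract, principal components of the predictors, selection of high-loading variables, then a linear regression for ozone, can be illustrated as follows. This sketch skips the varimax rotation and uses unrotated components on synthetic stand-in variables:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
temp = rng.normal(size=n)
hum = -0.8 * temp + rng.normal(0, 0.6, n)         # humidity, negatively correlated
met = np.column_stack([temp,
                       temp + rng.normal(0, 0.2, n),  # correlated temperature variants
                       temp + rng.normal(0, 0.2, n),
                       hum,
                       rng.normal(size=n)])           # unrelated variable
ozone = 1.5 * temp - 0.5 * hum + rng.normal(0, 0.3, n)

Z = (met - met.mean(0)) / met.std(0)
# Principal components of the predictor correlation matrix
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
# Select the highest-loading variable on each of the two leading components
selected = [int(np.abs(eigvec[:, order[k]]).argmax()) for k in range(2)]

X = np.column_stack([np.ones(n), Z[:, selected]])
beta, *_ = np.linalg.lstsq(X, ozone, rcond=None)
resid = ozone - X @ beta
r2 = 1 - resid.var() / ozone.var()
```

Because the leading component is dominated by the correlated temperature/humidity block, the selected subset retains most of the explanatory power while dropping redundant predictors.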
Correcting for cryptic relatedness by a regression-based genomic control method
Directory of Open Access Journals (Sweden)
Yang Yaning
2009-12-01
Full Text Available Abstract. Background: The genomic control (GC) method is a useful tool to correct for cryptic relatedness in population-based association studies. It was originally proposed for correcting the variance inflation of Cochran-Armitage's additive trend test by using information from unlinked null markers, and was later generalized to be applicable to other tests with the additional requirement that the null markers are matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus limits the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby aggravate the effect of GC correction. Results: In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation in the allele frequencies of the null markers is adjusted by a regression method. Conclusion: The proposed method can be readily applied to the Cochran-Armitage trend tests other than the additive trend test, the Pearson chi-square test and other robust efficiency tests. Simulation results show that the proposed method is effective in controlling type I error in the presence of population substructure.
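For orientation, the classical GC correction that the paper generalizes divides test statistics by an inflation factor estimated from null markers; the regression-based adjustment for allele frequencies is the paper's addition and is not shown here:

```python
import numpy as np

rng = np.random.default_rng(6)
# Null-marker chi-square statistics under cryptic relatedness, which inflates
# every statistic by a common factor lambda (here lambda = 1.3, synthetic).
lam_true = 1.3
null_stats = lam_true * rng.chisquare(df=1, size=5000)

# Classical GC: estimate the inflation factor from the median of the null
# statistics (the median of a chi-square(1) variable is about 0.4549) and
# deflate the candidate-marker test statistic accordingly.
lam_hat = np.median(null_stats) / 0.4549
candidate_stat = 12.0
corrected = candidate_stat / lam_hat
```

The corrected statistic can then be referred to the usual chi-square(1) distribution, restoring the nominal type I error rate.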
A subagging regression method for estimating the qualitative and quantitative state of groundwater
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young
2017-08-01
A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated against synthetic data, in competition with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of the other methods, and the uncertainties are reasonably estimated; the other methods offer no uncertainty analysis option. For further validation, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR, regardless of Gaussian or non-Gaussian skewed data. However, it is expected that GPR has a limitation in applications to data severely corrupted by outliers, owing to its non-robustness. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data such as the groundwater level and contaminant concentration.
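Subagging fits the estimator on many small subsamples drawn without replacement and aggregates the fits, with the spread across subsamples serving as an uncertainty measure. A sketch for a linear trend with outliers (synthetic data; the aggregation and uncertainty rules here are simplified assumptions, not the paper's exact SBR formulation):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic groundwater-level-like series: linear trend + noise + a few outliers
t = np.linspace(0, 10, 120)
y = 0.4 * t + rng.normal(0, 0.3, 120)
y[rng.choice(120, 5, replace=False)] += 4.0     # outliers (e.g. sensor spikes)

# Subagging: fit the trend on many small random subsamples and aggregate
slopes = []
for _ in range(200):
    idx = rng.choice(120, size=30, replace=False)  # subsample without replacement
    slopes.append(np.polyfit(t[idx], y[idx], 1)[0])
slopes = np.array(slopes)

trend = float(np.median(slopes))            # robust aggregate trend estimate
ci = np.percentile(slopes, [2.5, 97.5])     # spread of subsample fits ~ uncertainty
```

The distribution of subsample slopes provides the uncertainty band directly, which is the feature the abstract contrasts against methods with no uncertainty analysis option.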
Impact of regression methods on improved effects of soil structure on soil water retention estimates
Nguyen, Phuong Minh; De Pue, Jan; Le, Khoa Van; Cornelis, Wim
2015-06-01
Increasing the accuracy of pedotransfer functions (PTFs), an indirect method for predicting non-readily available soil features such as soil water retention characteristics (SWRC), is of crucial importance for large-scale agro-hydrological modeling. Adding significant predictors (i.e., soil structure) and implementing more flexible regression algorithms are among the main strategies for PTF improvement. The aim of this study was to investigate whether the improved effect of categorical soil structure information on estimating soil-water content at various matric potentials, which has been reported in the literature, could be enduringly captured by regression techniques other than the usually applied linear regression. Two data mining techniques, i.e., Support Vector Machines (SVM) and k-Nearest Neighbors (kNN), which have recently been introduced as promising tools for PTF development, were utilized to test whether the incorporation of soil structure improves PTF accuracy in a context of rather limited training data. The results show that incorporating descriptive soil structure information, i.e., massive, structured and structureless, as a grouping criterion can improve the accuracy of PTFs derived by the SVM approach in the matric potential range of -6 to -33 kPa (average RMSE decreased by up to 0.005 m3 m-3 after grouping, depending on matric potential). The improvement was primarily attributed to the outperformance of SVM-PTFs calibrated on structureless soils. No improvement was obtained with the kNN technique, at least not in our study, in which the data set became limited in size after grouping. Since the regression technique has an impact on the improved effect of incorporating qualitative soil structure information, selecting a proper technique will help to maximize the combined influence of flexible regression algorithms and soil structure information on PTF accuracy.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
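Fixed-knot regression splines of this kind can be built from a truncated-power basis, which enforces continuity and smoothness at the knots by construction. A sketch for a quadratic spline with one knot (the varying-order and mixed-effects machinery of the paper is not reproduced here):

```python
import numpy as np

def spline_basis(x, knots, degree=2):
    """Truncated-power basis: global polynomial terms plus one truncated term
    per knot, giving a piecewise polynomial joined with continuity and
    smoothness up to degree-1 at each knot."""
    cols = [x ** d for d in range(degree + 1)]                  # 1, x, x^2, ...
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(8)
x = np.sort(rng.uniform(0, 10, 150))
# Toy longitudinal trend: linear before the knot at 5, curved after, C1 at 5
y = 0.2 * x - 0.1 * np.clip(x - 5, 0.0, None) ** 2 + rng.normal(0, 0.1, 150)

B = spline_basis(x, knots=[5.0])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
rmse = np.sqrt(np.mean((B @ coef - y) ** 2))
```

Because the true curve lies in the span of this basis, the least-squares fit recovers it down to the noise level; in the mixed model, columns of such a basis are simply assigned to the fixed and/or random effects design matrices.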
Prastuti, M.; Suhartono; Salehah, NA
2018-04-01
The need for energy supply, especially electricity, in Indonesia has been increasing in recent years. Furthermore, high electricity usage by people at different times leads to the occurrence of a heteroscedasticity issue. Estimating the electricity supply that can fulfill the community's need is very important, but the heteroscedasticity issue often makes electricity forecasting hard to do. An accurate forecast of electricity consumption is one of the key challenges for an energy provider to make better resource and service planning and to take control actions in order to balance the electricity supply and demand for the community. In this paper, a hybrid ARIMAX Quantile Regression (ARIMAX-QR) approach is proposed to predict short-term electricity consumption in East Java. This method is also compared to time series regression using the RMSE, MAPE, and MdAPE criteria. The data used in this research were the electricity consumption per half-an-hour during the period of September 2015 to April 2016. The results show that the proposed approach can be a competitive alternative for forecasting short-term electricity consumption in East Java. ARIMAX-QR using lag values and dummy variables as predictors yields more accurate predictions on both in-sample and out-of-sample data. Moreover, both the time series regression and ARIMAX-QR methods with the addition of lag values as predictors could accurately capture the patterns in the data. Hence, they produce better predictions compared to the models that do not use additional lag variables.
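The quantile regression part of such a hybrid can be sketched directly: linear quantile regression minimizes the pinball loss and can be solved as a linear program. A sketch with a lagged predictor on synthetic heteroscedastic "load" data (the ARIMAX stage is omitted; the data are not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Linear quantile regression as a linear program (pinball-loss minimiser).
    Variables: [beta (free), u (>=0), v (>=0)] with y = X beta + u - v."""
    n, p = X.shape
    cost = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Heteroscedastic toy data: the spread of consumption grows with its lag
rng = np.random.default_rng(9)
lag = rng.uniform(1, 10, 400)                  # lagged consumption (predictor)
y = 2.0 * lag + rng.normal(0.0, 0.5 * lag)     # noise s.d. grows with the lag
X = np.column_stack([np.ones(400), lag])
b_lo = quantile_regression(X, y, 0.1)          # 10th percentile line
b_hi = quantile_regression(X, y, 0.9)          # 90th percentile line
```

Under heteroscedasticity the upper-quantile slope exceeds the lower-quantile slope, which is exactly the behaviour that makes quantile regression attractive for such load data.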
Method of fabricating a cooled electronic system
Chainer, Timothy J; Gaynes, Michael A; Graybill, David P; Iyengar, Madhusudan K; Kamath, Vinod; Kochuparambil, Bejoy J; Schmidt, Roger R; Schultz, Mark D; Simco, Daniel P; Steinke, Mark E
2014-02-11
A method of fabricating a liquid-cooled electronic system is provided which includes an electronic assembly having an electronics card and a socket with a latch at one end. The latch facilitates securing of the card within the socket. The method includes providing a liquid-cooled cold rail at the one end of the socket, and a thermal spreader to couple the electronics card to the cold rail. The thermal spreader includes first and second thermal transfer plates coupled to first and second surfaces on opposite sides of the card, and thermally conductive extensions extending from end edges of the plates, which couple the respective transfer plates to the liquid-cooled cold rail. The extensions are disposed to the sides of the latch, and the card is securable within or removable from the socket using the latch without removing the cold rail or the thermal spreader.
A robust and efficient stepwise regression method for building sparse polynomial chaos expansions
Energy Technology Data Exchange (ETDEWEB)
Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium); Raisee, Mehrdad [School of Mechanical Engineering, College of Engineering, University of Tehran, P.O. Box: 11155-4563, Tehran (Iran, Islamic Republic of); Ghorbaniasl, Ghader; Contino, Francesco; Lacor, Chris [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium)
2017-03-01
Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable, as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.
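A greatly simplified version of such a stepwise search, greedy forward selection of Hermite chaos terms by residual error, can be sketched in one stochastic dimension. The selection criterion here is plain SSE rather than the paper's probabilistic criterion, and the model is synthetic:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(10)
# 1-D toy model: the response depends only on He_1 and He_3 of a normal input
xi = rng.normal(size=200)
y = (2.0 * hermeval(xi, [0, 1])                 # He_1
     + 0.5 * hermeval(xi, [0, 0, 0, 1])         # He_3
     + rng.normal(0, 0.05, 200))

# Candidate basis: probabilists' Hermite polynomials He_0 .. He_6
P = np.column_stack([hermeval(xi, [0] * d + [1]) for d in range(7)])

# Greedy forward stepwise regression: at each step add the term whose
# inclusion most reduces the residual sum of squares.
selected = []
for _ in range(3):
    sse = []
    for j in range(7):
        cols = P[:, selected + [j]]
        c, *_ = np.linalg.lstsq(cols, y, rcond=None)
        sse.append(np.sum((y - cols @ c) ** 2))
    selected.append(int(np.argmin(sse)))
```

On this toy problem the search recovers the two active chaos terms (orders 1 and 3) before the noise floor is reached, which is the sparsity-detection behaviour the paper exploits at much higher dimension.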
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
Computational methods of electron/photon transport
International Nuclear Information System (INIS)
Mack, J.M.
1983-01-01
A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated
Kim, Yoonsang; Emery, Sherry
2013-01-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415
Directory of Open Access Journals (Sweden)
Guan Lian
2018-01-01
Full Text Available Accurate prediction of taxi-out time is a significant precondition for improving the operability of the departure process at an airport, as well as reducing long taxi-out times, congestion, and excessive emission of greenhouse gases. Unfortunately, several of the traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three of those conventional methods, namely the Generalized Linear Model, the Softmax Regression Model, and the Artificial Neural Network method, and two improved Support Vector Regression (SVR) approaches based on swarm intelligence algorithm optimization: Particle Swarm Optimization (PSO) and the Firefly Algorithm. In order to improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as one significant factor at congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the two proposed SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only achieve the best modelling measures and accuracy rates compared with the representative forecast models, but also achieve better predictive performance when dealing with abnormal taxi-out time states.
da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues
2015-01-01
This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
Zhu, Xiaofeng; Suk, Heung-Il; Wang, Li; Lee, Seong-Whan; Shen, Dinggang
2017-05-01
In this paper, we focus on joint regression and classification for Alzheimer's disease diagnosis and propose a new feature selection method by embedding the relational information inherent in the observations into a sparse multi-task learning framework. Specifically, the relational information includes three kinds of relationships (feature-feature, response-response, and sample-sample relations), preserving the similarity of the features, the response variables, and the samples, respectively. To conduct feature selection, we first formulate the objective function by imposing these three relational characteristics along with an ℓ2,1-norm regularization term, and further propose a computationally efficient algorithm to optimize the proposed objective function. With the dimension-reduced data, we train two support vector regression models to predict the clinical scores of ADAS-Cog and MMSE, respectively, and also a support vector classification model to determine the clinical label. We conducted extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to validate the effectiveness of the proposed method. Our experimental results showed the efficacy of the proposed method in enhancing the performance of both clinical score prediction and disease status identification, compared to the state-of-the-art methods. Copyright © 2015 Elsevier B.V. All rights reserved.
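The ℓ2,1-norm term is what produces row-wise (feature-wise) sparsity; in proximal-gradient optimization it enters through a closed-form row-shrinkage operator, sketched below. This illustrates the regularizer only, not the paper's full relational objective:

```python
import numpy as np

def prox_l21(W, t):
    """Proximal operator of t * ||W||_{2,1}: shrinks each row of W toward
    zero and removes rows whose norm is below t, which is the mechanism
    that discards uninformative features across all tasks at once."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.clip(1.0 - t / np.maximum(norms, 1e-12), 0.0, None)
    return W * scale

# Rows = features shared across tasks (e.g. a regression and a classification
# head); columns = the tasks. Numbers are illustrative.
W = np.array([[3.0, 4.0],    # strong feature: row norm 5, kept (shrunk)
              [0.3, 0.4],    # weak feature: row norm 0.5, zeroed out
              [0.0, 2.0]])
W_sparse = prox_l21(W, 1.0)
```

Rows surviving the shrinkage correspond to the features kept for the downstream SVR/SVC models.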
Directory of Open Access Journals (Sweden)
Gholam Reza Sheykhzadeh
2017-02-01
Full Text Available Introduction: Penetration resistance is one of the criteria for evaluating soil compaction. It correlates with several soil properties such as vehicle trafficability, resistance to root penetration, seedling emergence, and soil compaction by farm machinery. Direct measurement of penetration resistance is time consuming and difficult because of high temporal and spatial variability. Therefore, many different regression and artificial neural network pedotransfer functions have been proposed to estimate penetration resistance from readily available soil variables such as particle size distribution, bulk density (Db) and gravimetric water content (θm). The lands of Ardabil Province are one of the main production regions of potato in Iran; thus, obtaining the soil penetration resistance in these regions helps with the management of potato production. The objective of this research was to derive pedotransfer functions by using regression and artificial neural networks to predict penetration resistance from some soil variables in the agricultural soils of the Ardabil plain and to compare the performance of the artificial neural network with the regression models. Materials and methods: Disturbed and undisturbed soil samples (n = 105) were systematically taken from 0-10 cm soil depth at nearly 3000 m spacing in the agricultural lands of the Ardabil plain (lat 38°15' to 38°40' N, long 48°16' to 48°61' E). The contents of sand, silt and clay (hydrometer method), CaCO3 (titration method), bulk density (cylinder method), particle density Dp (pycnometer method), organic carbon (wet oxidation method), total porosity (calculated from Db and Dp), and saturated (θs) and field (θf) soil water content (gravimetric method) were measured in the laboratory. The mean geometric diameter (dg) and standard deviation (σg) of soil particles were computed using the percentages of sand, silt and clay. Penetration resistance was measured in situ using a cone penetrometer (analog model) at 10
Landslide susceptibility mapping on a global scale using the method of logistic regression
Directory of Open Access Journals (Sweden)
L. Lin
2017-08-01
Full Text Available This paper proposes a statistical model for mapping global landslide susceptibility based on logistic regression. After investigating explanatory factors for landslides in the existing literature, five factors were selected to model landslide susceptibility: relative relief, extreme precipitation, lithology, ground motion and soil moisture. When building the model, 70 % of landslide and non-landslide points were randomly selected for the logistic regression, and the others were used for model validation. To evaluate the accuracy of the predictive models, this paper adopts several criteria including the receiver operating characteristic (ROC) curve method. The logistic regression experiments found all five factors to be significant in explaining landslide occurrence on a global scale. During the modeling process, the percentage correct in the confusion matrix of landslide classification was approximately 80 % and the area under the curve (AUC) was nearly 0.87. During the validation process, the corresponding statistics were about 81 % and 0.88, respectively. Such a result indicates that the model has strong robustness and stable performance. This model found that, at a global scale, soil moisture can be dominant in the occurrence of landslides and the topographic factor may be secondary.
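The modelling protocol described, a 70/30 split, logistic regression on five factors, and ROC/AUC evaluation, can be sketched with synthetic stand-ins for the factors (the coefficients and data below are illustrative assumptions, not the paper's global dataset):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 2000
# Synthetic stand-ins for the five factors (relative relief, extreme
# precipitation, lithology score, ground motion, soil moisture).
X = rng.normal(size=(n, 5))
logit = -1.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + 1.2 * X[:, 4]  # moisture dominant
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# 70/30 split for modelling and validation, mirroring the paper's protocol
idx = rng.permutation(n)
tr, te = idx[:1400], idx[1400:]
Xb = np.column_stack([np.ones(n), X])

# Logistic regression fitted by Newton-Raphson (IRLS)
w = np.zeros(6)
for _ in range(25):
    p = 1 / (1 + np.exp(-Xb[tr] @ w))
    grad = Xb[tr].T @ (y[tr] - p)
    H = Xb[tr].T @ (Xb[tr] * (p * (1 - p))[:, None])
    w += np.linalg.solve(H, grad)

# AUC on the validation set: probability that a random landslide point
# outscores a random non-landslide point.
s = Xb[te] @ w
auc = (s[y[te] == 1][:, None] > s[y[te] == 0][None, :]).mean()
```

With a dominant soil-moisture effect built into the synthetic data, the validation AUC lands well above chance, analogous to the ~0.88 the paper reports.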
Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin
2015-02-09
The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not capture all the useful information of the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. The SPA-MLR method therefore risks the loss of useful information. To make full use of the useful information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method combines the consensus strategy with the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The C-SPA-MLR method showed better prediction performance compared with the SPA-MLR and full-spectra PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and other spectroscopic data. Copyright © 2014 Elsevier B.V. All rights reserved.
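A minimal sketch of the consensus idea: member MLR models are fitted on variable subsets drawn iteratively from the remaining variables, and their predictions are averaged. The synthetic "spectra", the subset sizes, and the random subset selection (standing in for repeated SPA runs) are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": 50 calibration samples, 200 wavelengths, with the
# analyte signal spread over many variables (an assumed data model).
n, p = 50, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[rng.choice(p, 30, replace=False)] = rng.normal(size=30)
y = X @ beta + 0.1 * rng.normal(size=n)

def fit_mlr(Xs, y):
    """Least-squares MLR on a column subset; returns the coefficient vector."""
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return coef

# Member models on subsets drawn from the remaining variables each round
# (here random draws stand in for SPA's deterministic selection).
remaining = list(range(p))
members = []
for _ in range(5):
    subset = rng.choice(remaining, size=20, replace=False)
    members.append((subset, fit_mlr(X[:, subset], y)))
    remaining = [j for j in remaining if j not in set(subset)]

# Consensus prediction: average the member predictions for a new sample
x_new = rng.normal(size=p)
preds = [x_new[subset] @ coef for subset, coef in members]
consensus = float(np.mean(preds))
```

Note that each member uses fewer variables (20) than calibration samples (50), mirroring the constraint the abstract mentions.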
Development of K-Nearest Neighbour Regression Method in Forecasting River Stream Flow
Directory of Open Access Journals (Sweden)
Mohammad Azmi
2012-07-01
Full Text Available Different statistical, non-statistical and black-box methods have been used in forecasting processes. Among statistical methods, the K-nearest neighbour non-parametric regression method (K-NN), due to its natural simplicity and mathematical basis, is one of the recommended methods for forecasting processes. In this study, the K-NN method is explained in detail. In addition, development and improvement approaches such as best-neighbour estimation, data transformation functions, distance functions and a proposed extrapolation method are described. The K-NN method, together with its development approaches, is used in streamflow forecasting for the Zayandeh-Rud Dam upper basin. A comparison between the final results of the classic K-NN method and the modified K-NN (number of neighbours = 5, range-scaling transformation function, Mahalanobis distance function and the proposed extrapolation method) shows that the modified K-NN improved performance by 45%, 59% and 17% in the goodness-of-fit criteria of root mean square error, percentage volume of error and correlation, respectively. These results confirm the necessity of applying the mentioned approaches to derive more accurate forecasts.
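A toy sketch of K-NN non-parametric regression forecasting with a Mahalanobis distance function and inverse-distance neighbour weighting, applied to a synthetic series. The lag-embedding length, k = 5, and the weighting are illustrative assumptions, not the paper's Zayandeh-Rud configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy streamflow-like series: predict the next value from the last 3 values
series = np.sin(np.arange(300) * 0.2) + 0.05 * rng.normal(size=300)
lags = 3
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

# Mahalanobis distance uses the inverse covariance of the lag vectors
cov_inv = np.linalg.inv(np.cov(X.T))

def knn_forecast(x_query, k=5):
    """Forecast one step ahead from the k nearest lag-vector neighbours."""
    d = X - x_query
    dist = np.einsum('ij,jk,ik->i', d, cov_inv, d)   # squared Mahalanobis
    idx = np.argsort(dist)[:k]
    # inverse-distance weighting of the k nearest neighbours' successors
    w = 1.0 / (np.sqrt(dist[idx]) + 1e-9)
    return float(np.sum(w * y[idx]) / np.sum(w))

pred = knn_forecast(series[-lags:])
```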
Comparing the index-flood and multiple-regression methods using L-moments
Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.
In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward’s clustering and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was performed using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of the factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward’s clustering approach. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust among the five candidate distributions for all the proposed sub-regions of the study area; in general, the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve fitting (plotting position) method. In general, the index-flood method gives more reliable estimations for various flood magnitudes of different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin
International Nuclear Information System (INIS)
Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao
2013-01-01
Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► A paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which load demand series are predicted by employing information from previous days similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal item and the trend item separately would improve forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed to forecast the seasonal and trend items, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve accuracy, even though eleven different models are applied to the trend item forecasting. This superior performance of the separate forecasting technique is further confirmed by paired-sample T tests
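The separate-forecasting idea can be sketched as follows: a simple period-averaging step (a stand-in for the seasonal exponential adjustment method) extracts the seasonal item, a linear regression is fitted to the trend item, and the two are recombined for a 1-week-ahead forecast. The weekly period and the synthetic series are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy daily load with weekly seasonality plus a linear trend (assumed form)
days = np.arange(140)
load = 100 + 0.3 * days + 10 * np.sin(2 * np.pi * days / 7) \
       + rng.normal(size=140)

# Step 1: seasonal indices by period averaging (a simple stand-in for SEAM)
period = 7
seasonal = np.array([load[i::period].mean() for i in range(period)])
seasonal -= seasonal.mean()                    # centre the seasonal item
deseasonal = load - seasonal[days % period]

# Step 2: regression on the trend item (linear model; the paper tries eleven)
a, b = np.polyfit(days, deseasonal, 1)

# Step 3: recombine trend and seasonal items for a 1-week-ahead forecast
future = np.arange(140, 147)
forecast = a * future + b + seasonal[future % period]
```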
Directory of Open Access Journals (Sweden)
Bangyong Sun
2014-01-01
Full Text Available The polynomial regression method is employed to calculate the relationship between device color space and CIE color space for color characterization, and the performance of different expressions with specific parameters is evaluated. First, the polynomial equation for color conversion is established and the computation of the polynomial coefficients is analysed. Then, different forms of polynomial equations are used to calculate the CIE color values of RGB and CMYK devices, and the corresponding color errors are compared. Finally, an optimal polynomial expression is obtained by analysing several related parameters of the color conversion, including the number of polynomial terms, the degree of the polynomial, the selection of CIE color spaces, and the linearization.
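A minimal sketch of polynomial-regression color characterization: a second-order polynomial with 11 terms maps device RGB to CIE XYZ by least squares. The synthetic training data, the assumed gamma-2.2 device model, and the particular term set are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Training patches: device RGB and "measured" CIE XYZ (synthetic here; a
# real characterization would use a printed chart and a colorimeter).
rgb = rng.random((100, 3))
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
xyz = (rgb ** 2.2) @ M.T          # assumed device model with gamma 2.2

def poly_terms(c):
    """Second-order polynomial expansion of one RGB triplet (11 terms)."""
    r, g, b = c
    return np.array([1, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b, r*g*b])

A = np.array([poly_terms(c) for c in rgb])
coef, *_ = np.linalg.lstsq(A, xyz, rcond=None)   # shape (11, 3)

# Residual color error of the fitted conversion on the training patches
pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - xyz) ** 2)))
```

Varying the term set and degree, as the paper does, trades fitting accuracy against the number of training patches required.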
Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method
Asavaskulkiet, Krissada
2018-04-01
In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments producing high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.
Real-time prediction of respiratory motion based on local regression methods
International Nuclear Information System (INIS)
Ruan, D; Fessler, J A; Balter, J M
2007-01-01
Recent developments in modulation techniques enable conformal delivery of radiation doses to small, localized target volumes. One of the challenges in using these techniques is real-time tracking and prediction of target motion, which is necessary to accommodate system latencies. For image-guided-radiotherapy systems, it is also desirable to minimize sampling rates to reduce imaging dose. This study focuses on predicting respiratory motion, which can significantly affect lung tumours. Predicting respiratory motion in real time is challenging, due to the complexity of breathing patterns and the many sources of variability. We propose a prediction method based on local regression. There are three major ingredients of this approach: (1) forming an augmented state space to capture system dynamics, (2) local regression in the augmented space to train the predictor from previous observation data using the semi-periodicity of respiratory motion, and (3) local weighting adjustment to incorporate fading temporal correlations. To evaluate prediction accuracy, we computed the root mean square error between predicted tumour motion and its observed location for ten patients. For comparison, we also applied commonly used predictive methods, namely linear prediction, neural networks and Kalman filtering, to the same data. The proposed method reduced the prediction error for all imaging rates and latency lengths, particularly for long prediction lengths
Methods for fabrication of flexible hybrid electronics
Street, Robert A.; Mei, Ping; Krusor, Brent; Ready, Steve E.; Zhang, Yong; Schwartz, David E.; Pierre, Adrien; Doris, Sean E.; Russo, Beverly; Kor, Siv; Veres, Janos
2017-08-01
Printed and flexible hybrid electronics is an emerging technology with potential applications in smart labels, wearable electronics, soft robotics, and prosthetics. Printed solution-based materials are compatible with plastic film substrates that are flexible, soft, and stretchable, thus enabling conformal integration with non-planar objects. In addition, manufacturing by printing is scalable to large areas and is amenable to low-cost sheet-fed and roll-to-roll processes. Flexible hybrid electronics (FHE) systems include display and sensory components to interface with users and environments. On the system level, devices also require electronic circuits for power, memory, signal conditioning, and communications. Those electronic components can be integrated onto a flexible substrate either by assembly or by printing. PARC has developed systems and processes for realizing both approaches. This talk presents fabrication methods with an emphasis on techniques recently developed for the assembly of off-the-shelf chips. A few examples of systems fabricated with this approach are also described.
Local regression type methods applied to the study of geophysics and high frequency financial data
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high frequency data, our models estimate the curve of best fit where data are dependent on time.
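A self-contained sketch of the Lowess idea used here: at each point, a weighted linear fit is computed over the nearest neighbours using the tricube kernel. The synthetic data and the smoothing fraction are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def lowess(x, y, frac=0.25):
    """Locally weighted linear regression (Lowess) with tricube weights."""
    n = len(x)
    k = max(2, int(frac * n))          # neighbourhood size
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]
        dmax = d[idx].max()
        w = (1 - (d[idx] / dmax) ** 3) ** 3        # tricube kernel
        W = np.diag(w)
        A = np.column_stack([np.ones(k), x[idx]])
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

x = np.sort(rng.uniform(0, 10, 120))
y = np.sin(x) + 0.2 * rng.normal(size=120)
smooth = lowess(x, y)
```

Loess differs mainly in using local polynomials of higher degree; the same loop structure applies with more columns in `A`.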
Geographically weighted regression based methods for merging satellite and gauge precipitation
Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo
2018-03-01
Real-time precipitation data with high spatiotemporal resolutions are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale coarser satellite precipitation to a finer resolution (PS); (2) the (mixed) geographically weighted regression methods coupled with a weighting function are then used to estimate biases of PS as functions of gauge observations (PO) and PS; and (3) biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, a combination of two geographically weighted regression methods and four weighting functions, are developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. The geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of the MGWR and BI-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but also is easy to implement, which is valuable for hydrological modeling and other applications.
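The second step of the framework above, local regression of satellite bias on geographical covariates with a distance-decay weighting function, can be sketched as follows. A single covariate, a Gaussian weighting function, and synthetic gauge data are assumptions here; the paper uses several covariates and four weighting functions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Gauge locations, one geographical covariate (elevation), and a bias
# signal whose coefficient drifts in space (synthetic stand-ins).
n = 200
coords = rng.uniform(0, 100, (n, 2))
elev = rng.uniform(0, 2000, n)
beta_local = 0.002 + 0.00001 * coords[:, 0]      # spatially varying slope
bias = beta_local * elev + rng.normal(scale=0.2, size=n)

def gwr_fit(site, bandwidth=25.0):
    """Local regression of bias on elevation at one site (GWR step)."""
    d = np.hypot(coords[:, 0] - site[0], coords[:, 1] - site[1])
    w = np.exp(-(d / bandwidth) ** 2)            # Gaussian weighting function
    A = np.column_stack([np.ones(n), elev])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ bias)
    return beta                                   # local intercept and slope

b0, b1 = gwr_fit((50.0, 50.0))
```

The estimated local bias would then be subtracted from the downscaled satellite field at each grid cell to produce the merged product.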
Numerical methods in electron magnetic resonance
International Nuclear Information System (INIS)
Soernes, A.R.
1998-01-01
The focal point of the thesis is the development and use of numerical methods in the analysis, simulation and interpretation of Electron Magnetic Resonance experiments on free radicals in solids to uncover the structure, the dynamics and the environment of the system
Directory of Open Access Journals (Sweden)
Adi Syahputra
2014-03-01
Full Text Available Quantitative structure-activity relationships (QSAR) for 21 insecticides of phthalamides containing hydrazone (PCH) were studied using multiple linear regression (MLR), principal component regression (PCR) and artificial neural networks (ANN). Five descriptors were included in the model for the MLR and ANN analyses, and five latent variables obtained from principal component analysis (PCA) were used in the PCR analysis. Calculation of descriptors was performed using the semi-empirical PM6 method. ANN analysis was found to be a superior statistical technique compared to the other methods and gave a good correlation between descriptors and activity (r2 = 0.84). Based on the obtained model, we have successfully designed some new insecticides with higher predicted activity than those of previously synthesized compounds, e.g. 2-(decalinecarbamoyl)-5-chloro-N’-((5-methylthiophen-2-yl)methylene)benzohydrazide, 2-(decalinecarbamoyl)-5-chloro-N’-((thiophen-2-yl)methylene)benzohydrazide and 2-(decalinecarbamoyl)-N’-(4-fluorobenzylidene)-5-chlorobenzohydrazide, with predicted log LC50 values of 1.640, 1.672, and 1.769, respectively.
Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three
Steinhardt, Charles L.; Jermyn, Adam S.
2018-02-01
Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
Energy Technology Data Exchange (ETDEWEB)
Tondu, Thomas; Belhaj, Mohamed; Inguimbert, Virginie [Onera, DESP, 2 Avenue Edouard Belin, 31400 Toulouse (France); Fondation STAE, 4 allee Emile Monso, BP 84234-31432, Toulouse Cedex 4 (France)]
2010-09-15
Secondary electron emission yield of gold under electron impact at normal incidence below 50 eV was investigated by the classical collector method and by the Kelvin probe method. The authors show that biasing a collector to ensure secondary electron collection while keeping the target grounded can lead to primary electron beam perturbations. Thus reliable secondary electron emission yield at low primary electron energy cannot be obtained with a biased collector. The authors present two collector-free methods based on current measurement and on electron pulse surface potential buildup (Kelvin probe method). These methods are consistent, but at very low energy, measurements become sensitive to the earth magnetic field (below 10 eV). For gold, the authors can extrapolate total emission yield at 0 eV to 0.5, while a total electron emission yield of 1 is obtained at 40±1 eV.
Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs
Directory of Open Access Journals (Sweden)
Faqir Muhammad
2007-01-01
Full Text Available In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been assessed using the bootstrap and the jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage primary sampling units (PSUs). The sample PSUs are selected with probability proportional to size. Secondary sampling units (SSUs), i.e., households, are selected by systematic sampling with a random start. HIES used a single study variable. We have compared the HIES technique with some other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). The jackknife and bootstrap were used for variance estimation. Simple random sampling with sample sizes of 462 to 561 gave moderate variances both by jackknife and bootstrap. Applying systematic sampling, we obtained a moderate variance with a sample size of 467. In the jackknife with systematic sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator became greater than that of the regression estimator. The most efficient design turned out to be ranked set sampling: with jackknife and bootstrap, it gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. The multi-stage sampling applied by HIES gave large variances, especially when used with a single study variable.
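The two estimators compared in the study can be sketched on a synthetic household population. Income proportional to household size is an assumed model, and the population and sample sizes are illustrative:

```python
import random

random.seed(7)

# Synthetic population: household size x and income y (assumed correlated)
N = 2000
pop = []
for _ in range(N):
    x = random.randint(1, 10)
    y = 5000 * x + random.gauss(0, 4000)
    pop.append((x, y))

X_total = sum(x for x, _ in pop)
n = 200
sample = random.sample(pop, n)
xbar = sum(x for x, _ in sample) / n
ybar = sum(y for _, y in sample) / n

# Ratio estimator of the population mean of y (uses the known x total)
X_mean = X_total / N
ratio_est = (ybar / xbar) * X_mean

# Regression estimator: adjust ybar by the fitted slope b
sxy = sum((x - xbar) * (y - ybar) for x, y in sample)
sxx = sum((x - xbar) ** 2 for x, _ in sample)
b = sxy / sxx
reg_est = ybar + b * (X_mean - xbar)
```

Jackknife or bootstrap variance estimation would rerun these formulas on leave-one-out or resampled versions of `sample` and take the spread of the resulting estimates.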
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
Applications of Monte Carlo method to nonlinear regression of rheological data
Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo
2018-02-01
In rheological studies, it is often necessary to determine the parameters of rheological models from experimental data. Since both the rheological data and the values of the parameters vary on a logarithmic scale and the number of parameters is quite large, conventional methods of nonlinear regression such as the Levenberg-Marquardt (LM) method are usually ineffective. Gradient-based methods such as LM are apt to be caught in local minima, which give unphysical values of the parameters whenever the initial guess is far from the global optimum. Although this problem could be solved by simulated annealing (SA), this Monte Carlo (MC) method needs adjustable parameters which must be determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective values of the parameters of most complicated rheological models, such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and zero-shear viscosity as a function of concentration and molecular weight.
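A minimal sketch of the simplified-SA idea for the Carreau-Yasuda model: a multiplicative (log-scale) random walk over the parameters with an annealed acceptance rule, minimizing log-scale residuals. The step size, cooling schedule, synthetic data, and the choice to set the infinite-shear viscosity to zero are all assumptions, not the paper's settings:

```python
import math
import random

random.seed(8)

def carreau_yasuda(gdot, eta0, lam, n, a):
    """Carreau-Yasuda steady shear viscosity (eta_inf taken as 0 here)."""
    return eta0 * (1.0 + (lam * gdot) ** a) ** ((n - 1.0) / a)

# Synthetic data from known parameters at log-spaced shear rates
true = dict(eta0=1e4, lam=2.0, n=0.4, a=0.8)
gdots = [10 ** (k / 4.0 - 2) for k in range(25)]
data = [(g, carreau_yasuda(g, **true)) for g in gdots]

def cost(p):
    # residuals in log scale, since data and parameters span decades
    return sum((math.log(carreau_yasuda(g, *p)) - math.log(eta)) ** 2
               for g, eta in data)

# Simplified SA: log-scale random walk with geometric cooling
p = [1e3, 1.0, 0.5, 1.0]        # deliberately poor initial guess
c = cost(p)
T = 1.0
for step in range(20000):
    q = [v * math.exp(random.gauss(0, 0.05)) for v in p]
    cq = cost(q)
    if cq < c or random.random() < math.exp(-(cq - c) / T):
        p, c = q, cq
    T *= 0.9995
```

The multiplicative proposal keeps every parameter positive and explores each of them on its natural logarithmic scale, which is the point the abstract makes against plain gradient-based fitting.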
Logistic Regression and Path Analysis Method to Analyze Factors influencing Students’ Achievement
Noeryanti, N.; Suryowati, K.; Setyawan, Y.; Aulia, R. R.
2018-04-01
Students' academic achievement cannot be separated from the influence of two groups of factors, namely internal and external factors. The internal factors of the student consist of intelligence (X1), health (X2), interest (X3), and motivation (X4). The external factors consist of family environment (X5), school environment (X6), and society environment (X7). The objects of this research are eighth grade students of the school year 2016/2017 at SMPN 1 Jiwan Madiun, sampled by using simple random sampling. Primary data were obtained by distributing questionnaires. The method used in this study is binary logistic regression analysis, which aims to identify the internal and external factors that affect students’ achievement and the direction of their effects. Path analysis was used to determine the factors that influence students’ achievement directly, indirectly or totally. Based on the results of binary logistic regression, the variables that affect students’ achievement are interest and motivation. Based on the results of path analysis, the factors that have a direct impact on students’ achievement are students’ interest (59%) and students’ motivation (27%), while the factors that have indirect influences on students’ achievement are family environment (97%) and school environment (37%).
Wulandari, S. P.; Salamah, M.; Rositawati, A. F. D.
2018-04-01
Food security is the condition in which food fulfilment is managed well at every level, from the country down to the individual. Indonesia is one of the countries committed to making food security a main priority. However, food needs are often met as a matter of routine, without regard for nutrient standards or the health condition of family members; the fulfilment of food needs should therefore also take into account diseases suffered by family members, one of which is pulmonary tuberculosis. For these reasons, this research was conducted to identify the factors that influence the food security status of households with pulmonary tuberculosis sufferers in the coastal area of Surabaya, using the binary logistic regression method. The analysis using binary logistic regression shows that the variables wife's latest education, house density and house ventilation area significantly affect the food security status of households with pulmonary tuberculosis sufferers in the coastal area of Surabaya. Where the wife's education level is university/equivalent, the house density is eligible (8 m2/person) and the house ventilation area is 10% of the floor area, the household has a probability of 0.911089 of being food secure, while the probability of being food insecure is 0.088911. The fitted model of household food security status for households with pulmonary tuberculosis sufferers in the coastal area of Surabaya is adequate, and the overall percentage of correct classifications is 71.8%.
International Nuclear Information System (INIS)
Gupta, N
2008-01-01
3013 containers are designed in accordance with the DOE-STD-3013-2004. These containers are qualified to store plutonium (Pu) bearing materials such as PuO2 for 50 years. DOT shipping packages such as the 9975 are used to store the 3013 containers in the K-Area Material Storage (KAMS) facility at Savannah River Site (SRS). DOE-STD-3013-2004 requires that a comprehensive surveillance program be set up to ensure that the 3013 container design parameters are not violated during the long term storage. To ensure structural integrity of the 3013 containers, thermal analyses using finite element models were performed to predict the contents and component temperatures for different but well-defined parameters such as storage ambient temperature, PuO2 density, fill heights, weights, and thermal loading. Interpolation is normally used to calculate temperatures if the actual parameter values are different from the analyzed values. A statistical analysis technique using regression methods is proposed to develop simple polynomial relations to predict temperatures for the actual parameter values found in the containers. The analysis shows that regression analysis is a powerful tool to develop simple relations to assess component temperatures
Multi-step polynomial regression method to model and forecast malaria incidence.
Directory of Open Access Journals (Sweden)
Chandrajit Chatterjee
Full Text Available Malaria is one of the most severe problems faced by the world even today. Understanding the causative factors such as age, sex, social factors and environmental variability, as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a shorter time series (deaths due to Plasmodium vivax) of one year; and spatial data (zonal distribution of P. vivax deaths) for the city, along with the climatic factors, population and previous incidence of the disease. We performed variable selection by a simple correlation study, identified the initial relationships between variables through non-linear curve fitting, and used multi-step methods for the induction of variables in the non-linear regression analysis, along with Gauss-Markov models and ANOVA for testing the predictions and validity and for constructing the confidence intervals. The results demonstrate the applicability of our method to different types of data and the autoregressive nature of the forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, disease incidence at the zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city
Improved methods for high resolution electron microscopy
Energy Technology Data Exchange (ETDEWEB)
Taylor, J.R.
1987-04-01
Existing methods of making support films for high resolution transmission electron microscopy are investigated and novel methods are developed. Existing methods of fabricating fenestrated, metal reinforced specimen supports (microgrids) are evaluated for their potential to reduce beam induced movement of monolamellar crystals of C44H90 paraffin supported on thin carbon films. Improved methods of producing hydrophobic carbon films by vacuum evaporation, and improved methods of depositing well ordered monolamellar paraffin crystals on carbon films are developed. A novel technique for vacuum evaporation of metals is described which is used to reinforce microgrids. A technique is also developed to bond thin carbon films to microgrids with a polymer bonding agent. Unique biochemical methods are described to accomplish site specific covalent modification of membrane proteins. Protocols are given which covalently convert the carboxy terminus of papain cleaved bacteriorhodopsin to a free thiol. 53 refs., 19 figs., 1 tab.
A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections
Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.
2014-01-01
A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations of the confidence intervals on the model weight and center of gravity coordinates and two different error analyses of the model weight prediction are also discussed in the appendices of the paper.
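The least squares step for the model weight can be sketched as follows. This is a simplified illustration only: the angles, the assumed weight, and the reduction of the balance readings to two force components are invented here, and the paper's full formulation also fits the center of gravity coordinates:

```python
import numpy as np

# "Wind-off" tare readings at several pitch angles theta, where the balance
# normal and axial forces resolve the model weight W as:
#   N = W*cos(theta),  A = W*sin(theta)
theta = np.deg2rad(np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0]))
W_true = 250.0                                    # assumed model weight, lbf
rng = np.random.default_rng(1)
N = W_true * np.cos(theta) + rng.normal(0, 0.2, theta.size)
A = W_true * np.sin(theta) + rng.normal(0, 0.2, theta.size)

# Stack both sets of equations and solve for the single unknown W
X = np.concatenate([np.cos(theta), np.sin(theta)])[:, None]
y = np.concatenate([N, A])
W_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The estimated weight `W_hat` would then feed the second fit (center of gravity coordinates), as the abstract describes.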
Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won
2014-08-01
As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR- or ensemble-genotyping-based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to the filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
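A logistic-regression variant filter of the general kind described above can be sketched as follows. The feature set (call quality, read depth, allele balance), their distributions, and the 0.5 probability cutoff are all illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
true_call = rng.integers(0, 2, n)                  # 1 = real variant, 0 = artifact

# Synthetic features: real variants tend to have higher quality, depth,
# and a more balanced allele fraction than artifacts.
qual = np.where(true_call == 1, rng.normal(40, 8, n), rng.normal(20, 8, n))
depth = np.where(true_call == 1, rng.normal(30, 6, n), rng.normal(15, 6, n))
ab = np.where(true_call == 1, rng.normal(0.5, 0.1, n), rng.normal(0.3, 0.1, n))
X = np.column_stack([qual, depth, ab])

clf = LogisticRegression().fit(X, true_call)
proba = clf.predict_proba(X)[:, 1]                 # P(true variant | features)
keep = proba > 0.5                                 # filtered call set
```

In practice the model would be trained on a validated truth set (as the abstract's NA12878 calls) and applied to new samples.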
A dynamic particle filter-support vector regression method for reliability prediction
International Nuclear Information System (INIS)
Wei, Zhao; Tao, Tao; ZhuoShu, Ding; Zio, Enrico
2013-01-01
Support vector regression (SVR) has been applied to time series prediction and some works have demonstrated the feasibility of its use to forecast system reliability. For accurate reliability forecasting, the selection of SVR parameters is important. Existing research on SVR parameter selection divides the example dataset into training and test subsets and tunes the parameters on the training data. However, these fixed parameters can lead to poor prediction capability if the data of the test subset differ significantly from the training data. In contrast, the novel method proposed in this paper uses particle filtering to estimate the SVR model parameters from the whole measurement sequence up to the last observation. By treating the SVR training model as the observation equation of a particle filter, our method allows the SVR model parameters to be updated dynamically when a new observation arrives. Because the parameters adapt to dynamic data patterns, the new PF–SVR method has superior prediction performance over standard SVR. Four application results show that PF–SVR is more robust than SVR to a decrease in the number of training data and to changes in the initial SVR parameter values. Also, even if the test data contain trends that differ from those in the training data, the method can capture the changes, correct the SVR parameters and obtain good predictions. -- Highlights: •A dynamic PF–SVR method is proposed to predict the system reliability. •The method can adjust the SVR parameters according to the change of data. •The method is robust to the size of training data and initial parameter values. •Some cases based on both artificial and real data are studied. •PF–SVR shows superior prediction performance over standard SVR
Statistical learning method in regression analysis of simulated positron spectral data
International Nuclear Information System (INIS)
Avdic, S. Dz.
2005-01-01
Positron lifetime spectroscopy is a non-destructive tool for detection of radiation-induced defects in nuclear reactor materials. This work concerns the applicability of the support vector machines method for the input data compression in the neural network analysis of positron lifetime spectra. It has been demonstrated that the SVM technique can be successfully applied to regression analysis of positron spectra. A substantial data compression, to about 50% and 8% of the whole training set with two and three spectral components respectively, has been achieved, together with a high accuracy of the spectra approximation. However, some parameters in the SVM approach such as the insensitivity zone ε and the penalty parameter C have to be chosen carefully to obtain a good performance. (author)
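The sensitivity to the insensitivity zone ε noted above can be illustrated directly. The decaying "spectrum" below is synthetic, not positron lifetime data, and the parameter values are arbitrary choices for contrast:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(0, 4, 200))[:, None]
y = np.exp(-X.ravel()) + rng.normal(0, 0.02, 200)   # decaying curve + noise

# Same penalty C, two very different epsilon-insensitive tubes
tight = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
loose = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X, y)

mse_tight = np.mean((tight.predict(X) - y) ** 2)
mse_loose = np.mean((loose.predict(X) - y) ** 2)
```

With ε wider than most of the signal's range, nearly all points fall inside the tube and the fit degenerates, which is why ε (and C) must be chosen carefully.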
EDM 1.0: electron direct methods.
Kilaas, R; Marks, L D; Own, C S
2005-02-01
A computer program designed to provide a number of quantitative analysis tools for high-resolution imaging and electron diffraction data is described. The program includes basic image manipulation, both real space and reciprocal space image processing, Wiener filtering, symmetry averaging, methods for quantification of electron diffraction patterns, and two-dimensional direct methods. The program consists of a number of sub-programs written in a combination of C++, C and Fortran. It can be downloaded either as GNU source code or as binaries and has been compiled and verified on a wide range of platforms, both Unix-based and PC. Elements of the design philosophy as well as possible future extensions are described.
The crux of the method: assumptions in ordinary least squares and logistic regression.
Long, Rebecca G
2008-10-01
Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
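The core point of the comparison above can be shown in a few lines: fitting OLS to a 0/1 outcome (a linear probability model) can produce fitted "probabilities" outside [0, 1], while logistic regression cannot. The data here are synthetic and purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(4)
x = rng.normal(0, 2, 500)[:, None]
p = 1 / (1 + np.exp(-2 * x.ravel()))       # true logistic probability
y = rng.binomial(1, p)                     # binary dependent variable

# OLS on the binary outcome (linear probability model)
ols_pred = LinearRegression().fit(x, y).predict(x)

# Logistic regression: predictions are bounded probabilities by construction
logit_pred = LogisticRegression().fit(x, y).predict_proba(x)[:, 1]
```

The unbounded OLS fit also violates the homoscedasticity and normality-of-errors assumptions for a binary outcome, which is the crux of the article's argument.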
Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V
2012-01-01
In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
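A toy version of the comparison above, contrasting an ensemble method with conventional logistic regression by out-of-sample AUC. The synthetic "mortality" data with a smooth logistic true risk are an assumption made here to mirror the paper's setting only qualitatively:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 3000
X = rng.normal(0, 1, (n, 5))                       # 5 synthetic risk factors
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]              # smooth (linear) true risk
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # 30-day "mortality"

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
auc_lr = roc_auc_score(yte, LogisticRegression().fit(Xtr, ytr).predict_proba(Xte)[:, 1])
auc_rf = roc_auc_score(yte, RandomForestClassifier(random_state=0).fit(Xtr, ytr).predict_proba(Xte)[:, 1])
```

When the true risk surface is smooth, a well-specified regression model competes closely with (or beats) tree ensembles, consistent with the abstract's conclusion.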
Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru
2010-08-01
The sole use of pyridoxine hydrochloride during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage form in tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising binary mixtures of PYR and ISO, consisting of 20 different combinations, was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data set (concentration data matrix) and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and the precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were found to be between 100.0 and 100.7%. Satisfactory results were obtained by applying the PLS and PCR methods to both artificial and commercial samples. These results strongly encourage the use of both methods for the quality control and routine analysis of marketed tablets containing the PYR and ISO drugs. Copyright © 2010 John Wiley & Sons, Ltd.
Innovative electron transport methods in EGS5
International Nuclear Information System (INIS)
Bielajew, A.F.; Wilderman, S.J.
2000-01-01
The initial formulation of a Monte Carlo scheme for the transport of high-energy (≳100 keV) electrons was established by Berger in 1963. Calling his method the 'condensed history theory', Berger combined the theoretical results of the previous generation of research into developing approximate solutions of the Boltzmann transport equation with numerical algorithms for exploiting the power of computers to permit iterative, piece-wise solution of the transport equation in a computationally intensive but much less approximate fashion. The methods devised by Berger, with comparatively little modification, provide the foundation of all present-day Monte Carlo electron transport simulation algorithms. Only in the last 15 years, beginning with the development and publication of the PRESTA algorithm, has there been a significant revisitation of the problem of simulating electron transport within the condensed history framework. Research in this area is ongoing, highly active, and far from complete. It presents an enormous challenge, demanding derivation of new analytical transport solutions based on underlying fundamental interaction mechanisms, intuitive insight in the development of computer algorithms, and state-of-the-art computer science skills in order to permit deployment of these techniques in an efficient manner. The EGS5 project, a modern ground-up rewrite of the EGS4 code, is now in the design phase. EGS5 will take modern photon and electron transport algorithms and deploy them in an easy-to-maintain, modern computer language, ANSI-standard C++. Moreover, the well-known difficulties of applying EGS4 to practical geometries (geometry code development, tally routine design) should be made easier and more intuitive through the use of a visual user interface being designed by Quantum Research, Inc., work that is presented elsewhere in this conference. This report commences with a historical review of electron transport models culminating with the proposal of a
International Nuclear Information System (INIS)
Tsushima, Motoo; Fujii, Shigeki; Yutani, Chikao; Yamamoto, Akira; Naitoh, Hiroaki.
1990-01-01
We evaluated the wall thickening and stenosis rate (ASI), the calcification rate (ACI), and the wall thickening and calcification stenosis rate (SCI) of the lower abdominal aorta calculated by the 12-sector method from simple or enhanced computed tomography. The intra-observer variation of the calculation of ASI was 5.7% and that of ACI was 2.4%. In 9 patients who underwent an autopsy examination, ACI was significantly correlated with the ratio of the calcification dimension to the whole objective area of the abdominal aorta (r=0.856, p<0.01). However, there were no correlations between ASI and the surface involvement or the atherosclerotic index obtained by the point-counting method of the autopsy materials. In the analysis of 40 patients with atherosclerotic vascular diseases, ASI and ACI were also highly correlated with the percentage volume of the arterial wall in relation to the whole volume of the observed artery (r=0.852, p<0.0001) and with the percentage calcification volume (r=0.913, p<0.0001) calculated by the computed method, respectively. The percentage of atherosclerotic vascular diseases increased in the group with both high ASI (over 10%) and high ACI (over 20%). We used SCI as a reliable index when the progression and regression of atherosclerosis were considered. Among patients with hypercholesterolemia, consisting of 15 with familial hypercholesterolemia (FH) and 6 non-FH patients, the change of SCI (d-SCI) was significantly correlated with the change of total cholesterol concentration (d-TC) after the treatment (r=0.466, p<0.05), and the change of the right Achilles' tendon thickening (d-ATT) was also correlated with d-TC (r=0.634, p<0.005). However, no correlation between d-SCI and d-ATT was observed. In conclusion, CT indices of atherosclerosis were useful as a noninvasive quantitative diagnostic method and we were able to use them to assess the progression and regression of atherosclerosis. (author)
Reflexion on linear regression trip production modelling method for ensuring good model quality
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport Modelling is important. For certain cases, the conventional model still has to be used, in which having a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are having a sample capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood and used in trip production modelling. Therefore, it is necessary to investigate the Trip Production Modelling practice in Indonesia and to formulate a better modelling method for ensuring the Model Quality. This research result is presented as follows. Statistics provides a method to calculate the span of prediction values at a certain confidence level for linear regression, called the Confidence Interval of Predicted Value. The common modelling practice uses R2 as the principal quality measure, while the sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that the sample composition can significantly change the model. Hence, a good R2 value, in fact, does not always mean good model quality. These findings lead to three basic ideas for ensuring good model quality, i.e. reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having a good R2 value and a good Confidence Interval of Predicted Value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests needed. A good sampling method must incorporate random, well distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
Directory of Open Access Journals (Sweden)
Nina L. Timofeeva
2014-01-01
Full Text Available The article presents the methodological and technical bases for the creation of regression models that adequately reflect reality. The focus is on methods of removing residual autocorrelation in models. Algorithms eliminating heteroscedasticity and autocorrelation of the regression model residuals are given: the reweighted least squares method and the Cochrane-Orcutt method. A "pure" regression model is built, as well as a standardized form of the regression equation, which makes it possible to compare the effects on the dependent variable of different explanatory variables when the latter are expressed in different units. A scheme of techniques for mitigating heteroscedasticity and autocorrelation in the creation of regression models specific to the social and cultural sphere is developed.
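One Cochrane-Orcutt iteration for AR(1) residual autocorrelation can be sketched as below: fit OLS, estimate the autocorrelation ρ from lagged residuals, then refit on quasi-differenced data. The data-generating values (ρ = 0.7, slope = 2.0) are assumptions for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.normal(0, 1, n)
e = np.zeros(n)
for t in range(1, n):                              # AR(1) errors with rho = 0.7
    e[t] = 0.7 * e[t - 1] + rng.normal(0, 0.3)
y = 1.0 + 2.0 * x + e

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Step 1: ordinary OLS fit and its residuals
X = np.column_stack([np.ones(n), x])
b0 = ols(X, y)
resid = y - X @ b0

# Step 2: estimate rho by regressing residuals on their lag
rho = ols(resid[:-1, None], resid[1:])[0]

# Step 3: quasi-difference and refit; errors are now ~uncorrelated
y_star = y[1:] - rho * y[:-1]
X_star = np.column_stack([np.ones(n - 1) * (1 - rho), x[1:] - rho * x[:-1]])
b1 = ols(X_star, y_star)                           # [intercept, slope]
```

In practice steps 2-3 are iterated until ρ stabilizes; one pass already recovers the slope with correct (non-understated) standard errors.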
Methods and apparatus for cooling electronics
Hall, Shawn Anthony; Kopcsay, Gerard Vincent
2014-12-02
Methods and apparatus are provided for choosing an energy-efficient coolant temperature for electronics by considering the temperature dependence of the electronics' power dissipation. This dependence is explicitly considered in selecting the coolant temperature T_0 that is sent to the equipment. To minimize power consumption P_Total for the entire system, where P_Total = P_0 + P_Cool is the sum of the electronic equipment's power consumption P_0 plus the cooling equipment's power consumption P_Cool, P_Total is obtained experimentally, by measuring P_0 and P_Cool, as a function of three parameters: coolant temperature T_0; weather-related temperature T_3 that affects the performance of free-cooling equipment; and computational state C of the electronic equipment, which affects the temperature dependence of its power consumption. This experiment provides, for each possible combination of T_3 and C, the value T_0* of T_0 that minimizes P_Total. During operation, for any combination of T_3 and C that occurs, the corresponding optimal coolant temperature T_0* is selected, and the cooling equipment is commanded to produce it.
Scanning probe methods applied to molecular electronics
Energy Technology Data Exchange (ETDEWEB)
Pavlicek, Niko
2013-08-01
Scanning probe methods on insulating films offer a rich toolbox to study electronic, structural and spin properties of individual molecules. This work discusses three issues in the field of molecular and organic electronics. An STM head to be operated in high magnetic fields has been designed and built up. The STM head is very compact and rigid relying on a robust coarse approach mechanism. This will facilitate investigations of the spin properties of individual molecules in the future. Combined STM/AFM studies revealed a reversible molecular switch based on two stable configurations of DBTH molecules on ultrathin NaCl films. AFM experiments visualize the molecular structure in both states. Our experiments allowed to unambiguously determine the pathway of the switch. Finally, tunneling into and out of the frontier molecular orbitals of pentacene molecules has been investigated on different insulating films. These experiments show that the local symmetry of initial and final electron wave function are decisive for the ratio between elastic and vibration-assisted tunneling. The results can be generalized to electron transport in organic materials.
2017-12-01
Fig. 2 Simulation method; the process for one iteration of the simulation. It was repeated 250 times per combination of HR and FAR. Simulations show that this regression method results in an unbiased and accurate estimate of target detection performance.
Isa, Zakiah Mohd; Tawfiq, Omar Farouq; Noor, Norliza Mohd; Shamsudheen, Mohd Iqbal; Rijal, Omar Mohd
2010-03-01
In rehabilitating edentulous patients, selecting appropriately sized teeth in the absence of preextraction records is problematic. The purpose of this study was to investigate the relationships between some facial dimensions and widths of the maxillary anterior teeth to potentially provide a guide for tooth selection. Sixty fully dentate Malaysian adults (18-36 years) representing 2 ethnic groups (Malay and Chinese), with well aligned maxillary anterior teeth and minimal attrition, participated in this study. Standardized digital images of the face, viewed frontally, were recorded. Using image analyzing software, the images were used to determine the interpupillary distance (IPD), inner canthal distance (ICD), and interalar width (IA). Widths of the 6 maxillary anterior teeth were measured directly from casts of the subjects using digital calipers. Regression analyses were conducted to measure the strength of the associations between the variables (alpha=.10). The means (standard deviations) of IPD, IA, and ICD of the subjects were 62.28 (2.47), 39.36 (3.12), and 34.36 (2.15) mm, respectively. The mesiodistal diameters of the maxillary central incisors, lateral incisors, and canines were 8.54 (0.50), 7.09 (0.48), and 7.94 (0.40) mm, respectively. The width of the central incisors was highly correlated to the IPD (r=0.99), while the widths of the lateral incisors and canines were highly correlated to a combination of IPD and IA (r=0.99 and 0.94, respectively). Using regression methods, the widths of the anterior teeth within the population tested may be predicted by a combination of the facial dimensions studied. (c) 2010 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Delwiche, Stephen R; Reeves, James B
2010-01-01
In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an over-reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R(2)) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocess functions and various
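The Savitzky-Golay pretreatments the study evaluates can be applied in a few lines. The synthetic "NIR spectrum" (a linear baseline plus one Gaussian band) and the 11-point quadratic window are illustrative choices, not the study's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(9)
wl = np.linspace(1100, 2500, 700)                  # wavelength grid, nm
spectrum = 0.2 + 1e-4 * (wl - 1100) + np.exp(-((wl - 1900) / 40) ** 2)
noisy = spectrum + rng.normal(0, 0.01, wl.size)    # add instrument noise

# Centrally symmetric quadratic convolution, as in the study's pretreatments
smoothed = savgol_filter(noisy, window_length=11, polyorder=2)
first_d = savgol_filter(noisy, 11, 2, deriv=1)     # removes constant offsets
second_d = savgol_filter(noisy, 11, 2, deriv=2)    # removes sloped baselines
```

Derivatives amplify high-frequency noise even as they remove baselines, which is one mechanism behind the over-fitting risk the abstract warns about.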
Whole-genome regression and prediction methods applied to plant and animal breeding
Los Campos, De G.; Hickey, J.M.; Pong-Wong, R.; Daetwyler, H.D.; Calus, M.P.L.
2013-01-01
Genomic-enabled prediction is becoming increasingly important in animal and plant breeding, and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of
Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method
Prahutama, Alan; Sudarno
2018-05-01
The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country's economic development. A high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a discrete dependent variable Y and independent variables X is the Poisson regression model. Regression models used for discrete dependent variables include, among others, Poisson regression, negative binomial regression and generalized Poisson regression. In this research, generalized Poisson regression modelling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable that gives the most influence on the infant mortality rate is average breastfeeding (X9).
Selecting minimum dataset soil variables using PLSR as a regressive multivariate method
Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.
2017-04-01
Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has been mainly used, however supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean centered and variance scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.
Lian, Yao; Ge, Meng; Pan, Xian-Ming
2014-12-19
B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server, EPMLR, has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/ .
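The evaluation scheme the abstract describes, multiple linear regression scored by 10-fold cross-validation, can be sketched as follows. The six numeric features stand in for sequence-derived propensity scores and are invented, as are their weights:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 500
X = rng.normal(0, 1, (n, 6))                       # hypothetical sequence features
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2, -0.1]) + rng.normal(0, 0.5, n)

# Multiple linear regression evaluated by 10-fold cross-validation
mlr = LinearRegression()
scores = cross_val_score(mlr, X, y, cv=10, scoring="r2")
```

Averaging the fold scores gives the out-of-sample performance estimate; EPMLR additionally repeats this over 300 sub-datasets to damp noise from the negative set.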
Standardless quantification methods in electron probe microanalysis
Energy Technology Data Exchange (ETDEWEB)
Trincavelli, Jorge, E-mail: trincavelli@famaf.unc.edu.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Limandri, Silvina, E-mail: s.limandri@conicet.gov.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Bonetto, Rita, E-mail: bonetto@quimica.unlp.edu.ar [Centro de Investigación y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Facultad de Ciencias Exactas, de la Universidad Nacional de La Plata, Calle 47 N° 257, 1900 La Plata (Argentina)
2014-11-01
The elemental composition of a solid sample can be determined by electron probe microanalysis with or without the use of standards. Standardless algorithms are considerably faster than the methods that require standards; they are useful when a suitable set of standards is not available or for rough samples, and they also help to solve the problem of current variation, for example, in equipment with cold field emission guns. Due to the significant advances in accuracy achieved during the last years, a product of the successive efforts made to improve the description of the generation, absorption and detection of X-rays, standardless methods have increasingly become an interesting option for the user. Nevertheless, up to now, algorithms that use standards are still more precise than standardless methods. It is important to remark that care must be taken with results provided by standardless methods that normalize the calculated concentration values to 100%, unless an estimate of the errors is reported. In this work, a comprehensive discussion of the key features of the main standardless quantification methods, as well as the level of accuracy achieved by them, is presented. - Highlights: • Standardless methods are a good alternative when no suitable standards are available. • Their accuracy reaches 10% for 95% of the analyses when traces are excluded. • Some of them are suitable for the analysis of rough samples.
31 CFR 203.10 - Electronic payment methods.
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Electronic payment methods. 203.10... TAX AND LOAN PROGRAM Electronic Federal Tax Payments § 203.10 Electronic payment methods. (a) General. Electronic payment methods for Federal tax payments available under this subpart include ACH debit entries...
Functional regression method for whole genome eQTL epistasis analysis with sequencing data.
Xu, Kelin; Jin, Li; Xiong, Momiao
2017-05-18
Epistasis plays an essential role in understanding regulation mechanisms and is an essential component of the genetic architecture of gene expression. However, interaction analysis of gene expression remains fundamentally unexplored due to great computational challenges and limited data availability. Owing to variation in splicing, transcription start sites, polyadenylation sites, post-transcriptional RNA editing across the entire gene, and transcription rates of the cells, RNA-seq measurements generate large expression variability and collectively create the observed position-level read count curves. A single number for measuring gene expression, as widely used in microarray-based expression analysis, is highly unlikely to account sufficiently for the large expression variation across the gene. Simultaneously analyzing epistatic architecture using RNA-seq and whole genome sequencing (WGS) data poses enormous challenges. We develop a nonlinear functional regression model (FRGM) with functional responses, where the position-level read counts within a gene are taken as a function of genomic position, and functional predictors, where genotype profiles are viewed as a function of genomic position, for epistasis analysis with RNA-seq data. Instead of testing the interaction of all possible pairwise SNPs, the FRGM takes a gene as the basic unit for epistasis analysis: it tests for the interaction of all possible pairs of genes and uses all accessible information to collectively test interactions between all possible pairs of SNPs within two genomic regions. By large-scale simulations, we demonstrate that the proposed FRGM for epistasis analysis can achieve the correct type 1 error and has higher power to detect interactions between genes than existing methods. The proposed methods are applied to the RNA-seq and WGS data from the 1000 Genomes Project. The numbers of pairs of significantly interacting genes after Bonferroni correction
Sanitation methods using high energy electron beams
International Nuclear Information System (INIS)
Levaillant, C.; Gallien, C.L.
1979-01-01
Short recycling of waste water and the use of liquid or dehydrated sludge as natural manure for agriculture or as an animal feed supplement is of great economic and ecological interest. It requires thorough biological and chemical disinfection. Ionizing radiation produced by radioactive elements or linear accelerators can be used to complement conventional methods in the treatment of liquid and solid waste. An experiment conducted with high-energy electron-beam linear accelerators is presented. Degradation of undesirable metabolites in water occurs at a dose of 50 kRad. Undesirable seeds present in sludge are destroyed with a 200 kRad dose. A 300 kRad dose is sufficient for parasitic and bacterial disinfection (DL 90). Destruction of polio virus (DL 90) is obtained at 400 kRad. Higher doses (1000 to 2000 kRad) mineralize toxic organic mercury, reduce some toxic chemical pollutants present in sludge and improve flocculation. (author)
Directory of Open Access Journals (Sweden)
Sergei Vladimirovich Varaksin
2017-06-01
Full Text Available Purpose. To construct a mathematical model of the dynamics of change in childbearing in the Altai region in 2000–2016, and to analyze the dynamics of changes in birth rates for several age categories of women of childbearing age. Methodology. An auxiliary element of the analysis is the construction of linear mathematical models of childbearing dynamics using the fuzzy linear regression method based on fuzzy numbers. Fuzzy linear regression is considered an alternative to standard statistical linear regression for short time series with an unknown distribution law. The parameters of the fuzzy linear and standard statistical regressions for the childbearing time series were determined using an algorithm implemented in the MatLab language. The method of fuzzy linear regression has not yet been used in sociological research. Results. Conclusions are drawn about socio-demographic changes in society, the high efficiency of the demographic policy of the leadership of the region and the country, and the applicability of the fuzzy linear regression method for sociological analysis.
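Fuzzy linear regression of the kind mentioned above is often formulated as a small linear program in the style of Tanaka's possibilistic regression: coefficients are symmetric triangular fuzzy numbers (center, spread), and the total spread of the predicted intervals is minimized subject to every observation lying inside its fuzzy prediction. The sketch below follows that common formulation (the paper's exact variant is not specified here), and the toy data are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def tanaka_fuzzy_regression(t, y, h=0.0):
    """Possibilistic (Tanaka-style) fuzzy linear regression y ~ c0 + c1*t.

    Each coefficient is a symmetric triangular fuzzy number with center
    c_j and spread s_j >= 0.  A linear program minimizes the total
    spread of the predictions subject to every observation lying inside
    the (1-h)-cut of the fuzzy predicted interval."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t)
    X = np.column_stack([np.ones(n), t])   # design matrix [1, t]
    A = np.abs(X)                          # |x_i|, multiplies the spreads
    k = 1.0 - h
    # decision variables: [c0, c1, s0, s1]
    cost = np.concatenate([np.zeros(2), A.sum(axis=0)])
    # inclusion constraints for every observation:
    #   y_i <= c.x_i + k*s.|x_i|   and   c.x_i - k*s.|x_i| <= y_i
    A_ub = np.vstack([np.hstack([-X, -k * A]),
                      np.hstack([X, -k * A])])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * 2 + [(0, None)] * 2   # spreads nonnegative
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:2], res.x[2:]            # centers, spreads
```

For a short, roughly linear time series, the center line tracks the trend while the spreads widen just enough to cover every observation, which is what makes the method usable without a distributional assumption.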
International Nuclear Information System (INIS)
Wang Weida; Xia Junding; Zhou Zhixin; Leung, P.L.
2001-01-01
Thermoluminescence (TL) dating using a saturating-exponential regression method in the pre-dose technique is described. Twenty-three porcelain samples from past dynasties of China were dated by this method. The results show that the TL ages are in reasonable agreement with archaeological dates, within a standard deviation of 27%. Such an error is acceptable in porcelain dating.
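A saturating-exponential dose response of this general form can be fitted with ordinary nonlinear least squares. The function shape, parameter names and dose values below are illustrative assumptions, not the authors' actual calibration data:

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(dose, s_max, d0):
    # TL response grows toward a saturation level s_max with a
    # characteristic dose constant d0
    return s_max * (1.0 - np.exp(-dose / d0))

# hypothetical dose points and a noise-free simulated response
dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
resp = saturating_exp(dose, 100.0, 5.0)
popt, pcov = curve_fit(saturating_exp, dose, resp, p0=[80.0, 3.0])
```

With real measurements one would pass the observed responses instead of `resp` and read the fitted saturation level and dose constant from `popt`, with `pcov` giving their uncertainties.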
The analysis of survival data in nephrology: basic concepts and methods of Cox regression
van Dijk, Paul C.; Jager, Kitty J.; Zwinderman, Aeilko H.; Zoccali, Carmine; Dekker, Friedo W.
2008-01-01
How much does the survival of one group differ from the survival of another group? How do differences in age in these two groups affect such a comparison? To obtain a quantity to compare the survival of different patient groups and to account for confounding effects, a multiple regression technique
Estimating traffic volume on Wyoming low volume roads using linear and logistic regression methods
Directory of Open Access Journals (Sweden)
Dick Apronti
2016-12-01
Full Text Available Traffic volume is an important parameter in most transportation planning applications. Low volume roads make up about 69% of road miles in the United States. Estimating traffic on low volume roads is a cost-effective alternative to taking traffic counts, because traditional traffic counts are expensive and impractical for low priority roads. The purpose of this paper is to present the development of two alternative means of cost-effectively estimating traffic volumes for low volume roads in Wyoming and to make recommendations for their implementation. The study methodology involves reviewing existing studies, identifying data sources, and carrying out the model development. The utility of the models developed was then verified by comparing actual traffic volumes to those predicted by the models. The study resulted in two regression models that are inexpensive and easy to implement. The first was a linear regression model that utilized pavement type, access to highways, predominant land use types, and population to estimate traffic volume. In verifying the model, an R2 value of 0.64 and a root mean square error of 73.4% were obtained. The second was a logistic regression model that identified the level of traffic on roads using five thresholds or levels. The logistic regression model was verified by estimating traffic volume thresholds and determining the percentage of roads that were accurately classified as belonging to the given thresholds. For the five thresholds, the percentage of roads classified correctly ranged from 79% to 88%. In conclusion, the verification indicated both model types to be useful for accurate and cost-effective estimation of traffic volumes for low volume Wyoming roads. The models developed were recommended for use in traffic volume estimations for low volume roads in pavement management and environmental impact assessment studies.
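The two-model setup described above, a continuous linear regression plus a thresholded logistic classifier, can be sketched with scikit-learn. The predictor names, coefficients and thresholds below are illustrative assumptions, not the study's fitted values:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 500
# hypothetical road attributes standing in for the study's predictors
paved = rng.integers(0, 2, n)           # pavement type (0/1)
highway_access = rng.integers(0, 2, n)  # access to a highway (0/1)
population = rng.uniform(50, 5000, n)   # nearby population
volume = (20 + 80 * paved + 60 * highway_access
          + 0.05 * population + rng.normal(0, 10, n))

# population scaled to keep the logistic solver well conditioned
X = np.column_stack([paved, highway_access, population / 1000.0])
lin = LinearRegression().fit(X, volume)            # continuous estimate
labels = np.digitize(volume, [50, 100, 150, 200])  # five volume levels
logit = LogisticRegression(max_iter=1000).fit(X, labels)
acc = logit.score(X, labels)
```

The linear model predicts a volume directly, while the logistic model only assigns a road to one of the five volume bands, which matches the study's coarser but more robust second approach.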
Electronic device and method of manufacturing an electronic device
2009-01-01
An electronic device comprising at least one die stack having at least a first die (D1) comprising a first array of light emitting units (OLED) for emitting light, a second layer (D2) comprising a second array of via holes (VH) and a third die (D3) comprising a third array of light detecting units
Energy Technology Data Exchange (ETDEWEB)
Keilacker, H; Becker, G; Ziegler, M; Gottschling, H D [Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic)
1980-10-01
In order to handle all types of radioimmunoassay (RIA) calibration curves obtained in the authors' laboratory in the same way, they tried to find a non-linear expression for the regression which allows calibration curves with different degrees of curvature to be fitted. Considering the two boundary cases of the incubation protocol, they derived a hyperbolic inverse regression function: x = a₁y + a₀ + a₋₁y⁻¹, where x is the total concentration of antigen, the aᵢ are constants, and y is the specifically bound radioactivity. An RIA evaluation procedure based on this function is described, providing a fitted inverse RIA calibration curve and some statistical quality parameters. The latter are of an order which is normal for RIA systems. There is excellent agreement between fitted and experimentally obtained calibration curves having different degrees of curvature.
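Although the hyperbolic inverse calibration function is nonlinear in y, it is linear in its coefficients, so it can be fitted with a plain least-squares solve over the design matrix [y, 1, 1/y]. The data below are synthetic, not the authors' RIA measurements:

```python
import numpy as np

def fit_hyperbolic_inverse(y, x):
    """Fit x = a1*y + a0 + a_(-1)/y by ordinary least squares.

    The model is nonlinear in the bound radioactivity y but linear in
    the coefficients, so no iterative optimizer is needed."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    D = np.column_stack([y, np.ones_like(y), 1.0 / y])
    coef, *_ = np.linalg.lstsq(D, x, rcond=None)
    return coef  # (a1, a0, a_minus1)

# synthetic calibration points generated from known coefficients
y = np.linspace(1.0, 10.0, 20)
x = 2.0 * y + 3.0 + 4.0 / y
coef = fit_hyperbolic_inverse(y, x)
```

Because the fit is an inverse regression (concentration as a function of signal), unknown concentrations can be read off directly by evaluating the fitted function at a measured y.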
[SciELO: method for electronic publishing].
Laerte Packer, A; Rocha Biojone, M; Antonio, I; Mayumi Takemaka, R; Pedroso García, A; Costa da Silva, A; Toshiyuki Murasaki, R; Mylek, C; Carvalho Reisl, O; Rocha F Delbucio, H C
2001-01-01
It describes the SciELO Methodology (Scientific Electronic Library Online) for electronic publishing of scientific periodicals, examining issues such as the transition from traditional printed publication to electronic publishing, the scientific communication process, the principles which guided the methodology's development, its application in the building of the SciELO site, its modules and components, the tools used for its construction, etc. The article also discusses the potentialities and trends for the area in Brazil and Latin America, pointing out questions and proposals which should be investigated and solved by the methodology. It concludes that the SciELO Methodology is an efficient, flexible and comprehensive solution for scientific electronic publishing.
The synthesis method for design of electron flow sources
Alexahin, Yu I.; Molodozhenzev, A. Yu
1997-01-01
The synthesis method for designing a relativistic magnetically focused beam source is described in this paper. It allows one to find the shape of electrodes necessary to produce laminar space-charge flows. Electron guns with shielded cathodes designed with this method were analyzed using the EGUN code. The results obtained show the agreement between the synthesis and analysis calculations [1]. This method of electron gun calculation may be applied to immersed electron flows, which are of interest for the EBIS electron gun design.
Directory of Open Access Journals (Sweden)
Jun Bi
2018-04-01
Full Text Available Battery electric vehicles (BEVs reduce energy consumption and air pollution as compared with conventional vehicles. However, the limited driving range and potential long charging time of BEVs create new problems. Accurate charging time prediction of BEVs helps drivers determine travel plans and alleviate their range anxiety during trips. This study proposed a combined model for charging time prediction based on regression and time-series methods according to the actual data from BEVs operating in Beijing, China. After data analysis, a regression model was established by considering the charged amount for charging time prediction. Furthermore, a time-series method was adopted to calibrate the regression model, which significantly improved the fitting accuracy of the model. The parameters of the model were determined by using the actual data. Verification results confirmed the accuracy of the model and showed that the model errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.
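The combination described above, a regression on charged amount calibrated by a time-series model, can be sketched minimally. The paper's exact calibration model is not specified here; the sketch below assumes an ordinary least-squares fit followed by an AR(1) model on the residuals, with all data synthetic:

```python
import numpy as np

def fit_with_ar1_calibration(amount, time):
    """OLS of charging time on charged amount, then an AR(1) fit to the
    residuals as a simple time-series calibration of the regression."""
    X = np.column_stack([np.ones_like(amount), amount])
    beta, *_ = np.linalg.lstsq(X, time, rcond=None)
    resid = time - X @ beta
    # AR(1) coefficient: regress each residual on its predecessor
    phi = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])
    return beta, phi, resid

def predict_next(beta, phi, next_amount, last_resid):
    # regression prediction plus the AR(1) carry-over of the last error
    return beta[0] + beta[1] * next_amount + phi * last_resid
```

The calibration step matters when consecutive charging sessions have correlated errors (e.g. from temperature or battery aging); the AR(1) term then shifts each new prediction by a fraction of the most recent observed error.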
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated by the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to compare the chance of separation occurrence in the binary probit regression model between the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both comparisons are performed by simulation under different sample sizes. The results showed that the chance of separation occurrence with the MLE method for small sample sizes is higher than with Firth's approach. For larger sample sizes, the probability decreased and was relatively identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE's, especially for smaller sample sizes, while for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
Regression to fuzziness method for estimation of remaining useful life in power plant components
Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.
2014-10-01
Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value ranges. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then used synergistically with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data are not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify the effectiveness of the methodology, it was benchmarked against the data-based simple linear regression model used for predictions, which was shown to perform equally well or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
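The core step, extrapolating a fitted degradation trend to the component's failure point, can be sketched with an ordinary least-squares line (the paper combines this with fuzzy-set modeling of expert knowledge, which is omitted here). The degradation data and failure level below are synthetic:

```python
import numpy as np

def estimate_rul(times, degradation, failure_level):
    """Extrapolate a fitted linear degradation trend to the failure
    threshold.  RUL = (estimated failure time) - (time of the last
    measurement)."""
    A = np.column_stack([np.ones_like(times), times])
    (b0, b1), *_ = np.linalg.lstsq(A, degradation, rcond=None)
    t_fail = (failure_level - b0) / b1   # time the trend hits the threshold
    return t_fail - times[-1]

# synthetic degradation measurements drifting linearly toward failure
times = np.arange(10.0)
degradation = 0.5 * times
rul = estimate_rul(times, degradation, failure_level=10.0)
```

In the paper's methodology the slope and the failure point would be informed by the operator's fuzzy-set model rather than by data alone, but the extrapolation arithmetic is the same.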
Amini, Payam; Maroufizadeh, Saman; Samani, Reza Omani; Hamidi, Omid; Sepidarkish, Mahdi
2017-06-01
Preterm birth (PTB) is a leading cause of neonatal death and the second biggest cause of death in children under five years of age. The objective of this study was to determine the prevalence of PTB and its associated factors using logistic regression and decision tree classification methods. This cross-sectional study was conducted on 4,415 pregnant women in Tehran, Iran, from July 6-21, 2015. Data were collected by a researcher-developed questionnaire through interviews with mothers and review of their medical records. To evaluate the accuracy of the logistic regression and decision tree methods, several indices such as sensitivity, specificity, and the area under the curve were used. The PTB rate was 5.5% in this study. The logistic regression outperformed the decision tree for the classification of PTB based on risk factors. Logistic regression showed that multiple pregnancies, mothers with preeclampsia, and those who conceived with assisted reproductive technology had an increased risk for PTB. These results favor the logistic regression model for the classification of risk groups for PTB.
Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.
2017-01-01
Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
14 CFR 1260.69 - Electronic funds transfer payment methods.
2010-01-01
... Government by electronic funds transfer through the Treasury Fedline Payment System (FEDLINE) or the... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Electronic funds transfer payment methods... COOPERATIVE AGREEMENTS General Special Conditions § 1260.69 Electronic funds transfer payment methods...
Determination of benzo(a)pyrene content in PM10 using regression methods
Directory of Open Access Journals (Sweden)
Jacek Gębicki
2015-12-01
Full Text Available The paper presents an attempt to apply multidimensional linear regression to estimate an empirical model describing the factors influencing B(a)P content in suspended dust PM10 in the Olsztyn and Elbląg city regions between 2010 and 2013. During this period the annual average concentration of B(a)P in PM10 exceeded the admissible level 1.5-3 times. The investigations confirm that the reason for the increase in B(a)P concentration is low-efficiency individual home heating stations or low-temperature heat sources, which are responsible for so-called low emission during the heating period. Dependences between the following quantities were analysed: concentration of PM10 dust in air, air temperature, wind velocity, and air humidity. The measure of the model's fit to the actual B(a)P concentration in PM10 was the model's coefficient of determination. Application of multidimensional linear regression yielded equations characterized by high values of the coefficient of determination, especially during the heating season. This parameter ranged from 0.54 to 0.80 during the analyzed period.
Boucher, Thomas F.; Ozanne, Marie V.; Carmosino, Marco L.; Dyar, M. Darby; Mahadevan, Sridhar; Breves, Elly A.; Lepore, Kate H.; Clegg, Samuel M.
2015-05-01
The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other type of LIBS data is calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. These results are attributed to the high dimensionality of the data (6144 channels
Directory of Open Access Journals (Sweden)
Xiangbing Zhou
2018-04-01
Full Text Available Rapidly growing GPS (Global Positioning System) trajectories hide much valuable information, such as city road planning, urban travel demand, and population migration. In order to mine the hidden information and to capture better clustering results, a trajectory regression clustering method (an unsupervised trajectory clustering method) is proposed to reduce local information loss of the trajectory and to avoid getting stuck in a local optimum. Using this method, we first define our new concept of trajectory clustering and construct a novel (angle-based) partitioning method for line segments; second, the Lagrange-based method and Hausdorff-based K-means++ are integrated into fuzzy C-means (FCM) clustering, which is used to maintain the stability and robustness of the clustering process; finally, a least squares regression model is employed to achieve regression clustering of the trajectory. In our experiment, the performance and effectiveness of our method are validated against real-world taxi GPS data. When comparing our clustering algorithm with partition-based clustering algorithms (K-means, K-median, and FCM), our experimental results demonstrate that the presented method is more effective and generates a more reasonable trajectory.
Method for surface treatment by electron beams
International Nuclear Information System (INIS)
Panzer, S.; Doehler, H.; Bartel, R.; Ardenne, T. von.
1985-01-01
The invention aims at simplifying the technology and saving energy in modifying surfaces with the aid of electron beams. The described beam-object geometry makes it possible to dispense with additional heat treatments. It can be used for surface hardening.
A method to determine the necessity for global signal regression in resting-state fMRI studies.
Chen, Gang; Chen, Guangyu; Xie, Chunming; Ward, B Douglas; Li, Wenjun; Antuono, Piero; Li, Shi-Jiang
2012-12-01
In resting-state functional MRI studies, the global signal (operationally defined as the global average of resting-state functional MRI time courses) is often considered a nuisance effect and commonly removed in preprocessing. This global signal regression method can introduce artifacts, such as false anticorrelated resting-state networks in functional connectivity analyses. Therefore, the efficacy of this technique as a correction tool remains questionable. In this article, we establish that the accuracy of the estimated global signal is determined by the level of global noise (i.e., non-neural noise that has a global effect on the resting-state functional MRI signal). When the global noise level is low, the global signal resembles the resting-state functional MRI time courses of the largest cluster, but not those of the global noise. Using real data, we demonstrate that the global signal is strongly correlated with the default mode network components and has biological significance. These results call into question whether or not global signal regression should be applied. We introduce a method to quantify global noise levels. We show that a criterion for global signal regression can be derived from this method. By using the criterion, one can determine whether to include or exclude the global signal regression in minimizing errors in functional connectivity measures. Copyright © 2012 Wiley Periodicals, Inc.
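The preprocessing step being debated, regressing the global mean time course out of every voxel's time series, is a simple per-voxel projection. The array shapes below are illustrative:

```python
import numpy as np

def remove_global_signal(ts):
    """Regress the global mean time course out of every voxel series.

    ts: array of shape (n_timepoints, n_voxels).  Returns the demeaned
    series with the global-signal component projected out."""
    g = ts.mean(axis=1, keepdims=True)   # global signal, one per timepoint
    g = g - g.mean(axis=0)               # demean the regressor
    v = ts - ts.mean(axis=0)             # demean each voxel series
    beta = (g.T @ v) / (g.T @ g)         # per-voxel regression slope
    return v - g @ beta                  # residuals: GSR-cleaned data
```

After this projection every voxel's residual series is exactly uncorrelated with the global signal, which is both the point of the technique and, per the abstract, the source of the artifactual anticorrelations it can create.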
Kolasa-Wiecek, Alicja
2015-04-01
The energy sector in Poland is the source of 81% of greenhouse gas (GHG) emissions. Poland, among other European Union countries, occupies a leading position with regard to coal consumption. The Polish energy sector actively participates in efforts to reduce GHG emissions to the atmosphere, through a gradual decrease of the share of coal in the fuel mix and the development of renewable energy sources. All evidence that completes the knowledge about issues related to GHG emissions is a valuable source of information. The article presents the results of modeling the GHG emissions generated by the energy sector in Poland. For a better understanding of the quantitative relationship between total consumption of primary energy and greenhouse gas emission, a multiple stepwise regression model was applied. The modeling results for CO2 emissions demonstrate a strong relationship (0.97) with the hard coal consumption variable. The model's fit to the actual data is high, equal to 95%. The backward step regression model for CH4 emission indicated hard coal (0.66), peat and fuel wood (0.34), and solid waste fuels as well as other sources (-0.64) as the most important variables. The adjusted coefficient is adequate, R2 = 0.90. For N2O emission modeling the obtained coefficient of determination is low, equal to 43%. A significant variable influencing the amount of N2O emission is peat and wood fuel consumption. Copyright © 2015. Published by Elsevier B.V.
Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation
Directory of Open Access Journals (Sweden)
Ahmad Bilfarsah
2005-04-01
Full Text Available The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity of the predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fishery economics application, especially tuna fish production.
Spady, Richard; Stouli, Sami
2012-01-01
We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
International Nuclear Information System (INIS)
Sun Zhong-Hua; Jiang Fan
2010-01-01
In this paper a new continuous variable called the core-ratio is defined to describe the probability for a residue to be in a binding site, thereby replacing the previous binary description of interface residues using 0 and 1. We can therefore use the support vector machine regression method to fit the core-ratio value and predict protein binding sites. We also design a new group of physical and chemical descriptors to characterize the binding sites. The new descriptors, which use an averaging procedure, are more effective. Our test shows that much better prediction results can be obtained by the support vector regression (SVR) method than by the support vector classification method. (rapid communication)
Directory of Open Access Journals (Sweden)
Liyun Su
2012-01-01
Full Text Available We introduce the extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Due to the nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function, and can therefore improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we focus on the comparison of parameters and reach an optimal fit. Besides, we verify the asymptotic normality of the parameters based on numerical simulations. Finally, the approach is applied to an economics case, which indicates that our method is effective in finite-sample situations.
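The two-stage idea can be sketched compactly: fit OLS, estimate the variance function from the squared residuals, then refit by weighted least squares. The sketch below substitutes a global polynomial fit for the paper's local polynomial smoother of the variance function, so it is a simplification under that stated assumption:

```python
import numpy as np

def two_stage_wls(x, y, var_poly_deg=2):
    """Simplified two-stage estimator for a heteroscedastic linear model:
    (1) OLS fit; (2) estimate the variance function by fitting a
    polynomial to the squared residuals (a global stand-in for the
    paper's local polynomial smoother); (3) refit by weighted least
    squares with weights 1/var_hat."""
    X = np.column_stack([np.ones_like(x), x])
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = (y - X @ beta_ols) ** 2
    var_hat = np.polyval(np.polyfit(x, r2, var_poly_deg), x)
    w = 1.0 / np.clip(var_hat, 1e-8, None)   # guard against nonpositive fits
    sw = np.sqrt(w)
    beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta_wls
```

The gain over plain OLS is efficiency: observations in the high-variance region are down-weighted, so the coefficient estimates have smaller sampling error when the variance truly changes with x.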
Gilstrap, Donald L.
2013-01-01
In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…
New methods for trigger electronics development
Energy Technology Data Exchange (ETDEWEB)
Cleland, W.E.; Stern, E.G. [Univ. of Pittsburgh, PA (United States)
1991-12-31
The large and complex nature of RHIC experiments and the tight time schedule for their construction requires that new techniques for designing the electronics should be employed. This is particularly true of the trigger and data acquisition electronics which has to be ready for turn-on of the experiment. We describe the use of the Workview package from VIEWlogic Inc. for design, simulation, and verification of a flash ADC readout system. We also show how field-programmable gate arrays such as the Xilinx 4000 might be employed to construct or prototype circuits with a large number of gates while preserving flexibility.
Forecast daily indices of solar activity, F10.7, using support vector regression method
International Nuclear Information System (INIS)
Huang Cong; Liu Dandan; Wang Jingsong
2009-01-01
The 10.7 cm solar radio flux (F10.7), the value of the solar radio emission flux density at a wavelength of 10.7 cm, is a useful index of solar activity as a proxy for solar extreme ultraviolet radiation. It is meaningful and important to predict F10.7 values accurately for both long-term (months-years) and short-term (days) forecasting, which are often used as inputs in space weather models. This study applies a novel neural network technique, support vector regression (SVR), to forecasting daily values of F10.7. The aim of this study is to examine the feasibility of SVR in short-term F10.7 forecasting. The approach, based on SVR, reduces the dimension of the feature space in the training process by using a kernel-based learning algorithm. Thus, the complexity of the calculation becomes lower and a small amount of training data is sufficient. The time series of F10.7 from 2002 to 2006 are employed as the data sets. The performance of the approach is estimated by calculating the normalized mean square error and mean absolute percentage error. It is shown that our approach can perform well using fewer training data points than the traditional neural network. (research paper)
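A lag-embedding SVR forecaster in this spirit can be sketched with scikit-learn. The synthetic series below (a 27-day, rotation-like cycle around a 100 sfu baseline) is an illustrative stand-in for the real F10.7 record, and the lag count and SVR hyperparameters are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.svm import SVR

def make_lagged(series, n_lags):
    """Rows of X hold the n_lags most recent values; y is the next value."""
    X = np.array([series[i:i + n_lags]
                  for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

# synthetic stand-in for the F10.7 series with a 27-day periodicity
t = np.arange(400)
f107 = 100.0 + 30.0 * np.sin(2 * np.pi * t / 27.0)
X, y = make_lagged(f107, n_lags=5)
model = SVR(kernel="rbf", C=100.0, gamma="scale").fit(X[:-50], y[:-50])
pred = model.predict(X[-50:])
mape = np.mean(np.abs((pred - y[-50:]) / y[-50:])) * 100.0
```

Because only the support vectors enter the fitted function, the model stays small even as the training window grows, which is the "small amount of training data is sufficient" property the abstract highlights.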
International Nuclear Information System (INIS)
Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong
2015-01-01
A new LIBS quantitative analysis method based on adaptive selection of analytical lines and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of the spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines to be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and modeling robustness compared with methods based on partial least squares regression, artificial neural networks and the standard support vector machine. - Highlights: • Both training and testing samples are considered for analytical line selection. • The analytical lines are auto-selected based on the built-in characteristics of spectral lines. • The new method can achieve better prediction accuracy and modeling robustness. • Model predictions are given with a confidence interval of a probabilistic distribution
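The RVM's sparse Bayesian machinery, and the probabilistic confidence interval mentioned above, can be illustrated with a minimal evidence-maximization loop for a plain linear model (no kernel). All "line intensities" here are synthetic: ten candidate predictors of which only two carry information about the target, standing in loosely for analytical lines and concentration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 10))          # 80 synthetic "spectra", 10 candidate lines
w_true = np.zeros(10)
w_true[2], w_true[7] = 1.5, -2.0       # only lines 2 and 7 are informative
y = X @ w_true + rng.normal(0, 0.1, 80)

alpha = np.ones(10)                    # per-weight precisions: large => weight pruned
beta = 1.0                             # noise precision
for _ in range(300):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * X.T @ X)
    m = beta * Sigma @ X.T @ y         # posterior mean of the weights
    gamma = 1.0 - alpha * np.diag(Sigma)
    alpha = gamma / (m ** 2 + 1e-12)   # sparsity-inducing precision update
    beta = (len(y) - gamma.sum()) / np.sum((y - X @ m) ** 2)

# Predictive mean and a 2-sigma interval for a new "spectrum":
x_new = rng.normal(size=10)
mu = x_new @ m
sd = np.sqrt(1.0 / beta + x_new @ Sigma @ x_new)
interval = (mu - 2 * sd, mu + 2 * sd)
```

After convergence the posterior mean keeps only the two informative weights; the predictive variance combines noise variance and weight uncertainty, which is what yields the confidence interval the abstract describes.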
Directory of Open Access Journals (Sweden)
Sara Mortaz Hejri
2013-01-01
Background: One of the methods used for standard setting is the borderline regression method (BRM). This study aims to assess the reliability of BRM when the pass-fail standard in an objective structured clinical examination (OSCE) was calculated by averaging the BRM standards obtained for each station separately. Materials and Methods: In nine stations of the OSCE with direct observation, the examiners gave each student a checklist score and a global score. Using a linear regression model for each station, we calculated the checklist cut-off score corresponding to a global scale cut-off of 2. The OSCE pass-fail standard was defined as the average of all stations' standards. To determine the reliability, the root mean square error (RMSE) was calculated. The R2 coefficient and the inter-grade discrimination were calculated to assess the quality of the OSCE. Results: The mean total test score was 60.78. The OSCE pass-fail standard and its RMSE were 47.37 and 0.55, respectively. The R2 coefficients ranged from 0.44 to 0.79. The inter-grade discrimination score varied greatly among stations. Conclusion: The RMSE of the standard was very small, indicating that BRM is a reliable method of setting the standard for an OSCE, with the advantage of providing data for quality assurance.
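The borderline regression calculation itself is a one-line fit per station: regress the checklist score on the global rating, then read off the checklist score at the borderline global rating. The scores below are invented for a single hypothetical station.

```python
import numpy as np

# Hypothetical station data: examiner global ratings (1-5) and checklist scores.
global_rating = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5], float)
checklist = np.array([35, 44, 48, 55, 60, 58, 70, 72, 85, 88], float)

# Fit checklist = intercept + slope * global, then evaluate at the
# borderline global rating (2 in the study above).
slope, intercept = np.polyfit(global_rating, checklist, 1)
station_cutoff = intercept + slope * 2.0

# The OSCE pass-fail standard would then be the mean of such cut-offs
# over all nine stations.
```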
Borodachev, S. M.
2016-06-01
The simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to a multicollinearity problem.
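The Kalman-filter view of RLS described above amounts to a gain-and-covariance update per observation; a minimal sketch on synthetic data (true parameters and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.array([2.0, -1.0, 0.5])   # the constant "state" to estimate

theta = np.zeros(3)
P = 1e6 * np.eye(3)                        # large initial covariance: diffuse prior
for _ in range(500):
    x = rng.normal(size=3)                 # changing observation conditions
    y = x @ theta_true + rng.normal(0, 0.1)
    # Kalman-style gain for a constant state observed through x:
    k = P @ x / (1.0 + x @ P @ x)
    theta = theta + k * (y - x @ theta)    # innovation correction
    P = P - np.outer(k, x) @ P             # covariance update
```

Each step costs O(p^2) and never refits from scratch, which is the point of the recursion.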
Huang, Lei
2015-01-01
To solve the problem in which the conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using a robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to achieve the estimated mean and variance of the observation noise. Using the robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of a rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
14 CFR 1274.931 - Electronic funds transfer payment methods.
2010-01-01
... cooperative agreement will be made by the Government by electronic funds transfer through the Treasury Fedline... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Electronic funds transfer payment methods... COOPERATIVE AGREEMENTS WITH COMMERCIAL FIRMS Other Provisions and Special Conditions § 1274.931 Electronic...
Bolarinwa, O A; Adeola, O
2012-12-01
Digestible and metabolizable energy contents of feed ingredients for pigs can be determined by direct or indirect methods. There are situations when only the indirect approach is suitable, and the regression method is a robust indirect approach. This study was conducted to compare the direct and regression methods for determining the energy value of wheat for pigs. Twenty-four barrows with an average initial BW of 31 kg were assigned to 4 diets in a randomized complete block design. The 4 diets consisted of 969 g wheat/kg plus minerals and vitamins (sole wheat) for the direct method, a corn (Zea mays)-soybean (Glycine max) meal reference diet (RD), RD + 300 g wheat/kg, and RD + 600 g wheat/kg. The 3 corn-soybean meal diets were used for the regression method; wheat replaced the energy-yielding ingredients, corn and soybean meal, so that the same ratio of corn to soybean meal was maintained across the experimental diets. The wheat used was analyzed to contain 883 g DM, 15.2 g N, and 3.94 Mcal GE/kg. Each diet was fed to 6 barrows in individual metabolism crates for a 5-d acclimation followed by a 5-d total but separate collection of feces and urine. The DE and ME for the sole wheat diet were 3.83 and 3.77 Mcal/kg DM, respectively. Because the sole wheat diet contained 969 g wheat/kg, these translate to 3.95 Mcal DE/kg DM and 3.89 Mcal ME/kg DM. The RD used for the regression approach yielded 4.00 Mcal DE and 3.91 Mcal ME/kg DM diet. Increasing levels of wheat in the RD linearly reduced dietary DE and ME. The energy values obtained by the direct method (3.95 and 3.89 Mcal/kg DM) did not differ (0.78 < P < 0.89) from those obtained using the regression method (3.96 and 3.88 Mcal/kg DM).
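The regression method in miniature: as wheat replaces the reference diet, diet DE changes linearly with inclusion level, and extrapolating the fitted line to an inclusion of 1.0 gives the DE of wheat itself. The three diet DE values below are illustrative round numbers, not the trial's raw data.

```python
# Hypothetical digestibility-trial summary: diet DE measured at three
# wheat inclusion levels (0, 30% and 60% of the diet).
inclusion = [0.0, 0.3, 0.6]       # g wheat / g diet
diet_de = [4.00, 3.99, 3.97]      # measured diet DE, Mcal/kg DM

# Least-squares line by hand.
n = len(inclusion)
mx = sum(inclusion) / n
my = sum(diet_de) / n
slope = sum((x - mx) * (y - my) for x, y in zip(inclusion, diet_de)) \
        / sum((x - mx) ** 2 for x in inclusion)
intercept = my - slope * mx

# Extrapolate to a diet of pure wheat: the ingredient's DE value.
de_wheat = intercept + slope * 1.0
```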
Liou, Jyun-you; Smith, Elliot H.; Bateman, Lisa M.; McKhann, Guy M., II; Goodman, Robert R.; Greger, Bradley; Davis, Tyler S.; Kellis, Spencer S.; House, Paul A.; Schevon, Catherine A.
2017-08-01
Objective. Epileptiform discharges, an electrophysiological hallmark of seizures, can propagate across cortical tissue in a manner similar to traveling waves. Recent work has focused attention on the origination and propagation patterns of these discharges, yielding important clues to their source location and mechanism of travel. However, systematic studies of methods for measuring propagation are lacking. Approach. We analyzed epileptiform discharges in microelectrode array recordings of human seizures. The array records multiunit activity and local field potentials at 400 micron spatial resolution, from a small cortical site free of obstructions. We evaluated several computationally efficient statistical methods for calculating traveling wave velocity, benchmarking them to analyses of associated neuronal burst firing. Main results. Over 90% of discharges met statistical criteria for propagation across the sampled cortical territory. Detection rate, direction and speed estimates derived from a multiunit estimator were compared to four field potential-based estimators: negative peak, maximum descent, high gamma power, and cross-correlation. Interestingly, the methods that were computationally simplest and most efficient (negative peak and maximal descent) offer non-inferior results in predicting neuronal traveling wave velocities compared to the other two, more complex methods. Moreover, the negative peak and maximal descent methods proved to be more robust against reduced spatial sampling challenges. Using least absolute deviation in place of least squares error minimized the impact of outliers, and reduced the discrepancies between local field potential-based and multiunit estimators. Significance. Our findings suggest that ictal epileptiform discharges typically take the form of exceptionally strong, rapidly traveling waves, with propagation detectable across millimeter distances. The sequential activation of neurons in space can be inferred from clinically
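The least-absolute-deviation plane fit mentioned above can be sketched for a hypothetical 10x10 electrode grid at 0.4 mm pitch: fit per-electrode discharge delays to a plane in (x, y), whose gradient is the slowness vector of the wave. LAD is approximated here by iteratively reweighted least squares, which suppresses the few outlier channels that would bias an ordinary least squares fit; all delays are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
gx, gy = np.meshgrid(np.arange(10) * 0.4, np.arange(10) * 0.4)  # mm
x, yv = gx.ravel(), gy.ravel()
delay = x / 0.5 + rng.normal(0, 0.2, 100)   # wave along +x at 0.5 mm/ms
delay[::17] += 5.0                          # a few grossly bad channels

A = np.column_stack([x, yv, np.ones(100)])
w = np.ones(100)
for _ in range(50):
    # lstsq squares the weights, so w = 1/sqrt(|r|) makes the objective sum(|r|).
    beta = np.linalg.lstsq(A * w[:, None], delay * w, rcond=None)[0]
    w = 1.0 / np.sqrt(np.abs(delay - A @ beta) + 1e-6)

slowness = beta[:2]                         # ms/mm, gradient of the delay plane
speed = 1.0 / np.linalg.norm(slowness)      # mm/ms
```

With six outlier channels injected, the robust fit still recovers the simulated 0.5 mm/ms speed.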
Energy Technology Data Exchange (ETDEWEB)
Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)
2006-02-15
This paper presents an implementation of the least absolute value (LAV) power system state estimator based on obtaining a sequence of solutions to the L{sub 1}-regression problem using an iteratively reweighted least squares (IRLS{sub L1}) method. The proposed implementation avoids reformulating the regression problem into standard linear programming (LP) form and consequently does not require the use of common methods of LP, such as those based on the simplex method or interior-point methods. It is shown that the IRLS{sub L1} method is equivalent to solving a sequence of linear weighted least squares (LS) problems. Thus, its implementation presents little additional effort since the sparse LS solver is common to existing LS state estimators. Studies on the termination criteria of the IRLS{sub L1} method have been carried out to determine a procedure for which the proposed estimator is more computationally efficient than a previously proposed non-linear iteratively reweighted least squares (IRLS) estimator. Indeed, it is revealed that the proposed method is a generalization of the previously reported IRLS estimator, but is based on more rigorous theory. (author)
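The equivalence the paper exploits, an L1 minimum reached through a sequence of weighted least squares solves, can be seen in the simplest possible "state estimation": one constant state observed directly. The least absolute value estimate of a constant is the median of the measurements, so the reweighted iteration should converge to it and ignore a gross measurement error. Data values are invented.

```python
import numpy as np

z = np.array([1.0, 1.2, 0.9, 1.1, 8.0])     # last measurement is a gross error
x = z.mean()                                  # LS estimate, pulled up to 2.44
for _ in range(100):
    w = 1.0 / np.maximum(np.abs(z - x), 1e-8)  # reweight by 1/|residual|
    x = np.sum(w * z) / np.sum(w)              # weighted LS solution
# x now sits at the median, 1.1, having rejected the gross error.
```

Here the weight enters the normal equations once, so minimizing sum(w * r**2) with w = 1/|r| minimizes sum(|r|); the same idea scales to the sparse multivariate solves described in the abstract.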
Energy Technology Data Exchange (ETDEWEB)
Boucher, Thomas F., E-mail: boucher@cs.umass.edu [School of Computer Science, University of Massachusetts Amherst, 140 Governor's Drive, Amherst, MA 01003 (United States); Ozanne, Marie V. [Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075 (United States); Carmosino, Marco L. [School of Computer Science, University of Massachusetts Amherst, 140 Governor's Drive, Amherst, MA 01003 (United States); Dyar, M. Darby [Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075 (United States); Mahadevan, Sridhar [School of Computer Science, University of Massachusetts Amherst, 140 Governor's Drive, Amherst, MA 01003 (United States); Breves, Elly A.; Lepore, Kate H. [Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075 (United States); Clegg, Samuel M. [Los Alamos National Laboratory, P.O. Box 1663, MS J565, Los Alamos, NM 87545 (United States)
2015-05-01
The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other types of LIBS data are calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO{sub 2}, Fe{sub 2}O{sub 3}, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na{sub 2}O, K{sub 2}O, TiO{sub 2}, and P{sub 2}O{sub 5}, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. These results are attributed to the high
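The PRESS statistic used above to compare the nine models can be computed without refitting n times: for ordinary least squares, the leave-one-out prediction errors follow from the hat matrix. The sketch below verifies the shortcut against an explicit leave-one-out loop on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(30), rng.normal(size=(30, 3))])
y = X @ np.array([1.0, 2.0, 0.0, -1.0]) + rng.normal(0, 0.5, 30)

# Fast route: PRESS = sum((e_i / (1 - h_ii))^2) with H the hat matrix.
H = X @ np.linalg.inv(X.T @ X) @ X.T
resid = y - H @ y
press_fast = np.sum((resid / (1 - np.diag(H))) ** 2)

# Slow route: refit without each observation and predict it.
press_loop = 0.0
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    b = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    press_loop += (y[i] - X[i] @ b) ** 2
```

The two agree to machine precision; for the regularized and kernel models in the study, PRESS has to be computed by actual refitting or model-specific shortcuts.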
Directory of Open Access Journals (Sweden)
Tamer Khatib
2014-01-01
In this research an improved approach for sizing a standalone PV system (SAPV) is presented. This work improves on a method developed previously by the authors. The previous work is based on an analytical method, which faced some concerns regarding the difficulty of finding the model's coefficients. Therefore, the proposed approach in this research is based on a combination of an analytical method and a machine learning approach, a general regression neural network (GRNN). The GRNN helps predict the optimal size of a PV system using the geographical coordinates of the targeted site instead of mathematical formulas. Employing the GRNN facilitates the use of the previously developed method and avoids some of its drawbacks. The approach has been tested using data from five Malaysian sites. According to the results, the proposed method can be used efficiently for SAPV sizing, and the proposed GRNN-based model predicts the sizing curves of the PV system accurately, with a prediction error of 0.6%. Moreover, hourly meteorological and load demand data are used in this research in order to consider the uncertainty of the solar energy and the load demand.
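A GRNN is essentially Nadaraya-Watson kernel regression, so the coordinate-to-sizing-curve mapping above can be sketched in a few lines. All coordinates and target values below are invented for illustration; a real model would be trained on the sizing results of the authors' analytical method.

```python
import numpy as np

# Invented training data: (latitude, longitude) -> sizing-curve coefficient.
train_xy = np.array([[3.1, 101.7], [5.4, 100.3], [1.5, 110.4], [6.1, 102.2]])
train_t = np.array([1.12, 1.08, 1.25, 1.05])

def grnn(query, sigma=1.0):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    d2 = ((train_xy - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return float((w * train_t).sum() / w.sum())
```

There is no iterative training: the network memorizes the examples, and the smoothing width sigma is the only parameter to tune. Predictions always stay within the range of the training targets.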
Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection
DEFF Research Database (Denmark)
Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald
2013-01-01
The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...
Using a Linear Regression Method to Detect Outliers in IRT Common Item Equating
He, Yong; Cui, Zhongmin; Fang, Yu; Chen, Hanwei
2013-01-01
Common test items play an important role in equating alternate test forms under the common item nonequivalent groups design. When the item response theory (IRT) method is applied in equating, inconsistent item parameter estimates among common items can lead to large bias in equated scores. It is prudent to evaluate inconsistency in parameter…
Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.
2012-01-01
The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an
Asghari, Mehdi Poursheikhali; Hayatshahi, Sayyed Hamed Sadat; Abdolmaleki, Parviz
2012-01-01
From both the structural and functional points of view, β-turns play important biological roles in proteins. In the present study, a novel two-stage hybrid procedure has been developed to identify β-turns in proteins. Binary logistic regression was used, for the first time, to select significant sequence parameters for the identification of β-turns, based on a re-substitution test procedure. The sequence parameters consisted of 80 amino acid positional occurrences and 20 amino acid percentages in the sequence. Among these parameters, the most significant ones selected by the binary logistic regression model were the percentages of Gly and Ser and the occurrence of Asn at position i+2. These significant parameters have the highest effect on the constitution of a β-turn sequence. A neural network model was then constructed and fed with the parameters selected by binary logistic regression to build a hybrid predictor. The networks were trained and tested on a non-homologous dataset of 565 protein chains. Applying a nine-fold cross-validation test on the dataset, the network reached an overall accuracy (Qtotal) of 74, which is comparable with the results of other β-turn prediction methods. In conclusion, this study shows that the parameter selection ability of binary logistic regression, together with the prediction capability of neural networks, leads to the development of more precise models for identifying β-turns in proteins.
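The first stage of such a hybrid, fitting a binary logistic regression and ranking predictors by coefficient magnitude, can be sketched on synthetic data. Five made-up predictors stand in for sequence parameters; only the first (think "percentage of Gly") actually drives the turn/non-turn label.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 5))                      # 200 synthetic sequence windows
p_true = 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))      # only feature 0 matters
y = (rng.random(200) < p_true).astype(float)

# Logistic regression by plain gradient ascent on the log-likelihood.
Xb = np.column_stack([np.ones(200), X])            # prepend intercept
w = np.zeros(6)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w += 0.5 * Xb.T @ (y - p) / len(y)

# Rank predictors by |coefficient| (a crude stand-in for significance testing).
most_significant = int(np.argmax(np.abs(w[1:])))
```

In the paper the selected parameters are then fed to a neural network; here the ranking alone shows the screening idea.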
Directory of Open Access Journals (Sweden)
Xiaoyan Yang
2018-04-01
The Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.
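The simple (land use-only) calibration reduces to fitting one line per land-use class from reference points and applying it to the raw GDEM values. All elevations and class names below are invented; with only two reference points per class the fitted line is exact, whereas a real calibration would use many points and least squares.

```python
from collections import defaultdict

# Hypothetical reference points: (land_use, gdem_elev, field_elev) in metres.
pts = [("wheat", 52.1, 51.0), ("wheat", 48.3, 47.5),
       ("wood", 88.0, 81.2), ("wood", 95.5, 89.0),
       ("bare", 12.9, 10.1), ("bare", 15.0, 12.4)]

# Per land-use class, fit field = a + b * gdem.
groups = defaultdict(list)
for lu, g, f in pts:
    groups[lu].append((g, f))
coef = {}
for lu, xy in groups.items():
    (x1, y1), (x2, y2) = xy            # two points -> exact line
    b = (y2 - y1) / (x2 - x1)
    coef[lu] = (y1 - b * x1, b)

def calibrate(land_use, gdem_elev):
    """Apply the class-specific linear correction to a raw GDEM elevation."""
    a, b = coef[land_use]
    return a + b * gdem_elev
```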
Directory of Open Access Journals (Sweden)
Hukharnsusatrue, A.
2005-11-01
The objective of this research is to compare methods of estimating multiple regression coefficients in the presence of multicollinearity among the independent variables. The estimation methods are the Ordinary Least Squares method (OLS), the Restricted Least Squares method (RLS), the Restricted Ridge Regression method (RRR) and the Restricted Liu method (RL), considered both when the restrictions are true and when they are not. The study used Monte Carlo simulation; the experiment was repeated 1,000 times under each situation. The results are as follows. CASE 1: The restrictions are true. In all cases, the RRR and RL methods have a smaller Average Mean Square Error (AMSE) than the OLS and RLS methods, respectively. The RRR method provides the smallest AMSE when the level of correlation is high, and also provides the smallest AMSE for all levels of correlation and all sample sizes when the standard deviation is equal to 5. However, the RL method provides the smallest AMSE when the level of correlation is low or middle, except in the case of a standard deviation equal to 3 and small sample sizes, where the RRR method provides the smallest AMSE. The AMSE varies directly with, from most to least influence, the level of correlation, the standard deviation and the number of independent variables, and inversely with the sample size. CASE 2: The restrictions are not true. In all cases, the RRR method provides the smallest AMSE, except when the standard deviation is equal to 1 and the error of the restrictions is equal to 5%, where the OLS method provides the smallest AMSE when the level of correlation is low or medium and the sample size is large, but for small sample sizes the RL method provides the smallest AMSE. In addition, when the error of the restrictions is increased, the OLS method provides the smallest AMSE for all levels of correlation and all sample sizes, except when the level of correlation is high and the sample size is small. Moreover, in the cases where the OLS method provides the smallest AMSE, the RLS method mostly has a smaller AMSE than
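One slice of such a Monte Carlo comparison can be sketched with plain (unrestricted) OLS against ridge regression: under strong collinearity, ridge's small bias buys a large variance reduction, so its average mean square error of the coefficient estimates is far smaller. The design (two nearly identical regressors, ridge constant 1.0, 500 replications) is an arbitrary illustrative choice, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(6)
beta_true = np.array([1.0, 1.0])
reps = 500
sse_ols = sse_ridge = 0.0
for _ in range(reps):
    z = rng.normal(size=40)
    # Two highly collinear regressors: shared component plus tiny noise.
    X = np.column_stack([z + 0.05 * rng.normal(size=40),
                         z + 0.05 * rng.normal(size=40)])
    y = X @ beta_true + rng.normal(0.0, 1.0, 40)
    ols = np.linalg.solve(X.T @ X, X.T @ y)
    ridge = np.linalg.solve(X.T @ X + 1.0 * np.eye(2), X.T @ y)
    sse_ols += np.sum((ols - beta_true) ** 2)
    sse_ridge += np.sum((ridge - beta_true) ** 2)

amse_ols, amse_ridge = sse_ols / reps, sse_ridge / reps
```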
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
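The MCMC-based credible intervals described above can be illustrated with a toy posterior where the answer is known analytically: estimating a mean with known unit noise and a flat prior, where the 95% Bayesian credible interval coincides with the classical confidence interval. This is a generic random-walk Metropolis sketch, not UCODE's DREAM sampler.

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(5.0, 1.0, 50)            # data; noise sd known to be 1

def logpost(mu):
    """Log posterior for mu with a flat prior: -0.5 * sum((y - mu)^2)."""
    return -0.5 * np.sum((y - mu) ** 2)

mu, chain = 0.0, []
for _ in range(20000):
    prop = mu + rng.normal(0, 0.5)       # random-walk proposal
    if np.log(rng.random()) < logpost(prop) - logpost(mu):
        mu = prop                        # accept
    chain.append(mu)

chain = np.array(chain[2000:])           # discard burn-in
lo, hi = np.percentile(chain, [2.5, 97.5])   # 95% credible interval
```

The sampled interval should match the analytic ybar ± 1.96/sqrt(n) closely, which is a useful sanity check before trusting MCMC output on a model without a closed form.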
CSIR Research Space (South Africa)
Gregor, Luke
2017-12-01
... understanding with spatially integrated air–sea flux estimates (Fay and McKinley, 2014). Conversely, ocean biogeochemical process models are good tools for mechanistic understanding, but fail to represent the seasonality of CO2 fluxes in the Southern Ocean... of including coordinate variables as proxies of ΔpCO2 in the empirical methods. In the intercomparison study by Rödenbeck et al. (2015) proxies typically include, but are not limited to, sea surface temperature (SST), chlorophyll a (Chl a), mixed layer...
Consistency analysis of subspace identification methods based on a linear regression approach
DEFF Research Database (Denmark)
Knudsen, Torben
2001-01-01
In the literature, results can be found which claim consistency for the subspace method under certain quite weak assumptions. Unfortunately, a new result gives a counterexample showing inconsistency under these assumptions and then gives new, stricter sufficient assumptions which, however, do not include important model structures such as Box-Jenkins. Based on a simple least squares approach, this paper shows the possible inconsistency under the weak assumptions and develops only slightly stricter assumptions sufficient for consistency and which include any model structure...
International Nuclear Information System (INIS)
Sambou, Soussou
2004-01-01
In flood forecasting modelling, large basins are often considered as hydrological systems with multiple inputs and one output. Inputs are hydrological variables such as rainfall, runoff and physical characteristics of the basin; the output is runoff. Relating inputs to output can be achieved using deterministic, conceptual, or stochastic models. Rainfall-runoff models generally lack accuracy. Models based on physical hydrological processes, either deterministic or conceptual, are highly demanding of data and consequently very complex. Stochastic multiple input-output models, using only historical records of hydrological variables, particularly runoff, are therefore very popular among hydrologists for flood forecasting on large river basins. The application is made on the Senegal River upstream of Bakel, where the river is formed by the main branch, the Bafing, and two tributaries, the Bakoye and the Faleme, the Bafing being regulated by the Manantali Dam. A model with three inputs and one output has been used for flood forecasting at Bakel. The influence of the forecasting lead time, and of the three inputs taken separately, then associated two by two, and altogether, has been verified using a dimensionless variance as the criterion of quality. Inadequacies generally occur between model output and observations; to bring the model into better agreement with current observations, we have compared four parameter-updating procedures, recursive least squares, Kalman filtering, the stochastic gradient method and an iterative method, together with an AR error-forecasting model. A combination of these updating schemes has been used in real-time flood forecasting. (Author)
High cycle fatigue test and regression methods of S-N curve
International Nuclear Information System (INIS)
Kim, D. W.; Park, J. Y.; Kim, W. G.; Yoon, J. H.
2011-11-01
The fatigue design curves in the ASME Boiler and Pressure Vessel Code Section III are based on the assumption that fatigue life is infinite after 10^6 cycles. This is because standard fatigue testing equipment of past decades was limited in speed to less than 200 cycles per second. Traditional servo-hydraulic machines work at frequencies of about 50 Hz. Servo-hydraulic machines working at 1000 Hz have been developed since 1997; these machines allow high frequencies, with displacements of up to ±0.1 mm and dynamic loads of ±20 kN guaranteed. The frequency of resonant fatigue test machines is 50-250 Hz. Various forced-vibration-based systems work at 500 Hz or 1.8 kHz. Rotating bending machines allow testing frequencies of 0.1-200 Hz. The main advantage of ultrasonic fatigue testing at 20 kHz is the ability to accumulate very high cycle counts in a short time. Although the S-N curve is determined by experiment, the fatigue strength corresponding to a given fatigue life should be determined by a statistical method that considers the scatter of fatigue properties. In this report, statistical methods for the evaluation of fatigue test data are investigated
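A common way to regress an S-N curve is Basquin's relation N = A * S**(-m), which is linear in log-log coordinates, so ordinary least squares on (log S, log N) recovers the exponent m and constant A. The stress/life pairs below are illustrative, not measured data; the two-standard-deviation downward shift of the log-life residuals is one simple statistical treatment of scatter, not the specific method of the report.

```python
import numpy as np

S = np.array([400.0, 350.0, 300.0, 250.0, 200.0])   # stress amplitude, MPa
N = np.array([1e4, 3e4, 1e5, 5e5, 3e6])             # cycles to failure

# Basquin fit in log-log space: log10(N) = log10(A) - m * log10(S).
slope, logA = np.polyfit(np.log10(S), np.log10(N), 1)
m = -slope                                          # Basquin exponent

# Scatter treatment: shift the mean curve down by 2 sd of the residuals
# to obtain a lower-bound design curve in log-life.
resid = np.log10(N) - (logA + slope * np.log10(S))
design_logN = logA - 2 * np.std(resid) + slope * np.log10(S)
```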
Geometric reconstruction methods for electron tomography
Energy Technology Data Exchange (ETDEWEB)
Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)
2013-05-15
Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.
Geometric reconstruction methods for electron tomography
International Nuclear Information System (INIS)
Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees
2013-01-01
Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand
Variable selection methods in PLS regression - a comparison study on metabolomics data
DEFF Research Database (Denmark)
Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach
Due to the high number of variables in the data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related. Variable selection (or removal of irrelevant variables) … Different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection prior … The aim of the metabolomics study was to investigate the metabolic profile of pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS-based metabolomics approach. Reference: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when …
Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat
2015-01-01
Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p), which are often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. The decision tree-based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and from flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
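The bagged-decision-tree approach described above can be sketched with scikit-learn. Everything below is an invented stand-in for the paper's data set: the descriptor matrix, the column playing the role of a predicted adipose Kt:p value, and the (assumed log10-scale) Vss values; the mean fold error is computed from the log-scale residuals.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins: 200 "compounds", 10 molecular descriptors,
# plus one column playing the role of a predicted adipose Kt:p value.
X = rng.normal(size=(200, 11))
log_vss = 0.8 * X[:, 0] - 0.5 * X[:, 10] + 0.1 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_vss, random_state=0)
model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50,
                         random_state=0).fit(X_tr, y_tr)

# Mean fold error: geometric-mean fold difference between predicted and
# observed Vss, computed here from log10-scale residuals.
mfe = 10 ** np.mean(np.abs(model.predict(X_te) - y_te))
```

A mean fold error of 1.0 would be perfect prediction; values around 2-3 match the accuracies reported in the abstract.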
Dai, Huanping; Micheyl, Christophe
2012-11-01
Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
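As a rough illustration of the MLR approach to m-alternative decision weights (not the authors' code), one can simulate a three-alternative observer whose choices depend on a weighted sum of stimulus components, then regress the chosen alternative on the trial-by-trial component values. The weights, noise level, and trial count below are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_alt, n_comp = 3000, 3, 2
true_w = np.array([1.0, 0.5])          # assumed observer weights
X = rng.normal(size=(n_trials, n_alt, n_comp))
# Internal decision variable per alternative, plus internal noise.
internal = X @ true_w + 0.5 * rng.normal(size=(n_trials, n_alt))
choice = internal.argmax(axis=1)

# MLR decision-weight estimate: one coefficient per (class, alternative,
# component) after reshaping the fitted coefficient matrix.
mlr = LogisticRegression(max_iter=2000).fit(X.reshape(n_trials, -1), choice)
w_hat = mlr.coef_.reshape(n_alt, n_alt, n_comp)
```

The coefficients a class places on its own alternative's components estimate the observer's weighting pattern; as the abstract notes, estimates for different components can be correlated, which the correlation method avoids.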
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption underlying hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for evaluating the significance of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds for the correlation coefficient, the method divides the significance of dependence into five grades: no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitudes of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
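A minimal numerical sketch of the correlation-coefficient idea, assuming an AR(1) dependence structure; the grade thresholds below are illustrative placeholders, not the thresholds derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic AR(1) "hydrological" series: x_t = phi * x_{t-1} + noise.
phi, n = 0.6, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Fit the lag-1 auto-regression and extract the dependence component.
phi_hat = np.corrcoef(x[:-1], x[1:])[0, 1] * np.std(x[1:]) / np.std(x[:-1])
dep = phi_hat * x[:-1]                 # AR(1) dependence component
r = np.corrcoef(x[1:], dep)[0, 1]      # correlation with the original series

# Illustrative significance grades (hypothetical thresholds).
grade = ("no" if r < 0.2 else "weak" if r < 0.4 else
         "mid" if r < 0.6 else "strong" if r < 0.8 else "drastic")
```

For an AR(1) process the correlation r equals the lag-1 auto-correlation, consistent with the paper's finding that r is governed by the auto-correlation coefficients up to order p.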
Coskuntuncel, Orkun
2013-01-01
The purpose of this study is two-fold: the first aim is to show the effect of outliers on the widely used least squares regression estimator in the social sciences; the second is to compare the classical method of least squares with the robust M-estimator using the coefficient of determination (R²). For this purpose,…
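The contrast between least squares and an M-estimator can be sketched with scikit-learn's HuberRegressor on invented data containing a few gross outliers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 100).reshape(-1, 1)
y = 2.0 * x.ravel() + rng.normal(scale=1.0, size=100)
y[x.ravel().argsort()[:5]] += 40.0     # gross outliers at the smallest x

ols = LinearRegression().fit(x, y)     # classical least squares
huber = HuberRegressor().fit(x, y)     # robust M-estimator (Huber loss)
# The outliers drag the least-squares slope away from the true value 2,
# while the M-estimator largely resists them.
```

Comparing the two fitted slopes (and their R² values) on such data reproduces the qualitative point of the study: a handful of outliers can badly distort the least squares estimate.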
Methods of Analysis of Electronic Money in Banks
Directory of Open Access Journals (Sweden)
Melnychenko Oleksandr V.
2014-03-01
Full Text Available The article identifies methods of analysis of electronic money, formalises its instruments and offers an integral indicator, which should be calculated by issuing banks and those banks, which carry out operations with electronic money, issued by other banks. Calculation of the integral indicator would allow complex assessment of activity of the studied bank with electronic money and would allow comparison of parameters of different banks by the aggregate of indicators for the study of the electronic money market, its level of development, etc. The article presents methods which envisage economic analysis of electronic money in banks by the following directions: solvency and liquidity, efficiency of electronic money issue, business activity of the bank and social responsibility. Moreover, the proposed indicators by each of the directions are offered to be taken into account when building integral indicators, with the help of which banks are studied: business activity, profitability, solvency, liquidity and so on.
Electron beam directed energy device and methods of using same
Retsky, Michael W.
2007-10-16
A method and apparatus are disclosed for an electron beam directed energy device. The device consists of an electron gun with one or more electron beams. The device includes one or more accelerating plates with holes aligned for beam passage. The plates may be flat or preferably shaped to direct each electron beam to exit the electron gun at a predetermined orientation. In one preferred application, the device is located in outer space with individual beams that are directed to focus at a distant target to be used to impact and destroy missiles. The aiming of the separate beams is designed to overcome Coulomb repulsion. A method is also presented for directing the beams to a target considering the variable terrestrial magnetic field. In another preferred application, the electron beam is directed into the ground to produce a subsurface x-ray source to locate and/or destroy buried or otherwise hidden objects including explosive devices.
Numerical simulation methods for electron and ion optics
International Nuclear Information System (INIS)
Munro, Eric
2011-01-01
This paper summarizes currently used techniques for simulation and computer-aided design in electron and ion beam optics. Topics covered include: field computation, methods for computing optical properties (including Paraxial Rays and Aberration Integrals, Differential Algebra and Direct Ray Tracing), simulation of Coulomb interactions, space charge effects in electron and ion sources, tolerancing, wave optical simulations and optimization. Simulation examples are presented for multipole aberration correctors, Wien filter monochromators, imaging energy filters, magnetic prisms, general curved axis systems and electron mirrors.
Multilayer electronic component systems and methods of manufacture
Thompson, Dane (Inventor); Wang, Guoan (Inventor); Kingsley, Nickolas D. (Inventor); Papapolymerou, Ioannis (Inventor); Tentzeris, Emmanouil M. (Inventor); Bairavasubramanian, Ramanan (Inventor); DeJean, Gerald (Inventor); Li, RongLin (Inventor)
2010-01-01
Multilayer electronic component systems and methods of manufacture are provided. In this regard, an exemplary system comprises a first layer of liquid crystal polymer (LCP), first electronic components supported by the first layer, and a second layer of LCP. The first layer is attached to the second layer by thermal bonds. Additionally, at least a portion of the first electronic components are located between the first layer and the second layer.
Efficient electronic structure methods applied to metal nanoparticles
DEFF Research Database (Denmark)
Larsen, Ask Hjorth
of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared… The basis set method is used to study the electronic effects for the contiguous range of clusters up to several hundred atoms. The s-electrons hybridize to form electronic shells consistent with the jellium model, leading to electronic magic numbers for clusters with full shells. Large electronic gaps… and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic…
Determination of the Electronics Charge--Electrolysis of Water Method.
Venkatachar, Arun C.
1985-01-01
Presents an alternative method for measuring the electronic charge using data from the electrolysis of acidified distilled water. The process (carried out in a commercially available electrolytic cell) has the advantage of short completion time so that students can determine electron charge and mass in one laboratory period. (DH)
Fatekurohman, Mohamat; Nurmala, Nita; Anggraeni, Dian
2018-04-01
The lungs are among the most important organs of the respiratory system. Disorders of the lungs are various, e.g. pneumonia, emphysema, tuberculosis, and lung cancer; of these, lung cancer is the most harmful. With this in mind, this research applies survival analysis to the factors affecting the endurance of lung cancer patients, comparing the exact, Efron, and Breslow approximations for the hazard ratio in a stratified Cox regression model. The data are based on the medical records of lung cancer patients at the Jember Paru-Paru hospital in 2016 (East Java, Indonesia). The factors potentially affecting the endurance of the lung cancer patients comprise sex, age, hemoglobin, leukocytes, erythrocytes, blood sedimentation rate, therapy status, general condition, and body weight. The results show that the exact method in the stratified Cox regression model performs better than the others. Moreover, the endurance of the patients is affected by their age and general condition.
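The Breslow and Efron approximations compared above differ only in the denominator used at tied event times. A self-contained sketch of the Cox log partial likelihood under both corrections (with invented survival data; the exact method, which enumerates tied orderings, is omitted) is:

```python
import numpy as np

# Tiny illustrative data set with tied event times (hypothetical values).
time  = np.array([2., 2., 2., 4., 5., 5., 7., 9.])
event = np.array([1,  1,  0,  1,  1,  1,  0,  1])
z     = np.array([0.5, 1.0, -0.3, 0.2, 1.2, -0.8, 0.0, 0.7])

def cox_loglik(beta, ties="efron"):
    """Cox log partial likelihood with Breslow or Efron tie handling."""
    ll = 0.0
    for t in np.unique(time[event == 1]):
        tied = (time == t) & (event == 1)   # events tied at this time
        risk = time >= t                    # risk set at this time
        d = tied.sum()
        s = np.exp(beta * z[risk]).sum()
        s_tied = np.exp(beta * z[tied]).sum()
        ll += beta * z[tied].sum()
        for j in range(d):
            # Efron progressively removes a fraction of the tied mass;
            # Breslow keeps the full risk-set sum for every tied event.
            ll -= np.log(s - (j / d) * s_tied if ties == "efron" else s)
    return ll
```

With no ties the two corrections coincide; with ties the Efron likelihood is always at least as large, which is why the choice matters for heavily tied data like daily survival records.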
Directory of Open Access Journals (Sweden)
Mohd Faris Dziauddin
2017-07-01
Full Text Available This study estimates the effect of locational attributes on residential property values in Kuala Lumpur, Malaysia. Geographically weighted regression (GWR) enables local rather than global parameters to be estimated, with the results presented in map form. The results of this study reveal that residential property values are mainly determined by the property’s physical (structural) attributes, but proximity to locational attributes also contributes marginally. The use of GWR in this study is considered a better approach than other methods to examine the effect of locational attributes on residential property values. GWR has the capability to produce meaningful results in which different locational attributes have differential spatial effects across a geographical area on residential property values. This method has the ability to determine the factors on which premiums depend, and in turn it can assist the government in taxation matters.
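A bare-bones numerical sketch of GWR (Gaussian kernel, one structural attribute, all data invented) shows how the local parameter varies across the study area instead of being fixed globally:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))      # property locations
floor_area = rng.uniform(50, 250, n)          # a structural attribute
# Spatially varying price effect: stronger in the "west" of the area.
local_beta = 1.5 - 0.1 * coords[:, 0]
price = 50 + local_beta * floor_area + rng.normal(scale=5, size=n)

def gwr_at(pt, bandwidth=2.0):
    """Weighted least squares at one location (Gaussian-kernel GWR)."""
    d = np.linalg.norm(coords - pt, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)         # spatial kernel weights
    X = np.column_stack([np.ones(n), floor_area])
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ price)  # [intercept, slope]

west = gwr_at(np.array([1.0, 5.0]))[1]
east = gwr_at(np.array([9.0, 5.0]))[1]
```

Mapping the local slope over a grid of points is what produces the map-form output the abstract describes.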
Statistics of electron multiplication in multiplier phototube: iterative method
International Nuclear Information System (INIS)
Grau Malonda, A.; Ortiz Sanchez, J.F.
1985-01-01
An iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5 and for multiplication factors of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (author)
Statistics of electron multiplication in a multiplier phototube; Iterative method
International Nuclear Information System (INIS)
Ortiz, J. F.; Grau, A.
1985-01-01
In the present paper an iterative method is applied to study the variation of dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r electrons. The responses are given for a number of steps between 1 and 5 and for multiplication factors of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs
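The iterative flavour of such a calculation can be sketched by propagating the first two moments of the electron cascade stage by stage, assuming (purely for illustration) Poisson multiplication with gain m at each dynode and exactly one incident electron:

```python
def cascade_moments(m, stages):
    """Iteratively propagate mean and variance through dynode stages,
    assuming Poisson multiplication (offspring mean m, variance m)."""
    mean, var = 1.0, 0.0                  # exactly one incident electron
    for _ in range(stages):
        # Z_n is a sum of Z_{n-1} iid Poisson(m) contributions:
        # E[Z_n] = m E[Z_{n-1}],  Var[Z_n] = E[Z_{n-1}] m + m^2 Var[Z_{n-1}]
        mean, var = m * mean, mean * m + m * m * var
    return mean, var

mean5, var5 = cascade_moments(2.5, 5)     # 5 stages, gain 2.5
```

The recursion reproduces the closed-form branching-process result, variance = m^(n-1) (m^n - 1) m / (m - 1) for Poisson offspring; skewness and kurtosis can be propagated the same way with higher-moment recursions.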
Electron microscopy methods in studies of cultural heritage sites
Energy Technology Data Exchange (ETDEWEB)
Vasiliev, A. L., E-mail: a.vasiliev56@gmail.com; Kovalchuk, M. V.; Yatsishina, E. B. [National Research Centre “Kurchatov Institute” (Russian Federation)
2016-11-15
The history of the development and application of scanning electron microscopy (SEM), transmission electron microscopy (TEM), and energy-dispersive X-ray microanalysis (EDXMA) in studies of cultural heritage sites is considered. In fact, investigations based on these methods began when electron microscopes became a commercial product. Currently, these methods, being developed and improved, help solve many historical enigmas. To date, electron microscopy combined with microanalysis makes it possible to investigate any object, from parchment and wooden articles to pigments, tools, and objects of art. Studies by these methods have revealed that some articles were made by ancient masters using ancient “nanotechnologies”; hence, their comprehensive analysis calls for the latest achievements in the corresponding instrumental methods and sample preparation techniques.
Several cases of electronics and the measuring methods
International Nuclear Information System (INIS)
Supardiyono, Bb.; Kamadi, J.; Suparmono, M.; Indarto.
1980-01-01
Several cases of electronics and the measuring methods, covering electric conductivity and electric potential of analog systems, electric current, electric conductivity and electric potential of semiconductor diodes, and characteristics of transistors are described. (SMN)
Electron microscopy methods in studies of cultural heritage sites
Vasiliev, A. L.; Kovalchuk, M. V.; Yatsishina, E. B.
2016-11-01
The history of the development and application of scanning electron microscopy (SEM), transmission electron microscopy (TEM), and energy-dispersive X-ray microanalysis (EDXMA) in studies of cultural heritage sites is considered. In fact, investigations based on these methods began when electron microscopes became a commercial product. Currently, these methods, being developed and improved, help solve many historical enigmas. To date, electron microscopy combined with microanalysis makes it possible to investigate any object, from parchment and wooden articles to pigments, tools, and objects of art. Studies by these methods have revealed that some articles were made by ancient masters using ancient "nanotechnologies"; hence, their comprehensive analysis calls for the latest achievements in the corresponding instrumental methods and sample preparation techniques.
Electron microscopy methods in studies of cultural heritage sites
International Nuclear Information System (INIS)
Vasiliev, A. L.; Kovalchuk, M. V.; Yatsishina, E. B.
2016-01-01
The history of the development and application of scanning electron microscopy (SEM), transmission electron microscopy (TEM), and energy-dispersive X-ray microanalysis (EDXMA) in studies of cultural heritage sites is considered. In fact, investigations based on these methods began when electron microscopes became a commercial product. Currently, these methods, being developed and improved, help solve many historical enigmas. To date, electron microscopy combined with microanalysis makes it possible to investigate any object, from parchment and wooden articles to pigments, tools, and objects of art. Studies by these methods have revealed that some articles were made by ancient masters using ancient “nanotechnologies”; hence, their comprehensive analysis calls for the latest achievements in the corresponding instrumental methods and sample preparation techniques.
Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.
2017-01-01
Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" as a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensional…
Method of determining the position of an irradiated electron beam
International Nuclear Information System (INIS)
Fukuda, Wataru.
1967-01-01
The present invention relates to a method of determining the position of an irradiated electron beam; in particular, to a novel method of detecting the position of a p-n junction when an electron beam is irradiated onto a semiconductor wafer, so that the position of the beam relative to that junction can be controlled. When the electron beam is irradiated onto a semiconductor wafer that possesses a p-n junction, the position of the junction, and hence of the irradiated beam, may be ascertained by detecting the electromotive force arising at the junction with a metal disposed in the proximity of, but without mechanical contact with, the wafer. Furthermore, for any semiconductor wafer having at least one p-n junction, the present invention allows that junction to be used to determine the position of an irradiated electron beam. Thus, according to the present invention, the electromotive force resulting from the p-n junction may easily be detected by electrostatic coupling, enabling the position of the irradiated electron beam to be accurately determined. (Masui, R.)
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, the existing expression estimation methods usually process each RNA-seq sample individually, ignoring the fact that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.
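A toy stand-in for the sparse estimation step (not SSRSeq itself): given an assumed read/isoform compatibility matrix, a non-negative Lasso recovers a sparse vector of isoform expression levels from position-level read counts. All matrices and values below are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
# Toy model: 30 genomic positions, 6 candidate isoforms; A[i, j] is the
# (assumed) contribution of isoform j to the read count at position i.
A = rng.random((30, 6)) * rng.integers(0, 2, size=(30, 6))
expr_true = np.array([5.0, 0.0, 0.0, 3.0, 0.0, 0.0])   # sparse truth
reads = A @ expr_true + 0.1 * rng.normal(size=30)

# Sparse non-negative regression as a simple stand-in for the paper's
# structured sparse estimator (which adds group structure on top).
fit = Lasso(alpha=0.05, positive=True).fit(A, reads)
```

The L1 penalty drives the unexpressed isoforms toward zero, which is the "sparse specificity" the abstract refers to; SSRSeq additionally structures the penalty across samples.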
Directory of Open Access Journals (Sweden)
Francis Markham
2017-05-01
Full Text Available Abstract. Background: Many jurisdictions regularly conduct surveys to estimate the prevalence of problem gambling in their adult populations. However, the comparison of such estimates is problematic due to methodological variations between studies. Total consumption theory suggests that an association between mean electronic gaming machine (EGM) and casino gambling losses and problem gambling prevalence estimates may exist. If this is the case, then changes in EGM losses may be used as a proxy indicator for changes in problem gambling prevalence. To test for this association, this study examines the relationship between aggregated losses on electronic gaming machines (EGMs) and problem gambling prevalence estimates for Australian states and territories between 1994 and 2016. Methods: A Bayesian meta-regression analysis of 41 cross-sectional problem gambling prevalence estimates was undertaken using EGM gambling losses, year of survey and methodological variations as predictor variables. General-population studies of adults in Australian states and territories published before 1 July 2016 were considered in scope; 41 studies were identified, with a total of 267,367 participants. Problem gambling prevalence, moderate-risk problem gambling prevalence, problem gambling screen, administration mode and frequency threshold were extracted from the surveys. Administrative data on EGM and casino gambling losses were extracted from government reports and expressed as the proportion of household disposable income lost. Results: Money lost on EGMs is correlated with problem gambling prevalence. An increase of 1% of household disposable income lost on EGMs and in casinos was associated with problem gambling prevalence estimates that were 1.33 times higher [95% credible interval 1.04, 1.71]. There was no clear association between EGM losses and moderate-risk problem gambling prevalence estimates. Moderate-risk problem gambling prevalence estimates were not explained by
Directory of Open Access Journals (Sweden)
Ying-Hsin Chang
2013-01-01
Full Text Available Human estrogen receptor (ER) isoforms, ERα and ERβ, have long been an important focus in the field of biology. To better understand the structural features associated with the binding of ERα ligands to ERα and to modulate their function, several QSAR models, including CoMFA, CoMSIA, SVR, and LR methods, have been employed to predict the inhibitory activity of 68 raloxifene derivatives. In the SVR and LR modeling, 11 descriptors were selected through feature ranking and sequential feature addition/deletion to generate equations to predict the inhibitory activity toward ERα. Among the four descriptors that constantly appear in the various generated equations, two agree with the CoMFA and CoMSIA steric fields and another two can be correlated with the calculated electrostatic potential of ERα.
Methods for recovering metals from electronic waste, and related systems
Lister, Tedd E; Parkman, Jacob A; Diaz Aldana, Luis A; Clark, Gemma; Dufek, Eric J; Keller, Philip
2017-10-03
A method of recovering metals from electronic waste comprises providing a powder comprising electronic waste in at least a first reactor and a second reactor and providing an electrolyte comprising at least ferric ions in an electrochemical cell in fluid communication with the first reactor and the second reactor. The method further includes contacting the powders within the first reactor and the second reactor with the electrolyte to dissolve at least one base metal from each reactor into the electrolyte and reduce at least some of the ferric ions to ferrous ions. The ferrous ions are oxidized at an anode of the electrochemical cell to regenerate the ferric ions. The powder within the second reactor comprises a higher weight percent of the at least one base metal than the powder in the first reactor. Additional methods of recovering metals from electronic waste are also described, as well as an apparatus of recovering metals from electronic waste.
A simultaneous electron energy and dosimeter calibration method for an electron beam irradiator
International Nuclear Information System (INIS)
Tanaka, R.; Sunaga, H.; Kojima, T.
1991-01-01
In radiation processing using electron accelerators, the reproducibility of absorbed dose in the product depends not only on the variation of beam current and conveyor speed, but also on variations of other accelerator parameters. This requires routine monitoring of the beam current and the scan width, and also requires periodic calibration of routine dosimeters (usually film dosimeters), of the electron energy, and of other radiation field parameters. The electron energy calibration is important especially for food processing. The dose calibration method using partial absorption calorimeters provides only information about absorbed dose. Measurement of average electron current density provides basic information about the radiation field formed by the beam scanning and scattering at the beam window, though it does not allow direct dose calibration. The total absorption calorimeter with a thick absorber allows dose and dosimeter calibration, if the depth profile of relative dose in a reference absorber is given experimentally. It also allows accurate calibration of the average electron energy at the surface of the calorimeter core, if the electron fluence received by the calorimeter is measured at the same time. This means that both electron energy and dosimeters can be simultaneously calibrated by irradiation of a combined system including the calorimeter, the detector of the electron current density meter, and a thick reference absorber for depth profile measurement of relative dose. We have developed a simple and multifunctional system using the combined calibration method for 5 MeV electron beams. The paper describes a simultaneous calibration method for electron energy and film dosimeters, and describes the electron current density meter, the total absorption calorimeter, and the characteristics of this method. (author). 13 refs, 7 figs, 3 tabs
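The energy half of the simultaneous calibration reduces to simple arithmetic: the calorimetric heat divided by the number of electrons (obtained from the measured charge) gives the mean electron energy at the core surface. The numbers below are invented round values, not measurements from the paper.

```python
# Mean electron energy from total-absorption calorimetry plus a charge
# (fluence) measurement.  All input values are hypothetical.
E_CHARGE = 1.602176634e-19            # elementary charge, C

heat_j   = 0.50                       # J absorbed by the calorimeter core
charge_c = 1.0e-7                     # C collected (current x time)

n_electrons = charge_c / E_CHARGE
energy_mev = heat_j / n_electrons / E_CHARGE / 1e6   # J -> eV -> MeV
```

Note that the elementary charge cancels, so the mean energy in eV is simply heat/charge in volts; with these illustrative inputs the result is a 5 MeV beam, matching the energy range the abstract mentions.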
Directory of Open Access Journals (Sweden)
Corrado Dimauro
2010-01-01
Full Text Available Two methods of SNP pre-selection based on single-marker regression for the estimation of genomic breeding values (G-EBVs) were compared using simulated data provided by the XII QTL-MAS workshop: (i) Bonferroni correction of the significance threshold and (ii) a permutation test to obtain the reference distribution of the null hypothesis and identify significant markers at the P<0.01 and P<0.001 significance thresholds. From the set of markers significant at P<0.001, random subsets of 50% and 25% of the markers were extracted to evaluate the effect of further reducing the number of significant SNPs on G-EBV predictions. The Bonferroni correction method allowed the identification of 595 significant SNPs, which gave the best G-EBV accuracies in the prediction generations (82.80%). The permutation methods gave slightly lower G-EBV accuracies even though a larger number of SNPs resulted significant (2,053 and 1,352 for the 0.01 and 0.001 significance thresholds, respectively). Interestingly, halving or dividing by four the number of SNPs significant at P<0.001 resulted in only a slight decrease in G-EBV accuracies. The genetic structure of the simulated population, with few QTL carrying large effects, might have favoured the Bonferroni method.
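The two pre-selection strategies can be sketched on simulated genotypes. Everything below is invented for illustration (marker count, effect size, permutation count), and the permutation test is shown in a max-statistic variant rather than the per-marker reference distributions used in the paper.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(6)
n, p = 400, 1000                                    # individuals x markers
G = rng.integers(0, 3, size=(n, p)).astype(float)   # genotypes 0/1/2
y = 0.5 * G[:, 0] + rng.normal(size=n)              # marker 0 is the QTL

Gc = (G - G.mean(0)) / G.std(0)                     # standardized genotypes

def abs_corr(y_vec):
    """|correlation| of every marker with the phenotype
    (equivalent to a single-marker regression test statistic)."""
    yc = (y_vec - y_vec.mean()) / y_vec.std()
    return np.abs(Gc.T @ yc) / n

r_obs = abs_corr(y)

# (i) Bonferroni: normal-approximation threshold at alpha = 0.01 / p.
z = NormalDist().inv_cdf(1 - 0.01 / (2 * p))
bonf_selected = np.flatnonzero(r_obs > z / np.sqrt(n))

# (ii) Permutation: 99th percentile of the maximum statistic under
# shuffled phenotypes gives the empirical P<0.01 threshold.
null_max = np.array([abs_corr(rng.permutation(y)).max() for _ in range(200)])
perm_selected = np.flatnonzero(r_obs > np.quantile(null_max, 0.99))
```

Both rules should retain the true QTL marker while discarding almost all null markers; how many borderline markers survive differs between the two thresholds, which is the trade-off the study quantifies.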
Akbari, Somaye; Zebardast, Tannaz; Zarghi, Afshin; Hajimahdi, Zahra
2017-01-01
The COX-2 inhibitory activities of some 1,4-dihydropyridine and 5-oxo-1,4,5,6,7,8-hexahydroquinoline derivatives were modeled by quantitative structure-activity relationship (QSAR) analysis using the stepwise multiple linear regression (SW-MLR) method. The built model was robust and predictive, with correlation coefficients (R²) of 0.972 and 0.531 for the training and test groups, respectively. The quality of the model was evaluated by leave-one-out (LOO) cross-validation (LOO correlation coefficient (Q²) of 0.943) and Y-randomization. We also employed a leverage approach for defining the applicability domain of the model. Based on the QSAR model results, the COX-2 inhibitory activity of the selected data set correlated with the BEHm6 (highest eigenvalue n. 6 of the Burden matrix/weighted by atomic masses), Mor03u (signal 03/unweighted) and IVDE (mean information content on the vertex degree equality) descriptors derived from the compounds' structures.
An electron moiré method for a common SEM
Institute of Scientific and Technical Information of China (English)
Y.M.Xing; S.Kishimoto; Y.R.Zhao
2006-01-01
In the electron moiré method, a high-frequency grating is used to measure microscopic deformation, which promises significant potential applications for the method in the microscopic analysis of materials. However, a special beam-scanning control device is required to produce a grating and generate a moiré fringe pattern in the scanning electron microscope (SEM). Because only a few SEMs used in materials science studies are equipped with this device, the use of the electron moiré method is limited. In this study, an electron moiré method for a common SEM without the beam control device is presented. A grating based on a multi-scanning concept is fabricated in any observing mode. A real-time moiré pattern can also be generated in the SEM or in an optical filtering system. With the beam control device no longer a prerequisite, the electron moiré method can be more widely used. The experimental results from three different types of SEMs show that high-quality gratings with uniform lines and less pitch error can be fabricated by this method, and that moiré patterns can also be correctly generated.
Directory of Open Access Journals (Sweden)
Lüdtke Rainer
2008-08-01
Abstract Background: Regression to the mean (RTM) occurs in situations of repeated measurements when extreme values are followed by measurements in the same subjects that are closer to the mean of the underlying population. In uncontrolled studies such changes are likely to be interpreted as a real treatment effect. Methods: Several statistical approaches have been developed to analyse such situations, including the algorithm of Mee and Chua, which assumes a known population mean μ. We extend this approach to a situation where μ is unknown and suggest varying it systematically over a range of reasonable values. Using differential calculus we provide formulas to estimate the range of μ where treatment effects are likely to occur when RTM is present. Results: We successfully applied our method to three real-world examples denoting situations in which (a) no treatment effect can be confirmed regardless of which μ is true, (b) a treatment effect must be assumed independent of the true μ, and (c) the results of uncontrolled studies are appraised. Conclusion: Our method can be used to separate the wheat from the chaff in situations when one has to interpret the results of uncontrolled studies. In meta-analyses, health-technology reports or systematic reviews this approach may be helpful to clarify the evidence given by uncontrolled observational studies.
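Under a bivariate-normal model, the expected RTM effect for subjects selected above a baseline cutoff has a closed form, which makes it straightforward to vary the unknown μ systematically as the authors suggest. A sketch (the exact Mee-Chua adjustment is not reproduced; the formula assumes the standard truncated-normal hazard):

```python
import math

def rtm_effect(mu, sigma, rho, cutoff):
    """Expected regression-to-the-mean effect for subjects selected with
    baseline above `cutoff`, under a bivariate-normal model with
    within-subject correlation rho: (1 - rho) * sigma * phi(z) / P(X > c)."""
    z = (cutoff - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))  # P(X > cutoff)
    return (1.0 - rho) * sigma * phi / tail

def treatment_effect_over_mu(mean_change, sigma, rho, cutoff, mu_grid):
    """Adjusted effect = observed mean decline minus expected RTM, for each
    candidate value of the unknown population mean mu."""
    return [(mu, mean_change - rtm_effect(mu, sigma, rho, cutoff)) for mu in mu_grid]
```

Scanning the grid shows for which assumed μ the adjusted effect stays positive (a treatment effect must be assumed) or vanishes (RTM alone explains the change).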
Electron paramagnetic resonance: A new method of quaternary dating
International Nuclear Information System (INIS)
Poupeau, G.; Rossi, A.; Teles, M.M.; Danon, J.
1984-01-01
Significant progress has occurred in recent years in Quaternary geochronology. One of these advances is the emergence of a new dating approach, the electron spin resonance (ESR) method. The aim of this paper is to briefly review the method and discuss some aspects of the work at CBPF. (Author) [pt]
Electron paramagnetic resonance: a new method of quaternary dating
International Nuclear Information System (INIS)
Poupeau, G.; Rossi, A.; Universidade Federal Rural do Rio de Janeiro; Telles, M.; Danon, J.
1984-01-01
Significant progress has occurred in recent years in Quaternary geochronology. One of these advances is the emergence of a new dating approach, the electron spin resonance (ESR) method. The aim of this paper is to briefly review the method and discuss some aspects of the work at CBPF. (Author) [pt]
Regression analysis by example
Chatterjee, Samprit
2012-01-01
Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association. Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded
Apparatus and method for generating high density pulses of electrons
International Nuclear Information System (INIS)
Lee, C.; Oettinger, P.E.
1981-01-01
An apparatus and method are described for the production of high-density pulses of electrons using a laser-energized emitter. Caesium atoms from a low-pressure vapour atmosphere are adsorbed on, and migrate from, a metallic target rapidly heated by a laser to a high temperature. Because this heating time is short compared with the residence time of the caesium atoms adsorbed on the target surface, copious electrons are emitted, forming a high-current-density pulse. (U.K.)
"In situ" electronic testing method of a neutron detector performance
International Nuclear Information System (INIS)
Gonzalez, J.M.; Levai, F.
1987-01-01
The method allows detection of any important change in the electrical characteristics of a neutron sensor channel. It checks the response signal produced by an electronic detector circuit when a pulse generator is connected as the input signal to the high-voltage supply. The electronic circuit compares the previously measured detector capacitance value against a reference value, which is adjusted in a window-type comparator circuit to detect any significant degradation of the capacitance value in the detector-cable system. The "in situ" electronic testing method of neutron detector performance has been verified in a laboratory environment as a potential method to detect any significant change in the capacitance value of a nuclear sensor and its connecting cable, also checking: detector disconnections, cable disconnections, length changes of the connecting cable, electric short or open circuits in the sensor channel, and any electrical fault in the detector-connector-cable system. The experimental work was carried out by simulating several electrical changes in a nuclear sensor-cable system from a linear D.C. channel that measures reactor power during nuclear reactor operation, at the Training Reactor Electronic Laboratory. The results and conclusions obtained at the laboratory were verified satisfactorily in the electronic instrumentation of the Budapest Technical University Training Reactor, Hungary.
International Nuclear Information System (INIS)
Arsenault, Louis-François; Millis, Andrew J; Neuberg, Richard; Hannah, Lauren A
2017-01-01
We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved. (paper)
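The core idea above, building a database of forward-problem outputs and fitting a regression from outputs back to inputs, can be sketched with a toy Fredholm kernel and ridge regression (the paper's kernels, constraints, and learning machinery are not reproduced; the smooth input basis is an assumption for illustration):

```python
import numpy as np

def build_forward_database(kernel, n_samples, rng=None):
    """Sample smooth 'spectra' from a low-dimensional smooth basis and push
    them through the forward Fredholm kernel (the stable forward problem)."""
    rng = np.random.default_rng(rng)
    x = np.linspace(0.0, 1.0, kernel.shape[1])
    basis = np.stack([np.exp(-((x - c) ** 2) / (2 * 0.15 ** 2))
                      for c in (0.25, 0.5, 0.75)])  # three smooth modes
    coeffs = rng.uniform(0.0, 1.0, size=(n_samples, basis.shape[0]))
    spectra = coeffs @ basis
    return spectra @ kernel.T, spectra  # (outputs G, inputs A)

def fit_inverse(G, A, alpha=1e-3):
    """Ridge regression from outputs back to inputs: a regularized, learned
    inverse of the ill-conditioned Fredholm kernel."""
    return np.linalg.solve(G.T @ G + alpha * np.eye(G.shape[1]), G.T @ A)
```

In the paper, the regression outputs would additionally be projected onto the subspace of functions satisfying the physical constraints (positivity, sum rules); that projection step is omitted here.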
System for cooling hybrid vehicle electronics, method for cooling hybrid vehicle electronics
France, David M.; Yu, Wenhua; Singh, Dileep; Zhao, Weihuan
2017-11-21
The invention provides a single-radiator cooling system for use in hybrid electric vehicles, the system comprising a surface in thermal communication with electronics, and subcooled boiling fluid contacting the surface. The invention also provides a single-radiator method for simultaneously cooling electronics and an internal combustion engine in a hybrid electric vehicle, the method comprising separating a coolant fluid into a first portion and a second portion; directing the first portion to the electronics and the second portion to the internal combustion engine for a time sufficient to maintain the temperature of the electronics at or below 175°C; combining the first and second portions to reestablish the coolant fluid; and directing the reestablished coolant fluid to the single radiator for a time sufficient to decrease its temperature to the temperature it had before separation.
Energy Technology Data Exchange (ETDEWEB)
Golusin, Mirjana [Educons University, Vojvode Putnika st. bb, 21013 Sremska Kamnica (RS); Ivanovic, Olja Munitlak [Faculty of Business in Services, Vojvode Putnik st. bb, 21013 Sremska Kamenica (RS); Teodorovic, Natasa [Faculty of Entrepreneurial Management, Modene st. 5, 21000 Novi Sad (RS)
2011-01-15
The need for preservation and adequate management of environmental quality requires the development of new methods and techniques by which the achieved degree of sustainable development, and the laws governing the relationships among its subsystems, can be determined. The main objective of this research is to point to a strong contradiction between the development of the ecological and economic subsystems. To improve on previous research, this study suggests the use of linear evaluation, by which it is possible to determine the exact degree of contradiction between these two subsystems and to define the regularities as well as the deviations. The authors present the essential steps that were used. Conducted by the method of linear regression, this research shows a significant negative correlation between ecological and economic subsystem indicators; the value R² = 0.58 confirms the expected contradiction between the two subsystems. Observing sustainable development as a two-dimensional system that includes ecological and economic indicators, the authors propose modelling the relationship between economic and ecological development as an orthogonal distance between the degree of the current state, measured by the relation between economic and ecological indicators of sustainable development, and the degree obtained in the traditional way. The method used in this research proved extremely suitable for modelling the relationship between the ecological and economic subsystems of sustainable development. The research was conducted on a repeated sample of countries of South East Europe, including data for France and Germany as two countries at the highest level of development in the European Union. (author)
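The linear-regression step, fitting one subsystem's indicator against the other and reading off the slope sign and R², reduces to ordinary least squares (the indicator values below are synthetic, not the study's data):

```python
import numpy as np

def linear_fit(x, y):
    """OLS slope, intercept and R^2 for two indicator series."""
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    r2 = 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return slope, intercept, r2
```

A negative slope with a sizeable R², as in the study's R² = 0.58, quantifies the contradiction between the two subsystems.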
Energy Technology Data Exchange (ETDEWEB)
Lee, Sang Dae; Lohumi, Santosh; Cho, Byoung Kwan [Dept. of Biosystems Machinery Engineering, Chungnam National University, Daejeon (Korea, Republic of); Kim, Moon Sung [United States Department of Agriculture Agricultural Research Service, Washington (United States); Lee, Soo Hee [Life and Technology Co.,Ltd., Hwasung (Korea, Republic of)
2014-08-15
This study was conducted to develop a non-destructive detection method for adulterated powder products using Raman spectroscopy and partial least squares regression (PLSR). Garlic and ginger powder, which are used as natural seasonings and in health supplement foods, were selected for this experiment. Samples were adulterated with corn starch in concentrations of 5-35%. PLSR models for adulterated garlic and ginger powders were developed and their performances evaluated using cross-validation. The R²c and SEC of the optimal PLSR model were 0.99 and 2.16 for the garlic powder samples, and 0.99 and 0.84 for the ginger samples, respectively. The variable importance in projection (VIP) score is a useful and simple tool for evaluating the importance of each variable in a PLSR model. After pre-selection based on the VIP scores, the Raman spectral data were reduced by one third. New PLSR models, based on the reduced number of wavelengths selected by the VIP score technique, gave good predictions for the adulterated garlic and ginger powder samples.
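VIP scores can be computed from the weights, scores, and loadings of a PLS fit; a compact NIPALS-based PLS1 sketch (the spectral data and the paper's model settings are not reproduced; synthetic features stand in for Raman wavelengths):

```python
import numpy as np

def pls1_vip(X, y, n_comp=2):
    """PLS1 via NIPALS; returns VIP scores (mean VIP^2 equals 1 by construction,
    so variables with VIP > 1 are commonly considered important)."""
    E = X - X.mean(axis=0)
    f = y - y.mean()
    W, T, Q = [], [], []
    for _ in range(n_comp):
        w = E.T @ f
        w = w / np.linalg.norm(w)          # unit-norm weight vector
        t = E @ w                           # scores
        q = (t @ f) / (t @ t)               # y-loading
        p = (E.T @ t) / (t @ t)             # X-loadings
        E = E - np.outer(t, p)              # deflate X
        f = f - q * t                       # deflate y
        W.append(w); T.append(t); Q.append(q)
    W = np.array(W).T                       # (n_features, n_comp)
    T = np.array(T).T                       # (n_samples, n_comp)
    Q = np.array(Q)
    ssy = Q ** 2 * (T ** 2).sum(axis=0)     # y-variance explained per component
    return np.sqrt(X.shape[1] * (W ** 2 * ssy).sum(axis=1) / ssy.sum())
```

Keeping only wavelengths with VIP above a chosen cutoff and refitting mirrors the paper's pre-selection step.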
Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey
2014-04-15
In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.
International Nuclear Information System (INIS)
Lee, Sang Dae; Lohumi, Santosh; Cho, Byoung Kwan; Kim, Moon Sung; Lee, Soo Hee
2014-01-01
This study was conducted to develop a non-destructive detection method for adulterated powder products using Raman spectroscopy and partial least squares regression (PLSR). Garlic and ginger powder, which are used as natural seasonings and in health supplement foods, were selected for this experiment. Samples were adulterated with corn starch in concentrations of 5-35%. PLSR models for adulterated garlic and ginger powders were developed and their performances evaluated using cross-validation. The R²c and SEC of the optimal PLSR model were 0.99 and 2.16 for the garlic powder samples, and 0.99 and 0.84 for the ginger samples, respectively. The variable importance in projection (VIP) score is a useful and simple tool for evaluating the importance of each variable in a PLSR model. After pre-selection based on the VIP scores, the Raman spectral data were reduced by one third. New PLSR models, based on the reduced number of wavelengths selected by the VIP score technique, gave good predictions for the adulterated garlic and ginger powder samples.
Matson, Johnny L.; Kozlowski, Alison M.
2010-01-01
Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…
New Combined Electron-Beam Methods of Wastewater Purification
International Nuclear Information System (INIS)
Pikaev, A.K.; Makarov, I.E.; Ponomarev, A.V.; Kartasheva, L.I.; Podzorova, E.A.; Chulkov, V.N.; Han, B.; Kim, D.K.
1999-01-01
The paper is a brief review of the results obtained with the participation of the authors from the study on combined electron-beam methods for purification of some wastewaters. The data on purification of wastewaters containing dyes or hydrogen peroxide and municipal wastewater in the aerosol flow are considered
Improved coating and fixation methods for scanning electron microscope autoradiography
International Nuclear Information System (INIS)
Weiss, R.L.
1984-01-01
A simple apparatus for emulsion coating is described. The apparatus is inexpensive and easily assembled in a standard glass shop. Emulsion coating for scanning electron microscope autoradiography with this apparatus consistently yields uniform layers. When used in conjunction with newly described fixation methods, this new approach produces reliable autoradiographs of undamaged specimens
Thick-Restart Lanczos Method for Electronic Structure Calculations
International Nuclear Information System (INIS)
Simon, Horst D.; Wang, L.-W.; Wu, Kesheng
1999-01-01
This paper describes two recent innovations related to the classic Lanczos method for eigenvalue problems, namely the thick-restart technique and dynamic restarting schemes. Combining these two new techniques we are able to implement an efficient eigenvalue problem solver. This paper will demonstrate its effectiveness on one particular class of problems for which this method is well suited: linear eigenvalue problems generated from non-self-consistent electronic structure calculations
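A plain Lanczos iteration with full reorthogonalization, the starting point that thick restart extends by retaining converged Ritz vectors across restarts, can be sketched as (a generic dense-matrix toy, not the paper's electronic-structure solver):

```python
import numpy as np

def lanczos_ritz(A, k=40, rng=0):
    """Plain Lanczos: build a k-step tridiagonal T and return its Ritz values
    (ascending). A thick restart would keep a few converged Ritz vectors and
    continue from the compressed basis instead of starting over."""
    rng = np.random.default_rng(rng)
    q = rng.normal(size=A.shape[0])
    q /= np.linalg.norm(q)
    Q, alpha, beta = [q], [], []
    for j in range(k):
        w = A @ Q[-1]
        if j > 0:
            w = w - beta[-1] * Q[-2]
        a = Q[-1] @ w
        w = w - a * Q[-1]
        for qi in Q:  # full reorthogonalization keeps the basis orthogonal
            w = w - (qi @ w) * qi
        alpha.append(a)
        b = np.linalg.norm(w)
        if b < 1e-12 or j == k - 1:
            break  # invariant subspace found, or step budget exhausted
        beta.append(b)
        Q.append(w / b)
    m = len(alpha)
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T)
```

The extreme Ritz values converge to the extreme eigenvalues of A long before k reaches the matrix dimension, which is what makes restarted variants attractive for large electronic-structure matrices.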
Variational methods in electron-atom scattering theory
Nesbet, Robert K
1980-01-01
The investigation of scattering phenomena is a major theme of modern physics. A scattered particle provides a dynamical probe of the target system. The practical problem of interest here is the scattering of a low energy electron by an N-electron atom. It has been difficult in this area of study to achieve theoretical results that are even qualitatively correct, yet quantitative accuracy is often needed as an adjunct to experiment. The present book describes a quantitative theoretical method, or class of methods, that has been applied effectively to this problem. Quantum mechanical theory relevant to the scattering of an electron by an N-electron atom, which may gain or lose energy in the process, is summarized in Chapter 1. The variational theory itself is presented in Chapter 2, both as currently used and in forms that may facilitate future applications. The theory of multichannel resonance and threshold effects, which provide a rich structure to observed electron-atom scattering data, is presented in Cha...
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-07-01
Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
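The standard way to include Pauli blocking in an MC scattering step is rejection: a proposed transition into a state with occupation f is accepted with probability 1 − f. A toy cell-based sketch (the graphene band structure and e-e matrix elements of the paper are not modeled):

```python
import numpy as np

def scatter_with_pauli(counts, capacity, src, dst, rng):
    """One MC scattering attempt moving an electron from cell `src` to `dst`.

    The move is accepted with probability (1 - f_dst), the Pauli blocking
    factor, where f_dst = counts[dst] / capacity is the target occupation.
    A full cell (f = 1) never accepts, so f stays within [0, 1]."""
    f_dst = counts[dst] / capacity
    if counts[src] > 0 and rng.random() < 1.0 - f_dst:
        counts[src] -= 1
        counts[dst] += 1
        return True
    return False
```

In a full simulator this acceptance test is applied to the final states of every e-e scattering event, which is what keeps the distribution function bounded by unity in the degenerate regime.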
Gómez-Valent, Adrià; Amendola, Luca
2018-04-01
In this paper we present new constraints on the Hubble parameter H0 using: (i) the available data on H(z) obtained from cosmic chronometers (CCH); (ii) the Hubble rate data points extracted from the Type Ia supernovae (SnIa) of the Pantheon compilation and the Hubble Space Telescope (HST) CANDELS and CLASH Multi-Cycle Treasury (MCT) programs; and (iii) the local HST measurement of H0 provided by Riess et al. (2018), H0HST = (73.45 ± 1.66) km/s/Mpc. Various determinations of H0 using the Gaussian processes (GPs) method and the most up-to-date list of CCH data have recently been provided by Yu, Ratra & Wang (2018). Using the Gaussian kernel they find H0 = (67.42 ± 4.75) km/s/Mpc. Here we extend their analysis to also include the most recently released and complete set of SnIa data, which allows us to reduce the uncertainty by a factor of ~3 with respect to the result found by considering only the CCH information. We obtain H0 = (67.06 ± 1.68) km/s/Mpc, which again favors the lower range of values for H0 and is in tension with H0HST. The tension reaches the 2.71σ level. We also round off the GP determination by taking into account the error propagation of the kernel hyperparameters when the CCH data with and without H0HST are used in the analysis. In addition, we present a novel method to reconstruct functions from data, which consists of a weighted sum of polynomial regressions (WPR). We apply it from a cosmographic perspective to reconstruct H(z) and estimate H0 from CCH and SnIa measurements. The result obtained with this method, H0 = (68.90 ± 1.96) km/s/Mpc, is fully compatible with the GP ones. Finally, a more conservative GP+WPR value is also provided, H0 = (68.45 ± 2.00) km/s/Mpc, which is still almost 2σ away from H0HST.
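The WPR idea, a weighted sum of polynomial regressions with weights set by each fit's quality, can be sketched on synthetic H(z) data (the inverse reduced-χ² weighting below is an assumption for illustration; the paper's exact prescription may differ):

```python
import numpy as np

def wpr_reconstruct(z, H, sigma, degrees=(1, 2, 3), z_eval=0.0):
    """Weighted sum of polynomial regressions: each degree-d weighted fit
    contributes its prediction at z_eval with weight proportional to the
    inverse of its reduced chi^2."""
    preds, weights = [], []
    for d in degrees:
        coef = np.polyfit(z, H, d, w=1.0 / sigma)       # weighted LS fit
        resid = (np.polyval(coef, z) - H) / sigma
        chi2_red = (resid ** 2).sum() / max(len(z) - (d + 1), 1)
        preds.append(np.polyval(coef, z_eval))
        weights.append(1.0 / chi2_red)
    weights = np.array(weights) / np.sum(weights)
    return float(np.dot(weights, preds))
```

Evaluating the weighted reconstruction at z = 0 yields the H0 estimate, analogous to the H0 = (68.90 ± 1.96) km/s/Mpc value quoted above.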
Fast electronic structure methods for strongly correlated molecular systems
International Nuclear Information System (INIS)
Head-Gordon, Martin; Beran, Gregory J O; Sodt, Alex; Jung, Yousung
2005-01-01
A short review is given of newly developed fast electronic structure methods that are designed to treat molecular systems with strong electron correlations, such as diradicaloid molecules, for which standard electronic structure methods such as density functional theory are inadequate. These new local correlation methods are based on coupled cluster theory within a perfect pairing active space, containing either a linear or quadratic number of pair correlation amplitudes, to yield the perfect pairing (PP) and imperfect pairing (IP) models. This reduces the scaling of the coupled cluster iterations to no worse than cubic, relative to the sixth power dependence of the usual (untruncated) coupled cluster doubles model. A second order perturbation correction, PP(2), to treat the neglected (weaker) correlations is formulated for the PP model. To ensure minimal prefactors, in addition to favorable size-scaling, highly efficient implementations of PP, IP and PP(2) have been completed, using auxiliary basis expansions. This yields speedups of almost an order of magnitude over the best alternatives using 4-center 2-electron integrals. A short discussion of the scope of accessible chemical applications is given
Development and application of advanced methods for electronic structure calculations
DEFF Research Database (Denmark)
Schmidt, Per Simmendefeldt
This thesis relates to improvements and applications of beyond-DFT methods for electronic structure calculations as applied in computational materials science. The improvements are of both technical and principal character. The well-known GW approximation is optimized for accurate calculations of electronic excitations in two-dimensional materials by exploiting exact limits of the screened Coulomb potential. This approach reduces the computational time by an order of magnitude, enabling large-scale applications. The GW method is further improved by including so-called vertex corrections. This turns... For this reason, part of this thesis relates to developing and applying, using a genetic algorithm, a new method for constructing so-called norm-conserving PAW setups that are applicable to GW calculations. Applying the new setups significantly affects the absolute band positions, both for bulk...
Cao, M H; Adeola, O
2016-02-01
The energy values of poultry byproduct meal (PBM) and an animal-vegetable oil blend (A-V blend) were determined in 2 experiments with 288 broiler chickens from d 19 to 25 post hatching. The birds were fed a starter diet from d 0 to 19 post hatching. In each experiment, 144 birds were grouped by weight into 8 replicates of cages with 6 birds per cage. There were 3 diets in each experiment, consisting of one reference diet (RD) and 2 test diets (TD). The TD contained 2 levels of PBM (Exp. 1) or A-V blend (Exp. 2) that replaced the energy sources in the RD at 50 or 100 g/kg (Exp. 1) or 40 or 80 g/kg (Exp. 2), in such a way that the same ratios were maintained for energy ingredients across experimental diets. The ileal digestible energy (IDE), ME, and MEn of PBM and A-V blend were determined by the regression method. Dry matter contents of PBM and A-V blend were 984 and 999 g/kg; the gross energies were 5,284 and 9,604 kcal/kg of DM, respectively. Addition of PBM to the RD in Exp. 1 linearly decreased (P […]) […] A-V blend to the RD linearly increased (P […]) […] A-V blend as follows: IDE = 10,616x + 7.350, r² = 0.96; ME = 10,121x + 0.447, r² = 0.99; MEn = 10,124x + 2.425, r² = 0.99. These data indicate the respective IDE, ME, and MEn values (kcal/kg of DM) of PBM to be 3,537, 3,805, and 3,278, and of the A-V blend to be 10,616, 10,121, and 10,124. © 2015 Poultry Science Association Inc.
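The regression method itself is simple: the energy value of a test ingredient is the slope of ingredient-associated energy intake against test-ingredient intake. A sketch with simulated data (the numbers in the test are synthetic, seeded by the A-V blend ME value reported above):

```python
import numpy as np

def energy_value_by_regression(ingredient_intake, energy_intake):
    """Regression method for feed energy: the slope of ingredient-associated
    energy intake (kcal) on test-ingredient intake (kg DM) estimates the
    energy value in kcal/kg DM; the intercept absorbs the basal diet."""
    slope, intercept = np.polyfit(ingredient_intake, energy_intake, 1)
    return slope, intercept
```

Fitting the same regression to IDE, ME, and MEn intakes yields the three energy values reported for each ingredient.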
International Nuclear Information System (INIS)
Ghasemi, Jahanbakhsh; Asadpour, Saeid; Abdolmaleki, Azizeh
2007-01-01
A quantitative structure-retention relationship (QSRR) study has been carried out on the gas chromatography/electron capture detection (GC/ECD) retention times (tR) of 38 diverse chlorinated pesticides, herbicides, and organohalides using molecular structural descriptors. Modeling of the retention times of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR) and partial least squares (PLS) regression. Stepwise regression in SPSS was used to select the variables that resulted in the best-fitted models. Appropriate models with low standard errors and high correlation coefficients were obtained. Three types of molecular descriptors, including electronic, steric, and thermodynamic, were used to develop a quantitative relationship between the retention times and structural properties. MLR and PLS analyses were carried out to derive the best QSRR models. After variable selection, the MLR and PLS methods were used with leave-one-out cross-validation to build the regression models. The predictive quality of the QSRR models was tested on an external prediction set of 12 compounds randomly chosen from the 38 compounds. The PLS regression method was used to model the structure-retention relationships more accurately. However, the results surprisingly showed more or less the same quality for MLR and PLS modeling according to the squared regression coefficients R², which were 0.951 and 0.948 for MLR and PLS, respectively.
Adaptive multiresolution method for MAP reconstruction in electron tomography
Energy Technology Data Exchange (ETDEWEB)
Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)
2016-11-15
3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. Maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without requiring additional knowledge about the imaging environment and the sample. The other aim is to perform the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than weighted back projection (WBP), the simultaneous iterative reconstruction technique (SIRT), and the sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than conventional reconstruction methods. • The missing wedge and noise problems can be compensated for efficiently by the method.
Ab initio methods for electron-molecule collisions
International Nuclear Information System (INIS)
Collins, L.A.; Schneider, B.I.
1987-01-01
This review concentrates on the recent advances in treating the electronic aspect of the electron-molecule interaction and leaves to other articles the description of the rotational and vibrational motions. Those methods which give the most complete treatment of the direct, exchange, and correlation effects are focused on. Such full treatments are generally necessary at energies below a few Rydbergs (≅ 60 eV). This choice unfortunately necessitates omission of those active and vital areas devoted to the development of model potentials and approximate scattering formulations. The ab initio and model approaches complement each other and are both extremely important to the full explication of the electron-scattering process. Due to the rapid developments of recent years, the approaches that provide the fullest treatment are concentrated on. 81 refs
Choi, Giehae; Bell, Michelle L.; Lee, Jong-Tae
2017-04-01
The land-use regression (LUR) approach to estimating the levels of ambient air pollutants is becoming popular due to its high validity in predicting small-area variations. However, only a few studies have been conducted in Asian countries, and much less research has compared the performances and applied estimates of different exposure assessments, including LUR. The main objectives of the current study were to conduct nitrogen dioxide (NO2) exposure assessment with four methods, including LUR, in the Republic of Korea, to compare the model performances, and to estimate the empirical NO2 exposures of a cohort. The study population was defined as the year 2010 participants of a government-supported cohort established for bio-monitoring in Ulsan, Republic of Korea. The annual ambient NO2 exposures of the 969 study participants were estimated with LUR, nearest station, inverse distance weighting, and ordinary kriging. Modeling was based on the annual NO2 average, traffic-related data, land-use data, and altitude of the 13 regularly monitored stations. The final LUR model indicated that area of transportation, distance to residential area, and area of wetland were important predictors of NO2. The LUR model explained 85.8% of the variation observed in the 13 monitoring stations of the year 2009. The LUR model outperformed the others based on leave-one-out cross-validation comparing the correlations and root-mean-square error. All NO2 estimates ranged from 11.3-18.0 ppb, with that of LUR having the widest range. The NO2 exposure levels of the residents differed by demographics. However, the average was below the national annual guidelines of the Republic of Korea (30 ppb). The LUR models showed high performances in an industrial city in the Republic of Korea, despite the small sample size and limited data. Our findings suggest that the LUR method may be useful in similar settings in Asian countries where the target region is small and availability of data is
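One of the simpler comparators, inverse distance weighting evaluated by leave-one-out cross-validation over the monitoring stations, can be sketched as follows (station coordinates and values below are synthetic, not the Ulsan data):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2):
    """Inverse distance weighting estimate at query points."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero at coincident points
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def loo_rmse(xy, values, power=2):
    """Leave-one-out RMSE over the monitoring stations: drop each station,
    predict it from the rest, and accumulate the error."""
    errs = []
    for i in range(len(values)):
        mask = np.arange(len(values)) != i
        pred = idw(xy[mask], values[mask], xy[i:i + 1], power)[0]
        errs.append(pred - values[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```

The same leave-one-out loop, applied to each of the four methods, yields the correlations and RMSE values by which the LUR model was judged to outperform the others.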
International Nuclear Information System (INIS)
Woo, M.K.; Cunningham, J.R.
1990-01-01
In the convolution/superposition method of photon beam dose calculations, inhomogeneities are usually handled by using some form of scaling involving the relative electron densities of the inhomogeneities. In this paper the accuracy of density scaling as applied to primary electrons generated in photon interactions is examined. Monte Carlo calculations are compared with density scaling calculations for air and cork slab inhomogeneities. For individual primary photon kernels as well as for photon interactions restricted to a thin layer, the results can differ significantly, by up to 50%, between the two calculations. However, for realistic photon beams where interactions occur throughout the whole irradiated volume, the discrepancies are much less severe. The discrepancies for the kernel calculation are attributed to the scattering characteristics of the electrons and the consequent oversimplified modeling used in the density scaling method. A technique called the kernel integration technique is developed to analyze the general effects of air and cork inhomogeneities. It is shown that the discrepancies become significant only under rather extreme conditions, such as immediately beyond the surface after a large air gap. In electron beams all the primary electrons originate from the surface of the phantom and the errors caused by simple density scaling can be much more significant. Various aspects relating to the accuracy of density scaling for air and cork slab inhomogeneities are discussed
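The density-scaling idea discussed above — scaling geometric path lengths by relative electron density to obtain a water-equivalent depth — can be illustrated with a toy radiological-depth calculation (an assumed slab geometry, not the paper's Monte Carlo setup):

```python
def radiological_depth(segments):
    """Water-equivalent (radiological) depth along a ray.

    segments: list of (geometric_length_cm, relative_electron_density)
    pairs; scaling each length by the relative electron density is the
    density-scaling approximation discussed above.
    """
    return sum(length * rho for length, rho in segments)

# Assumed slab phantom: 5 cm water, 2 cm air-like gap, 3 cm water
depth = radiological_depth([(5.0, 1.0), (2.0, 0.001), (3.0, 1.0)])
```

The air gap contributes almost nothing to the water-equivalent depth, which is exactly the behavior whose accuracy for primary electrons the paper examines.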
Ochoa Gutierrez, L. H.; Vargas Jimenez, C. A.; Niño Vasquez, L. F.
2011-12-01
The "Sabana de Bogota" (Bogota Savannah) is the most important social and economic center of Colombia. Almost a third of the population is concentrated in this region, which generates about 40% of Colombia's Gross Domestic Product (GDP). Accordingly, the zone presents an elevated vulnerability should a highly destructive seismic event occur. Historical evidence shows that high-magnitude events took place in the past, causing huge damage to the city, and indicates that it is probable that such events will occur in the coming years. This is why we are working on an early warning generation system using the first few seconds of a seismic signal registered by three-component, broadband seismometers. Such a system can be implemented using computational intelligence tools, designed and calibrated to the particular geological, structural and environmental conditions present in the region. The methods developed are expected to work in real time, so suitable software and electronic tools need to be developed. We used Support Vector Machine Regression (SVMR) methods trained and tested with historic seismic events registered by the "EL ROSAL" station, located near Bogotá, calculating descriptors or attributes from the first 6 seconds of signal as the input of the model. With this algorithm, we obtained less than 10% mean absolute error and correlation coefficients greater than 85% in hypocentral distance and magnitude estimation. With these results we consider that we can improve the method, aiming at better accuracy with less signal time, and that this can be a very useful model to implement directly in seismological stations to generate a fast characterization of the event, broadcasting not only the raw signal but pre-processed information that can be very useful for accurate early warning generation.
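The regression step — mapping waveform descriptors to magnitude or hypocentral distance — would in practice use a dedicated SVM library. As a self-contained sketch of the same kernel-regression idea, here is a tiny RBF kernel ridge regression on synthetic "descriptor" data (all names, values and hyperparameters are invented for illustration):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the row vectors of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, gamma=1.0, lam=1e-6):
    # Solve (K + lam*I) alpha = y  (kernel ridge regression)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Synthetic descriptors from the early seconds of signal -> magnitude
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 3))
y = 3.0 + 2.0 * X[:, 0] + X[:, 1]        # synthetic target magnitudes
alpha = fit(X, y, gamma=2.0)
y_hat = predict(X, alpha, X, gamma=2.0)  # in-sample predictions
```

An SVM regressor adds an epsilon-insensitive loss and sparsity on top of this kernel machinery; the kernel trick itself is what both share.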
Electron beam treatment planning: A review of dose computation methods
International Nuclear Information System (INIS)
Mohan, R.; Riley, R.; Laughlin, J.S.
1983-01-01
Various methods of dose computations are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by the multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed
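The pencil-beam idea — building a broad-beam dose profile by superposing laterally spread pencil kernels across the field — can be sketched with a one-dimensional Gaussian kernel (an illustrative toy, not the multiple-scattering formulation reviewed here; the field width and spread are assumed values):

```python
import math

def pencil_profile(x, sigma):
    # Lateral profile of a single pencil beam at some depth, modeled
    # here as a normalized Gaussian of width sigma (toy assumption).
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)

def broad_beam_profile(x, half_width, sigma, n=400):
    # Superpose pencil kernels across a field of width 2*half_width
    # (midpoint-rule integration over pencil positions).
    step = 2.0 * half_width / n
    total = 0.0
    for i in range(n):
        x0 = -half_width + (i + 0.5) * step
        total += pencil_profile(x - x0, sigma)
    return total * step

central = broad_beam_profile(0.0, 5.0, 1.0)  # deep inside a wide field
edge = broad_beam_profile(5.0, 5.0, 1.0)     # at the field edge
```

For a field much wider than the lateral spread, the central value approaches 1 and the edge value approaches 0.5, reproducing the familiar penumbra behavior that equivalent-path-length methods miss.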
Nondestructive testing method for a new generation of electronics
Directory of Open Access Journals (Sweden)
Azin Anton
2018-01-01
Full Text Available The implementation of the Smart City system needs reliable and smoothly operating electronic equipment. The study is aimed at developing a nondestructive testing method for electronic equipment and its components. This method can be used to identify critical design defects of printed circuit boards (PCB and to predict their service life, taking into account the nature of probable operating loads. The study uses an acoustic emission method to identify and localize critical design defects of printed circuit boards. Geometric dimensions of detected critical defects can be determined by the X-ray tomography method. Based on the results of the study, a method combining acoustic emission and X-ray tomography was developed for nondestructive testing of printed circuit boards. The stress-strain state of solder joints containing detected defects was analyzed. This paper gives an example of using the developed method for estimating the degree of damage to joints between PCB components and predicting the service life of the entire PCB.
Electronic-projecting Moire method applying CBR-technology
Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.
2018-01-01
An electronic-projecting method based on the Moire effect for examining surface topology is suggested. Conditions of forming Moire fringes and their parameters' dependence on the reference parameters of the object and virtual grids are analyzed. The control system structure and decision-making subsystem are elaborated. Subsystem execution includes CBR-technology, based on applying a case base. The approach of analyzing and forming a decision for each separate local area, with consequent formation of a common topology map, is applied.
Askerov, Bahram M
2010-01-01
This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and non-ideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi-Dirac and Bose-Einstein quantum statistics and their application to different quantum gases, and the electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.
Comparison of optimization methods for electronic-structure calculations
International Nuclear Information System (INIS)
Garner, J.; Das, S.G.; Min, B.I.; Woodward, C.; Benedek, R.
1989-01-01
The performance of several local-optimization methods for calculating electronic structure is compared. The fictitious first-order equation of motion proposed by Williams and Soler is integrated numerically by three procedures: simple finite-difference integration, approximate analytical integration (the Williams-Soler algorithm), and the Born perturbation series. These techniques are applied to a model problem for which exact solutions are known, the Mathieu equation. The Williams-Soler algorithm and the second Born approximation converge equally rapidly, but the former involves considerably less computational effort and gives a more accurate converged solution. Application of the method of conjugate gradients to the Mathieu equation is discussed
Monte Carlo methods in electron transport problems. Pt. 1
International Nuclear Information System (INIS)
Cleri, F.
1989-01-01
The condensed-history Monte Carlo method for charged-particle transport is reviewed and discussed, starting from a general form of the Boltzmann equation (Part I). The physics of the electronic interactions, together with some pedagogic examples, will be introduced in Part II. The lecture is directed to potential users of the method, for whom it can be a useful introduction to the subject matter, and aims to establish the basis of the work on the computer code RECORD, which is at present in a developing stage
Method of electron emission control in RF guns
International Nuclear Information System (INIS)
Khodak, I.V.; Kushnir, V.A.
2001-01-01
The electron emission control method for an RF gun is considered. According to the main idea of the method, an additional resonance system is created in the cathode region, where the RF field strength can be varied using external pulse equipment. The additional resonance system is composed of a coaxial cavity coupled with the RF gun cylindrical cavity via an axial hole. Computed results of the radiofrequency and electrodynamic performances of such a two-cavity system and results of a pilot study of the RF gun model are presented. Results of particle dynamics simulation are described
Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N
2017-07-01
Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies for the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine various methods of each aspect together to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant role
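Of the regression methods compared, PLS is the workhorse of NIR calibration. A minimal PLS1 implementation via the NIPALS algorithm can be sketched as follows (a generic numpy sketch on synthetic "spectra"; real work would use a chemometrics or machine-learning package):

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 regression via the NIPALS algorithm (data mean-centered internally)."""
    x_mean, y_mean = X.mean(0), y.mean()
    Xr, yr = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr                 # weight vector for this component
        w = w / np.linalg.norm(w)
        t = Xr @ w                    # scores
        tt = t @ t
        p = Xr.T @ t / tt             # X loadings
        qk = (yr @ t) / tt            # y loading
        Xr = Xr - np.outer(t, p)      # deflate X
        yr = yr - qk * t              # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)   # regression vector on centered data
    return b, x_mean, y_mean

def pls1_predict(X_new, b, x_mean, y_mean):
    return (X_new - x_mean) @ b + y_mean

# Synthetic "spectra": 20 samples x 6 wavelengths, response linear in X
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 6))
y = X @ np.array([1.0, 0.5, 0.0, -0.3, 0.2, 0.0])
b, xm, ym = pls1_fit(X, y, n_comp=6)
y_hat = pls1_predict(X, b, xm, ym)
```

With as many components as variables, PLS1 reproduces ordinary least squares; the interest in calibration lies in truncating `n_comp` well below the number of wavelengths.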
Bolarinwa, O A; Adeola, O
2016-02-01
Direct or indirect methods can be used to determine the DE and ME of feed ingredients for pigs. In situations when only the indirect approach is suitable, the regression method presents a robust indirect approach. Three experiments were conducted to compare the direct and regression methods for determining the DE and ME values of barley, sorghum, and wheat for pigs. In each experiment, 24 barrows with an average initial BW of 31, 32, and 33 kg were assigned to 4 diets in a randomized complete block design. The 4 diets consisted of 969 g barley, sorghum, or wheat/kg plus minerals and vitamins for the direct method; a corn-soybean meal reference diet (RD); the RD + 300 g barley, sorghum, or wheat/kg; and the RD + 600 g barley, sorghum, or wheat/kg. The 3 corn-soybean meal diets were used for the regression method. Each diet was fed to 6 barrows in individual metabolism crates for a 5-d acclimation followed by a 5-d period of total but separate collection of feces and urine in each experiment. Graded substitution of barley or wheat, but not sorghum, into the RD linearly reduced dietary DE and ME. The direct method-derived DE and ME for barley were 3,669 and 3,593 kcal/kg DM, respectively. The regressions of barley contribution to DE and ME in kilocalories against the quantity of barley DMI in kilograms generated 3,746 kcal DE/kg DM and 3,647 kcal ME/kg DM. The DE and ME for sorghum by the direct method were 4,097 and 4,042 kcal/kg DM, respectively; the corresponding regression-derived estimates were 4,145 and 4,066 kcal/kg DM. Using the direct method, energy values for wheat were 3,953 kcal DE/kg DM and 3,889 kcal ME/kg DM. The regressions of wheat contribution to DE and ME in kilocalories against the quantity of wheat DMI in kilograms generated 3,960 kcal DE/kg DM and 3,874 kcal ME/kg DM. The direct method- and regression method-derived DE and ME of barley were not different; the direct method- and regression method-derived DE and ME of sorghum were not different; and the direct method- and regression method-derived DE (3,953 and 3
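The regression method described above — regressing the test ingredient's energy contribution on its intake, with the slope estimating the ingredient's energy value — can be sketched with made-up numbers (the 3,700 kcal/kg figure and the data points below are assumptions for illustration, not the study's estimates):

```python
import numpy as np

# Hypothetical data: DM intake of the test ingredient (kg/d) and the
# ingredient's contribution to digestible energy intake (kcal/d),
# obtained by difference from the reference diet.
intake = np.array([0.0, 0.3, 0.6, 0.9, 1.2])
contribution = 3700.0 * intake + np.array([0.0, 6.0, -5.0, 4.0, -3.0])

# Fit a straight line; the slope estimates the ingredient's DE in kcal/kg DM
slope, intercept = np.polyfit(intake, contribution, 1)
```

The same fit repeated on metabolizable energy contributions yields the ME estimate; comparing these slopes with the direct-method values is exactly the comparison the abstract reports.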
Directory of Open Access Journals (Sweden)
E Ghasemikhah
2012-03-01
Full Text Available This study investigated the electronic properties of the antiferromagnetic UBi2 metal by using ab initio calculations based on density functional theory (DFT), employing the augmented plane waves plus local orbitals method. We used the exact exchange for correlated electrons (EECE) method to calculate the exchange-correlation energy under a variety of hybrid functionals. Electric field gradients (EFGs) at the uranium site in the UBi2 compound were calculated and compared with experiment. Experimentally, the EFGs at the U site were predicted to be very small in this compound. The EFGs calculated by the EECE functional are in agreement with experiment. The densities of states (DOSs) show that the U 5f orbital is hybridized with the other orbitals. The plotted Fermi surfaces show that there are two kinds of charge carriers on the Fermi surface of this compound.
First-principles method for electron-phonon coupling and electron mobility
DEFF Research Database (Denmark)
Gunst, Tue; Markussen, Troels; Stokbro, Kurt
2016-01-01
We present density functional theory calculations of the phonon-limited mobility in n-type monolayer graphene, silicene, and MoS2. The material properties, including the electron-phonon interaction, are calculated from first principles. We provide a detailed description of the normalized full-band relaxation time approximation for the linearized Boltzmann transport equation (BTE) that includes inelastic scattering processes. The bulk electron-phonon coupling is evaluated by a supercell method. The method employed is fully numerical and therefore does not require a semianalytic treatment of part of the problem; importantly, it keeps the anisotropy information stored in the coupling as well as the band structure. In addition, we perform calculations of the low-field mobility and its dependence on carrier density and temperature to obtain a better understanding of transport in graphene, silicene, and MoS2.
Simple method for generating adjustable trains of picosecond electron bunches
Directory of Open Access Journals (Sweden)
P. Muggli
2010-05-01
Full Text Available A simple, passive method for producing an adjustable train of picosecond electron bunches is demonstrated. The key component of this method is an electron beam mask consisting of an array of parallel wires that selectively spoils the beam emittance. This mask is positioned in a high magnetic dispersion, low beta-function region of the beam line. The incoming electron beam striking the mask has a time/energy correlation that corresponds to a time/position correlation at the mask location. The mask pattern is transformed into a time pattern or train of bunches when the dispersion is brought back to zero downstream of the mask. Results are presented of a proof-of-principle experiment demonstrating this novel technique that was performed at the Brookhaven National Laboratory Accelerator Test Facility. This technique allows for easy tailoring of the bunch train for a particular application, including varying the bunch width and spacing, and enabling the generation of a trailing witness bunch.
Trojan Horse Method: A tool to explore electron screening effect
Energy Technology Data Exchange (ETDEWEB)
Pizzone, R G; Spitaleri, C; Cherubini, S; Cognata, M La; Lamia, L; Romano, S; Sergi, M L [Laboratori Nazionali del Sud-INFN, Catania (Italy) and Dipartimento di Metodologie Fisiche e Chimiche per l' Ingegneria, Universita di Catania, Catania (Italy); Rolfs, C; Strieder, F [Ruhr Universitaet Bochum (Germany); Burjan, V; Kroha, V; Mrazek, J [Cyclotron Institute, Academy of Science, Rez (Czech Republic); Li, C; Wen, Q; Zhou, S [CIAE, Beijing (China); Tumino, A, E-mail: rgpizzone@lns.infn.i [Universita Kore, Erma (Italy)
2010-01-01
Owing to the presence of the Coulomb barrier at astrophysically relevant energies, it is very difficult, or sometimes impossible, to measure reaction rates for charged-particle-induced reactions. Moreover, due to the presence of the electron screening effect in direct measurements, the relevant nuclear input for astrophysics, i.e. the bare-nucleus S(E) factor, can hardly be extracted. This is why different indirect techniques are being used along with direct measurements. The THM is a unique indirect technique which allows one to measure reaction cross sections of astrophysical interest down to the thermal energies typical of the different scenarios. The basic principle and a review of the main applications of the Trojan Horse Method are given. The applications aiming at the extraction of the bare S{sub b}(E) astrophysical factor and electron screening potentials U{sub e} for several two-body processes are discussed.
Mansouri, Edris; Feizi, Faranak; Jafari Rad, Alireza; Arian, Mehran
2018-03-01
This paper uses multivariate regression to create a mathematical model for iron skarn exploration in the Sarvian area, central Iran, applying it to mineral prospectivity mapping (MPM). The main target of this paper is to apply multivariate regression analysis (as an MPM method) to map iron outcrops in the northeastern part of the study area in order to discover new iron deposits in other parts of the study area. Two types of multivariate regression models using two linear equations were employed to discover new mineral deposits. This method is one of the reliable methods for processing satellite images. ASTER satellite images (14 bands) were used as unique independent variables (UIVs), and iron outcrops were mapped as dependent variables for MPM. According to the results of the probability value (p value), coefficient of determination value (R2) and adjusted determination coefficient (Radj2), the second regression model (which consisted of multiple UIVs) fitted better than the other models. The accuracy of the model was confirmed by the iron outcrop map and geological observation. Based on field observation, iron mineralization occurs at the contact of limestone and intrusive rocks (skarn type).
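The core of the multivariate-regression MPM step — a least-squares fit of the mapped outcrops on the band values, then scoring every pixel — can be sketched as follows (synthetic band values and coefficients standing in for the ASTER UIVs and the fitted model):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_bands = 200, 4
bands = rng.normal(size=(n_pixels, n_bands))  # stand-in for ASTER band values
true_coef = np.array([0.8, -0.5, 0.3, 0.0])   # invented "true" relationship
target = bands @ true_coef + 0.01 * rng.normal(size=n_pixels)  # outcrop score

# Design matrix with an intercept column; ordinary least squares via lstsq
A = np.column_stack([np.ones(n_pixels), bands])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)

# Predicted prospectivity for every pixel in the scene
prospectivity = A @ coef
```

In the paper's workflow the fitted equation is applied to pixels outside the training region, and R2, Radj2 and p values decide which of the candidate models to keep.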
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-02-01
A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
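A Bland-Altman comparison as described — the bias (mean difference) and 95% limits of agreement from paired measurements — can be computed like this (a generic sketch with invented paired values, not the NGS pipeline itself):

```python
import numpy as np

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements a, b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                  # constant error between methods
    sd = diff.std(ddof=1)               # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical variant allele fractions from two assays of the same samples
new = np.array([0.10, 0.22, 0.35, 0.48, 0.60])
ref = np.array([0.08, 0.20, 0.34, 0.45, 0.57])
bias, lo, hi = bland_altman(new, ref)   # small constant positive bias
```

A nonzero bias reveals constant error; a trend of the differences against the pair means (not computed here) reveals proportional error, which is where Deming regression complements the picture.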
Pretreatment of Cellulose By Electron Beam Irradiation Method
Jusri, N. A. A.; Azizan, A.; Ibrahim, N.; Salleh, R. Mohd; Rahman, M. F. Abd
2018-05-01
Pretreatment of lignocellulosic biomass (LCB) to produce biofuel has been conducted by various methods, including physical, chemical, physicochemical as well as biological ones. The bioethanol conversion process typically involves several steps consisting of pretreatment, hydrolysis, fermentation and separation. In this project, microcrystalline cellulose (MCC) was used in place of LCB, since cellulose is the largest constituent of LCB, for the purpose of investigating the effectiveness of a new pretreatment method using radiation technology. Irradiation with different doses (100 kGy to 1000 kGy) was conducted using electron beam accelerator equipment at Agensi Nuklear Malaysia. Fourier Transform Infrared Spectroscopy (FTIR) and X-Ray Diffraction (XRD) analyses were performed to further understand the effect of the suggested pretreatment step on the content of MCC. Through this method, namely IRR-LCB, an ideal and optimal condition for pretreatment prior to the production of biofuel using LCB may be introduced.
Understanding poisson regression.
Hayat, Matthew J; Higgins, Melinda
2014-04-01
Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
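The log-linear Poisson model described above can be fitted with a few Newton-Raphson steps on the Poisson log-likelihood (a generic numpy sketch on synthetic count data; in practice one would use a statistics package such as R's glm or SAS):

```python
import numpy as np

def poisson_regression(x, y, n_iter=50):
    """Fit E[y] = exp(b0 + b1*x) by Newton-Raphson on the Poisson log-likelihood."""
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.array([np.log(y.mean()), 0.0])  # safe starting point
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)                 # score vector
        hess = X.T @ (X * mu[:, None])        # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Synthetic data whose mean count follows exp(0.2 + 0.4*x) exactly
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.exp(0.2 + 0.4 * x)
beta = poisson_regression(x, y)
```

The exponentiated slope, exp(b1), is the rate ratio per unit of the predictor; overdispersed data would call for the quasi-Poisson or negative binomial extensions the article mentions.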
Quantitative methods for the analysis of electron microscope images
DEFF Research Database (Denmark)
Skands, Peter Ulrik Vallø
1996-01-01
The topic of this thesis is a general introduction to quantitative methods for the analysis of digital microscope images. The images presented have primarily been acquired from scanning electron microscopes (SEM) and interferometer microscopes (IFM). The topic is approached through several examples. The theoretical foundation of the thesis falls in the areas of: 1) mathematical morphology; 2) distance transforms and applications; and 3) fractal geometry. Image analysis opens in general the possibility of quantitative and statistically well-founded measurement of digital microscope images. Herein lie also the conditions...
Method for coating a resinous coating material. [electron beam irradiation
Energy Technology Data Exchange (ETDEWEB)
Ino, T; Fujioka, S; Mibae, J; Takahashi, M
1968-07-13
The strength, flexibility and durability of a vinyl chloride resin, acrylic resin and the like are improved. This method of application comprises the steps of applying and thereafter radically curing a mixture composed of a polymer (II) having double bond(s) on its side chain and an ethylenic unsaturated monomer, said polymer (II) being obtained by the reaction between an unsaturated carboxylic acid or anhydride represented by the formula XCH = CHY (X = (CH/sub 2/)/sub n/COOH, where 0 <= n <= 2; Y = COOR/sub 1/ or R/sub 2/, where R/sub 1/ and R/sub 2/ are hydrogen or an alkyl group having from 1 to 10 carbon atoms) and the acrylic copolymer (I), containing a hydroxyl group, obtained by copolymerization of 10 to 50% by weight of at least one selected from the group of beta-hydroxy alkyl acrylate, beta-hydroxy alkyl methacrylate, N-methylol acrylamide and N-methylol methacrylamide with at least one selected from the group of acrylic ester, methacrylic ester and styrene. The copolymer (I) can be obtained by the usual radical polymerization such as bulk polymerization, solution polymerization, suspension polymerization or the like. The polymer (II) is dissolved in the ethylenic unsaturated monomer and radically cured with radical polymerization catalysts or electron beams, etc. The energy range of the electron beams may be 0.1 to 3 MeV. Any type of electron accelerator may be used.
Antropov, K M; Varaksin, A N
2013-01-01
This paper provides a description of Land Use Regression (LUR) modeling and the results of its application in a study of nitrogen dioxide air pollution in Ekaterinburg. The paper describes the difficulties of modeling air pollution caused by motor vehicle exhaust, and the ways to address these challenges. To create the LUR model of NO2 air pollution in Ekaterinburg, concentrations of NO2 were measured, data on factors affecting air pollution were collected, and a statistical analysis of the data was performed. A statistical model of NO2 air pollution (coefficient of determination R2 = 0.70) and a map of pollution were created.
Ali, M Sanni; Groenwold, Rolf H H; Belitser, Svetlana V; Souverein, Patrick C; Martín, Elisa; Gatto, Nicolle M; Huerta, Consuelo; Gardarsdottir, Helga; Roes, Kit C B; Hoes, Arno W; de Boer, Antonius; Klungel, Olaf H
2016-01-01
BACKGROUND: Observational studies including time-varying treatments are prone to confounding. We compared time-varying Cox regression analysis, propensity score (PS) methods, and marginal structural models (MSMs) in a study of antidepressant [selective serotonin reuptake inhibitors (SSRIs)] use and
Freund, Rudolf J; Sa, Ping
2006-01-01
The book provides complete coverage of the classical methods of statistical analysis. It is designed to give students an understanding of the purpose of statistical analyses, to allow the student to determine, at least to some degree, the correct type of statistical analysis to be performed in a given situation, and to develop some appreciation of what constitutes good experimental design
Spectral-Product Methods for Electronic Structure Calculations (Preprint)
National Research Council Canada - National Science Library
Langhoff, P. W; Mills, J. E; Boatz, J. A
2006-01-01
The spectral-product approach to molecular electronic structure avoids the repeated evaluations of the one- and two-electron integrals required in construction of polyatomic Hamiltonian matrices...
Spectral-Product Methods for Electronic Structure Calculations (Postprint)
National Research Council Canada - National Science Library
Langhoff, P. W; Hinde, R. J; Mills, J. D; Boatz, J. A
2007-01-01
The spectral-product approach to molecular electronic structure avoids the repeated evaluations of the one- and two-electron integrals required in construction of polyatomic Hamiltonian matrices...
Generalized Hartree-Fock method for electron-atom scattering
International Nuclear Information System (INIS)
Rosenberg, L.
1997-01-01
In the widely used Hartree-Fock procedure for atomic structure calculations, trial functions in the form of linear combinations of Slater determinants are constructed and the Rayleigh-Ritz minimum principle is applied to determine the best in that class. A generalization of this approach, applicable to low-energy electron-atom scattering, is developed here. The method is based on a unique decomposition of the scattering wave function into open- and closed-channel components, so chosen that an approximation to the closed-channel component may be obtained by adopting it as a trial function in a minimum principle, whose rigor can be maintained even when the target wave functions are imprecisely known. Given a closed-channel trial function, the full scattering function may be determined from the solution of an effective one-body Schroedinger equation. Alternatively, in a generalized Hartree-Fock approach, the minimum principle leads to coupled integrodifferential equations to be satisfied by the basis functions appearing in a Slater-determinant representation of the closed-channel wave function; it also provides a procedure for optimizing the choice of nonlinear parameters in a variational determination of these basis functions. Inclusion of additional Slater determinants in the closed-channel trial function allows for systematic improvement of that function, as well as the calculated scattering parameters, with the possibility of spurious singularities avoided. Electron-electron correlations can be important in accounting for long-range forces and resonances. These correlation effects can be included explicitly by suitable choice of one component of the closed-channel wave function; the remaining component may then be determined by the generalized Hartree-Fock procedure. As a simple test, the method is applied to s-wave scattering of positrons by hydrogen. copyright 1997 The American Physical Society
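The Rayleigh-Ritz minimum principle invoked above can be illustrated with a standard one-dimensional textbook example: a polynomial trial function for a particle in a box (units ħ = m = 1, box length 1; this is a generic illustration of the variational bound, not the electron-atom problem itself):

```python
import numpy as np

def trapezoid(f, h):
    # Composite trapezoidal rule on uniformly spaced samples f with step h
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

# Trial function psi(x) = x(1-x) on [0, 1]; H = -(1/2) d^2/dx^2 with
# psi(0) = psi(1) = 0 (infinite square well).
x = np.linspace(0.0, 1.0, 20001)
h = x[1] - x[0]
psi = x * (1.0 - x)
dpsi = 1.0 - 2.0 * x                       # analytic derivative of psi

kinetic = 0.5 * trapezoid(dpsi**2, h)      # <psi|H|psi> after integration by parts
norm = trapezoid(psi**2, h)                # <psi|psi>
e_trial = kinetic / norm                   # Rayleigh quotient

e_exact = np.pi**2 / 2                     # exact ground-state energy
```

The trial energy comes out at 5.0, just above the exact value of about 4.9348: the Rayleigh quotient of any admissible trial function bounds the ground-state energy from above, which is the rigor the generalized Hartree-Fock construction works to preserve.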
Selected Methods for Increasing the Reliability of Electronic Security Systems
Directory of Open Access Journals (Sweden)
Paś Jacek
2015-11-01
Full Text Available The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), for example a fire alarm system (SSP). Reliability of the SSP, in the descriptive sense, is the capacity of the system to perform its preset function (e.g. fire protection of an airport, a port, a logistics base, etc.) at a certain time and under certain conditions, e.g. environmental, despite possible failures of a specific subset of elements of this system. An analysis of the available literature on the ESS-SSP shows that studies on methods of increasing reliability are not available (several works address similar topics, but with respect to burglary and robbery, i.e. intrusion, systems). The approach presented is based on the analysis of the set of all paths in the system that preserve the suitability of the SSP for the fire-event scenario, identifying the devices critical for security.
Method of electroplating a conversion electron emitting source on implant
Srivastava, Suresh C [Setauket, NY; Gonzales, Gilbert R [New York, NY; Adzic, Radoslav [East Setauket, NY; Meinken, George E [Middle Island, NY
2012-02-14
Methods for preparing an implant coated with a conversion electron emitting source (CEES) are disclosed. The typical method includes cleaning the surface of the implant; placing the implant in an activating solution comprising hydrochloric acid to activate the surface; reducing the surface by H.sub.2 evolution in H.sub.2SO.sub.4 solution; and placing the implant in an electroplating solution that includes ions of the CEES, HCl, H.sub.2SO.sub.4, and resorcinol, gelatin, or a combination thereof. Alternatively, before tin plating, a seed layer is formed on the surface. The electroplated CEES coating can be further protected and stabilized by annealing in a heated oven, by passivation, or by being covered with a protective film. The invention also relates to a holding device for holding an implant, wherein the device selectively prevents electrodeposition on the portions of the implant contacting the device.
Evaluation of Electronic Securities Settlement Systems by AHP Methods
Fukaya, Kiyoyuki; Komoda, Norihisa
Accompanying the spread of the Internet and changes in business models, electronic commerce is expanding into new business areas. Electronic financial commerce has become popular, and online securities trading in particular has become very popular in this area. Online securities trading has advantages, such as fewer mistakes than telephone orders. In order to expand online securities trading, the transfer of security papers is one of the largest problems to be solved, because it takes a few days to transfer a security paper from a seller to a buyer. The dematerialization of security papers is one of the solutions. Dematerialization needs information systems for settling securities. Some countries, such as France, Germany, the United Kingdom and the U.S.A., have been starting dematerialization projects. Existing assessments of these projects focus only on the legal schemes; there is no assessment of the system architectures. This paper focuses on the information system scheme and evaluates these dematerialization projects by AHP methods from the viewpoints of "dematerialization of security papers", "speed of transfer", "usefulness of the system" and "accumulation of risks". This is the first case of evaluating securities settlement systems by AHP methods, especially the four countries' systems.
Markham, Francis; Young, Martin; Doran, Bruce; Sugden, Mark
2017-05-23
Many jurisdictions regularly conduct surveys to estimate the prevalence of problem gambling in their adult populations. However, the comparison of such estimates is problematic due to methodological variations between studies. Total consumption theory suggests that an association between mean electronic gaming machine (EGM) and casino gambling losses and problem gambling prevalence estimates may exist. If this is the case, then changes in EGM losses may be used as a proxy indicator for changes in problem gambling prevalence. To test for this association, this study examines the relationship between aggregated losses on electronic gaming machines (EGMs) and problem gambling prevalence estimates for Australian states and territories between 1994 and 2016. A Bayesian meta-regression analysis of 41 cross-sectional problem gambling prevalence estimates was undertaken using EGM gambling losses, year of survey and methodological variations as predictor variables. General population studies of adults in Australian states and territories published before 1 July 2016 were considered in scope. 41 studies were identified, with a total of 267,367 participants. Problem gambling prevalence, moderate-risk problem gambling prevalence, problem gambling screen, administration mode and frequency threshold were extracted from surveys. Administrative data on EGM and casino gambling losses were extracted from government reports and expressed as the proportion of household disposable income lost. Money lost on EGMs is correlated with problem gambling prevalence. An increase of 1% of household disposable income lost on EGMs and in casinos was associated with problem gambling prevalence estimates that were 1.33 times higher [95% credible interval 1.04, 1.71]. There was no clear association between EGM losses and moderate-risk problem gambling prevalence estimates. Moderate-risk problem gambling prevalence estimates were not explained by the models (I² ≥ 0.97; R² ≤ 0.01). The
Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee
2017-07-01
Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor often degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and multilinear regression analysis have been tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date. Careful regional and seasonal consideration is required in selecting input images. During the summer season, due to clouds, it is very difficult to get images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
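The weighted average method described above can be sketched as follows. This is an illustrative sketch only: the inverse-temporal-distance weighting is an assumption, as the abstract does not spell out the exact weighting scheme used in the study.

```python
import numpy as np

def weighted_average_ndvi(ndvi_before, ndvi_after, days_before, days_after):
    """Simulate an NDVI image at a target date as the temporal-distance-
    weighted average of the nearest usable images before and after it.
    Weights are assumed inversely proportional to the gap in days."""
    w_before = 1.0 / days_before
    w_after = 1.0 / days_after
    return (w_before * ndvi_before + w_after * ndvi_after) / (w_before + w_after)

# Two toy 2x2 NDVI grids, acquired 10 and 30 days from the target date
before = np.array([[0.2, 0.4], [0.6, 0.8]])
after = np.array([[0.4, 0.6], [0.8, 1.0]])
sim = weighted_average_ndvi(before, after, days_before=10, days_after=30)
```

The nearer image dominates the blend, consistent with the finding that the method works best when images exist close to the target date on both sides.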
Energy Technology Data Exchange (ETDEWEB)
Zhang, Yan-Feng; Dai, Shu-Gui [College of Environmental Science and Engineering, Nankai University, Key Laboratory for Pollution Process and Environmental Criteria of Ministry of Education, Tianjin (China); Ma, Yi [College of Chemistry, Nankai University, Institute of Elemento-Organic Chemistry, Tianjin (China); Gao, Zhi-Xian [Institute of Hygiene and Environmental Medicine, Tianjin (China)
2010-07-15
Immunoassays have been regarded as a possible alternative or supplement for measuring polycyclic aromatic hydrocarbons (PAHs) in the environment. Since there are too many potential cross-reactants for PAH immunoassays, it is difficult to determine all the cross-reactivities (CRs) by experimental tests. The relationship between CR and the physical-chemical properties of PAHs and related compounds was investigated using the CR data from a commercial enzyme-linked immunosorbent assay (ELISA) kit test. Two quantitative structure-activity relationship (QSAR) techniques, regression analysis and comparative molecular field analysis (CoMFA), were applied for predicting the CR of PAHs in this ELISA kit. Parabolic regression indicates that the CRs are significantly correlated with the logarithm of the partition coefficient for the octanol-water system (log K_ow) (r² = 0.643, n = 23, P < 0.0001), suggesting that hydrophobic interactions play an important role in the antigen-antibody binding and the cross-reactions in this ELISA test. The CoMFA model obtained shows that the CRs of the PAHs are correlated with the 3D structure of the molecules (r²_cv = 0.663, r² = 0.873, F(4,32) = 55.086). The contributions of the steric and electrostatic fields to CR were 40.4 and 59.6%, respectively. Both of the QSAR models satisfactorily predict the CR in this PAH immunoassay kit, and help in understanding the mechanisms of antigen-antibody interaction. (orig.)
Directory of Open Access Journals (Sweden)
Mehmet Das
2018-01-01
Full Text Available In this study, an air heated solar collector (AHSC) dryer was designed to determine the drying characteristics of the pear. Flat pear slices of 10 mm thickness were used in the experiments. The pears were dried both in the AHSC dryer and under the sun. Panel glass temperature, panel floor temperature, panel inlet temperature, panel outlet temperature, drying cabinet inlet temperature, drying cabinet outlet temperature, drying cabinet temperature, drying cabinet moisture, solar radiation, pear internal temperature, air velocity and mass loss of pear were measured at 30 min intervals. Experiments were carried out during June 2017 in Elazig, Turkey. The experiments started at 8:00 a.m. and continued until 18:00. The experiments were continued until the weight changes in the pear slices stopped. Wet basis moisture content (MCw), dry basis moisture content (MCd), adjustable moisture ratio (MR), drying rate (DR), and convective heat transfer coefficient (hc) were calculated from both the AHSC dryer and the open sun drying experiment data. It was found that the values of hc in both drying systems ranged between 12.4 and 20.8 W/m² °C. Three different kernel models were used in the support vector machine (SVM) regression to construct the predictive model of the calculated hc values for both systems. The mean absolute error (MAE), root mean squared error (RMSE), relative absolute error (RAE) and root relative absolute error (RRAE) analyses were performed to indicate the predictive model's accuracy. As a result, the drying rate of the pear was examined for both systems and it was observed that the pear dried earlier in the AHSC drying system. A predictive model was obtained using the SVM regression for the calculated hc values for the pear in the AHSC drying system. The normalized polynomial kernel was determined as the best kernel model in SVM for estimating the hc values.
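A minimal sketch of comparing kernels for predicting an hc-like response follows. Since epsilon-SVR requires a quadratic-programming solver, kernel ridge regression stands in for the SVM here; the data, kernel definitions and regularization are illustrative assumptions, not the study's actual setup.

```python
import numpy as np

def kernel_ridge_fit_predict(X, y, X_new, kernel, lam=1e-3):
    """Kernel ridge regression: a simple stand-in for epsilon-SVR that
    lets different kernel models be compared on the same data."""
    K = kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return kernel(X_new, X) @ alpha

# Two candidate kernels (RBF and degree-2 polynomial), as assumptions
rbf = lambda A, B: np.exp(-0.5 * np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2))
poly = lambda A, B: (A @ B.T + 1.0) ** 2

# Toy data: an hc-like response spanning roughly 12.4-20.8, as in the study
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 2))
y = 12.4 + 8.4 * X[:, 0] * X[:, 1]
pred_rbf = kernel_ridge_fit_predict(X, y, X[:5], rbf)
pred_poly = kernel_ridge_fit_predict(X, y, X[:5], poly)
```

Error measures such as MAE and RMSE on held-out data would then rank the kernels, mirroring the paper's comparison.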
Gross, Samuel M; Tibshirani, Robert
2015-04-01
We consider the scenario where one observes an outcome variable and sets of features from multiple assays, all measured on the same set of samples. One approach that has been proposed for dealing with this type of data is "sparse multiple canonical correlation analysis" (sparse mCCA). All of the current sparse mCCA techniques are biconvex and thus have no guarantees about reaching a global optimum. We propose a method for performing sparse supervised canonical correlation analysis (sparse sCCA), a specific case of sparse mCCA when one of the datasets is a vector. Our proposal for sparse sCCA is convex and thus does not face the same difficulties as the other methods. We derive efficient algorithms for this problem that can be implemented with off-the-shelf solvers, and illustrate their use on simulated and real data. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Tokunaga, Makoto; Watanabe, Susumu; Sonoda, Shigeru
2017-09-01
Multiple linear regression analysis is often used to predict the outcome of stroke rehabilitation. However, the predictive accuracy may not be satisfactory. The objective of this study was to elucidate the predictive accuracy of a method of calculating motor Functional Independence Measure (mFIM) at discharge from mFIM effectiveness predicted by multiple regression analysis. The subjects were 505 patients with stroke who were hospitalized in a convalescent rehabilitation hospital. The formula "mFIM at discharge = mFIM effectiveness × (91 points - mFIM at admission) + mFIM at admission" was used. By including the predicted mFIM effectiveness obtained through multiple regression analysis in this formula, we obtained the predicted mFIM at discharge (A). We also used multiple regression analysis to directly predict mFIM at discharge (B). The correlation between the predicted and the measured values of mFIM at discharge was compared between A and B. The correlation coefficients were .916 for A and .878 for B. Calculating mFIM at discharge from mFIM effectiveness predicted by multiple regression analysis had a higher degree of predictive accuracy of mFIM at discharge than that directly predicted. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
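Method A from the study (computing discharge mFIM from a predicted effectiveness) can be written directly from the identity quoted in the abstract. The example patient values are hypothetical.

```python
def predicted_mfim_at_discharge(mfim_admission, predicted_effectiveness):
    """Method A: plug the mFIM effectiveness predicted by multiple
    regression into the identity
        mFIM(discharge) = effectiveness * (91 - mFIM(admission)) + mFIM(admission),
    where 91 points is the maximum motor FIM score."""
    return predicted_effectiveness * (91 - mfim_admission) + mfim_admission

# A hypothetical patient admitted with mFIM 40 and predicted effectiveness 0.6
estimate = predicted_mfim_at_discharge(40, 0.6)
```

Method B would instead regress discharge mFIM directly; the study found method A gave the higher correlation with measured values (.916 vs .878).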
Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.
1976-01-01
A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.
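The stepwise idea of choosing terms for a many-coefficient equation of state can be sketched with greedy forward selection over candidate terms. This is a simplified illustration: the report's actual selection criterion, candidate set and stopping rule are not reproduced here.

```python
import numpy as np

def forward_stepwise(candidates, y, n_terms):
    """Greedy forward selection: at each step add the candidate column that
    most reduces the residual sum of squares of the least-squares fit."""
    chosen = []
    for _ in range(n_terms):
        best, best_rss = None, np.inf
        for j in range(candidates.shape[1]):
            if j in chosen:
                continue
            A = candidates[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
    return chosen

# Toy P(rho, T)-like data built from one known candidate term (rho**3)
rng = np.random.default_rng(1)
rho, T = rng.uniform(1, 2, 100), rng.uniform(1, 2, 100)
cands = np.column_stack([rho * T, rho**2, rho**3, rho**2 / T, rho * T**2])
y = 3.0 * cands[:, 2]
print(forward_stepwise(cands, y, 2))
```

On real pressure-density-temperature data the loop would run until an information or variance criterion stops improving.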
Jović, Ozren; Smrečki, Neven; Popović, Zora
2016-04-01
A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is performed on six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS). iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in 1 out of 9 cases). Hempseed (H) oil, a well known health beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that, using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (in all 8 cases RMSEP remained low, with R² > 0.99). Copyright © 2015 Elsevier B.V. All rights reserved.
Setiawan, Suhartono, Ahmad, Imam Safawi; Rahmawati, Noorgam Ika
2015-12-01
Bank Indonesia (BI), as the central bank of the Republic of Indonesia, has a single overarching objective: to establish and maintain rupiah stability. This objective can be achieved by monitoring the traffic of inflow and outflow of money currency. Inflow and outflow are related to the stock and distribution of money currency around Indonesian territory, and they affect economic activities. Economic activities in Indonesia, a Muslim-majority country, are closely related to the Islamic calendar (lunar calendar), which differs from the Gregorian calendar. This research aims to forecast the inflow and outflow of money currency of the Representative Office (RO) of BI Semarang, Central Java region. The results of the analysis show that the characteristics of inflow and outflow of money currency are influenced by calendar variation effects, namely the day of Eid al-Fitr (a Muslim holiday), as well as seasonal patterns. In addition, the period of a certain week during Eid al-Fitr also affects the increase of inflow and outflow of money currency. The best model based on the smallest Root Mean Square Error (RMSE) for the inflow data is an ARIMA model, while the best model for predicting the outflow data in the RO of BI Semarang is an ARIMAX model or Time Series Regression, as both yield the same model. The forecast for 2015 shows an increase in inflow of money currency in August, while the increase in outflow of money currency happens in July.
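A time series regression with calendar-variation effects, of the general kind used for the outflow series, can be sketched as a linear trend plus month dummies plus an Eid al-Fitr dummy whose month shifts earlier each Gregorian year. All data and term choices here are illustrative assumptions, not BI Semarang's model.

```python
import numpy as np

def time_series_regression(t, month, eid_dummy, y):
    """Time series regression sketch: linear trend + month-of-year dummies
    + a holiday (Eid al-Fitr) dummy, fit by ordinary least squares."""
    months = np.eye(12)[month - 1]          # one-hot month effects
    X = np.column_stack([np.ones_like(t, dtype=float), t,
                         months[:, 1:],     # drop January as the baseline
                         eid_dummy])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, beta

t = np.arange(48)                           # four years of monthly data
month = (t % 12) + 1
eid = np.zeros(48)
eid[[6, 18, 29, 41]] = 1                    # Eid month drifts earlier (lunar calendar)
y = 100 + 0.5 * t + 30 * eid                # synthetic outflow with a holiday spike
X, beta = time_series_regression(t, month, eid, y)
```

Adding ARIMA errors to this regression would give the ARIMAX form mentioned in the abstract.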
Directory of Open Access Journals (Sweden)
Matthias Schmid
Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
Specific surface area evaluation method by using scanning electron microscopy
International Nuclear Information System (INIS)
Petrescu, Camelia; Petrescu, Cristian; Axinte, Adrian
2000-01-01
Ceramics are among the most interesting materials for a large category of applications, including both industry and health. Among the characteristics of ceramic materials, the specific surface area is often difficult to evaluate. The paper presents a method of evaluation for the specific surface area of two ceramic powders by means of scanning electron microscopy measurements and an original method of computing the specific surface area. Cumulative curves are used to calculate the specific surface area under the assumption that the particle diameters follow a log-normal distribution. For the two powder types, X7R and NPO, the results are the following: - for the density ρ (g/cm³), 5.5 and 6.0, respectively; - for the average diameter D̄ (μm), 0.51 and 0.53, respectively; - for σ, 1.465 and 1.385, respectively; - for the specific surface area (m²/g), 1.248 and 1.330, respectively. The obtained results are in good agreement with the values measured by conventional methods. (authors)
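For spherical particles with log-normally distributed diameters, a common closed-form approximation links the tabulated quantities: S = 6 / (ρ · D32), with the Sauter mean diameter D32 = D̄ · exp(2.5 ln²σ). This is a generic textbook approximation, not the paper's cumulative-curve procedure, so it only lands near (not on) the reported values.

```python
import math

def specific_surface_area(rho_g_cm3, d_geo_um, sigma_g):
    """Specific surface area (m^2/g) of spheres whose diameters are
    log-normal with geometric mean d_geo_um (um) and geometric standard
    deviation sigma_g, via S = 6 / (rho * D32)."""
    s2 = math.log(sigma_g) ** 2
    d32_m = d_geo_um * 1e-6 * math.exp(2.5 * s2)   # Sauter mean diameter in m
    rho_kg_m3 = rho_g_cm3 * 1000.0
    return 6.0 / (rho_kg_m3 * d32_m) / 1000.0      # m^2/kg -> m^2/g

# X7R-like inputs from the abstract: rho = 5.5 g/cm^3, D = 0.51 um, sigma = 1.465
s_x7r = specific_surface_area(5.5, 0.51, 1.465)
```

The result (about 1.5 m²/g) is of the same order as the paper's 1.248 m²/g, the gap reflecting the different averaging method.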
Advanced cluster methods for correlated-electron systems
Energy Technology Data Exchange (ETDEWEB)
Fischer, Andre
2015-04-27
In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies in the determination of the ground state properties of a 3/4 filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not established yet. The driving force of this transition may not only be phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems on a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4 filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult.
A method to study electron heating during ICRH
International Nuclear Information System (INIS)
Eriksson, L.G.; Hellsten, T.
1989-01-01
Collisionless absorption of ICRF waves occurs either by ion cyclotron absorption or by electron Landau damping (ELD) and transit-time magnetic pumping (TTMP). Both ion cyclotron absorption and direct electron absorption result in electron heating. Electron heating by minority ions occurs after a high energy tail of the resonating ions has formed, i.e. typically after 0.2-1 s in present JET experiments. Electron heating through ELD and TTMP takes place on the timescale given by electron-electron collisions, which is typically of the order of ms. This difference in the timescales can be used to separate the two damping mechanisms. This can be done by measuring the time derivatives of the electron temperature after sawtooth crashes during ramp-up and ramp-down of the RF power. (author) 4 refs., 4 figs
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
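The core MRR idea (parametric fit plus a portion of a nonparametric fit to its residuals) can be sketched as below. This is a simplified illustration: Mays, Birch and Starnes choose the mixing parameter data-adaptively, whereas here it is fixed, and the Gaussian kernel smoother is an assumption.

```python
import numpy as np

def model_robust_fit(x, y, x_eval, degree=1, bandwidth=0.1, lam=0.5):
    """Model Robust Regression sketch: a parametric (polynomial) fit
    augmented by a fraction lam of a kernel-smoothed fit to its residuals."""
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    # Nadaraya-Watson kernel smooth of the residuals
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bandwidth) ** 2)
    smooth_resid = (w @ resid) / w.sum(axis=1)
    return np.polyval(coef, x_eval) + lam * smooth_resid

# Calibration-like data: nearly linear with a mild local departure
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.1 * np.sin(6 * x)
fit = model_robust_fit(x, y, x, lam=1.0)
```

The residual correction lets a simple straight-line calibration absorb local departures without committing to a more elaborate parametric form.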
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.
2016-01-04
This report documents the development of statistical tools used to quantify the hazard presented by the response of sea-level elevation to natural or anthropogenic changes in climate and ocean circulation. A hazard is a physical process (or processes) that, when combined with vulnerability (or susceptibility to the hazard), results in risk. This study presents the development and comparison of new and existing sea-level analysis methods, exploration of the strengths and weaknesses of the methods using synthetic time series, and when appropriate, synthesis of the application of the method to observed sea-level time series. These reports are intended to enhance material presented in peer-reviewed journal articles where it is not always possible to provide the level of detail that might be necessary to fully support or recreate published results.
International Nuclear Information System (INIS)
Hu, Chao; Jain, Gaurav; Zhang, Puqiang; Schmidt, Craig; Gomadam, Parthasarathy; Gorka, Tom
2014-01-01
Highlights: • We develop a data-driven method for the battery capacity estimation. • Five charge-related features that are indicative of the capacity are defined. • The kNN regression model captures the dependency of the capacity on the features. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure Li-ion batteries in these devices operate reliably, it is important to be able to assess the battery health condition by estimating the battery capacity over the life-time. This paper presents a data-driven method for estimating the capacity of Li-ion battery based on the charge voltage and current curves. The contributions of this paper are three-fold: (i) the definition of five characteristic features of the charge curves that are indicative of the capacity, (ii) the development of a non-linear kernel regression model, based on the k-nearest neighbor (kNN) regression, that captures the complex dependency of the capacity on the five features, and (iii) the adaptation of particle swarm optimization (PSO) to finding the optimal combination of feature weights for creating a kNN regression model that minimizes the cross validation (CV) error in the capacity estimation. Verification with 10 years’ continuous cycling data suggests that the proposed method is able to accurately estimate the capacity of Li-ion battery throughout the whole life-time
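The feature-weighted kNN regression at the heart of the capacity estimator can be sketched as follows. The paper tunes the feature weights with PSO under cross-validation; here the weights are simply given, and the inverse-distance neighbor averaging is an assumption.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x_query, feature_weights, k=3):
    """kNN regression with per-feature weights in the distance metric.
    Neighbors are combined by inverse-distance weighting."""
    d = np.sqrt(((X_train - x_query) ** 2 * feature_weights).sum(axis=1))
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-12)
    return np.sum(w * y_train[nn]) / np.sum(w)

# Toy charge-curve features (5 per cycle, as in the paper) vs. capacity
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 5))
y = 1.0 - 0.5 * X[:, 0]            # capacity driven mainly by feature 0
wts = np.array([1.0, 0.1, 0.1, 0.1, 0.1])
est = weighted_knn_predict(X, y, X[10], wts, k=3)
```

PSO would then search over `wts` to minimize the cross-validation error of exactly this predictor.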
International Nuclear Information System (INIS)
Briggs, D.J.; De Hoogh, C.; Elliot, P.; Gulliver, J.; Wills, J.; Kingham, S.; Smallbone, K.
2000-01-01
Accurate, high-resolution maps of traffic-related air pollution are needed both as a basis for assessing exposures as part of epidemiological studies, and to inform urban air-quality policy and traffic management. This paper assesses the use of a GIS-based, regression mapping technique to model spatial patterns of traffic-related air pollution. The model - developed using data from 80 passive sampler sites in Huddersfield, as part of the SAVIAH (Small Area Variations in Air Quality and Health) project - uses data on traffic flows and land cover in the 300-m buffer zone around each site, and altitude of the site, as predictors of NO₂ concentrations. It was tested here by application in four urban areas in the UK: Huddersfield (for the year following that used for initial model development), Sheffield, Northampton, and part of London. In each case, a GIS was built in ArcInfo, integrating relevant data on road traffic, urban land use and topography. Monitoring of NO₂ was undertaken using replicate passive samplers (in London, data were obtained from surveys carried out as part of the London network). In Huddersfield, Sheffield and Northampton, the model was first calibrated by comparing modelled results with monitored NO₂ concentrations at 10 randomly selected sites; the calibrated model was then validated against data from a further 10-28 sites. In London, where data for only 11 sites were available, validation was not undertaken. Results showed that the model performed well in all cases. After local calibration, the model gave estimates of mean annual NO₂ concentrations within a factor of 1.5 of the actual mean approximately 70-90% of the time and within a factor of 2 between 70 and 100% of the time. r² values between modelled and observed concentrations are in the range of 0.58-0.76. These results are comparable to those achieved by more sophisticated dispersion models. The model also has several advantages over dispersion modelling. It is able, for example, to
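The regression-mapping step itself is ordinary least squares of measured NO₂ on the buffer-zone predictors. The sketch below uses synthetic data and illustrative variable definitions, not the SAVIAH predictors or coefficients.

```python
import numpy as np

def fit_lur(traffic, landcover, altitude, no2):
    """Minimal land-use-regression sketch: regress measured NO2 on traffic
    flow, land cover and altitude summaries for each monitoring site."""
    X = np.column_stack([np.ones_like(no2), traffic, landcover, altitude])
    beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
    return beta, X @ beta

rng = np.random.default_rng(3)
traffic = rng.uniform(0, 1, 80)      # 80 sites, as in the Huddersfield network
landcover = rng.uniform(0, 1, 80)
altitude = rng.uniform(0, 1, 80)
no2 = 20 + 15 * traffic + 5 * landcover - 3 * altitude + rng.normal(0, 1, 80)
beta, fitted = fit_lur(traffic, landcover, altitude, no2)
```

Once fitted, the same equation is evaluated on a GIS grid of predictor values to produce the pollution map, and recalibrated locally with a handful of monitored sites.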
Display methods of electronic patient record screens: patient privacy concerns.
Niimi, Yukari; Ota, Katsumasa
2013-01-01
To provide adequate care, medical professionals have to collect not only medical information but also information that may be related to private aspects of the patient's life. With patients' increasing awareness of information privacy, healthcare providers have to pay attention to the patients' right of privacy. This study aimed to clarify the requirements of the display method of electronic patient record (EPR) screens in consideration of both patients' information privacy concerns and health professionals' information needs. For this purpose, semi-structured group interviews were conducted with 78 medical professionals. They pointed out that partial concealment of information to meet patients' requests for privacy could result in challenges in (1) safety in healthcare, (2) information sharing, (3) collaboration, (4) hospital management, and (5) communication. They believed that EPRs should (1) meet the requirements of the therapeutic process, (2) have restricted access, (3) provide convenient access to necessary information, and (4) facilitate interprofessional collaboration. This study provides direction for the development of display methods that balance the sharing of vital information and protection of patient privacy.
Silva, João Paulo Santos; Mônaco, Luciana da Mata; Paschoal, André Monteiro; Oliveira, Ícaro Agenor Ferreira de; Leoni, Renata Ferranti
2018-05-16
Arterial spin labeling (ASL) is an established magnetic resonance imaging (MRI) technique that is finding broader applications in functional studies of the healthy and diseased brain. To promote improvement in cerebral blood flow (CBF) signal specificity, many algorithms and imaging procedures, such as subtraction methods, were proposed to eliminate or, at least, minimize noise sources. Therefore, this study addressed the main considerations of how CBF functional connectivity (FC) is changed, regarding resting brain network (RBN) identification and correlations between regions of interest (ROI), by different subtraction methods and removal of residual motion artifacts and global signal fluctuations (RMAGSF). Twenty young healthy participants (13 M/7F, mean age = 25 ± 3 years) underwent an MRI protocol with a pseudo-continuous ASL (pCASL) sequence. Perfusion-based images were obtained using simple, sinc and running subtraction. RMAGSF removal was applied to all CBF time series. Independent Component Analysis (ICA) was used for RBN identification, while Pearson's correlation was performed for ROI-based FC analysis. Temporal signal-to-noise ratio (tSNR) was higher in CBF maps obtained by sinc subtraction, although RMAGSF removal had a significant effect on maps obtained with simple and running subtractions. Neither the subtraction method nor the RMAGSF removal directly affected the identification of RBNs. However, the number of correlated and anti-correlated voxels varied for different subtraction and filtering methods. At an ROI-to-ROI level, changes were prominent in FC values and their statistical significance. Our study showed that both RMAGSF filtering and subtraction method might influence resting-state FC results, especially at an ROI level, consequently affecting FC analysis and its interpretation. Taking our results and the whole discussion together, we understand that for an exploratory assessment of the brain, one could avoid removing RMAGSF to
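Two of the three subtraction schemes can be sketched on an alternating control/label frame series. The sketch assumes control frames come first; a two-neighbour average stands in for the sinc-interpolated surround of true sinc subtraction, which is omitted.

```python
import numpy as np

def simple_subtraction(series):
    """Pairwise control - label: one perfusion point per control/label pair."""
    return series[0::2] - series[1::2]

def running_subtraction(series):
    """Each frame minus the average of its two neighbours (a linear stand-in
    for sinc interpolation), sign-corrected so perfusion stays positive;
    one perfusion point per interior frame."""
    i = np.arange(1, len(series) - 1)
    diff = series[1:-1] - 0.5 * (series[:-2] + series[2:])
    return np.where(i % 2 == 0, diff, -diff)

# Alternating control (100) / label (99) frames -> perfusion signal of 1
frames = np.array([100.0, 99.0] * 5)
perf_simple = simple_subtraction(frames)      # five perfusion points
perf_running = running_subtraction(frames)    # eight perfusion points
```

Running (and sinc) subtraction roughly doubles the temporal sampling of the perfusion series, which is why the choice interacts with FC estimates.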
Al-Harrasi, Ahmed; Rehman, Najeeb Ur; Mabood, Fazal; Albroumi, Muhammaed; Ali, Liaqat; Hussain, Javid; Hussain, Hidayat; Csuk, René; Khan, Abdul Latif; Alam, Tanveer; Alameri, Saif
2017-09-01
In the present study, for the first time, NIR spectroscopy coupled with PLS regression was developed as a rapid alternative method to quantify the amount of Keto-β-Boswellic Acid (KBA) in different plant parts of Boswellia sacra and the resin exudates of the trunk. NIR spectroscopy was used for the measurement of KBA standards and B. sacra samples in absorption mode in the wavelength range from 700-2500 nm. A PLS regression model was built from the obtained spectral data using 70% of the KBA standards (training set) in the range from 0.1 ppm to 100 ppm. The PLS regression model obtained had an R-square value of 98% with a correlation of 0.99, and good predictive ability with an RMSEP of 3.2 and a prediction correlation of 0.99. It was then used to quantify the amount of KBA in the samples of B. sacra. The results indicated that the MeOH extract of the resin has the highest concentration of KBA (0.6%) followed by the essential oil (0.1%). However, no KBA was found in the aqueous extract. The MeOH extract of the resin was subjected to column chromatography to obtain various sub-fractions at different polarities of organic solvents. The sub-fraction at 4% MeOH/CHCl3 (4.1% KBA) was found to contain the highest percentage of KBA, followed by another sub-fraction at 2% MeOH/CHCl3 (2.2% KBA). The present results also indicated that KBA is only present in the gum-resin of the trunk and not in all parts of the plant. These results were further confirmed through HPLC analysis, and therefore it is concluded that NIRS coupled with PLS regression is a rapid alternative method for quantification of KBA in Boswellia sacra. It is non-destructive, rapid, sensitive and uses simple methods of sample preparation.
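A bare-bones PLS1 (NIPALS) fit of the kind used to map spectra to a single concentration can be sketched as below. The synthetic "spectra" and component count are illustrative assumptions; real NIR calibration would also involve preprocessing and validation-based component selection.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 via NIPALS for a univariate response. Returns the regression
    vector b and the centering terms, so predictions are (X - xm) @ b + ym."""
    xm, ym = X.mean(0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)
        t = Xk @ w
        p = Xk.T @ t / (t @ t)
        qk = yk @ t / (t @ t)
        Xk = Xk - np.outer(t, p)      # deflate X
        yk = yk - qk * t              # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)
    return b, xm, ym

def pls1_predict(X_new, b, xm, ym):
    return (X_new - xm) @ b + ym

# Synthetic "spectra": 30 wavelengths, concentration from two latent bands
rng = np.random.default_rng(4)
spectra = rng.normal(size=(100, 30))
conc = spectra[:, 3] + 0.5 * spectra[:, 17]
b, xm, ym = pls1_fit(spectra, conc, n_components=5)
pred = pls1_predict(spectra, b, xm, ym)
```

The latent-variable compression is what lets PLS handle spectra whose wavelength count exceeds the number of calibration standards.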
Polynomial regression analysis and significance test of the regression function
International Nuclear Information System (INIS)
Gao Zhengming; Zhao Juan; He Shengping
2012-01-01
In order to analyze the decay heating power of a certain radioactive isotope per kilogram with the polynomial regression method, this paper first demonstrates the broad usage of polynomial functions and derives their parameters by ordinary least squares estimation. A significance test for the polynomial regression function is then derived, exploiting the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
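The procedure described (polynomial fit by ordinary least squares, then an overall significance test as in multivariable linear regression) can be sketched as follows; the decay-power data are synthetic placeholders, not the authors' isotope measurements:

```python
import numpy as np

def poly_ols(t, y, degree):
    """Ordinary least squares fit of y ~ b0 + b1*t + ... + bd*t^d."""
    A = np.vander(t, degree + 1, increasing=True)   # design matrix [1, t, t^2, ...]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, A

def regression_f_stat(y, y_hat, n_params):
    """F statistic for H0: all non-intercept coefficients are zero."""
    n = len(y)
    p = n_params - 1                                # non-intercept parameters
    ssr = np.sum((y_hat - y.mean()) ** 2)           # regression sum of squares
    sse = np.sum((y - y_hat) ** 2)                  # residual sum of squares
    return (ssr / p) / (sse / (n - n_params))

# Synthetic decay-heat-like data: quadratic trend plus measurement noise
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
y = 5.0 - 0.8 * t + 0.04 * t**2 + rng.normal(0, 0.05, t.size)

beta, A = poly_ols(t, y, degree=2)
f_stat = regression_f_stat(y, A @ beta, n_params=3)
# Compare f_stat with the F(p, n-p-1) critical value to judge significance.
```

The F statistic is exactly the multivariable-linear-regression test applied to the powers of t as regressors, which is the similarity the abstract exploits.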
Chen, Qingxia; Ibrahim, Joseph G
2014-07-01
Multiple imputation, maximum likelihood and fully Bayesian methods are the three most commonly used model-based approaches to missing data problems. Although it is easy to show that when the responses are missing at random (MAR) the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small-sample and asymptotic expressions for the estimates and standard errors, and fully examine how these estimates are related across the three approaches, in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. A simulation study and a real data set from a liver cancer clinical trial are used to compare the properties of these methods when the responses are MAR.
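The unbiasedness claim for complete-case analysis under MAR can be illustrated with a small simulation; this is a hedged sketch of the mechanism, not the paper's derivation, and the sample sizes and coefficients are arbitrary:

```python
import numpy as np

def complete_case_slope(x, y):
    """OLS slope using only the complete cases (rows where y is observed)."""
    keep = ~np.isnan(y)
    xk, yk = x[keep], y[keep]
    return np.cov(xk, yk, bias=True)[0, 1] / np.var(xk)

rng = np.random.default_rng(2)
true_slope, n_reps, n = 2.0, 500, 400
estimates = []
for _ in range(n_reps):
    x = rng.normal(0, 1, n)
    y = 1.0 + true_slope * x + rng.normal(0, 1, n)
    # MAR: missingness probability depends on the observed x only, never on y
    p_miss = 1 / (1 + np.exp(-x))          # more missingness for larger x
    y[rng.uniform(size=n) < p_miss] = np.nan
    estimates.append(complete_case_slope(x, y))

bias = np.mean(estimates) - true_slope      # near zero under MAR
```

Because selection depends only on x, the conditional mean of y given x is unchanged among complete cases, so the slope estimate stays centered on the truth.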
Seber, George A F
2012-01-01
Concise, mathematically clear, and comprehensive treatment of the subject. Expanded coverage of diagnostics and methods of model fitting. Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models. More than 200 problems throughout the book, plus outline solutions for the exercises. This revision has been extensively class-tested.
Balabin, Roman M; Lomakina, Ekaterina I
2011-04-21
In this study, we make a general comparison of the accuracy and robustness of five multivariate calibration models: partial least squares (PLS) regression or projection to latent structures, polynomial partial least squares (Poly-PLS) regression, artificial neural networks (ANNs), and two novel techniques based on support vector machines (SVMs) for multivariate data analysis: support vector regression (SVR) and least-squares support vector machines (LS-SVMs). The comparison is based on fourteen (14) different datasets: seven sets of gasoline data (density, benzene content, and fractional composition/boiling points), two sets of ethanol gasoline fuel data (density and ethanol content), one set of diesel fuel data (total sulfur content), three sets of petroleum (crude oil) macromolecules data (weight percentages of asphaltenes, resins, and paraffins), and one set of petroleum resins data (resins content). Vibrational (near-infrared, NIR) spectroscopic data are used to predict the properties and quality coefficients of gasoline, biofuel/biodiesel, diesel fuel, and other samples of interest. The four systems presented here range greatly in composition, properties, strength of intermolecular interactions (e.g., van der Waals forces, H-bonds), colloid structure, and phase behavior. Due to the high diversity of chemical systems studied, general conclusions about SVM regression methods can be made. We try to answer the following question: to what extent can SVM-based techniques replace ANN-based approaches in real-world (industrial/scientific) applications? The results show that both SVR and LS-SVM methods are comparable to ANNs in accuracy. Due to the much higher robustness of the former, the SVM-based approaches are recommended for practical (industrial) application. This has been shown to be especially true for complicated, highly nonlinear objects.
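Of the five calibration models compared, the least-squares support vector machine reduces to solving one linear system, which makes it easy to sketch; the RBF kernel width and regularization value below are illustrative choices, not the study's tuned settings:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """LS-SVM regression: solve the KKT linear system for bias b and alphas."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma       # ridge term from the squared-error loss
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return X, sol[1:], sol[0], sigma         # training X, alpha, b, sigma

def lssvm_predict(model, Xnew):
    Xtr, alpha, b, sigma = model
    return rbf_kernel(Xnew, Xtr, sigma) @ alpha + b

# Toy nonlinear calibration problem standing in for the spectroscopic data
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 80)
model = lssvm_fit(X, y)
fit_rmse = np.sqrt(np.mean((lssvm_predict(model, X) - y) ** 2))
```

Unlike epsilon-insensitive SVR, which requires quadratic programming, the LS-SVM's equality constraints yield this single solve, one reason for its practical robustness.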
Farhadian, Maryam; Aliabadi, Mohsen; Darvishi, Ebrahim
2015-01-01
Prediction models are used in a variety of medical domains, and they are frequently built from experience, which constitutes data acquired from actual cases. This study aimed to analyze the potential of artificial neural networks and logistic regression techniques for estimation of hearing impairment among industrial workers. A total of 210 workers employed in a steel factory (in the west of Iran) were selected, and their occupational exposure histories were analyzed. The hearing loss thresholds of the studied workers were determined using a calibrated audiometer. The personal noise exposures were also measured using a noise dosimeter in the workstations. Data obtained from five variables that can influence hearing loss were used as input features, and the hearing loss thresholds were considered as the target feature of the prediction methods. Multilayer feedforward neural networks and logistic regression were developed using MATLAB R2011a software. Based on the World Health Organization classification for the grades of hearing loss, 74.2% of the studied workers had normal hearing thresholds, 23.4% had slight hearing loss, and 2.4% had moderate hearing loss. The accuracy and kappa coefficient of the best developed neural networks for prediction of the grades of hearing loss were 88.6 and 66.30, respectively. The accuracy and kappa coefficient of the logistic regression were 84.28 and 51.30, respectively. Neural networks could provide more accurate predictions of the hearing loss than logistic regression. The prediction method can provide reliable and comprehensible information for occupational health and medicine experts.
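The logistic-regression arm of such a comparison can be sketched in a few lines of numpy: fit by iteratively reweighted least squares and score with accuracy and Cohen's kappa. The data below are a synthetic stand-in for the audiometric variables, not the study's worker records:

```python
import numpy as np

def logistic_fit(X, y, n_iter=25):
    """Binary logistic regression fitted by Newton's method (IRLS)."""
    Xd = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-Xd @ beta))
        W = p * (1 - p)                          # IRLS weights
        grad = Xd.T @ (y - p)
        hess = (Xd * W[:, None]).T @ Xd + 1e-8 * np.eye(Xd.shape[1])
        beta += np.linalg.solve(hess, grad)
    return beta

def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement between two binary label vectors."""
    po = np.mean(y_true == y_pred)
    pe = (np.mean(y_true) * np.mean(y_pred)
          + np.mean(1 - y_true) * np.mean(1 - y_pred))
    return (po - pe) / (1 - pe)

# Synthetic "exposure -> impairment" data (illustrative only)
rng = np.random.default_rng(4)
X = rng.normal(0, 1, (300, 3))
logit = 0.5 + 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.uniform(size=300) < 1 / (1 + np.exp(-logit))).astype(float)

beta = logistic_fit(X, y)
Xd = np.column_stack([np.ones(300), X])
pred = (1 / (1 + np.exp(-Xd @ beta)) > 0.5).astype(float)
accuracy = np.mean(pred == y)
kappa = cohens_kappa(y, pred)
```

Kappa is the more informative score here because, as in the study, the outcome classes are imbalanced and raw accuracy can look good by always predicting the majority class.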
Mohammad, Fahim; Theisen-Toupal, Jesse C; Arnaout, Ramy
2014-01-01
Laboratory testing is the single highest-volume medical activity, making it useful to ask how well one can anticipate whether a given test result will be high, low, or within the reference interval ("normal"). We analyzed 10 years of electronic health records--a total of 69.4 million blood tests--to see how well standard rule-mining techniques can anticipate test results based on patient age and gender, recent diagnoses, and recent laboratory test results. We evaluated rules according to their positive and negative predictive value (PPV and NPV) and area under the receiver-operator characteristic curve (ROC AUCs). Using a stringent cutoff of PPV and/or NPV≥0.95, standard techniques yield few rules for sendout tests but several for in-house tests, mostly for repeat laboratory tests that are part of the complete blood count and basic metabolic panel. Most rules were clinically and pathophysiologically plausible, and several seemed clinically useful for informing pre-test probability of a given result. But overall, rules were unlikely to be able to function as a general substitute for actually ordering a test. Improving laboratory utilization will likely require different input data and/or alternative methods.
Statistical Methods for Single-Particle Electron Cryomicroscopy
DEFF Research Database (Denmark)
Jensen, Katrine Hommelhoff
Electron cryomicroscopy (cryo-EM) is a form of transmission electron microscopy, aimed at reconstructing the 3D structure of a macromolecular complex from a large set of 2D projection images, which exhibit a very low signal-to-noise ratio (SNR). In the single-particle reconstruction (SPR) probl...
Application of maximum entropy method for the study of electron ...
Indian Academy of Sciences (India)
in terms of the computing power of the machine on which it runs. Since the electron ... Table 1. The Debye–Waller factors of individual atoms and the reliability indices of three sulphides. .... The size of the electron cloud indicates the size of the ...
Directory of Open Access Journals (Sweden)
Natalia N. Gorinchoy
2012-06-01
Full Text Available The electron-conformational (EC) method is employed for toxicophore (Tph) identification and quantitative prediction of toxicity using a training set of 24 compounds that are considered fragrance allergens. The values of a = LD50 in oral exposure of rats were chosen as the measure of toxicity. EC parameters are evaluated on the basis of conformational analysis and ab initio electronic structure calculations (including solvent influence). The Tph consists of four sites, which in this series of compounds are represented by three carbon atoms and one oxygen atom, but may be any other atoms that have the same electronic and geometric features within the tolerance limits. The regression model, taking into consideration Tph flexibility, anti-Tph shielding, and the influence of out-of-Tph functional groups, predicts the experimental values of toxicity well (R2 = 0.93) with a reasonable leave-one-out cross-validation.
Donnelly, Aoife; Misstear, Bruce; Broderick, Brian
2011-02-15
Background concentrations of nitrogen dioxide (NO(2)) are not constant but vary temporally and spatially. The current paper presents a powerful tool for quantifying the effects of wind direction and wind speed on background NO(2) concentrations, particularly in cases where monitoring data are limited. In contrast to previous studies, which applied similar methods to sites directly affected by local pollution sources, the current study focuses on background sites, with the aim of improving the methods for predicting background concentrations adopted in air quality modelling studies. The relationship between the measured NO(2) concentration in air at three such sites in Ireland and locally measured wind direction has been quantified using nonparametric regression methods. The major aim was to analyse a method for quantifying the effects of local wind direction on background levels of NO(2) in Ireland. The method was expanded to include wind speed as an added predictor variable. A Gaussian kernel function is used in the analysis, and circular statistics are employed for the wind direction variable. Wind direction and wind speed were both found to have a statistically significant effect on background levels of NO(2) at all three sites. Environmental impact assessments are frequently based on short-term baseline monitoring, producing a limited dataset. The presented nonparametric regression methods, in contrast to frequently used methods such as binning of the data, allow concentrations for missing data pairs to be estimated and a distinction between spurious and true peaks in concentrations to be made. The methods were found to provide a realistic estimation of long-term concentration variation with wind direction and speed, even for cases where the data set is limited. Accurate identification of the actual variation at each location and its causative factors could be made, thus supporting the improved definition of background concentrations for use in air quality modelling
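The core of the approach, a Gaussian-kernel nonparametric (Nadaraya-Watson) regression that treats wind direction as a circular variable, can be sketched as follows. The bandwidth and the synthetic NO2 pattern are illustrative assumptions, not the Irish monitoring data:

```python
import numpy as np

def circular_diff_deg(a, b):
    """Smallest angular difference in degrees between directions a and b."""
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def nw_regression_circular(theta_obs, y_obs, theta_grid, bandwidth=20.0):
    """Nadaraya-Watson estimate of E[y | wind direction], Gaussian kernel."""
    d = circular_diff_deg(theta_grid[:, None], theta_obs[None, :])
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # kernel weights
    return (w @ y_obs) / w.sum(axis=1)           # weighted local mean

# Synthetic background NO2 pattern: elevated for easterly (90 deg) winds
rng = np.random.default_rng(5)
theta = rng.uniform(0, 360, 600)
no2 = 10 + 8 * np.exp(-0.5 * (circular_diff_deg(theta, 90.0) / 40.0) ** 2)
no2 += rng.normal(0, 1.0, 600)

grid = np.arange(0, 360, 10.0)
estimate = nw_regression_circular(theta, no2, grid)
peak_direction = grid[np.argmax(estimate)]       # recovered elevated sector
```

Because the kernel uses wrapped angular distance, 350 degrees and 10 degrees are treated as neighbours, which ordinary binning or a linear kernel on raw degrees would get wrong, and the smooth estimate gives a value even for directions with no observations.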
Linear regression in astronomy. I
Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh
1990-01-01
Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
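Three of the five slopes compared are short enough to sketch directly; the bisector slope below follows the standard formula from this astronomical linear-regression literature (Isobe et al. 1990), with synthetic bivariate data as a placeholder:

```python
import numpy as np

def regression_slopes(x, y):
    """OLS(Y|X), OLS(X|Y) and OLS-bisector slopes (Isobe et al. 1990)."""
    xm, ym = x - x.mean(), y - y.mean()
    sxx, syy, sxy = xm @ xm, ym @ ym, xm @ ym
    b1 = sxy / sxx                      # OLS regression of Y on X
    b2 = syy / sxy                      # inverse slope of OLS of X on Y
    # Bisector of the two OLS lines, symmetric in X and Y
    b3 = (b1 * b2 - 1.0 + np.sqrt((1 + b1**2) * (1 + b2**2))) / (b1 + b2)
    return b1, b2, b3

# Symmetric scatter about the line y = x: neither variable is "dependent"
rng = np.random.default_rng(6)
t = rng.normal(0, 2, 500)
x = t + rng.normal(0, 1, 500)
y = t + rng.normal(0, 1, 500)
b1, b2, b3 = regression_slopes(x, y)
```

With equal scatter in both coordinates, OLS(Y|X) biases the slope low and OLS(X|Y) biases it high, while the bisector sits between them near the true symmetric slope of 1, which is the paper's argument for using it when the variables deserve symmetric treatment.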
International Nuclear Information System (INIS)
Hoffman, M J H; Claassens, C H
2006-01-01
A density matrix based fictitious electron dynamics method for calculating electronic structure has been implemented within a semi-empirical quantum chemistry environment. This method uses an equation of motion that implicitly ensures the idempotency constraint on the density matrix. Test calculations showed that this method has the potential to be combined with simultaneous atomic dynamics, in analogy to the popular Car-Parrinello method. In addition, the sparsity of the density matrix and the sophisticated yet flexible way of ensuring idempotency conservation while integrating the equation of motion create the potential for developing a fast linear-scaling method.
Classification and regression trees
Breiman, Leo; Olshen, Richard A; Stone, Charles J
1984-01-01
The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.
International Nuclear Information System (INIS)
Toderean, A; Ilonca, Gh.
1981-01-01
The discovery of different kinds of interactions between solids and photonic, electronic and ionic beams has led to the development of many new, very sensitive physical methods for the study of solids. This monograph presents some of these methods, useful in compositional analysis, in the study of electronic properties and in the study of the surface processes of solid substances, from the point of view both of the physical phenomena underlying them and of the information obtainable with such methods. The monograph is limited to methods based on the electronic properties of the elements present in the solid samples studied, and this paper presents only those in which the detected beam is an electron beam: ELS, DAPS, ILS, AES, AEAPS, INS, TSS, XPS and UPS. (authors)
New method of ionization energy calculation for two-electron ions
International Nuclear Information System (INIS)
Ershov, D.K.
1997-01-01
A new method for calculating the ionization energy of two-electron ions is proposed. The method is based on calculating the energy of the second electron's interaction with the field of a one-electron ion, whose potential is well known
Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A
2015-03-15
The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
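A compact Monte Carlo sketch of the mechanism described: when a continuous confounder is quartile-categorized before adjustment, residual confounding inflates the Type-I error for the exposure. The sample size, effect strengths, and replicate count below are illustrative, not the paper's 9600-simulation design:

```python
import numpy as np

def ols_t_stat(X, y, col):
    """t statistic for one coefficient in an OLS fit (normal approximation)."""
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[col] / np.sqrt(cov[col, col])

def quartile_dummies(z):
    """Indicator columns for quartiles 2-4 of z (quartile 1 is the reference)."""
    q = np.searchsorted(np.quantile(z, [0.25, 0.5, 0.75]), z)
    return (q[:, None] == np.arange(1, 4)[None, :]).astype(float)

rng = np.random.default_rng(7)
n, reps = 200, 500
reject_cat = reject_cont = 0
for _ in range(reps):
    z = rng.normal(0, 1, n)                       # continuous confounder
    x = z + rng.normal(0, 0.5, n)                 # exposure, driven by z
    y = z + rng.normal(0, 0.5, n)                 # outcome: NO true x effect
    ones = np.ones((n, 1))
    X_cont = np.hstack([ones, x[:, None], z[:, None]])
    X_cat = np.hstack([ones, x[:, None], quartile_dummies(z)])
    reject_cont += abs(ols_t_stat(X_cont, y, col=1)) > 1.96
    reject_cat += abs(ols_t_stat(X_cat, y, col=1)) > 1.96

rate_cont = reject_cont / reps    # close to the nominal 5%
rate_cat = reject_cat / reps      # inflated well above 5%
```

The within-quartile variation of the confounder still links exposure and outcome after categorical adjustment, which is exactly the residual confounding the paper quantifies.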
Directory of Open Access Journals (Sweden)
Mok Tik
2014-06-01
Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena, realized also as vector variables (the independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of the independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
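The complex-number representation can be sketched concisely: encode each 2D vector as east + i*north, build the complex design matrix, and solve by least squares. The typhoon-like displacement data below are synthetic placeholders, not the paper's example:

```python
import numpy as np

# Encode 2D vectors as complex numbers: v = east + i * north
rng = np.random.default_rng(8)
n = 100
z1 = rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)   # independent vector variable
z2 = rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)   # second explanatory vector

# True model: each coefficient is itself a vector (complex number), so it
# rotates and scales its regressor, unlike a scalar regression coefficient.
a_true, b_true, c_true = 1.0 + 0.5j, -0.3 + 1.2j, 0.2 - 0.1j
w = a_true * z1 + b_true * z2 + c_true
w += rng.normal(0, 0.05, n) + 1j * rng.normal(0, 0.05, n)  # vector noise

# Complex least squares: numpy's lstsq handles complex design matrices directly
A = np.column_stack([z1, z2, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, w, rcond=None)
a_hat, b_hat, c_hat = coef
```

Multiplication by a complex coefficient is rotation plus scaling, which is why a single complex unknown captures what would otherwise need a constrained 2x2 block in a real multivariate regression.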
International Nuclear Information System (INIS)
Molchanov, V.N.; Kazanskij, L.P.; Torchenkova, E.A.; Spitsyn, V.I.
1978-01-01
X-ray electron spectra of some iso- and heteropolymolybdates belonging to different structure types are investigated to study the electronic structure of complex polyoxoions (heteropolyanions). Binding energies of the Mo 3d(5/2) and O 1s electrons in iso- and heteropolycompounds are measured and their interdependence is detected. The effective charges of the oxygen and molybdenum atoms in heteropolymolybdates increase with a decreasing number of outer-sphere cations per oxygen atom and number of Mo=O multiple bonds
Differentiating regressed melanoma from regressed lichenoid keratosis.
Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A
2017-04-01
Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near-complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% of regressed melanomas, as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near-complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Fridgeirsdottir, Gudrun A; Harris, Robert J; Dryden, Ian L; Fischer, Peter M; Roberts, Clive J
2018-03-29
Solid dispersions can be a successful way to enhance the bioavailability of poorly soluble drugs. Here 60 solid dispersion formulations were produced using ten chemically diverse, neutral, poorly soluble drugs, three commonly used polymers, and two manufacturing techniques, spray-drying and melt extrusion. Each formulation underwent a six-month stability study at accelerated conditions, 40 °C and 75% relative humidity (RH). Significant differences in times to crystallization (onset of crystallization) were observed between both the different polymers and the two processing methods. Stability from zero days to over one year was observed. The extensive experimental data set obtained from this stability study was used to build multiple linear regression models to correlate physicochemical properties of the active pharmaceutical ingredients (API) with the stability data. The purpose of these models is to indicate which combination of processing method and polymer carrier is most likely to give a stable solid dispersion. Six quantitative mathematical multiple linear regression-based models were produced based on selection of the most influential independent physical and chemical parameters from a set of 33 possible factors, one model for each combination of polymer and processing method, with good predictability of stability. Three general rules are proposed from these models for the formulation development of suitably stable solid dispersions. Namely, increased stability is correlated with increased glass transition temperature ( T g ) of solid dispersions, as well as decreased number of H-bond donors and increased molecular flexibility (such as rotatable bonds and ring count) of the drug molecule.
System and method for compressive scanning electron microscopy
Reed, Bryan W
2015-01-13
A scanning transmission electron microscopy (STEM) system is disclosed. The system may make use of an electron beam scanning system configured to generate a plurality of electron beam scans over substantially an entire sample, with each scan varying in electron-illumination intensity over a course of the scan. A signal acquisition system may be used for obtaining at least one of an image, a diffraction pattern, or a spectrum from the scans, the image, diffraction pattern, or spectrum representing only information from at least one of a select subplurality or linear combination of all pixel locations comprising the image. A dataset may be produced from the information. A subsystem may be used for mathematically analyzing the dataset to predict actual information that would have been produced by each pixel location of the image.
Managing electronic records methods, best practices, and technologies
Smallwood, Robert F
2013-01-01
The ultimate guide to electronic records management, featuring a collaboration of expert practitioners including over 400 cited references documenting today's global trends, standards, and best practices Nearly all business records created today are electronic, and are increasing in number at breathtaking rates, yet most organizations do not have the policies and technologies in place to effectively organize, search, protect, preserve, and produce these records. Authored by an internationally recognized expert on e-records in collaboration with leading subject matter experts worldwide
A Reliability-Oriented Design Method for Power Electronic Converters
DEFF Research Database (Denmark)
Wang, Huai; Zhou, Dao; Blaabjerg, Frede
2013-01-01
Reliability is a crucial performance indicator of power electronic systems in terms of availability, mission accomplishment and life cycle cost. A paradigm shift in the research on reliability of power electronics is going on from simple handbook based calculations (e.g. models in MIL-HDBK-217F h...... and reliability prediction models are provided. A case study on a 2.3 MW wind power converter is discussed with emphasis on the reliability critical component IGBT modules....
International Nuclear Information System (INIS)
Berthiau, G.
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, originating in the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted using analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with an open electrical simulator, SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
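The core loop of the method, simulated annealing over continuous variables in a hyper-rectangular domain with geometric cooling and a Metropolis acceptance rule, can be sketched as follows. The objective, cooling rate, and step rule are illustrative choices, not the SPICE-PAC setup described:

```python
import numpy as np

def simulated_annealing(f, lower, upper, n_iter=20000, t0=1.0, cooling=0.9995,
                        seed=0):
    """Minimize f over the box [lower, upper] with geometric cooling."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = rng.uniform(lower, upper)
    fx, temp = f(x), t0
    best_x, best_f = x.copy(), fx
    for _ in range(n_iter):
        step = rng.normal(0, 0.1 * (upper - lower) * temp)  # moves shrink as T drops
        cand = np.clip(x + step, lower, upper)               # stay inside the box
        fc = f(cand)
        # Metropolis rule: always accept improvements, sometimes accept uphill moves
        if fc < fx or rng.uniform() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        temp *= cooling
    return best_x, best_f

# Toy "circuit sizing" objective with a known optimum at (1, 2, 3)
target = np.array([1.0, 2.0, 3.0])
objective = lambda x: np.sum((x - target) ** 2)
x_best, f_best = simulated_annealing(objective, [-5, -5, -5], [5, 5, 5])
```

In the circuit application each objective evaluation would be a full simulator run, which is why the paper's partitioning strategy for keeping CPU time proportional to the number of variables matters.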
International Nuclear Information System (INIS)
Lahreche, A.; Beggah, Y.; Corkish, R.
2011-01-01
The effect of electron range on electron beam induced current (EBIC) is demonstrated, and the problem of choosing the optimal electron ranges to use with simple uniform and point generation function models is resolved by proposing a method to extract an electron range-energy relationship (ERER). The results show that the use of these extracted electron ranges removes the previous disagreement between EBIC curves computed with simple forms of generation model and those based on a more realistic generation model. The impact of these extracted electron ranges on the extraction of diffusion length, surface recombination velocity and the EBIC contrast of defects is discussed. It is also demonstrated that, for the case of uniform generation, the computed EBIC current is independent of the assumed shape of the generation volume. -- Highlights: → The effect of electron ranges on modeling electron beam induced current is shown. → A method to extract an electron range for a simple form of generation is proposed. → For uniform generation the EBIC current is independent of the choice of its shape. → Use of the extracted electron ranges removes some existing literature ambiguity.
Czech Academy of Sciences Publication Activity Database
Zelinka, Jiří; Oral, Martin; Radlička, Tomáš
2018-01-01
Vol. 184, JAN (2018), p. 66-76 ISSN 0304-3991 R&D Projects: GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords: space charge * self-consistent simulation * aberration polynomial * electron emission Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering Impact factor: 2.843, year: 2016
Method and electronic database search engine for exposing the content of an electronic database
Stappers, P.J.
2000-01-01
The invention relates to an electronic database search engine comprising an electronic memory device suitable for storing and releasing elements from the database, a display unit, a user interface for selecting and displaying at least one element from the database on the display unit, and control
Method for controlling low-energy high current density electron beams
International Nuclear Information System (INIS)
Lee, J.N.; Oswald, R.B. Jr.
1977-01-01
A method and an apparatus for controlling the angle of incidence of low-energy, high current density electron beams are disclosed. The apparatus includes a current generating diode arrangement with a mesh anode for producing a drifting electron beam. An auxiliary grounded screen electrode is placed between the anode and a target for controlling the average angle of incidence of electrons in the drifting electron beam. According to the method of the present invention, movement of the auxiliary screen electrode relative to the target and the anode permits reliable and reproducible adjustment of the average angle of incidence of the electrons in low energy, high current density relativistic electron beams
A data driven method to measure electron charge mis-identification rate
Bakhshiansohi, Hamed
2009-01-01
Electron charge mis-measurement is an important challenge in analyses which depend on the charge of the electron. To estimate the probability of electron charge mis-measurement, a data-driven method is introduced, and good agreement with MC-based methods is achieved. The third moment of the φ distribution of hits in the electron SuperCluster is studied. The correlation between this variable and the electron charge is also investigated. Using this 'new' variable and some other variables, the electron charge measurement is improved by two different approaches.
Rossi, M.; Apuani, T.; Felletti, F.
2009-04-01
The aim of this paper is to compare the results of two statistical methods for landslide susceptibility analysis: 1) a univariate probabilistic method based on a landslide susceptibility index, and 2) a multivariate method (logistic regression). The study area is the Febbraro valley, located in the central Italian Alps, where different types of metamorphic rocks crop out. In the eastern part of the studied basin a Quaternary cover, represented by colluvial and, secondarily, glacial deposits, is dominant. In this study 110 earth flows, mainly located in the NE portion of the catchment, were analyzed. They involve only the colluvial deposits, and their extension mainly ranges from 36 to 3173 m2. Both statistical methods require a spatial database, constructed using a Geographical Information System (GIS), in which each landslide is described by several parameters assigned at the central point of its main scarp. Based on a bibliographic review, a total of 15 predisposing factors were utilized. The widths of the intervals into which the maps of the predisposing factors were reclassified were defined assuming constant intervals for: elevation (100 m), slope (5°), solar radiation (0.1 MJ/cm2/year), profile curvature (1.2 1/m), tangential curvature (2.2 1/m), drainage density (0.5), and lineament density (0.00126). For the other parameters, the results of probability-probability plot analysis and the statistical indices of the landslide sites were used: in particular slope length (0 ÷ 2, 2 ÷ 5, 5 ÷ 10, 10 ÷ 20, 20 ÷ 35, 35 ÷ 260), flow accumulation (0 ÷ 1, 1 ÷ 2, 2 ÷ 5, 5 ÷ 12, 12 ÷ 60, 60 ÷ 27265), Topographic Wetness Index (0 ÷ 0.74, 0.74 ÷ 1.94, 1.94 ÷ 2.62, 2.62 ÷ 3.48, 3.48 ÷ 6.00, 6.00 ÷ 9.44), Stream Power Index (0 ÷ 0.64, 0.64 ÷ 1.28, 1.28 ÷ 1.81, 1.81 ÷ 4.20, 4.20 ÷ 9
Ozdemir, Adnan
2011-07-01
The purpose of this study is to produce a groundwater spring potential map of the Sultan Mountains in central Turkey, based on a logistic regression method within a Geographic Information System (GIS) environment. Using field surveys, the locations of 440 springs were determined in the study area. Seventeen spring-related factors were used in the analysis: geology, relative permeability, land use/land cover, precipitation, elevation, slope, aspect, total curvature, plan curvature, profile curvature, wetness index, stream power index, sediment transport capacity index, distance to drainage, distance to fault, drainage density, and fault density. The coefficients of the predictor variables were estimated using binary logistic regression analysis and were used to calculate the groundwater spring potential for the entire study area. The accuracy of the final spring potential map was evaluated based on the observed springs, using the relative operating characteristic curve; the area under the curve was found to be 0.82. These results indicate that the model is a good estimator of the spring potential in the study area. The spring potential map shows that the areas of the very low, low, moderate and high groundwater spring potential classes are 105.586 km² (28.99%), 74.271 km² (19.906%), 101.203 km² (27.14%), and 90.05 km² (24.671%), respectively. Interpretation of the potential map showed that stream power index, relative permeability of lithologies, geology, elevation, aspect, wetness index, plan curvature, and drainage density play major roles in spring occurrence and distribution in the Sultan Mountains. The logistic regression approach had not previously been used to delineate groundwater potential zones; in this study, it was used to locate potential zones for groundwater springs in the Sultan Mountains. The evolved model
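The relative-operating-characteristic evaluation reduces to an area-under-the-ROC-curve computation, which equals the Mann-Whitney probability that a random positive outranks a random negative. A minimal sketch on hypothetical spring/no-spring scores (all values invented):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive (spring) cell
    scores higher than a randomly chosen negative cell."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical potential scores for observed spring (1) / no-spring (0) cells
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.65, 0.3, 0.7, 0.2]
area = auc(labels, scores)
```

An AUC of 0.5 means no discrimination and 1.0 means perfect ranking, so the study's 0.82 indicates a usefully discriminating map.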
NEW METHOD TO ATTACH WEARABLE ELECTRONICS TO CLOTHES
Directory of Open Access Journals (Sweden)
FERRI PASCUAL Josué
2015-05-01
The integration of electronic devices and sensors into textiles has many potential applications. Textile fabrics, from clothing to upholstery and home textiles, are an integral part of daily life, and the ability to combine electronics with textiles means that a huge range of valuable data can be collected and used by the wearer to monitor their health, performance and wellbeing, among other uses. One of the most pressing challenges is that of interconnecting electronic components via the textile fibres in a robust and reliable way. Another aspect to be studied is the ability for the electronics to be connected and disconnected when necessary; for example, when charging the batteries or washing the garment. This development addresses that aspect in particular, to facilitate ease of use among older people. In addition, the complete package must be comfortable enough not to restrict movement, and must be unobtrusive so as to avoid any embarrassment to the wearer. The present paper presents a new solution for the connection of electronic measuring and monitoring devices to textile sensors to monitor variables such as movement, temperature, heart rate and breathing.
New characterisation method of electrical and electronic equipment wastes (WEEE)
Energy Technology Data Exchange (ETDEWEB)
Menad, N., E-mail: n.menad@brgm.fr [BRGM, 3 av. C. Guillemin, 45060 Orléans (France); Guignot, S. [BRGM, 3 av. C. Guillemin, 45060 Orléans (France); Houwelingen, J.A. van, E-mail: recy.cling@iae.nl [Recycling Consult, Eindhoven (Netherlands)
2013-03-15
Highlights: ► A novel method of characterisation of components contained in WEEE has been developed. ► This technique was applied on several samples generated from different recycling plants. ► Handheld NIR and XRF were used to determine types of plastics and flame retardants. ► A WEEE processing flow-sheet was suggested. - Abstract: Innovative separation and beneficiation techniques for the various materials encountered in waste electrical and electronic equipment (WEEE) are a major improvement for its recycling. Mechanical separation-oriented characterisation of WEEE was conducted in an attempt to evaluate the amenability of mechanical separation processes. Properties such as the liberation degree of fractions (plastics, ferrous and non-ferrous metals), which are essential for mechanical separation, were analysed by means of a grain counting approach. Two samples from different recycling industries were characterised in this work. The first sample is a heterogeneous material containing different types of plastics, ferrous and non-ferrous metals, printed circuit board (PCB), rubber and wood. The second sample contains a mixture of mainly plastics. For the first sample, all aluminium particles are free (100%) in all investigated size fractions. Between 92% and 95% of plastics are present as free particles; however, only 67% of ferromagnetic particles, on average, are liberated, and only 42% of ferromagnetic particles are free in the size fraction larger than 20 mm. Particle shapes were also quantified manually, particle by particle. The results show that the particle shapes resulting from shredding are heterogeneous, complicating mechanical separation processes. In addition, the separability of various materials was ascertained by sink–float analysis and eddy current separation. The second sample was separated by automatic sensor sorting into four different products: ABS, PC–ABS, PS and a rest product. The
Pedrini, D. T.; Pedrini, Bonnie C.
Regression, another mechanism studied by Sigmund Freud, has prompted much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to the Russian reflexologists.…
Ozdemir, Adnan; Altural, Tolga
2013-03-01
This study evaluated and compared landslide susceptibility maps produced with three different methods, frequency ratio, weights of evidence, and logistic regression, using validation datasets. The field surveys performed as part of this investigation mapped the locations of 90 landslides that had been identified in the Sultan Mountains of south-western Turkey. The landslide influence parameters used for this study are geology, relative permeability, land use/land cover, precipitation, elevation, slope, aspect, total curvature, plan curvature, profile curvature, wetness index, stream power index, sediment transport capacity index, distance to drainage, distance to fault, drainage density, fault density, and spring density maps. The relationships between landslide distributions and these parameters were analysed using the three methods, and the results were then used to calculate the landslide susceptibility of the entire study area. The accuracy of the final landslide susceptibility maps was evaluated based on the landslides observed during the fieldwork, and the accuracy of the models was evaluated by calculating each model's relative operating characteristic curve. The predictive capability of each model was determined from the area under its relative operating characteristic curve; the areas under the curves obtained using the frequency ratio, logistic regression, and weights of evidence methods are 0.976, 0.952, and 0.937, respectively. These results indicate that the frequency ratio and weights of evidence models are relatively good estimators of landslide susceptibility in the study area. Specifically, the correlation analysis shows a high correlation between the frequency ratio and weights of evidence results, while the frequency ratio and logistic regression methods exhibit correlation coefficients of 0.771 and 0.727, respectively. The frequency ratio model is simple, and its input, calculation and output processes are
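The frequency ratio step has a compact definition: the share of landslides falling in a factor class divided by the share of the study area occupied by that class, with FR > 1 flagging above-average susceptibility. A sketch on hypothetical slope-class pixel counts (not the study's data):

```python
def frequency_ratio(class_pixels, landslide_pixels):
    """Frequency ratio per class: (share of landslides in the class)
    divided by (share of the study area in the class)."""
    total_area = sum(class_pixels.values())
    total_slides = sum(landslide_pixels.values())
    return {c: (landslide_pixels[c] / total_slides) / (class_pixels[c] / total_area)
            for c in class_pixels}

# Hypothetical pixel counts per slope class (degrees)
area = {"0-15": 5000, "15-30": 3000, "30-45": 2000}
slides = {"0-15": 10, "15-30": 30, "30-45": 60}
fr = frequency_ratio(area, slides)
```

Summing the class FR values of every factor at each map cell then yields the susceptibility index that the validation curves score.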
Subset selection in regression
Miller, Alan
2002-01-01
Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition:
- A separate chapter on Bayesian methods
- Complete revision of the chapter on estimation
- A major example from the field of near infrared spectroscopy
- More emphasis on cross-validation
- Greater focus on bootstrapping
- Stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible
- Software available on the Internet for implementing many of the algorithms presented
- More examples
Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...
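For a small predictor pool, the exhaustive search the book starts from can be sketched in a few lines: score every non-empty subset by its residual sum of squares from an ordinary least-squares fit. The data below are toy values (real best-subset software uses branch-and-bound or the stochastic algorithms mentioned above when enumeration is infeasible):

```python
from itertools import combinations

def ols_rss(X, y):
    """Residual sum of squares of an OLS fit (with intercept),
    solving the normal equations by Gaussian elimination."""
    rows = [[1.0] + list(x) for x in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):                    # forward elimination with pivoting
        piv = max(range(i, p), key=lambda k: abs(A[k][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            A[k] = [akj - f * aij for akj, aij in zip(A[k], A[i])]
            b[k] -= f * b[i]
    beta = [0.0] * p
    for i in reversed(range(p)):          # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    fitted = [sum(bi * ri for bi, ri in zip(beta, r)) for r in rows]
    return sum((fi - yi) ** 2 for fi, yi in zip(fitted, y))

def best_subset(X, y, names):
    """Exhaustively score every non-empty predictor subset by RSS."""
    results = []
    for k in range(1, len(names) + 1):
        for combo in combinations(range(len(names)), k):
            Xs = [[row[j] for j in combo] for row in X]
            results.append((ols_rss(Xs, y), tuple(names[j] for j in combo)))
    return sorted(results)

# Toy data: y depends strongly on x1; x2 is noise-like
X = [[1, 5], [2, 3], [3, 8], [4, 1], [5, 9], [6, 2]]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.0]
ranking = best_subset(X, y, ["x1", "x2"])
best_rss, best_vars = ranking[0]
```

Since RSS always favours larger subsets, practical selection compares subsets of equal size or penalises size (Mallows' Cp, cross-validation), which is exactly where the book's treatment of cross-validation and bootstrapping comes in.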
International Nuclear Information System (INIS)
Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek
2017-01-01
Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.
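The core of the standard rejection technique, not the paper's improved e–e scheme, which updates occupations self-consistently, is to accept a proposed final state with probability 1 − f(final). State labels and occupation values below are illustrative:

```python
import random

def try_scatter(f_dist, state, final_state, rng=random.random):
    """Propose a scattering event and accept it with probability
    1 - f(final): the Pauli blocking factor suppresses transitions
    into nearly full states. f_dist maps state index -> occupation."""
    if rng() < 1.0 - f_dist[final_state]:
        return final_state          # accepted: the electron scatters
    return state                    # Pauli-blocked: treated as self-scattering

random.seed(42)
f = {0: 0.95, 1: 0.05}             # state 0 nearly full, state 1 nearly empty
N = 10000
blocked_into_full = sum(try_scatter(f, 1, 0) == 1 for _ in range(N)) / N
blocked_into_empty = sum(try_scatter(f, 0, 1) == 0 for _ in range(N)) / N
```

Roughly 95% of attempts into the nearly full state are rejected versus about 5% into the nearly empty one, which is how the simulated distribution is kept below unity.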
An Efficient Method for Electron-Atom Scattering Using Ab-initio Calculations
Energy Technology Data Exchange (ETDEWEB)
Xu, Yuan; Yang, Yonggang; Xiao, Liantuan; Jia, Suotang [Shanxi University, Taiyuan (China)
2017-02-15
We present an efficient method based on ab-initio calculations to investigate electron-atom scattering. These calculations take advantage of methods implemented in standard quantum chemistry programs. The new approach is applied to electron-helium scattering. The results are compared with experimental and other theoretical references to demonstrate the efficiency of our method.
Ground state of the electron gas by a stochastic method
International Nuclear Information System (INIS)
Ceperley, D.M.; Alder, B.J.
1980-05-01
An exact stochastic simulation of the Schroedinger equation for charged bosons and fermions was used to calculate the correlation energies, to locate the transitions to their respective crystal phases at zero temperature within 10%, and to establish the stability at intermediate densities of a ferromagnetic fluid of electrons.
CLOPW; a mixed basis set full potential electronic structure method
Bekker, H.G.; Bekker, Hermie Gerhard
1997-01-01
This thesis is about the development of the full potential CLOPW package for electronic structure calculations. Chapter 1 provides the necessary background in the theory of solid state physics. It gives a short overview of the effective one-particle model as commonly used in solid state physics. It
Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P
2010-10-22
A multi-residue methodology based on solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample, to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticide analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness.
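The weighted least-squares fit has a closed form for a straight-line calibration. The sketch below uses hypothetical calibration data and the common 1/x² weighting; the abstract does not state which weighting scheme was chosen, so that choice is an assumption for illustration:

```python
def wls_line(x, y, w):
    """Weighted least-squares line: minimises sum w_i * (y_i - a - b*x_i)^2.
    Closed-form solution of the 2x2 weighted normal equations."""
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (S * Sxy - Sx * Sy) / (S * Sxx - Sx * Sx)
    a = (Sy - b * Sx) / S
    return a, b

# Hypothetical calibration: response grows with concentration, and so does the noise
conc = [1, 2, 5, 10, 20, 50]
resp = [1.1, 2.3, 4.8, 10.5, 19.0, 52.0]
w = [1.0 / c ** 2 for c in conc]        # 1/x^2 weighting favours the low end
a, b = wls_line(conc, resp, w)
```

Down-weighting the high concentrations is what counteracts their leverage on the fitted line and improves accuracy near the limit of quantification.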
Machado, Fabiana Andrade; Nakamura, Fábio Yuzo; Moraes, Solange Marta Franzói De
2012-01-01
This study examined the influence of the regression model and the initial intensity of an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and endurance performance. Sixteen non-competitive, recreational female runners performed a discontinuous incremental treadmill test. The initial speed was set at 7 km · h⁻¹ and increased every 3 min by 1 km · h⁻¹, with a 30-s rest between stages used for earlobe capillary blood sample collection. Lactate-speed data were fitted by an exponential-plus-constant and a third-order polynomial equation. The lactate threshold was determined for both regression equations, using all the coordinates, excluding the first point, and excluding the first and second points. Mean speed of a 10-km road race was the performance index (3.04 ± 0.22 m · s⁻¹). The exponentially-derived lactate threshold had a higher correlation (0.98 ≤ r ≤ 0.99) and smaller standard error of estimate (SEE) (0.04 ≤ SEE ≤ 0.05 m · s⁻¹) with performance than the polynomially-derived equivalent (0.83 ≤ r ≤ 0.89; 0.10 ≤ SEE ≤ 0.13 m · s⁻¹). The exponential lactate threshold was significantly greater than the polynomial equivalent and provides a performance index that is independent of the initial intensity of the incremental test and better than the polynomial equivalent.
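Once a curve has been fitted, the maximal-deviation (Dmax) step itself is simple: take the point on the lactate-speed curve furthest, in perpendicular distance, from the chord joining its endpoints. A sketch on hypothetical lactate values (in practice the distances are evaluated on the fitted exponential or polynomial curve, not on raw points):

```python
import math

def dmax_threshold(speed, lactate):
    """Maximal-deviation (Dmax) step: the threshold is the curve point
    furthest (perpendicular distance) from the straight line joining
    the first and last lactate-speed coordinates."""
    x1, y1, x2, y2 = speed[0], lactate[0], speed[-1], lactate[-1]
    denom = math.hypot(x2 - x1, y2 - y1)

    def dist(x, y):
        # Point-to-line distance for the line through (x1, y1) and (x2, y2)
        return abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / denom

    i = max(range(len(speed)), key=lambda k: dist(speed[k], lactate[k]))
    return speed[i]

# Hypothetical lactate curve (mmol/L) over treadmill speeds (km/h)
speeds = [7, 8, 9, 10, 11, 12, 13, 14]
lactate = [1.0, 1.1, 1.3, 1.6, 2.1, 3.2, 4.8, 7.2]
thr = dmax_threshold(speeds, lactate)
```

The dependence on the chord endpoints is exactly why excluding the first one or two coordinates, as the study does, can shift the estimated threshold.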
Yulia, M.; Suhandy, D.
2018-03-01
NIR spectra obtained from a spectral data acquisition system contain both chemical information about the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in sample physical information. One common approach is to include the physical information variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two different types of coffee (civet and non-civet) and two different particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using an NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted, and the influence of different particle sizes on the performance of PLS-DA was investigated. In the explicit method, we add the particle size directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee type determination. The results show that using the explicit method, the quality of the developed calibration model for coffee type determination is slightly superior, with a coefficient of determination (R²) = 0.99 and a root mean square error of cross-validation (RMSECV) = 0.041. The performance of the PLS2 calibration model for coffee type determination with particle size compensation was quite good and able to predict the type of coffee at two different particle sizes with relatively high R² prediction values. The prediction also resulted in low bias and RMSEP values.
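RMSECV, reported above, is the pooled prediction error over cross-validation folds. A leave-one-out sketch with a univariate straight-line stand-in model; the study's actual models are multivariate PLS, and the absorbance-property data here are hypothetical:

```python
from math import sqrt
from statistics import mean

def loo_rmsecv(x, y, fit, predict):
    """Leave-one-out RMSECV: refit the model with each sample held out,
    predict the held-out sample, and pool the squared errors."""
    errs = []
    for i in range(len(x)):
        xt, yt = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        model = fit(xt, yt)
        errs.append(predict(model, x[i]) - y[i])
    return sqrt(mean(e * e for e in errs))

def fit_line(x, y):
    # Simple least-squares line as a stand-in for the PLS model
    xbar, ybar = mean(x), mean(y)
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return (ybar - b * xbar, b)

# Hypothetical univariate calibration: absorbance -> property value
absorbance = [0.10, 0.21, 0.29, 0.42, 0.48, 0.61]
prop = [1.0, 2.1, 2.9, 4.2, 4.9, 6.0]
rmsecv = loo_rmsecv(absorbance, prop, fit_line,
                    lambda m, xi: m[0] + m[1] * xi)
```

Because every prediction is made on a sample the model never saw, RMSECV is a more honest figure of merit than the calibration R² alone.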
Steganalysis using logistic regression
Lubenko, Ivans; Ker, Andrew D.
2011-02-01
We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing the accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, in three image sets.
International Nuclear Information System (INIS)
Blais, N.; Podgorsak, E.B.
1992-01-01
A method for determining the kinetic energy of clinical electron beams is described, based on the measurement in air of the spatial spread of a pencil electron beam which is produced from the broad clinical electron beam. As predicted by the Fermi-Eyges theory, the dose distribution measured in air on a plane, perpendicular to the incident direction of the initial pencil electron beam, is Gaussian. The square of its spatial spread is related to the mass angular scattering power which in turn is related to the kinetic energy of the electron beam. The measured spatial spread may thus be used to determine the mass angular scattering power, which is then used to determine the kinetic energy of the electron beam from the known relationship between mass angular scattering power and kinetic energy. Energies obtained with the mass angular scattering power method agree with those obtained with the electron range method. (author)
Santos-Concejero, Jordan; Tucker, Ross; Granados, Cristina; Irazusta, Jon; Bidaurrazaga-Letona, Iraia; Zabala-Lili, Jon; Gil, Susana María
2014-01-01
This study investigated the influence of the regression model and the initial intensity during an incremental test on the relationship between the lactate threshold estimated by the maximal-deviation method and performance in elite-standard runners. Twenty-three well-trained runners completed a discontinuous incremental running test on a treadmill. Speed started at 9 km · h(-1) and increased by 1.5 km · h(-1) every 4 min until exhaustion, with a minute of recovery for blood collection. Lactate-speed data were fitted by exponential and polynomial models. The lactate threshold was determined for both models, using all the co-ordinates, excluding the first point, and excluding the first and second points. The exponential lactate threshold was significantly greater than the polynomial equivalent in every co-ordinate condition, relates more closely to performance, and is independent of the initial intensity of the test.
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment became possible by superimposition. Four-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
Huybrechts, Inge; Lioret, Sandrine; Mouratidou, Theodora; Gunter, Marc J; Manios, Yannis; Kersting, Mathilde; Gottrand, Frederic; Kafatos, Anthony; De Henauw, Stefaan; Cuenca-García, Magdalena; Widhalm, Kurt; Gonzales-Gross, Marcela; Molnar, Denes; Moreno, Luis A; McNaughton, Sarah A
2017-01-01
This study aims to examine the repeatability of reduced rank regression (RRR) methods in calculating dietary patterns (DP) and their cross-sectional associations with overweight (OW)/obesity across European and Australian samples of adolescents. Data from two cross-sectional surveys in Europe (2006/2007 Healthy Lifestyle in Europe by Nutrition in Adolescence study, including 1954 adolescents, 12-17 years) and Australia (2007 National Children's Nutrition and Physical Activity Survey, including 1498 adolescents, 12-16 years) were used. Dietary intake was measured using two non-consecutive 24-h recalls. RRR was used to identify DP using dietary energy density, fibre density and percentage of energy intake from fat as the intermediate variables. Associations between DP scores and body mass/fat were examined using multivariable linear and logistic regression as appropriate, stratified by sex. The first DP extracted (labelled 'energy dense, high fat, low fibre') explained 47% and 31% of the response variation in Australian and European adolescents, respectively. It was similar for European and Australian adolescents and characterised by higher consumption of biscuits/cakes, chocolate/confectionery, crisps/savoury snacks and sugar-sweetened beverages, and lower consumption of yogurt, high-fibre bread, vegetables and fresh fruit. DP scores were inversely associated with BMI z-scores in Australian adolescent boys and borderline inversely associated in European adolescent boys (as with %BF). Similarly, a lower likelihood of OW in boys was observed with higher DP scores in both surveys. No such relationships were observed in adolescent girls. In conclusion, the DP identified in this cross-country study was comparable for European and Australian adolescents, demonstrating the robustness of the RRR method in calculating DP among populations. However, longitudinal designs are more relevant when studying diet-obesity associations, to prevent reverse causality.
Seyedmahmoud, Rasoul
2014-04-07
This two-article series presents an in-depth discussion of electrospun poly-L-lactide scaffolds for tissue engineering by means of statistical methodologies that can be used, in general, to gain quantitative and systematic insight into the effects and interactions between a handful of key scaffold properties (Ys) and a set of process parameters (Xs) in electrospinning. While Part 1 dealt with the DOE methods used to unveil the interactions between the Xs in determining the morphomechanical properties (Y1-4), this Part 2 article continues and refocuses the discussion on the interdependence of scaffold properties investigated by standard regression methods. The discussion first explores the connection between mechanical properties (Y4) and morphological descriptors of the scaffolds (Y1-3) in 32 types of scaffolds, finding that the mean fiber diameter (Y1) plays a predominant role, which is nonetheless crucially modulated by the molecular weight (MW) of the PLLA. The second part examines the biological performance (Y5) (i.e. the proliferation of seeded bone marrow-derived mesenchymal stromal cells) on a random subset of eight scaffolds vs. the mechanomorphological properties (Y1-4). In this case, the regression analysis on such an incomplete set was not conclusive, though it indirectly suggested, in quantitative terms, that cell proliferation could not be fully explained as a function of the considered mechanomorphological properties (Y1-4) except at the early seeding stage, and that a randomization effect occurs over time, such that differences in initial cell proliferation performance (at day 1) are smeared out. These findings may be the cornerstone of a novel route to accruing sufficient understanding and establishing design rules for scaffold biofunctionality vs. architecture, mechanical properties, and process parameters.
Ali, M Sanni; Groenwold, Rolf H H; Belitser, Svetlana V; Souverein, Patrick C; Martín, Elisa; Gatto, Nicolle M; Huerta, Consuelo; Gardarsdottir, Helga; Roes, Kit C B; Hoes, Arno W; de Boer, Antonius; Klungel, Olaf H
2016-03-01
Observational studies including time-varying treatments are prone to confounding. We compared time-varying Cox regression analysis, propensity score (PS) methods, and marginal structural models (MSMs) in a study of antidepressant [selective serotonin reuptake inhibitor (SSRI)] use and the risk of hip fracture. A cohort of patients with a first prescription for antidepressants (SSRIs or tricyclic antidepressants) was extracted from the Dutch Mondriaan and Spanish Base de datos para la Investigación Farmacoepidemiológica en Atención Primaria (BIFAP) general practice databases for the period 2001-2009. The net (total) effect of SSRI versus no SSRI on the risk of hip fracture was estimated using time-varying Cox regression, stratification and covariate adjustment using the PS, and MSM. In the MSM, censoring was accounted for by inverse probability of censoring weights. The crude hazard ratio (HR) of SSRI use versus no SSRI use on hip fracture was 1.75 (95%CI: 1.12, 2.72) in Mondriaan and 2.09 (1.89, 2.32) in BIFAP. After confounding adjustment using time-varying Cox regression, stratification, and covariate adjustment using the PS, HRs increased in Mondriaan [2.59 (1.63, 4.12), 2.64 (1.63, 4.25), and 2.82 (1.63, 4.25), respectively] and decreased in BIFAP [1.56 (1.40, 1.73), 1.54 (1.39, 1.71), and 1.61 (1.45, 1.78), respectively]. MSMs with stabilized weights yielded an HR of 2.15 (1.30, 3.55) in Mondriaan and 1.63 (1.28, 2.07) in BIFAP when accounting for censoring, and 2.13 (1.32, 3.45) in Mondriaan and 1.66 (1.30, 2.12) in BIFAP without accounting for censoring. In this empirical study, differences among the methods for controlling time-dependent confounding were small. The observed differences in treatment effect estimates between the databases are likely attributable to different confounding information in the datasets, illustrating that adequate information on (time-varying) confounding is crucial to prevent bias.
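A deliberately simplified point-treatment sketch of the stabilised inverse-probability weighting behind MSMs (the study's models handle time-varying treatment and censoring, which this toy omits); all numbers are hypothetical:

```python
def iptw_weights(treated, ps):
    """Stabilised inverse-probability-of-treatment weights:
    Pr(T=t) / Pr(T=t | covariates), with ps the estimated
    propensity score Pr(T=1 | covariates)."""
    p_treat = sum(treated) / len(treated)
    return [(p_treat / p) if t else ((1 - p_treat) / (1 - p))
            for t, p in zip(treated, ps)]

# Hypothetical cohort: treatment flag, estimated propensity score, binary outcome
treated = [1, 1, 1, 0, 0, 0, 1, 0]
ps = [0.8, 0.6, 0.7, 0.3, 0.2, 0.4, 0.5, 0.35]
outcome = [1, 0, 1, 0, 0, 1, 0, 0]
w = iptw_weights(treated, ps)

def weighted_risk(group_flag):
    # Outcome risk in one arm of the weighted pseudo-population
    ws = [wi for wi, t in zip(w, treated) if t == group_flag]
    ys = [yi for yi, t in zip(outcome, treated) if t == group_flag]
    return sum(wi * yi for wi, yi in zip(ws, ys)) / sum(ws)

risk_ratio = weighted_risk(1) / weighted_risk(0)
```

Weighting each subject by the inverse of the probability of the treatment actually received creates a pseudo-population in which measured confounders are balanced, so the weighted contrast estimates the marginal treatment effect.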
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
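The method of least squares for one predictor has a closed form: slope = cov(x, y)/var(x), intercept = ȳ − slope · x̄. A sketch in the spirit of the article's simplified clinical examples (blood pressure vs. age, with invented numbers):

```python
from statistics import mean

def simple_ols(x, y):
    """Method of least squares for one predictor:
    slope = cov(x, y) / var(x), intercept = ybar - slope * xbar."""
    xbar, ybar = mean(x), mean(y)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sxy / sxx
    return ybar - slope * xbar, slope

# Simplified, hypothetical clinical data: systolic BP (mmHg) vs age (years)
age = [30, 40, 50, 60, 70]
bp = [118, 124, 131, 137, 144]
intercept, slope = simple_ols(age, bp)
```

The fitted slope is the expected change in outcome per unit of the predictor (here, mmHg per year), which is the quantity the inference tests discussed in the article examine.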
Electronic firing systems and methods for firing a device
Frickey, Steven J [Boise, ID]; Svoboda, John M [Idaho Falls, ID]
2012-04-24
An electronic firing system comprising a control system, a charging system, an electrical energy storage device, a shock tube firing circuit, a shock tube connector, a blasting cap firing circuit, and a blasting cap connector. The control system controls the charging system, which charges the electrical energy storage device. The control system also controls the shock tube firing circuit and the blasting cap firing circuit. When desired, the control system signals the shock tube firing circuit or blasting cap firing circuit to electrically connect the electrical energy storage device to the shock tube connector or the blasting cap connector respectively.
International Nuclear Information System (INIS)
Yuan, Haibo; Liu, Xiaowei; Xiang, Maosheng; Huang, Yang; Zhang, Huihua; Chen, Bingqiu
2015-01-01
In this paper we propose a spectroscopy-based stellar color regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the Sloan Digital Sky Survey (SDSS) Stripe 82 data. With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero-point errors present in the photometric catalog of Stripe 82, and we improve the color calibration by a factor of two to three. Our study also reveals some small but significant magnitude dependence errors in the z band for some charge-coupled devices (CCDs). Such errors are likely to be present in all the SDSS photometric data. Our results are compared with those from a completely independent test based on the intrinsic colors of red galaxies presented by Ivezić et al. The comparison, as well as other tests, shows that the SCR method has achieved a color calibration internally consistent at a level of about 5 mmag in u – g, 3 mmag in g – r, and 2 mmag in r – i and i – z. Given the power of the SCR method, we discuss briefly the potential benefits by applying the method to existing, ongoing, and upcoming imaging surveys
Mohammad, Fahim; Theisen-Toupal, Jesse C.; Arnaout, Ramy
2014-01-01
Laboratory testing is the single highest-volume medical activity, making it useful to ask how well one can anticipate whether a given test result will be high, low, or within the reference interval ("normal"). We analyzed 10 years of electronic health records--a total of 69.4 million blood tests--to see how well standard rule-mining techniques can anticipate test results based on patient age and gender, recent diagnoses, and recent laboratory test results. We evaluated rules according to thei...
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose
Directory of Open Access Journals (Sweden)
Mohammad Mizanur Rahman
2017-09-01
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) or a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that the GRNN has higher correct classification efficiency and false alarm reduction capability than the RBFNN. As the design of a GRNN or RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on the minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple and was found to be fast and efficient in correctly classifying data from the training classes and correctly rejecting data from extraneous odors, thereby reducing false alarms.
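The abstract does not spell out the exact MMM decision rule, so the following is only a plausible hyperspheric-style variant: each class accepts samples falling within its per-feature minimum/maximum bounds, and anything outside every class is rejected as an extraneous odor. All data here are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two training odor classes in a 3-sensor feature space (simulated).
class_a = rng.normal(loc=[1.0, 2.0, 3.0], scale=0.1, size=(100, 3))
class_b = rng.normal(loc=[4.0, 1.0, 2.0], scale=0.1, size=(100, 3))

def mmm_bounds(data):
    # Per-feature minimum/maximum box for one training class.
    return data.min(axis=0), data.max(axis=0)

bounds = {"a": mmm_bounds(class_a), "b": mmm_bounds(class_b)}

def classify(x):
    # Accept a sample only if it falls inside some class's box;
    # otherwise reject it as an extraneous odor (no false alarm).
    for label, (lo, hi) in bounds.items():
        if np.all(x >= lo) and np.all(x <= hi):
            return label
    return "reject"

print(classify(np.array([1.0, 2.0, 3.0])))     # near class a's center
print(classify(np.array([10.0, 10.0, 10.0])))  # far from both classes
```

A closed boundary like this is what gives the correct-rejection behavior that open-boundary classifiers such as k-NN or MLPNN lack.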
Directory of Open Access Journals (Sweden)
JEMMAH A I
2018-01-01
Taounate region is known for a high density of mass movements, which cause considerable human and economic losses. The goal of this paper is to assess the landslide susceptibility of Taounate using the Weight of Evidence (WofE) method and the Logistic Regression (LR) method. Seven conditioning factors were used in this study: lithology, faults, drainage, slope, elevation, exposure, and land use. Over the years, this site and its surroundings have experienced repeated landslides; for this reason, landslide susceptibility mapping is mandatory for risk prevention and land-use management. In this study, we have focused on recent large-scale mass movements. Finally, ROC curves were established to evaluate the goodness of fit of the models and to choose the best landslide susceptibility zonation. Mass movement locations were detected; 50% were randomly selected as input data for the entire process using the Spatial Data Model (SDM), and the remaining locations were used for validation purposes. The obtained WofE landslide susceptibility map shows that high to very high susceptibility zones contain 62% of the inventoried landslides, while the same zones contain only 47% of landslides in the map obtained by the LR method. The resulting landslide susceptibility map is a major contribution to various urban and regional development plans under the Taounate Region National Development Program.
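The ROC validation step can be sketched independently of the susceptibility model itself. Below, the AUC is computed with the rank-sum identity on simulated susceptibility scores (not the Taounate data), where landslide cells score higher on average:

```python
import numpy as np

def roc_auc(scores, labels):
    # AUC as the probability that a random positive (landslide cell)
    # outscores a random negative, via the rank-sum identity.
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(2)
labels = rng.binomial(1, 0.5, 2000)
# Simulated susceptibility scores with some discriminating power.
scores = rng.normal(loc=labels.astype(float), scale=1.0)
auc = roc_auc(scores, labels)
print(round(auc, 2))
```

An AUC near 0.5 means the zonation is no better than chance; values approaching 1 indicate that the held-out landslide locations concentrate in the high-susceptibility zones.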
Fungible weights in logistic regression.
Jones, Jeff A; Waller, Niels G
2016-06-01
In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Mixed ionic-electronic conductor-based radiation detectors and methods of fabrication
Conway, Adam; Beck, Patrick R; Graff, Robert T; Nelson, Art; Nikolic, Rebecca J; Payne, Stephen A; Voss, Lars; Kim, Hadong
2015-04-07
A method of fabricating a mixed ionic-electronic conductor (e.g. TlBr)-based radiation detector having halide-treated surfaces and associated methods of fabrication, which controls polarization of the mixed ionic-electronic MIEC material to improve stability and operational lifetime.
Testing discontinuities in nonparametric regression
Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun
2017-01-19
In nonparametric regression, it is often necessary to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13: H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337, doi: 10.1214/aos/1018031100]
Quantile regression theory and applications
Davino, Cristina; Vistocco, Domenico
2013-01-01
A guide to the implementation and interpretation of Quantile Regression models This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of this book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on validity of the model and diagnostic tools. Each methodological aspect is explored and
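A core idea behind quantile regression is that the tau-th quantile is the minimizer of the pinball (check) loss. A minimal intercept-only sketch on simulated data, locating the 0.9 quantile by grid search over that loss:

```python
import numpy as np

def pinball(u, tau):
    # Check (pinball) loss that quantile regression minimizes.
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(3)
y = rng.exponential(scale=1.0, size=20_000)   # skewed simulated outcome

# The value minimizing mean pinball loss is the tau-th sample quantile.
tau = 0.9
grid = np.linspace(0.0, 6.0, 601)
losses = [pinball(y - q, tau).mean() for q in grid]
q_hat = grid[int(np.argmin(losses))]
print(round(q_hat, 2), round(np.quantile(y, tau), 2))
```

Replacing the constant q with a linear predictor x'beta and minimizing the same loss (by linear programming) gives the linear quantile regression model the book develops.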
Time-Dependent Close-Coupling Methods for Electron-Atom/Molecule Scattering
International Nuclear Information System (INIS)
Colgan, James
2014-01-01
The time-dependent close-coupling (TDCC) method centers on an accurate representation of the interaction between two outgoing electrons moving in the presence of a Coulomb field. It has been extensively applied to many problems of electrons, photons, and ions scattering from light atomic targets. Theoretical Description: The TDCC method centers on a solution of the time-dependent Schrödinger equation for two interacting electrons. The advantages of a time-dependent approach are two-fold; one treats the electron-electron interaction essentially in an exact manner (within numerical accuracy) and a time-dependent approach avoids the difficult boundary condition encountered when two free electrons move in a Coulomb field (the classic three-body Coulomb problem). The TDCC method has been applied to many fundamental atomic collision processes, including photon-, electron- and ion-impact ionization of light atoms. For application to electron-impact ionization of atomic systems, one decomposes the two-electron wavefunction in a partial wave expansion and represents the subsequent two-electron radial wavefunctions on a numerical lattice. The number of partial waves required to converge the ionization process depends on the energy of the incoming electron wavepacket and on the ionization threshold of the target atom or ion.
Hilbe, Joseph M
2009-01-01
This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect-great clarity.The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...
A simplified method for scanning electron microscopy (SEM) autoradiography
International Nuclear Information System (INIS)
Shahar, A.; Lasher, R.
1980-01-01
The combination of autoradiography with SEM provides a valuable tool for the study of labeled biological materials, but the previously described methods are complicated because they call first for the removal of gelatin from the film emulsion and this is then followed by deposition of gold vapor on the specimen. The authors describe a much simpler method which can easily be adapted to routine examination of cell cultures. In this method, gelatin is not removed; the film is coated with vaporized carbon only. This procedure permits visualization of both cellular image and distribution of silver grains. (Auth.)
New developments in radiation protection instrumentation via active electronic methods
International Nuclear Information System (INIS)
Umbarger, C.J.
1981-01-01
New developments in electronics and radiation detectors are improving on real-time data acquisition of radiation exposure and contamination conditions. Recent developments in low power circuit designs, hybrid and integrated circuits, and microcomputers have all contributed to smaller and lighter radiation detection instruments that are, at the same time, more sensitive and provide more information (e.g., radioisotope identification) than previous devices. New developments in radiation detectors, such as cadmium telluride, gas scintillation proportional counters, and imaging counters (both charged particle and photon) promise higher sensitivities and expanded uses over present instruments. These developments are being applied in such areas as health physics, waste management, environmental monitoring, in vivo measurements, and nuclear safeguards
Method for secure electronic voting system: face recognition based approach
Alim, M. Affan; Baig, Misbah M.; Mehboob, Shahzain; Naseem, Imran
2017-06-01
In this paper, we propose a framework for low cost secure electronic voting system based on face recognition. Essentially Local Binary Pattern (LBP) is used for face feature characterization in texture format followed by chi-square distribution is used for image classification. Two parallel systems are developed based on smart phone and web applications for face learning and verification modules. The proposed system has two tire security levels by using person ID followed by face verification. Essentially class specific threshold is associated for controlling the security level of face verification. Our system is evaluated three standard databases and one real home based database and achieve the satisfactory recognition accuracies. Consequently our propose system provides secure, hassle free voting system and less intrusive compare with other biometrics.
Linear regression in astronomy. II
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
Time-adaptive quantile regression
DEFF Research Database (Denmark)
Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik
2008-01-01
An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
On some methods to produce high-energy polarized electron beams by means of proton synchrotrons
International Nuclear Information System (INIS)
Bessonov, E.G.; Vazdik, Ya.A.
1980-01-01
Some methods of production of high-energy polarized electron beams by means of proton synchrotrons are considered. These methods are based on transfer by protons of a part of their energy to the polarized electrons of a thin target placed inside the working volume of the synchrotron. It is suggested to use as a polarized electron target a magnetized crystalline iron in which proton channeling is realized, polarized atomic beams, and polarized plasma. It is shown that by this method one can produce polarized electron beams with energy of approximately 100 GeV and energy spread of ±5%, with intensity of approximately 10^7 electrons/s at approximately 30% polarization, or with intensity of approximately 10^4-10^5 electrons/s at approximately 100% polarization [ru]
Discrete variational methods and their application to electronic structures
International Nuclear Information System (INIS)
Ellis, D.E.
1987-01-01
Some general concepts concerning Discrete Variational methods are developed and applied to problems of determination of electronic spectra, charge densities and bonding of free molecules, surface-chemisorbed species and bulk solids. (M.W.O.) [pt]
On a method for high-energy electron beam production in proton synchrotrons
International Nuclear Information System (INIS)
Bessonov, E.G.; Vazdik, Ya.A.
1979-01-01
It is suggested to produce high-energy electron beams in such a way that the ultrarelativistic protons give an amount of their kinetic energy to the electrons of a thin target, placed inside the working volume of the proton synchrotron. The kinematics of the elastic scattering of relativistic protons on electrons at rest is treated. Evaluation of a number of elastically-scattered electrons by 1000 GeV and 3000 GeV proton beams is presented. The method under consideration is of certain practical interest and may appear to be preferable in a definite energy range of protons and electrons
Separation phenomena in logistic regression
Directory of Open Access Journals (Sweden)
Ikaro Daniel de Carvalho Barreto
2014-03-01
This paper applies concepts from maximum likelihood estimation of the binomial logistic regression model to the separation phenomenon. Separation generates bias in the estimation, yields different interpretations of the estimates under the different statistical tests (Wald, Likelihood Ratio, and Score), and yields different estimates under the different iterative methods (Newton-Raphson and Fisher Scoring). We also present an example that demonstrates the direct implications for validation of the model and of its variables, and the implications for estimates of odds ratios and confidence intervals generated from the Wald statistics. Furthermore, we briefly present the Firth correction as a way to circumvent the separation phenomenon.
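The separation phenomenon can be demonstrated directly: under perfect separation the maximum likelihood estimate does not exist, and the Newton-Raphson slope estimate grows without bound instead of converging. A small simulated sketch:

```python
import numpy as np

# Perfectly separated data: y = 1 exactly when x > 0.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])
X = np.column_stack([np.ones_like(x), x])

def newton_logistic(X, y, iters):
    # Plain Newton-Raphson on the logistic log-likelihood.
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return b

# Under separation the slope keeps growing with each extra iteration:
# more iterations, bigger coefficient, never a finite MLE.
b5 = newton_logistic(X, y, 5)
b15 = newton_logistic(X, y, 15)
print(round(b5[1], 2), round(b15[1], 2))
```

This is exactly the situation in which a penalized approach such as the Firth correction yields finite, usable estimates.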
Directory of Open Access Journals (Sweden)
Yi Liang
2016-11-01
The power industry is the main battlefield of CO2 emission reduction and plays an important role in the implementation and development of the low-carbon economy. Forecasting electricity demand can provide a scientific basis for a country to formulate a power industry development strategy and further promote sustained, healthy, and rapid development of the national economy. Under the goal of a low-carbon economy, medium- and long-term electricity demand forecasting has very important practical significance. In this paper, a new hybrid electricity demand model framework is characterized as follows: first, the grey relation degree (GRD) is integrated with the induced ordered weighted harmonic averaging (IOWHA) operator to propose a new weight determination method for hybrid forecasting models, with forecasting accuracy as the induced variable; second, the proposed weight determination method is used to construct an optimal hybrid forecasting model based on an extreme learning machine (ELM) forecasting model and a multiple regression (MR) model; third, three scenarios in line with the level of realization of various carbon emission targets are discussed, with a dynamic simulation of the effect of the low-carbon economy on future electricity demand. The results show that the proposed hybrid model outperformed the single forecasting models it combines, particularly in reducing overall instability. In addition, the development of a low-carbon economy will increase the demand for electricity and affect the adjustment of the electricity demand structure.
Wang, Xi; Chen, Shouhui; Zheng, Tianyong; Ning, Xiangchun; Dai, Yifei
2018-03-01
The filament yarn spreading techniques for electronic fiberglass fabric were developed in the past few years in order to meet the requirements of the developing electronics industry. Copper clad laminate (CCL) requires that the warp and weft yarns of the fabric be spread apart and formed flat. The resin penetration performance is improved by filament yarn spreading, as are the peeling strength of CCL and the drilling performance of printed circuit boards (PCBs). This paper describes filament yarn spreading techniques for electronic fiberglass fabric from several aspects, such as methods and functions, together with methods for assessing their effects.
International Nuclear Information System (INIS)
Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei
2007-01-01
Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
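The three estimators compared above have simple moment-based slope formulas. A sketch on simulated errors-in-both-variables data (true slope 2, equal error variances, so that orthogonal regression is consistent here):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
t = rng.normal(0, 1, n)            # latent "true" values
x = t + rng.normal(0, 0.5, n)      # both variables measured with error
y = 2.0 * t + rng.normal(0, 0.5, n)

sxx, syy = x.var(), y.var()
sxy = np.cov(x, y)[0, 1]

b_ols = sxy / sxx                          # OLS: assumes errors only in y
b_gmr = np.sign(sxy) * np.sqrt(syy / sxx)  # geometric mean regression
# Orthogonal regression slope (equal error variances assumed):
d = syy - sxx
b_orth = (d + np.sqrt(d ** 2 + 4 * sxy ** 2)) / (2 * sxy)

# OLS is attenuated toward zero by the error in x; the symmetric
# estimators land closer to the true slope of 2.
print(round(b_ols, 1), round(b_gmr, 1), round(b_orth, 1))
```

With these variances the OLS slope is biased down to about 1.6 while orthogonal regression recovers the true slope, which is the attenuation issue the aerosol comparison (e.g. organic aerosol to CO ratios) must guard against.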
DEFF Research Database (Denmark)
Johansen, Søren
2008-01-01
The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...
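In the identity-weighted special case, the reduced rank regression estimate can be obtained by projecting the OLS fit onto the leading singular vectors of the fitted values; the general estimator uses a canonical-correlation weighting instead. A simulated sketch of the simple case:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q, r = 500, 6, 5, 2
X = rng.normal(size=(n, p))
# True coefficient matrix of rank 2, plus observation noise.
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]

# Reduced rank regression (identity weighting): project the OLS fit
# onto the top-r right singular vectors of the fitted values.
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
P = Vt[:r].T @ Vt[:r]          # rank-r projection in response space
B_rrr = B_ols @ P

print(np.linalg.matrix_rank(B_rrr))  # 2
err = np.linalg.norm(B_rrr - B_true) / np.linalg.norm(B_true)
print(round(err, 3))
```

The rank constraint regularizes the multivariate fit: the projected estimate has exactly rank r while staying close to the true coefficient matrix.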
Puzo, Quirino; Qin, Ping; Mehlum, Lars
2016-03-11
Suicide mortality and the rates by specific methods in a population may change over time in response to concurrent changes in relevant factors in society. This study aimed to identify significant changing points in method-specific suicide mortality from 1969 to 2012 in Norway. Data on suicide mortality by specific methods and by sex and age were retrieved from the Norwegian Cause-of-Death Register. Long-term trends in age-standardized rates of suicide mortality were analyzed by using joinpoint regression analysis. The most frequently used suicide method in the total population was hanging, followed by poisoning and firearms. Men chose suicide by firearms more often than women, whereas poisoning and drowning were more frequently used by women. The joinpoint analysis revealed that the overall trend of suicide mortality significantly changed twice along the period of 1969 to 2012 for both sexes. The male age-standardized suicide rate increased by 3.1% per year until 1989, and decreased by 1.2% per year between 1994 and 2012. Among females the long-term suicide rate increased by 4.0% per year until 1988, decreased by 5.5% through 1995, and then stabilized. Both sexes experienced an upward trend for suicide by hanging during the 44-year observation period, with a particularly significant increase in 15-24 year old males. The most distinct change among men was seen for firearms after 1988 with a significant decrease through 2012 of around 5% per year. For women, significant reductions since 1985-88 were observed for suicide by drowning and poisoning. The present study demonstrates different time trends for different suicide methods with significant reductions in suicide by firearms, drowning and poisoning after the peak in the suicide rate in the late 1980s. Suicide by means of hanging continuously increased, but did not fully compensate for the reduced use of other methods. This lends some support for the effectiveness of method-specific suicide preventive measures
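Joinpoint regression fits continuous piecewise-linear trends and locates the years where the slope changes. A single-joinpoint grid-search sketch on simulated log-rates (not the Norwegian registry data), with a rise until 1989 followed by a decline:

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1969, 2013)
# Simulated log-rates: trend rises until 1989, then declines, plus noise.
slope = np.where(years <= 1989, 0.03, -0.012)
log_rate = 2.0 + np.cumsum(slope) + rng.normal(0, 0.02, len(years))

def sse_with_break(k):
    # Continuous piecewise-linear fit with a single joinpoint at year k.
    X = np.column_stack([np.ones_like(years, dtype=float),
                         years - years[0],
                         np.clip(years - k, 0, None)])
    beta, *_ = np.linalg.lstsq(X, log_rate, rcond=None)
    resid = log_rate - X @ beta
    return np.sum(resid ** 2)

candidates = years[3:-3]           # keep a few points on each side
best = min(candidates, key=sse_with_break)
print(best)
```

Production joinpoint software additionally selects the number of joinpoints by permutation tests and reports annual percent changes; this sketch only shows how a single change point is located by minimizing the residual sum of squares.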
Adaptive Electronic Quizzing Method for Introductory Electrical Circuit Course
Directory of Open Access Journals (Sweden)
Issa Batarseh
2009-08-01
The interactive technical electronic book, TechEBook, currently under development at the University of Central Florida, provides a useful tool for engineers and scientists through unique features compared with the traditional electrical circuit textbooks available in the market. TechEBook brings together the two worlds of classical circuit books and an interactive operating platform, such as laptops and desktops utilizing the Java Virtual Machine. The TechEBook provides an interactive applets screen that holds many modules, each of which has a specific application in the self-learning process. This paper describes one of the interactive techniques in the TechEBook, known as QuizMe, for evaluating the reader's performance and overall understanding of all subjects at any stage. QuizMe is displayed after each section in the TechEBook for users to evaluate their understanding, which introduces the term me-learning as a comprehensive full experience of self-directed or individualized education. In this paper, a practical example of applying the QuizMe feature is discussed as part of a basic electrical engineering course currently given at the University of Central Florida.
Dose evaluation due to electron spin resonance method
International Nuclear Information System (INIS)
Nakajima, Toshiyuki
1989-01-01
A radiation dosimeter has been developed based on free radicals created in sucrose. The free radicals were observed using electron spin resonance (ESR) equipment. The ESR absorption due to free radicals in sucrose appears at the magnetic field between the third and fourth ESR lines of the Mn2+ standard sample. Sucrose as a radiation dosimeter can linearly measure doses from 5 x 10^-3 Gy to 10^5 Gy. If a newer model of ESR equipment is used and the ESR observation is carried out at lower temperature, such as liquid nitrogen or liquid helium temperature, the sucrose ESR dosimeter should be able to detect about 5 x 10^-4 Gy or less. Fading of the free radicals was scarcely observed about six months after irradiation, and was also scarcely observed in irradiated sucrose stored at 55 deg C and 100 deg C for one hour or more. It is concluded from these radiation properties that sucrose is useful as an accidental or emergency dosimeter for the general population. (author)
Using an electron paramagnetic resonance method for testing motor oils
Energy Technology Data Exchange (ETDEWEB)
Krais, S; Tkac, T
1982-01-01
Using an ER-9 spectrometer from the Karl Zeiss company, the relative effectiveness of antioxidation additives is studied. Motor oils of the E group (M6AD 465, M6AD 466, M6AD 467, 15W/40, S-3/2 M/4, R-950) containing the antioxidation additive were tested in Petter AV-1 motors at a temperature of 50 degrees for 120 hours and in Petter AVB motors at a temperature of 90 degrees for 53 hours. To measure the concentration of free radicals of the antioxidation additives, one part of 2,2-diphenyl-1-picrylhydrazine (I), which forms stable diamagnetic products with the radicals of the antioxidation additives, was introduced into every three parts of the oil. The reduction in the intensity of the signal of I was the measure of the radical concentration. The spectrum was taken over 1 to 2 minutes. Graphs of the dependence of the electron paramagnetic resonance signal on test time and on the concentration of I are given. The beginning and end of the induction period of oxidation of the oils and the change in the hourly activity of the PP were recorded.
Quantile Regression With Measurement Error
Wei, Ying; Carroll, Raymond J.
2009-01-01
The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a
Synthesis method for use in the design of an electron gun for a gyrotron
International Nuclear Information System (INIS)
Silva, C.A.B.
1987-09-01
In this work a synthesis method is applied to the design of an electron gun for a 94 GHz gyrotron. Using the synthesis method, the shape of the electrodes compatible with laminar flow is found, which minimizes the effect of space charge on the electron velocity dispersion. A systematic procedure is presented to find the parameters of the synthesis method, which, in turn, are closely related to the characteristics of the electron-optical system. (author) [pt]
Linear electron accelerator body and method of its manufacture
International Nuclear Information System (INIS)
Landa, V.; Maresova, V.; Lucek, J.; Prusa, F.
1988-01-01
The accelerator body consists of a hollow casing made of a metal with high electrical conductivity. The inside is partitioned with a system of resonators. The resonator body is made of one piece of the same metal as the casing or a related one (e.g., copper-copper, silver-copper, copper-copper alloy). The accelerator body is manufactured using a cathodic process on the periphery of a system of metal partitions and negative models of resonator cavities fitted to a metal pin. The pin is then removed from the system and the soluble models of the cavities are dissolved in a solvent. The advantage of the design and the method of manufacture is that the result is a compact, perfectly tight body with a perfectly lustrous surface. The casing wall can be very thin, which improves accelerator performance. The claimed method can also be used in manufacturing miniature accelerators. (E.J.). 1 fig
Energy Technology Data Exchange (ETDEWEB)
Geloni, Gianluca; Ilinski, Petr; Saldin, Evgeni; Schneidmiller, Evgeni; Yurkov, Mikhail
2009-05-15
We describe a novel technique to characterize ultrashort electron bunches in X-ray free-electron lasers. Namely, we propose to use coherent optical transition radiation to measure three-dimensional (3D) electron density distributions. Our method relies on the combination of two known diagnostics setups, an Optical Replica Synthesizer (ORS) and an Optical Transition Radiation (OTR) imager. Electron bunches are modulated at optical wavelengths in the ORS setup. When these electron bunches pass through a metal foil target, coherent radiation pulses with powers of tens of MW are generated. It is thereafter possible to exploit the advantages of coherent imaging techniques, such as direct imaging, diffractive imaging, Fourier holography, and their combinations. The proposed method opens up the possibility of real-time, wavelength-limited, single-shot 3D imaging of an ultrashort electron bunch. (orig.)
Application of Macro Response Monte Carlo method for electron spectrum simulation
International Nuclear Information System (INIS)
Perles, L.A.; Almeida, A. de
2007-01-01
During the past years, several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the computation time of electron transport for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase-space input for other simulation programs. The technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the final state of the primary electron, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against Geant4 spectra. The results showed agreement better than 6% in the spectral peak energies, and the MRMC code is up to 12 times faster than Geant4 simulations
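The table-lookup step at the heart of a macro-response scheme can be sketched as follows. This is a minimal, hypothetical example (the table values, binning, and sphere size are invented, not taken from the paper): instead of simulating each interaction, the primary electron's exit state from a sphere is drawn from a precomputed discrete distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed "kugel" table for one sphere size and one incident
# energy: a discrete joint distribution over (exit-energy fraction, exit angle).
exit_e_frac = np.array([0.95, 0.90, 0.85, 0.80])  # fraction of energy retained
exit_angle = np.array([5.0, 10.0, 20.0, 30.0])    # deflection, degrees
prob = np.array([0.50, 0.30, 0.15, 0.05])         # table probabilities (sum to 1)

def sample_kugel_exit(energy_in_mev, n=1):
    """Sample primary-electron final states from the precomputed table."""
    idx = rng.choice(len(prob), size=n, p=prob)
    return energy_in_mev * exit_e_frac[idx], exit_angle[idx]

energies, angles = sample_kugel_exit(10.0, n=100_000)
```

Because each macro step replaces many condensed-history steps with a single lookup, order-of-magnitude speedups over full transport codes, as reported above, are plausible for this kind of scheme.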
Aszyk, Justyna; Kot, Jacek; Tkachenko, Yurii; Woźniak, Michał; Bogucka-Kocka, Anna; Kot-Wasik, Agata
2017-04-15
A simple, fast, sensitive and accurate methodology based on LLE followed by liquid chromatography-tandem mass spectrometry for the simultaneous determination of four regioisomers (8-iso prostaglandin F2α, 8-iso-15(R)-prostaglandin F2α, 11β-prostaglandin F2α, 15(R)-prostaglandin F2α) in routine analysis of human plasma samples was developed. Isoprostanes are stable products of arachidonic acid peroxidation and are regarded as the most reliable markers of oxidative stress in vivo. Validation of the method was performed by evaluation of the key analytical parameters, such as matrix effect, analytical curve, trueness, precision, limits of detection and limits of quantification. As homoscedasticity was not met for the analytical data, weighted linear regression was applied in order to improve the accuracy at the lower end of the calibration curve. The detection limits (LODs) ranged from 1.0 to 2.1 pg/mL. For plasma samples spiked with the isoprostanes at the level of 50 pg/mL, intra- and inter-day repeatability ranged from 2.1 to 3.5% and 0.1 to 5.1%, respectively. The applicability of the proposed approach has been verified by monitoring isoprostane isomer levels in plasma samples collected from young patients (n=8) subjected to hyperbaric hyperoxia (100% oxygen at 280 kPa(a) for 30 min) in a multiplace hyperbaric chamber. Copyright © 2017 Elsevier B.V. All rights reserved.
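The weighted-regression step can be sketched as below: a toy calibration line whose scatter grows with concentration, fitted with 1/x² weights. The weighting scheme and all numbers are assumptions for illustration; the abstract does not state which weights were used.

```python
import numpy as np

# Hypothetical calibration points (pg/mL vs. peak-area ratio); the scatter
# grows with concentration, so unweighted OLS over-weights the high end.
x = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0, 500.0])
y = np.array([0.052, 0.099, 0.260, 0.490, 1.040, 2.450, 5.200])

# np.polyfit takes sqrt-weights: passing w = 1/x gives 1/x^2 weighted least squares.
slope, intercept = np.polyfit(x, y, 1, w=1.0 / x)

def concentration(area_ratio):
    """Invert the weighted calibration line."""
    return (area_ratio - intercept) / slope
```

Relative to an unweighted fit, this keeps the relative error roughly constant across the range, which is what improves accuracy near the lower limit of quantitation.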
International Nuclear Information System (INIS)
Visbal, Jorge H. Wilches; Costa, Alessandro M.
2016-01-01
Percentage depth dose (PDD) of electron beams represents an important item of data in radiation therapy, since it describes the dosimetric properties of the beams. Accurate transport theory, and the Monte Carlo method, have shown clear differences between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution of monoenergetic electrons at the accelerator's nominal energy in water. In radiotherapy, the electron spectra should be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on how the radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. There are three principal approaches to obtaining electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work, the simulated annealing method is presented as a practical, reliable and simple approach to inverse reconstruction and an optimal alternative to the other options. (author)
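A minimal sketch of the inverse-reconstruction idea (the depth-dose kernels, energy bins, and cooling schedule are invented toy stand-ins, not transport data): simulated annealing searches for the spectral weights whose superposition reproduces a target PDD.

```python
import numpy as np

rng = np.random.default_rng(1)

depth = np.linspace(0.0, 6.0, 40)             # depth grid, cm
energies = np.array([6.0, 9.0, 12.0])         # nominal energy bins, MeV

def pdd_mono(e_mev, z):
    """Toy monoenergetic depth-dose curve (a stand-in, not a transport kernel)."""
    r = e_mev / 2.0                           # crude practical range, cm
    return np.clip(1.0 - (z / r) ** 2, 0.0, None)

K = np.stack([pdd_mono(e, depth) for e in energies])  # kernels, shape (3, 40)
w_true = np.array([0.2, 0.5, 0.3])
target = w_true @ K                           # "measured" composite PDD

def cost(w):
    return np.sum((w @ K - target) ** 2)

w, T = np.full(3, 1.0 / 3.0), 1.0
for _ in range(20_000):
    cand = np.abs(w + rng.normal(0.0, 0.02, 3))
    cand /= cand.sum()                        # weights stay on the simplex
    dE = cost(cand) - cost(w)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        w = cand                              # Metropolis acceptance
    T *= 0.9997                               # geometric cooling schedule
```

The same loop applies unchanged with measured PDD data and Monte Carlo-generated monoenergetic kernels in place of the toy ones.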
Regression analysis with categorized regression calibrated exposure: some interesting findings
Directory of Open Access Journals (Sweden)
Hjartåker Anette
2006-07-01
Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on a quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a
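The paper's central point, that categorizing the calibrated exposure inherits the misclassification of the observed exposure, can be illustrated with a small simulation. All distributions below are assumptions; in this classical-error normal setup, regression calibration reduces to a simple rescaling.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x_true = rng.normal(0.0, 1.0, n)              # true exposure
x_obs = x_true + rng.normal(0.0, 1.0, n)      # classical measurement error

# Regression calibration here is E[X|W] = lam * W, with attenuation factor
# lam = var(X) / var(W) = 0.5 for this setup.
x_cal = 0.5 * x_obs

def quintile(v):
    return np.searchsorted(np.quantile(v, [0.2, 0.4, 0.6, 0.8]), v)

# A positive rescaling is monotone, so quintiles of the calibrated exposure
# equal quintiles of the observed exposure: categorization gains nothing.
agree_cal = np.mean(quintile(x_cal) == quintile(x_true))
agree_obs = np.mean(quintile(x_obs) == quintile(x_true))
```

The simulation also shows the distributional distortion the authors describe: the variance of `x_cal` is 0.5 here, versus 1.0 for the true exposure.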
Directory of Open Access Journals (Sweden)
Guanghao Sun
2016-11-01
Full Text Available Background and Objectives: Heart rate variability (HRV) has been intensively studied as a promising biological marker of major depressive disorder (MDD). Our previous study confirmed that autonomic activity and reactivity in depression, revealed by HRV during rest and mental task (MT) conditions, can be used as diagnostic measures and in clinical evaluation. In this study, logistic regression analysis (LRA) was utilized for the classification and prediction of MDD based on HRV data obtained in an MT paradigm. Methods: Power spectral analysis of HRV on R-R intervals before, during, and after an MT (random number generation) was performed in 44 drug-naïve patients with MDD and 47 healthy control subjects at the Department of Psychiatry in Shizuoka Saiseikai General Hospital. Logit scores of LRA determined by HRV indices and heart rates discriminated patients with MDD from healthy subjects. The high frequency (HF) component of HRV and the ratio of the low frequency (LF) component to the HF component (LF/HF) correspond to parasympathetic activity and sympathovagal balance, respectively. Results: The LRA achieved a sensitivity and specificity of 80.0% and 79.0%, respectively, at an optimum cutoff logit score (0.28). Misclassifications occurred only when the logit score was close to the cutoff score. Logit scores also correlated significantly with subjective self-rating depression scale scores (p < 0.05). Conclusion: HRV indices recorded during a mental task may be an objective tool for screening patients with MDD in psychiatric practice. The proposed method appears promising for not only objective and rapid MDD screening, but also evaluation of its severity.
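The classification step can be sketched as below. The HRV features are synthetic stand-ins (group means, spreads, and the zero cutoff are invented, not the study's data or its 0.28 cutoff); the sketch only shows how a logit score and a cutoff yield sensitivity and specificity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic stand-ins for two HRV indices: MDD is modeled with lower HF
# power and higher LF/HF during the mental task (assumed effect direction).
n = 200
y = np.repeat([0, 1], n // 2)                       # 0 = control, 1 = MDD
hf = np.where(y == 1, rng.normal(5.2, 0.6, n), rng.normal(6.0, 0.6, n))
lf_hf = np.where(y == 1, rng.normal(1.8, 0.5, n), rng.normal(1.2, 0.5, n))
X = np.column_stack([hf, lf_hf])

model = LogisticRegression().fit(X, y)
logit = X @ model.coef_.ravel() + model.intercept_[0]  # logit score per subject

cutoff = 0.0                                        # illustrative threshold
pred = (logit > cutoff).astype(int)
sensitivity = np.mean(pred[y == 1])
specificity = np.mean(1 - pred[y == 0])
```

Note this evaluates in-sample; the study's figures would come from an optimized cutoff and, ideally, held-out subjects.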
Apparatus and methods for controlling electron microscope stages
Duden, Thomas
2015-08-11
Methods and apparatus for generating an image of a specimen with a microscope (e.g., TEM) are disclosed. In one aspect, the microscope may generally include a beam generator, a stage, a detector, and an image generator. A plurality of crystal parameters, which describe a plurality of properties of a crystal sample, are received. In a display associated with the microscope, an interactive control sphere is presented, based at least in part on the received crystal parameters and rotatable by a user to different sphere orientations. The sphere includes a plurality of stage coordinates that correspond to a plurality of positions of the stage and a plurality of crystallographic pole coordinates that correspond to a plurality of polar orientations of the crystal sample. Movement of the sphere causes movement of the stage, wherein the stage coordinates move in conjunction with the crystallographic coordinates represented by pole positions so as to show the relationship between stage positions and pole positions.
Allison, Linden; Hoxie, Steven; Andrew, Trisha L
2017-06-29
Traditional textile materials can be transformed into functional electronic components upon being dyed or coated with films of intrinsically conducting polymers, such as poly(aniline), poly(pyrrole) and poly(3,4-ethylenedioxythiophene). A variety of textile electronic devices are built from the conductive fibers and fabrics thus obtained, including: physiochemical sensors, thermoelectric fibers/fabrics, heated garments, artificial muscles and textile supercapacitors. In all these cases, electrical performance and device ruggedness are determined by the morphology of the conducting polymer active layer on the fiber or fabric substrate. Tremendous variation in active layer morphology can be observed with different coating or dyeing conditions. Here, we summarize various methods used to create fiber- and fabric-based devices and highlight the influence of the coating method on active layer morphology and device stability.
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2018-03-26
This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS), and investigates their application in the instrumental analysis of nutraceuticals (here, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: ordinary least squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and in all regression parameters for both methods and both conditions. Under the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change in the intercept was observed for the non-ideal linearity condition. Under both linearity conditions, LOD and LOQ values after robust regression line fitting of the data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS, were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
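The IRLS idea can be sketched with Huber weights, one common choice; the paper's exact weight function and tuning constants are not given here, and the data are invented. An outlier that drags the OLS line is down-weighted until it barely influences the fit.

```python
import numpy as np

def irls_line(x, y, c=1.345, n_iter=50):
    """Fit y = b0 + b1*x by iteratively re-weighted least squares with
    Huber weights, starting from the ordinary least-squares solution."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = max(np.median(np.abs(r - np.median(r))) / 0.6745, 1e-8)  # robust scale
        u = np.abs(r) / s
        w = c / np.maximum(u, c)              # 1 inside the threshold, c/u outside
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

x = np.linspace(1.0, 10.0, 10)
y = 2.0 + 0.5 * x
y[7] += 5.0                                   # one gross outlier
b_ols = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
b_irls = irls_line(x, y)
```

Applied to calibration data, the same mechanism is what lowers the attainable LOD/LOQ: low-concentration points are no longer dominated by a few aberrant high-leverage readings.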
Electron-phonon thermalization in a scalable method for real-time quantum dynamics
Rizzi, Valerio; Todorov, Tchavdar N.; Kohanoff, Jorge J.; Correa, Alfredo A.
2016-01-01
We present a quantum simulation method that follows the dynamics of out-of-equilibrium many-body systems of electrons and oscillators in real time. Its cost is linear in the number of oscillators and it can probe time scales from attoseconds to hundreds of picoseconds. Contrary to Ehrenfest dynamics, it can thermalize starting from a variety of initial conditions, including electronic population inversion. While an electronic temperature can be defined in terms of a nonequilibrium entropy, a Fermi-Dirac distribution in general emerges only after thermalization. These results can be used to construct a kinetic model of electron-phonon equilibration based on the explicit quantum dynamics.
SU-F-T-71: A Practical Method for Evaluation of Electron Virtual Source Position
Energy Technology Data Exchange (ETDEWEB)
Huang, Z; Jiang, W; Stuart, B; Leu, S; Feng, Y [East Carolina University, Greenville, North Carolina (United States); Liu, T [Houston Methodist Hospital, Sugar Land, TX (United States)
2016-06-15
Purpose: Since electrons are easily scattered, the virtual source position for electrons is expected to be located below the x-ray target of medical linacs. However, the effective SSD method yields an electron virtual source position above the x-ray target for some applicators and energies in Siemens linacs. In this study, we propose to use the IC Profiler (Sun Nuclear) to evaluate the electron virtual source position for the standard electron applicators at various electron energies. Methods: Profile measurements at various nominal source-to-detector distances (SDDs) of 100–115 cm were carried out for electron beam energies of 6–18 MeV. Two methods were used: one used a 0.125 cc ion chamber (PTW, Type 31010) with buildup, mounted in a PTW water tank without water; the other used the IC Profiler with buildup to achieve charged-particle equilibrium. The full width at half maximum (FWHM) method was used to determine the field sizes of the measured profiles. Backprojecting (by a straight line) the distance between the 50% points on the beam profiles for the various SDDs yielded the virtual source position for each applicator. Results: The profiles were obtained and the field sizes were determined by FWHM. The virtual source positions were determined through backprojection of profiles for the applicators (5, 10, 15, 20, 25). For instance, they were 96.415 cm (IC Profiler) vs 95.844 cm (scanning ion chamber) for 9 MeV electrons with the 10×10 cm applicator, and 97.160 cm vs 97.161 cm for 12 MeV electrons with the 10×10 cm applicator. The differences in the virtual source positions between the IC Profiler and the scanning ion chamber were within 1.5%. Conclusion: The IC Profiler provides a practical method for determining the electron virtual source position, and its results are consistent with those obtained from scanning ion chamber profiles with buildup.
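The backprojection step reduces to a straight-line fit of field size against SDD. A sketch with invented numbers (chosen only so the result lands near the values quoted above):

```python
import numpy as np

# Hypothetical FWHM field sizes (cm) at several nominal SDDs (cm) for one
# applicator/energy combination; the values are invented for illustration.
sdd = np.array([100.0, 105.0, 110.0, 115.0])
fwhm = np.array([10.000, 10.519, 11.037, 11.556])

# The 50% points diverge linearly from the virtual source, so the fitted
# line reaches zero width at the virtual source plane.
slope, intercept = np.polyfit(sdd, fwhm, 1)
z0 = -intercept / slope            # SDD coordinate where the width vanishes
virtual_ssd = 100.0 - z0           # distance from virtual source to the SDD = 100 cm plane
```

A virtual source below the nominal target gives `z0 > 0` and hence a virtual SSD shorter than 100 cm, matching the ~96-97 cm results reported.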
Organizational-methodical provisions for the audit of operations with electronic money
Directory of Open Access Journals (Sweden)
Semenetz A.P.
2017-06-01
Full Text Available To obtain objective and unbiased information about the accuracy and completeness of electronic money transactions at an enterprise, it is necessary to conduct an audit. The results of an external audit of electronic money transactions help the company's management to assess the efficiency and rationality of using such a modern means of payment as electronic money, and to verify the proper functioning of the internal control service. The work substantiates the organizational and methodical provisions of the process of conducting an external audit of transactions with electronic money, clarifying its organizational provisions, namely the definition of the purpose, tasks, subjects and objects of the audit and its sources of information. Accordingly, the purpose of an audit of operations with electronic money is to provide the auditor's unbiased opinion on the reliability of the financial statements of the enterprise in terms of operations with electronic money. Within the scope of this dissertation, the object of the external audit is operations with electronic money: since electronic money is a new and contemporary object of accounting, the development of a scientifically grounded procedure for conducting an external audit of the investigated object is necessary. The subject of an external audit of electronic money transactions is the set of business transactions in electronic money settlements, that is, transactions involving their acquisition and redemption, and the accuracy of the information about them displayed in the financial statements. In the course of the study, the procedure for performing external audit procedures during the stages of an electronic money audit at the enterprise was determined, which made it possible to confirm the correctness of the accounting of such a new and modern means of payment as electronic money. These proposals are aimed
Methods for coupling radiation, ion, and electron energies in grey Implicit Monte Carlo
International Nuclear Information System (INIS)
Evans, T.M.; Densmore, J.D.
2007-01-01
We present three methods for extending the Implicit Monte Carlo (IMC) method to treat the time-evolution of coupled radiation, electron, and ion energies. The first method splits the ion and electron coupling and conduction from the standard IMC radiation-transport process. The second method recasts the IMC equations such that part of the coupling is treated during the Monte Carlo calculation. The third method treats all of the coupling and conduction in the Monte Carlo simulation. We apply modified equation analysis (MEA) to simplified forms of each method that neglect the errors in the conduction terms. Through MEA we show that the third method is theoretically the most accurate. We demonstrate the effectiveness of each method on a series of 0-dimensional, nonlinear benchmark problems, where the accuracy of the third method is shown to be up to ten times greater than that of the other coupling methods for selected calculations
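As a toy surrogate for the coupling these methods treat (not the IMC equations themselves), a 0-dimensional three-energy relaxation model can be integrated explicitly; the coupling constants and initial energies below are invented:

```python
import numpy as np

# Toy 0-D relaxation: radiation, electron, and ion energies exchange through
# linear coupling terms (assumed rates, not a physical model from the paper).
k_re, k_ei = 2.0, 5.0          # radiation-electron and electron-ion rates, 1/time

def rhs(u):
    Er, Ee, Ei = u
    return np.array([
        k_re * (Ee - Er),                      # radiation gains from electrons
        k_re * (Er - Ee) + k_ei * (Ei - Ee),   # electrons couple to both
        k_ei * (Ee - Ei),                      # ions gain from electrons
    ])

u = np.array([1.0, 0.2, 0.1])  # initial energies
total0 = u.sum()               # exchange terms cancel, so the total is conserved
dt = 1e-3
for _ in range(10_000):        # forward Euler to equilibrium
    u = u + dt * rhs(u)
```

An explicit update like this needs small time steps for stiff coupling; folding more of the coupling into the (implicit) Monte Carlo step, as in the papers' second and third methods, is what relaxes that restriction.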
Efficient k⋅p method for the calculation of total energy and electronic density of states
Iannuzzi, Marcella; Parrinello, Michele
2001-01-01
An efficient method for calculating the electronic structure of large systems with fully converged Brillouin zone (BZ) sampling is presented. The method is based on a k⋅p-like approximation developed in the framework of density functional perturbation theory. The reliability and efficiency of the method are demonstrated in test calculations on Ar and Si supercells
On the absorbed dose determination method in high energy electron beams
International Nuclear Information System (INIS)
Scarlat, F.; Scarisoreanu, A.; Oane, M.; Mitru, E.; Avadanei, C.
2008-01-01
The absorbed dose determination method in water for electron beams with energies in the range from 1 MeV to 50 MeV is presented herein. The dosimetry equipment for the measurements is composed of a UNIDOS (PTW) electrometer and different ionization chambers calibrated in air kerma in a Co-60 beam. Starting from the code of practice for high energy electron beams, this paper describes the method adopted by the secondary standard dosimetry laboratory (SSDL) at NILPRP, Bucharest
Method of transport simulation for electrons between 10 eV and 30 keV
International Nuclear Information System (INIS)
Terrissol, Michel.
1978-01-01
A transport simulation of low-energy electrons in matter, using a Monte Carlo method and treating all interactions of the electrons with atoms, molecules, or assemblies of them, is described. Elastic scattering, ionization, excitation, plasmon creation, reorganization following inner-shell ionization, electron-hole pair creation, etc., are simulated individually by sampling confirmed experimental or theoretical cross sections. In this way atomic and molecular gases, metals such as aluminium, and liquid water have been studied. The simulation follows the electrons until their energy reaches the atomic or molecular ionization potential of the irradiated matter. The entire trajectories of the primary electron and of all secondaries set in motion are reproduced exactly. Several applications to multiple scattering, radiobiology, microdosimetry, and electron microscopy are presented, and some results are compared directly with experimental ones [fr]
A comparative study of different methods for calculating electronic transition rates
Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan
2018-03-01
We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.
International Nuclear Information System (INIS)
Zunger, A.
1975-07-01
Semiempirical all-valence-electron LCAO methods, previously used to study the electronic structure of molecules, are applied to three problems in solid state physics: the electronic band structure of covalent crystals, point defect problems in solids, and a lattice dynamical study of molecular crystals. Calculation methods for the electronic band structure of regular solids are introduced and problems regarding the computation of the density matrix in solids are discussed. Three models for treating the electronic eigenvalue problem in the solid, within the proposed calculation schemes, are discussed, and the proposed models and calculation schemes are applied to the calculation of the electronic structure of several solids belonging to different crystal types. The calculation models also describe electronic properties of deep defects in covalent insulating crystals. The possible usefulness of the semiempirical LCAO methods in determining the first-order intermolecular interaction potential in solids is presented, together with an improved model for treating the lattice dynamics and related thermodynamical properties of molecular solids. The improved lattice dynamical model is used to compute phonon dispersion curves, phonon density of states, stable unit cell structure, lattice heat capacity and thermal crystal parameters in α- and γ-N2 crystals, using the N2-N2 intermolecular interaction potential computed from the semiempirical LCAO methods. (B.G.)
International Nuclear Information System (INIS)
Kong Xiaoxiao; Li Quanfeng
2003-01-01
A synthesis technique for the preliminary design of convergent Pierce electron guns is introduced briefly, which has a series of advantages over the traditional methods. A thermal-cathode electron gun used in an accelerator for radiation sterilization was redesigned with the synthesis method, and the validity of the method is proved. Based on the preliminary design parameters given by the synthesis method, the simulation program EGUN was used in the numerical design of the focusing electrode and the anode. The final results meet the engineering requirements: a current of 1 A, a normalized emittance of less than 4 mm·mrad, and a uniform final current density
Tuzun, Burak; Yavuz, Sevtap Caglar; Sabanci, Nazmiye; Saripinar, Emin
2018-05-13
In the present work, pharmacophore identification and biological activity prediction for 86 pyrazole pyridine carboxylic acid derivatives were made using the electron conformational genetic algorithm approach, which we introduced in recent years as a 4D-QSAR analysis. In the light of data obtained from quantum chemical calculations at the HF/6-311G** level, the electron conformational matrices of congruity (ECMC) were constructed with the EMRE software. By comparing the matrices, the electron conformational submatrix of activity (ECSA, Pha) common to these compounds within a minimum tolerance was revealed. A parameter pool was generated considering the obtained pharmacophore. To determine the theoretical biological activity of the molecules and identify the best subset of variables affecting bioactivities, we used the nonlinear least squares regression method and a genetic algorithm. The results obtained in this study are in good agreement with the experimental data presented in the literature. The model for the training and test sets attained by the optimum 12 parameters gave highly satisfactory results, with R2(training) = 0.889, q2 = 0.839, SE(training) = 0.066, q2(ext1) = 0.770, q2(ext2) = 0.750, q2(ext3) = 0.824, CCC(tr) = 0.941, CCC(test) = 0.869 and CCC(all) = 0.927. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Kernel polynomial method for a nonorthogonal electronic-structure calculation of amorphous diamond
International Nuclear Information System (INIS)
Roeder, H.; Silver, R.N.; Drabold, D.A.; Dong, J.J.
1997-01-01
The kernel polynomial method (KPM) has been successfully applied to tight-binding electronic-structure calculations as an O(N) method. Here we extend this method to nonorthogonal basis sets with a sparse overlap matrix S and a sparse Hamiltonian H. Since the KPM utilizes matrix-vector multiplications, it is necessary to apply S⁻¹H to a vector. The application of S⁻¹ is performed using a preconditioned conjugate-gradient method and does not involve the explicit inversion of S. Hence the method scales the same way as the original KPM, i.e., O(N), although there is an overhead due to the additional conjugate-gradient part. We apply this method to a large-scale electronic-structure calculation of amorphous diamond. copyright 1997 The American Physical Society
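The key step, applying S⁻¹H to a vector without ever forming S⁻¹, can be sketched as follows. The matrices are toy tridiagonal stand-ins (a real tight-binding calculation supplies S and H), and this sketch omits the preconditioner the paper uses:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 200
# Toy sparse overlap and Hamiltonian; S is symmetric positive definite.
S = diags([0.1 * np.ones(n - 1), np.ones(n), 0.1 * np.ones(n - 1)],
          [-1, 0, 1], format="csr")
H = diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
          [-1, 0, 1], format="csr")

def apply_SinvH(v):
    """Apply S^-1 H to a vector: solve S y = H v by conjugate gradients,
    keeping every operation sparse (no explicit inverse of S)."""
    y, info = cg(S, H @ v)
    assert info == 0               # CG converged
    return y

v = np.random.default_rng(4).normal(size=n)
y = apply_SinvH(v)
```

Each CG iteration costs one sparse matrix-vector product, so the overall scheme stays O(N) per Chebyshev moment, which is the overhead the abstract refers to.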
DEFF Research Database (Denmark)
Christensen, Steen; Moore, C.; Doherty, J.
2006-01-01
For a synthetic case we computed three types of individual prediction intervals for the location of the aquifer entry point of a particle that moves through a heterogeneous aquifer and ends up in a pumping well. (a) The nonlinear regression-based interval (Cooley, 2004) was found to be nearly accurate and required a few hundred model calls to be computed. (b) The linearized regression-based interval (Cooley, 2004) required just over a hundred model calls and also appeared to be nearly correct. (c) The calibration-constrained Monte-Carlo interval (Doherty, 2003) was found to be narrower than the regression-based intervals but required about half a million model calls. It is unclear whether or not this type of prediction interval is accurate.
Directory of Open Access Journals (Sweden)
Amanda Araujo Tosi
Full Text Available Abstract Cathodoluminescence (CL) imaging is an outstanding method for the subclassification of Unequilibrated Ordinary Chondrites (UOC, petrological type 3). CL can be obtained with several electron beam apparatuses. The traditional method uses an electron gun coupled to an optical microscope (OM). Although many scanning electron microscopes (SEM) and electron microprobes (EPMA) have been equipped with cathodoluminescence detectors, this technique has not been fully explored. Images obtained by the two methods differ due to the different kinds of signal acquisition: while CL-OM optical photography yields true colors, CL-EPMA yields grayscale monochromatic electronic signals. L-RGB filters were used in the CL-EPMA analysis in order to obtain color data. The aim of this work is to compare cathodoluminescence data obtained from both techniques, optical microscope and electron microprobe, on the Bishunpur meteorite, classified as an LL 3.1 chondrite. The present study allows concluding that 20 keV and 7 nA are the best analytical conditions at the EPMA for testing the equivalence between CL-EPMA and CL-OM color results. Moreover, the color index proved to be a method for aiding the study of thermal metamorphism, but it is not definitive for meteorite classification.
A new method of testing space-based high-energy electron detectors with radioactive electron sources
Zhang, S. Y.; Shen, G. H.; Sun, Y.; Zhou, D. Z.; Zhang, X. X.; Li, J. W.; Huang, C.; Zhang, X. G.; Dong, Y. J.; Zhang, W. J.; Zhang, B. Q.; Shi, C. Y.
2016-05-01
Space-based electron detectors are commonly tested using radioactive β-sources, which emit a continuous spectrum without spectral lines. Therefore, the tests are often considered only qualitative. This paper introduces a method which results in more than a qualitative test even when using a β-source. The basic idea is to use the simulated response function of the instrument to invert the measured spectrum and compare this inverted spectrum with a reference spectrum obtained from the same source. Here we have used Geant4 to simulate the instrument response function (IRF) and a 3.5 mm thick Li-drifted Si detector to obtain the reference 90Sr/90Y source spectrum, to test and verify the geometric factors of the Omni-Direction Particle Detector (ODPD) on the Tiangong-1 (TG-1) and Tiangong-2 (TG-2) spacecraft. The TG spacecraft are experimental space laboratories and prototypes of the Chinese space station. The excellent agreement between the measured and reference spectra demonstrates that this test method can be used to quantitatively assess the quality of the instrument. Due to its simplicity, the method is faster and therefore more efficient than traditional full calibrations using an electron accelerator.
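The inversion step can be sketched as follows: with a discretized response matrix R, the incident spectrum solves R s ≈ measured under s ≥ 0. Here a toy Gaussian response stands in for the Geant4-simulated IRF, and the measurement is noiseless; all bins and widths are invented.

```python
import numpy as np
from scipy.optimize import nnls

# Toy instrument response: incident-energy bin j spreads its counts over
# measured channels i with Gaussian smearing (a real IRF comes from Geant4).
n_e, n_ch = 8, 12
e_bins = np.linspace(0.2, 2.0, n_e)            # incident energies, MeV
channels = np.linspace(0.1, 2.2, n_ch)         # channel centers, MeV
R = np.exp(-0.5 * ((channels[:, None] - e_bins[None, :]) / 0.15) ** 2)
R /= R.sum(axis=0, keepdims=True)              # column-normalized response

spectrum_true = np.exp(-e_bins / 0.5)          # falling, beta-like spectrum
measured = R @ spectrum_true                   # idealized noiseless measurement

# Invert the measured spectrum through the response function (s >= 0 enforced).
spectrum_inv, rnorm = nnls(R, measured)
```

In the test described above, the inverted spectrum would then be compared against the reference spectrum from the Si(Li) detector rather than against a known truth.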
A new method of testing space-based high-energy electron detectors with radioactive electron sources
Energy Technology Data Exchange (ETDEWEB)
Zhang, S.Y. [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Shen, G.H., E-mail: shgh@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Sun, Y., E-mail: sunying@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Zhou, D.Z., E-mail: dazhuang.zhou@gmail.com [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Zhang, X.X., E-mail: xxzhang@cma.gov.cn [National Center for Space Weather, Beijing (China); Li, J.W., E-mail: lijw@cma.gov.cn [National Center for Space Weather, Beijing (China); Huang, C., E-mail: huangc@cma.gov.cn [National Center for Space Weather, Beijing (China); Zhang, X.G., E-mail: zhangxg@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Dong, Y.J., E-mail: dyj@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Zhang, W.J., E-mail: zhangreatest@163.com [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Zhang, B.Q., E-mail: zhangbinquan@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China); Shi, C.Y., E-mail: scy@nssc.ac.cn [National Space Science Center, Chinese Academy of Sciences, Beijing (China); Beijing Key Laboratory of Space Environment Exploration, Beijing (China)
2016-05-01
Space-based electron detectors are commonly tested using radioactive β-sources, which emit a continuous spectrum without spectral lines; such tests are therefore often considered only qualitative. This paper introduces a method that yields more than a qualitative test even when using a β-source. The basic idea is to use the simulated response function of the instrument to invert the measured spectrum and to compare this inverted spectrum with a reference spectrum obtained from the same source. Here we have used Geant4 to simulate the instrument response function (IRF) and a 3.5 mm thick Li-drifted Si detector to obtain the reference 90Sr/90Y source spectrum, in order to test and verify the geometric factors of the Omni-Direction Particle Detector (ODPD) on the Tiangong-1 (TG-1) and Tiangong-2 (TG-2) spacecraft. The TG spacecraft are experimental space laboratories and prototypes of the Chinese space station. The excellent agreement between the measured and reference spectra demonstrates that this test method can be used to quantitatively assess the quality of the instrument. Owing to its simplicity, the method is faster and therefore more efficient than traditional full calibrations using an electron accelerator.
A new method of testing space-based high-energy electron detectors with radioactive electron sources
International Nuclear Information System (INIS)
Zhang, S.Y.; Shen, G.H.; Sun, Y.; Zhou, D.Z.; Zhang, X.X.; Li, J.W.; Huang, C.; Zhang, X.G.; Dong, Y.J.; Zhang, W.J.; Zhang, B.Q.; Shi, C.Y.
2016-01-01
Space-based electron detectors are commonly tested using radioactive β-sources, which emit a continuous spectrum without spectral lines; such tests are therefore often considered only qualitative. This paper introduces a method that yields more than a qualitative test even when using a β-source. The basic idea is to use the simulated response function of the instrument to invert the measured spectrum and to compare this inverted spectrum with a reference spectrum obtained from the same source. Here we have used Geant4 to simulate the instrument response function (IRF) and a 3.5 mm thick Li-drifted Si detector to obtain the reference 90Sr/90Y source spectrum, in order to test and verify the geometric factors of the Omni-Direction Particle Detector (ODPD) on the Tiangong-1 (TG-1) and Tiangong-2 (TG-2) spacecraft. The TG spacecraft are experimental space laboratories and prototypes of the Chinese space station. The excellent agreement between the measured and reference spectra demonstrates that this test method can be used to quantitatively assess the quality of the instrument. Owing to its simplicity, the method is faster and therefore more efficient than traditional full calibrations using an electron accelerator.
Quantum chemistry the development of ab initio methods in molecular electronic structure theory
Schaefer III, Henry F
2004-01-01
This guide is guaranteed to prove of keen interest to the broad spectrum of experimental chemists who use electronic structure theory to assist in the interpretation of their laboratory findings. A list of 150 landmark papers in ab initio molecular electronic structure methods, it features the first page of each paper (which usually encompasses the abstract and introduction). Its primary focus is methodology, rather than the examination of particular chemical problems, and the selected papers either present new and important methods or illustrate the effectiveness of existing methods in prediction.
Methods of organization of SCORM-compliant teaching materials in electronic format
Directory of Open Access Journals (Sweden)
Jacek Marciniak
2012-06-01
This paper presents a method of organizing electronic teaching materials based on their role in the teaching process rather than their technical structure. Our method allows SCORM materials stored as e-learning courses ("electronic books") to be subdivided and structured so that content can be used in multiple contexts. As a standard, SCORM defines rules for organizing content, but not how to divide and structure it. Our method uses UCTS nomenclature to divide content, define relationships between content entities, and aggregate those entities into courses. This allows content to be shared across different implementations of SCORM while guaranteeing that usability and consistency are maintained.
Gaussian process regression analysis for functional data
Shi, Jian Qing
2011-01-01
Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables. Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high-dimensional data.
A modified method of calculating the lateral build-up ratio for small electron fields
International Nuclear Information System (INIS)
Tyner, E; McCavana, P; McClean, B
2006-01-01
This note outlines an improved method of calculating dose per monitor unit values for small electron fields using Khan's lateral build-up ratio (LBR). The modified method obtains the LBR directly from the ratio of measured, surface-normalized, electron beam percentage depth dose curves, and so more accurately accounts for the change in lateral scatter with decreasing field size. The LBR is used along with Khan's dose per monitor unit formula to calculate dose per monitor unit values for a set of small fields. These calculated dose per monitor unit values agree with measured values to within 3.5% for all circular fields and electron energies examined. The modified method was further tested on a small triangular field, where a maximum difference of 4.8% was found. (note)
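The ratio step described in the note can be sketched as follows. The depth-dose numbers and the broad-field dose/MU are made up for illustration, and the scaling function is only a schematic of Khan-style LBR usage, not the note's full formalism.

```python
import numpy as np

def lbr_from_pdd(pdd_small, pdd_broad):
    """LBR(d): ratio of surface-normalized small-field PDD to broad-field PDD."""
    small = np.asarray(pdd_small, dtype=float)
    broad = np.asarray(pdd_broad, dtype=float)
    return (small / small[0]) / (broad / broad[0])   # normalize each to surface dose

def small_field_dose_per_mu(lbr_at_depth, broad_dose_per_mu):
    """Schematic Khan-style scaling of the broad-field dose/MU by the LBR."""
    return lbr_at_depth * broad_dose_per_mu

# made-up percentage depth-dose values at depths of 0, 1, 2 and 3 cm
pdd_broad = [78.0, 95.0, 100.0, 92.0]    # broad (reference) field
pdd_small = [78.0, 90.0, 88.0, 70.0]     # small field: less lateral build-up
lbr = lbr_from_pdd(pdd_small, pdd_broad)
```

By construction the LBR is unity at the surface and drops below one at depth for the small field, reflecting the reduced lateral scatter the note describes.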
Cluster Analysis of the Newcastle Electronic Corpus of Tyneside English: A Comparison of Methods
Moisl, Hermann; Jones, Valerie M.
2005-01-01
This article examines the feasibility of an empirical approach to sociolinguistic analysis of the Newcastle Electronic Corpus of Tyneside English using exploratory multivariate methods. It addresses a known problem with one class of such methods, hierarchical cluster analysis—that different
Cluster Analysis of the Newcastle Electronic Corpus of Tyneside English: A Comparison of Methods
Moisl, Hermann; Jones, Valerie M.
2005-01-01
This article examines the feasibility of an empirical approach to sociolinguistic analysis of the Newcastle Electronic Corpus of Tyneside English using exploratory multivariate methods. It addresses a known problem with one class of such methods, hierarchical cluster analysis—that different
Methods of measurements on incidental X-radiation from electron tubes
International Nuclear Information System (INIS)
1977-01-01
The standard describes the method for detection of X-radiation and the methods for direct and indirect measurement of the field pattern and exposure rate of incidental radiation emanating from high-voltage electron tubes. The required apparatus and the calibration procedure for the exposure-rate meter or film mount are described. (M.G.B.)
Hoogerheide, L.F.; Kaashoek, J.F.; van Dijk, H.K.
2007-01-01
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior
L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)
2005-01-01
Likelihoods and posteriors of instrumental variable regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating such contours
Understanding logistic regression analysis
Sperandei, Sandro
2014-01-01
Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is that it avoids confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using ex...
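The procedure the abstract describes can be illustrated in a few lines: fit a logistic regression with two explanatory variables and read each variable's adjusted odds ratio as the exponential of its coefficient. The data below are synthetic, generated so that one variable raises and the other lowers the odds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x1 = rng.standard_normal(n)               # e.g. an exposure of interest
x2 = rng.standard_normal(n)               # a second explanatory variable
logit = 0.8 * x1 - 0.5 * x2               # true log-odds (no intercept)
p = 1.0 / (1.0 + np.exp(-logit))
y = rng.binomial(1, p)                    # binomial response

# large C ~ negligible regularization, so coefficients approach the MLE
model = LogisticRegression(C=1e6).fit(np.column_stack([x1, x2]), y)
odds_ratios = np.exp(model.coef_[0])      # adjusted OR per unit increase
```

Because both variables enter the model together, each odds ratio is adjusted for the other, which is exactly the confounding-control advantage the abstract points out.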
Introduction to regression graphics
Cook, R Dennis
2009-01-01
Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have written their own regression code in the Xlisp-Stat language, called R-code, which is a nearly complete system for linear regression analysis and can be used as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is available.
Method of synthesizing small-diameter carbon nanotubes with electron field emission properties
Liu, Jie (Inventor); Du, Chunsheng (Inventor); Qian, Cheng (Inventor); Gao, Bo (Inventor); Qiu, Qi (Inventor); Zhou, Otto Z. (Inventor)
2009-01-01
Carbon nanotube materials having an outer diameter less than 10 nm and fewer than ten walls are disclosed. Also disclosed is an electron field emission device including a substrate, an optional adhesion-promoting layer, and a layer of electron field emission material. The electron field emission material includes a carbon nanotube having from two to ten concentric graphene shells per tube, an outer diameter from 2 to 8 nm, and a nanotube length greater than 0.1 microns. One method to fabricate carbon nanotubes includes the steps of (a) producing a catalyst containing Fe and Mo supported on MgO powder, (b) using a mixture of hydrogen and a carbon-containing gas as precursors, and (c) heating the catalyst to a temperature above 950 °C to produce carbon nanotubes. Another method, for fabricating an electron field emission cathode, includes the steps of (a) synthesizing electron field emission materials containing carbon nanotubes with from two to ten concentric graphene shells per tube, an outer diameter of from 2 to 8 nm, and a length greater than 0.1 microns, (b) dispersing the electron field emission material in a suitable solvent, (c) depositing the electron field emission materials onto a substrate, and (d) annealing the substrate.
Nuclear electronics equipment for control and monitoring panel. Ratemeter data and test methods
International Nuclear Information System (INIS)
1977-09-01
This document first reviews the main notations used, and some definitions, then states its scope and gives a bibliography. The main characteristics of ratemeter electronic sub-assemblies are then given, and corresponding test methods are described. This type of instrument indicates, on a linear or logarithmic scale, the counting rate of pulses applied to its input. The document reviews analogue and digital ratemeters with linear or logarithmic characteristics, for general purpose applications, reactor control, health physics, plant and laboratory applications. The document is intended for electronics manufacturers, designers, persons participating in acceptance trials, plant operators and, generally, for members of the electronics profession [fr
Application of CTOF method to detect secondly charged particle from 2 GeV electron
International Nuclear Information System (INIS)
Takahashi, Kazutoshi; Sanami, Toshiya; Ban, Syuichi; Lee, Hee-Seok; Sato, Tatsuhiko
2002-01-01
To design shielding and evaluate leakage radiation at high-energy electron accelerators, energy and angular data on the secondary particles from reactions of electrons with structural materials are required. Secondary neutron spectra from structural materials have been measured using the electron accelerator at PAL (Pohang Accelerator Laboratory). In the neutron measurements, electronics with a multi-hit TDC (MHTDC) were adopted to measure the time of flight (TOF) of every particle emitted from the reactions induced by each single electron bunch. The measurements are extended here to secondary charged particles. For charged-particle measurements, pulse-height data for every particle are indispensable in order to distinguish the charged particles by the ΔE-E method, so a new system that can measure the pulse height of every particle is required instead of the MHTDC system. To meet this requirement, a method that records the output current from the detectors was developed using a digital storage oscilloscope; it is named the ''current time of flight'' (CTOF) method. The CTOF method can measure the pulse height and TOF of every particle produced by a single electron bunch. Electrons are accelerated to 2.04 GeV at a repetition rate of 10 Hz and bombard thin disk samples of Cu (1 mm), Al (4 mm) and W (0.5 mm). Secondary charged particles, protons and deuterons, are produced in the samples by photonuclear reactions. The two-dimensional ΔE-E spectrum measured by CTOF for each sample shows complete separation between protons and deuterons, so proton and deuteron spectra are obtained from these data. (M. Suetake)
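The ΔE-E separation used above can be sketched with a toy model: for a thin transmission detector the product ΔE·E is roughly proportional to mZ² (a Bethe-formula approximation), so protons and deuterons of the same charge fall on distinct bands. All numbers below are synthetic, not PAL data, and the threshold is a hypothetical band boundary.

```python
import numpy as np

def pid_parameter(delta_e, e_residual):
    """ΔE·E product; scales with particle mass for fixed charge (toy model)."""
    return delta_e * e_residual

def classify(delta_e, e_residual, threshold):
    """Label events proton ('p') or deuteron ('d') by their ΔE·E band."""
    return np.where(pid_parameter(delta_e, e_residual) < threshold, "p", "d")

rng = np.random.default_rng(0)
e = rng.uniform(5.0, 50.0, 200)           # residual energy in arbitrary units
# ΔE ∝ m/E in this toy model, with 5% detector-resolution noise
de_p = 1.0 / e * (1 + 0.05 * rng.standard_normal(200))   # protons (m = 1)
de_d = 2.0 / e * (1 + 0.05 * rng.standard_normal(200))   # deuterons (m = 2)
labels_p = classify(de_p, e, threshold=1.5)
labels_d = classify(de_d, e, threshold=1.5)
```

The two bands sit around ΔE·E ≈ 1 and ≈ 2, so a cut between them identifies every event, mirroring the "perfect separation" reported for the measured spectra.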
Krylov subspace method for evaluating the self-energy matrices in electron transport calculations
DEFF Research Database (Denmark)
Sørensen, Hans Henrik Brandenborg; Hansen, Per Christian; Petersen, D. E.
2008-01-01
We present a Krylov subspace method for evaluating the self-energy matrices used in the Green's function formulation of electron transport in nanoscale devices. A procedure based on the Arnoldi method is employed to obtain solutions of the quadratic eigenvalue problem associated with the infinite… Numerical tests within a density functional theory framework are provided to validate the accuracy and robustness of the proposed method, which in most cases is an order of magnitude faster than conventional methods.
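The quadratic eigenvalue problem at the heart of the method has the generic form (λ²A2 + λA1 + A0)x = 0; the standard treatment is to linearize it into a doubled-size generalized eigenproblem. The paper applies an Arnoldi (Krylov) iteration to this problem; the sketch below uses a dense solver on a small synthetic example, with matrices that are placeholders rather than lead self-energy data.

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(A2, A1, A0):
    """Linearize (λ²·A2 + λ·A1 + A0)x = 0 and return the eigenvalues λ."""
    n = A0.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    # first companion linearization: with y = [x, λx],
    # L0 @ y = λ * L1 @ y reproduces the quadratic problem
    L0 = np.block([[Z, I], [-A0, -A1]])
    L1 = np.block([[I, Z], [Z, A2]])
    vals, _ = eig(L0, L1)
    return vals

# scalar sanity check: λ² - 3λ + 2 = 0 has roots λ = 1 and λ = 2
demo_vals = np.sort(solve_qep(np.array([[1.0]]),
                              np.array([[-3.0]]),
                              np.array([[2.0]])).real)
```

For the large sparse matrices of an actual transport calculation, the dense `eig` call would be replaced by an Arnoldi iteration targeting only the few eigenpairs needed, which is where the reported speed-up comes from.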
Lv, C L; Liu, Q B; Cai, C Y; Huang, J; Zhou, G W; Wang, Y G
2015-01-01
In transmission electron microscopy, the revised real space (RRS) method has been confirmed to be a more accurate dynamical electron diffraction simulation method for low-energy electron diffraction than the conventional multislice method (CMS). However, the RRS method could previously only be used to calculate dynamical electron diffraction for orthogonal crystal systems. In this work, the expression of the RRS method for non-orthogonal crystal systems is derived. Taking Na2Ti3O7 and Si as examples, the correctness of the derived RRS formula for non-orthogonal crystal systems is confirmed by verifying that the numerical results for both sides of the Schrödinger equation coincide; moreover, the difference between the RRS method and the CMS for non-orthogonal crystal systems is compared over the accelerating voltage range from 40 down to 10 kV. Our results show that the CMS method gives almost the same results as the RRS method for accelerating voltages above 40 kV. However, when the accelerating voltage is lowered to 20 kV or below, the CMS method introduces significant errors, not only for the higher-order Laue zone diffractions but also for the zero-order Laue zone. This indicates that the RRS method for non-orthogonal crystal systems should be used for more accurate dynamical simulation when the accelerating voltage is low. Furthermore, the reason why the differences between the diffraction patterns calculated by the RRS and CMS methods grow as the accelerating voltage decreases is discussed. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
Study and Handling Methods of Power IGBT Module Failures in Power Electronic Converter Systems
DEFF Research Database (Denmark)
Choi, Uimin; Blaabjerg, Frede; Lee, Kyo-Beum
2015-01-01
Power electronics plays an important role in a wide range of applications in order to achieve high efficiency and performance. Increasing efforts are being made to improve the reliability of power electronics systems to ensure compliance with more stringent constraints on cost, safety, and availability in different applications. This paper presents an overview of the major failure mechanisms of IGBT modules and their handling methods in power converter systems for improving reliability. The major failure mechanisms of IGBT modules are presented first, and methods for predicting lifetime and estimating the junction temperature of IGBT modules are then discussed. Subsequently, different methods for detecting open- and short-circuit faults are presented. Finally, fault-tolerant strategies for improving the reliability of power electronic systems under field operation are explained and compared.
Relativistic convergent close-coupling method applied to electron scattering from mercury
International Nuclear Information System (INIS)
Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor
2010-01-01
We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compared with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtained significantly better agreement with experiment. The RCCC method is able to resolve structure in the integrated cross sections in the energy regime near the excitation thresholds of the (6s6p) 3P0,1,2 states. These cross sections are associated with the formation of negative-ion (Hg-) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with experiment and with other relativistic theories.
Directory of Open Access Journals (Sweden)
Ivana Bilić
2011-02-01
Human resources represent one of the most important company resources, responsible for creating a company's competitive advantage. In the search for the most valuable resources, companies use different methods. Lately, one of the growing methods is electronic recruiting, not only as a recruitment tool but also as a means of external communication. Additionally, in the process of corporate communication, companies nowadays use electronic corporate communication as the easiest, cheapest and simplest form of business communication. The aim of this paper is to investigate the relationship between three groups of criteria: the main characteristics of the electronic recruiting performed, corporate communication, and selected financial performance measures. The selected companies were ranked separately by each group of criteria using the multicriteria decision-making method PROMETHEE II. The main idea is to examine whether companies that are the highest performers on one group of criteria obtain similar results on the other groups of criteria.
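The ranking step above uses PROMETHEE II, which orders alternatives by their net outranking flow. A minimal sketch follows, using the "usual" (strict) preference function and all criteria maximized; the criteria matrix and weights are invented for illustration, not the study's data.

```python
import numpy as np

def promethee_ii(X, weights):
    """X: alternatives x criteria (higher is better). Returns net flows."""
    n = X.shape[0]
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    pref = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                # usual criterion: full preference wherever a strictly beats b
                pref[a, b] = np.sum(w * (X[a] > X[b]))
    phi_plus = pref.sum(axis=1) / (n - 1)    # how strongly a outranks others
    phi_minus = pref.sum(axis=0) / (n - 1)   # how strongly a is outranked
    return phi_plus - phi_minus              # net flow: higher = better rank

# hypothetical criteria matrix: rows = companies, cols = normalized criteria
X = np.array([[0.9, 0.7, 0.8],
              [0.6, 0.9, 0.5],
              [0.4, 0.3, 0.6]])
net_flow = promethee_ii(X, weights=[0.5, 0.3, 0.2])
ranking = np.argsort(-net_flow)              # best company first
```

Running the same routine once per criteria group, as the study does, yields one ranking per group, which can then be compared across groups.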
Energy Technology Data Exchange (ETDEWEB)
Alloyeau, D., E-mail: alloyeau.damien@gmail.com [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Laboratoire d' Etude des Microstructures - ONERA/CNRS, UMR 104, B.P. 72, 92322 Chatillon (France); Ricolleau, C. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Oikawa, T. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); JEOL (Europe) SAS, Espace Claude Monet, 1 Allee de Giverny, 78290 Croissy-sur-Seine (France); Langlois, C. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Le Bouar, Y.; Loiseau, A. [Laboratoire d' Etude des Microstructures - ONERA/CNRS, UMR 104, B.P. 72, 92322 Chatillon (France)
2009-06-15
Nanoparticles' morphology is a key parameter in the understanding of their thermodynamical, optical, magnetic and catalytic properties. In general, nanoparticles observed in transmission electron microscopy (TEM) are viewed in projection, so that determining their thickness (along the projection direction) with respect to their projected lateral size is highly questionable. To date, the widely used methods to measure nanoparticle thickness in a transmission electron microscope are cross-section images or focal series in high-resolution transmission electron microscopy imaging (HRTEM 'slicing'). In this paper, we compare the focal series method with the electron tomography method and show that both techniques yield similar particle thicknesses in the size range from 1 to 5 nm, but electron tomography provides better statistics since more particles can be analyzed at one time. For this purpose, we have compared, on the same samples, the nanoparticle thickness measurements obtained from focal series with those determined from cross-section profiles of tomograms (tomogram slicing) perpendicular to the plane of the substrate supporting the nanoparticles. The methodology is finally applied to the comparison of CoPt nanoparticles annealed ex situ at two different temperatures to illustrate the accuracy of the techniques in detecting small changes in particle thickness.
Peng, Ying; Li, Su-Ning; Pei, Xuexue; Hao, Kun
2018-03-01
A multivariate regression statistical strategy was developed to clarify the multi-component content-effect correlation of Panax ginseng saponins extract and to predict the pharmacological effect from component contents. In example 1, we first compared pharmacological effects between Panax ginseng saponins extract and individual saponin combinations. Secondly, we examined the anti-platelet aggregation effect of seven different saponin combinations of ginsenoside Rb1, Rg1, Rh, Rd, Ra3 and notoginsenoside R1. Finally, the correlation between anti-platelet aggregation and the content of multiple components was analyzed by a partial least squares algorithm. In example 2, 18 common peaks were first identified in ten batches of Panax ginseng saponins extracts from different origins. Then, we investigated the anti-myocardial ischemia-reperfusion injury effects of the ten extracts. Finally, the correlation between the fingerprints and the cardioprotective effects was analyzed by a partial least squares algorithm. In both examples, the relationship between component content and pharmacological effect was modeled well by the partial least squares regression equations. Importantly, the predicted effect curve was close to the observed data points on the partial least squares regression model. This study provides evidence that multi-component content is promising information for predicting the pharmacological effects of traditional Chinese medicine.
Directory of Open Access Journals (Sweden)
Ying Peng
2018-03-01
A multivariate regression statistical strategy was developed to clarify the multi-component content-effect correlation of Panax ginseng saponins extract and to predict the pharmacological effect from component contents. In example 1, we first compared pharmacological effects between Panax ginseng saponins extract and individual saponin combinations. Secondly, we examined the anti-platelet aggregation effect of seven different saponin combinations of ginsenoside Rb1, Rg1, Rh, Rd, Ra3 and notoginsenoside R1. Finally, the correlation between anti-platelet aggregation and the content of multiple components was analyzed by a partial least squares algorithm. In example 2, 18 common peaks were first identified in ten batches of Panax ginseng saponins extracts from different origins. Then, we investigated the anti-myocardial ischemia-reperfusion injury effects of the ten extracts. Finally, the correlation between the fingerprints and the cardioprotective effects was analyzed by a partial least squares algorithm. In both examples, the relationship between component content and pharmacological effect was modeled well by the partial least squares regression equations. Importantly, the predicted effect curve was close to the observed data points on the partial least squares regression model. This study provides evidence that multi-component content is promising information for predicting the pharmacological effects of traditional Chinese medicine.
International Nuclear Information System (INIS)
Eberhart, J.-P.
1976-01-01
The following topics are discussed: theoretical aspects of radiation-matter interactions; production and measurement of radiations (X rays, electrons, neutrons); applications of radiation interactions to the study of crystalline materials. The following techniques are presented: X-ray and neutron diffraction, electron microscopy, electron diffraction, X-ray fluorescence analysis, electron probe microanalysis, surface analysis by electron emission spectrometry (ESCA and Auger electrons), scanning electron microscopy, secondary ion emission analysis [fr
Fekete, Gábor; Fodor, Emese; Pesznyák, Csilla
2015-03-08
A novel method has been put forward for very large electron beam profile measurement. With this method, absorbed dose profiles can be measured at any depth in a solid phantom for total skin electron therapy. Electron beam dose profiles were collected with two different methods. Profile measurements were performed at 0.2 and 1.2 cm depths with a parallel-plate and a thimble chamber, respectively. Electron beams of 108 cm × 108 cm and 45 cm × 45 cm projected size were scanned by vertically moving the phantom and detector at 300 cm source-to-surface distance with 90° and 270° gantry angles; the profiles collected this way were used as reference. Afterwards, the phantom was fixed on the central axis and the gantry was rotated in angular steps. After applying corrections for the different source-to-detector distances and angles of incidence, the profiles measured in the two setups were compared, and a correction formalism was developed. The agreement between the cross profiles taken at the depth of maximum dose with the 'classical' scanning and with the new moving-gantry method was better than 0.5% over the measuring range from zero to 71.9 cm; inverse-square and attenuation corrections had to be applied. The profiles measured with the parallel-plate chamber agree to better than 1%, except in the penumbra region, where the maximum difference is 1.5%. With the moving-gantry method, very large electron field profiles can be measured at any depth in a solid phantom with high accuracy and reproducibility, and with much less time per step; no special instrumentation is needed. The method can be used for commissioning of very large electron beams for computer-assisted treatment planning, for designing beam modifiers to improve dose uniformity, and for verification of computed dose profiles.
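The geometric part of the correction formalism can be sketched as follows: when the gantry rotates by an angle θ instead of the detector translating, each reading sits at a longer source-to-detector distance and strikes the phantom obliquely. The sketch below covers only the inverse-square and incidence-angle geometry; the published formalism also includes an attenuation correction, which is omitted here, and the functional form is an assumption for illustration.

```python
import math

def corrected_reading(reading, ssd, theta_deg):
    """Map a rotated-gantry reading onto the flat-phantom profile (sketch)."""
    theta = math.radians(theta_deg)
    sdd = ssd / math.cos(theta)            # distance to the phantom plane grows
    inverse_square = (sdd / ssd) ** 2      # undo the 1/r² falloff
    return reading * inverse_square / math.cos(theta)  # oblique-incidence term

def off_axis_position(ssd, theta_deg):
    """Lateral position on the phantom plane reached at gantry angle θ."""
    return ssd * math.tan(math.radians(theta_deg))

# at 300 cm SSD, a ~13.5° rotation reaches ~71.9 cm off-axis, the stated range
edge_angle = math.degrees(math.atan(71.9 / 300.0))
```

On the central axis (θ = 0) the correction is the identity, and the correction factors grow smoothly toward the field edge.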
International Nuclear Information System (INIS)
Jacoby, B.A.; York, T.M.
1979-02-01
With the presumption that a shifted Maxwellian velocity distribution adequately describes the electrons in a flowing plasma, the details of a method to measure their directed velocity are described. The system consists of a ruby laser source and two detectors set 180° from each other, both at 90° with respect to the incident laser beam. The lowest velocity that can be determined by this method depends on the electron thermal velocity. The application of this diagnostic to the measurement of flow velocities in plasma being lost from the ends of theta-pinch devices is described
A rapid method of reprocessing for electronic microscopy of cut histological in paraffin
International Nuclear Information System (INIS)
Hernandez Chavarri, F.; Vargas Montero, M.; Rivera, P.; Carranza, A.
2000-01-01
A simple and rapid method is described for re-processing light microscopy paraffin sections so that they can be observed under transmission electron microscopy (TEM) and scanning electron microscopy (SEM). The paraffin-embedded tissue is sectioned and deparaffinized in toluene, then exposed to osmium vapor under microwave irradiation using a domestic microwave oven. The tissues are embedded in epoxy resin, polymerized and ultrathin-sectioned. The method requires a relatively short time (about 30 minutes for TEM and 15 for SEM) and produces reasonable ultrastructural quality for diagnostic purposes. (Author) [es
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, covering all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity, and a simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
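The same principal component regression workflow the paper walks through in SPSS can be sketched in a few lines of Python (scikit-learn instead of SPSS): standardize the predictors, reduce them to their leading principal components, and regress the response on those components, which removes the multicollinearity. The collinear data below are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def pcr(n_components):
    """Pipeline: standardize -> PCA -> ordinary least squares on components."""
    return make_pipeline(StandardScaler(),
                         PCA(n_components=n_components),
                         LinearRegression())

rng = np.random.default_rng(1)
z = rng.standard_normal(100)
X = np.column_stack([z,                                   # predictor 1
                     z + 1e-3 * rng.standard_normal(100), # near-duplicate of 1
                     rng.standard_normal(100)])           # independent predictor
y = 2.0 * z + 0.1 * rng.standard_normal(100)

model = pcr(n_components=2).fit(X, y)   # 2 components absorb the collinear pair
r2 = model.score(X, y)
```

Ordinary least squares on `X` directly would have unstable coefficients for the two nearly identical columns; regressing on the principal components sidesteps that while keeping the predictive information.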
International Nuclear Information System (INIS)
Fukuda, Yoshiyuki; Schrod, Nikolas; Schaffer, Miroslava; Feng, Li Rebekah; Baumeister, Wolfgang; Lucic, Vladan
2014-01-01
Correlative microscopy allows imaging of the same feature over multiple length scales, combining light microscopy with high resolution information provided by electron microscopy. We demonstrate two procedures for coordinate transformation based correlative microscopy of vitrified biological samples applicable to different imaging modes. The first procedure aims at navigating cryo-electron tomography to cellular regions identified by fluorescent labels. The second procedure, allowing navigation of focused ion beam milling to fluorescently labeled molecules, is based on the introduction of an intermediate scanning electron microscopy imaging step to overcome the large difference between cryo-light microscopy and focused ion beam imaging modes. These methods make it possible to image fluorescently labeled macromolecular complexes in their natural environments by cryo-electron tomography, while minimizing exposure to the electron beam during the search for features of interest. - Highlights: • Correlative light microscopy and focused ion beam milling of vitrified samples. • Coordinate transformation based cryo-correlative method. • Improved correlative light microscopy and cryo-electron tomography
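The coordinate-transformation step at the heart of such correlative workflows can be illustrated with a least-squares affine fit between the two imaging coordinate systems. Everything below (fiducial count, noise level, coordinates) is invented for the sketch and is not from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical fiducial positions in light-microscope coordinates.
lm = rng.uniform(0, 100, (6, 2))

# True (unknown) mapping to EM coordinates: rotation, scale, shift,
# plus localization noise on the fiducials.
theta, scale, shift = 0.3, 25.0, np.array([500.0, -200.0])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
em = scale * lm @ R.T + shift + 0.5 * rng.normal(size=lm.shape)

# Least-squares estimate of the affine transform from the point pairs:
# solve [lm | 1] @ M = em for the 3x2 matrix M.
design = np.column_stack([lm, np.ones(len(lm))])
M, *_ = np.linalg.lstsq(design, em, rcond=None)

# Map a fluorescently labeled spot of interest into EM coordinates.
spot_lm = np.array([42.0, 17.0])
spot_em = np.append(spot_lm, 1.0) @ M
print(spot_em)
```

With the transform in hand, features identified by fluorescence can be targeted directly in the electron-beam coordinate frame, minimizing search exposure.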
Directory of Open Access Journals (Sweden)
Pang Fubin
2015-09-01
Full Text Available In this paper the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve it. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient components, and the error expression of each method is derived and analyzed. Besides, the interpolation errors of the linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and computational cost of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in data synchronization applications of electronic transformers.
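As a rough illustration of the kind of accuracy comparison the paper describes (not its own derivation), the midpoint interpolation error of the linear and cubic methods can be compared on a sampled sine wave. The sampling rate and fundamental frequency below are assumed values:

```python
import numpy as np

fs, f0 = 4000.0, 50.0          # assumed sampling rate and fundamental
t = np.arange(64) / fs
x = np.sin(2 * np.pi * f0 * t)

# Query points midway between samples (worst case for interpolation).
tq = t[2:-2] + 0.5 / fs
true = np.sin(2 * np.pi * f0 * tq)

# Linear interpolation between neighboring samples.
lin = np.interp(tq, t, x)

# 4-point Lagrange cubic interpolation at the midpoint of the two
# central samples: classical weights (-1/16, 9/16, 9/16, -1/16).
idx = np.searchsorted(t, tq) - 1
cub = (-x[idx - 1] + 9 * x[idx] + 9 * x[idx + 1] - x[idx + 2]) / 16

err_lin = np.max(np.abs(lin - true))
err_cub = np.max(np.abs(cub - true))
print(err_lin, err_cub)
```

For a pure harmonic well below the Nyquist rate, the cubic error is smaller than the linear error by roughly the square of the (sample spacing x angular frequency) factor, matching the general trend the paper reports.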
Relativistic electrons of the outer radiation belt and methods of their forecast (review)
Directory of Open Access Journals (Sweden)
Potapov A.S.
2017-03-01
Full Text Available The paper reviews studies of the dynamics of relativistic electrons in the geosynchronous region. It lists the physical processes that lead to the acceleration of electrons filling the outer radiation belt. As one of the space weather factors, high-energy electron fluxes pose a serious threat to the operation of satellite equipment in one of the most populated orbital regions. Necessity is emphasized for efforts to develop methods for forecasting the situation in this part of the magnetosphere, possible predictors are listed, and their classification is given. An example of a predictive model for forecasting relativistic electron flux with a 1–2-day lead time is proposed. Some questions of practical organization of prediction are discussed; the main objectives of short-term, medium-term, and long-term forecasts are listed.
Calculational methods for estimating skin dose from electrons in Co-60 gamma-ray beams
International Nuclear Information System (INIS)
Higgins, P.D.; Sibata, C.H.; Attix, F.H.; Paliwal, B.R.
1983-01-01
Several methods have been employed to calculate the relative contribution to skin dose due to scattered electrons in Co-60 gamma-ray beams. Either the Klein-Nishina differential scattering probability is employed to determine the number and initial energy of electrons scattered into the direction of a detector, or a Gaussian approximation is used to specify the surface distribution of initial pencil electron beams created by parallel or diverging photon fields. Results of these calculations are compared with experimental data. In addition, that fraction of relative surface dose resulting from photon interactions in air alone is estimated and compared with data extrapolated from measurements at large source-surface distance (SSD). The contribution to surface dose from electrons generated in air is 50% or more of the total skin dose for SSDs greater than 80 cm
Electronic Structure Calculation of Permanent Magnets using the KKR Green's Function Method
Doi, Shotaro; Akai, Hisazumi
2014-03-01
Electronic structure and magnetic properties of permanent magnetic materials, especially Nd2Fe14B, are investigated theoretically using the KKR Green's function method. Important physical quantities in magnetism, such as the magnetic moment, Curie temperature, and anisotropy constant, which are obtained from electronic structure calculations in both the atomic-sphere-approximation and full-potential treatments, are compared with past band structure calculations and experiments. The site preference of heavy rare-earth impurities is also evaluated through the calculation of formation energies with the use of the coherent potential approximation. Further, the development of an electronic structure calculation code using the screened KKR for large super-cells, which is aimed at studying the electronic structure of realistic microstructures (e.g. grain boundary phases), is introduced with some test calculations.
Regression Analysis by Example. 5th Edition
Chatterjee, Samprit; Hadi, Ali S.
2012-01-01
Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…
Treatment of liquid separated from sludge by the method using electron beam and ozone in combination
International Nuclear Information System (INIS)
Hosono, Masakazu; Arai, Hidehiko; Aizawa, Masaki; Shimooka, Toshio; Shimizu, Ken; Sugiyama, Masashi.
1995-01-01
Since the liquid separated from sludge in the dehydration or concentration process of sewer sludge contains a considerable amount of organic compounds that are difficult for microorganisms to decompose, it has become difficult to treat by the conventional activated sludge process. When the separated liquid is discharged into closed water areas, higher-quality treatment is required. A method using electron beam irradiation and ozone oxidation in combination for cleaning the liquid separated from sludge was therefore examined, and the results are reported. The water quality of the sample from the sludge treatment plant in A City is shown. The bio-pretreatment method, the combined electron beam and ozone treatment method, and the method of analyzing the water quality are described. The effect of treatment by the activated sludge process and, as the effect of the combined use of electron beam and ozone, the changes in COD and TOC, the change of chromaticity, the change of gel chromatogram, and the reaction mechanism are reported. In this paper, only the basic concept of the model plant for applying the combined electron beam and ozone method to the treatment of the liquid separated from sludge is discussed. (K.I.)
Understanding logistic regression analysis.
Sperandei, Sandro
2014-01-01
Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is that confounding effects are avoided by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After a definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.
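As an illustration of the computation behind such an analysis, logistic regression coefficients can be fitted by Newton-Raphson and exponentiated to give odds ratios. The sketch below uses simulated data with a hypothetical binary exposure and a true odds ratio of e^0.7 (about 2.0); nothing here comes from the article itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
smoking = rng.integers(0, 2, n).astype(float)      # hypothetical exposure
age = rng.normal(50, 10, n)
X = np.column_stack([np.ones(n), smoking, (age - 50) / 10])

# True model: log-odds = -1 + 0.7*smoking + 0.3*age_z
beta_true = np.array([-1.0, 0.7, 0.3])
p = 1 / (1 + np.exp(-X @ beta_true))
y = (rng.random(n) < p).astype(float)

# Fit by Newton-Raphson (iteratively reweighted least squares).
beta = np.zeros(3)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)                   # Bernoulli variance weights
    grad = X.T @ (y - mu)
    hess = X.T @ (X * W[:, None])
    beta = beta + np.linalg.solve(hess, grad)

# Exponentiated coefficients are the adjusted odds ratios.
odds_ratios = np.exp(beta)
print(odds_ratios)
```

Because both covariates enter the model together, the exposure's odds ratio is adjusted for age, which is exactly the confounding-control property the abstract highlights.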
Weisberg, Sanford
2013-01-01
Praise for the Third Edition ""...this is an excellent book which could easily be used as a course text...""-International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus
Hosmer, David W; Sturdivant, Rodney X
2013-01-01
A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-
Survival analysis II: Cox regression
Stel, Vianda S.; Dekker, Friedo W.; Tripepi, Giovanni; Zoccali, Carmine; Jager, Kitty J.
2011-01-01
In contrast to the Kaplan-Meier method, Cox proportional hazards regression can provide an effect estimate by quantifying the difference in survival between patient groups and can adjust for confounding effects of other variables. The purpose of this article is to explain the basic concepts of the
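A sketch of the idea, assuming a single binary covariate and simulated exponential survival times (not the article's own data), fits the Cox partial likelihood by Newton's method:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
z = rng.integers(0, 2, n).astype(float)        # treatment indicator
# Exponential survival with true hazard ratio exp(0.7) for z=1,
# plus independent exponential censoring.
t_event = rng.exponential(1.0 / np.exp(0.7 * z))
t_cens = rng.exponential(2.0, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)

# Sort by time; the risk set at each event time is everyone still at risk.
order = np.argsort(time)
z, event = z[order], event[order]

def score_info(beta):
    w = np.exp(beta * z)
    # Reverse cumulative sums give risk-set totals at each failure time.
    s0 = np.cumsum(w[::-1])[::-1]
    s1 = np.cumsum((w * z)[::-1])[::-1]
    zbar = s1 / s0
    score = np.sum(event * (z - zbar))
    info = np.sum(event * zbar * (1 - zbar))   # valid for binary z
    return score, info

beta = 0.0
for _ in range(10):
    u, i = score_info(beta)
    beta += u / i
print(np.exp(beta))   # estimated hazard ratio
```

The estimate quantifies the survival difference between the groups as a hazard ratio, which is the effect measure the article contrasts with the purely descriptive Kaplan-Meier method.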
Schiewe, M C; Fitz, T A; Brown, J L; Stuart, L D; Wildt, D E
1991-09-01
Ewes were treated with exogenous follicle-stimulating hormone (FSH) and oestrus was synchronized using either a dual prostaglandin F-2 alpha (PGF-2 alpha) injection regimen or pessaries impregnated with medroxyprogesterone acetate (MAP). Naturally cycling ewes served as controls. After oestrus or AI (Day 0), corpora lutea (CL) were enucleated surgically from the left and right ovaries on Days 3 and 6, respectively. The incidence of premature luteolysis was related (P less than 0.05) to PGF-2 alpha treatment and occurred in 7 of 8 ewes compared with 0 of 4 controls and 1 of 8 MAP-exposed females. Sheep with regressing CL had lower circulating and intraluteal progesterone concentrations and fewer total and small dissociated luteal cells on Day 3 than gonadotrophin-treated counterparts with normal CL. Progesterone concentration in the serum and luteal tissue was higher (P less than 0.05) in gonadotrophin-treated ewes with normal CL than in the controls, but luteinizing hormone (LH) receptors/cell were not different on Days 3 and 6. There were no apparent differences in the temporal patterns of circulating oestradiol-17 beta, FSH and LH. High progesterone in gonadotrophin-treated ewes with normal CL coincided with an increase in total luteal mass and numbers of cells, which primarily reflected more small luteal cells than in control ewes. Gonadotrophin-treated ewes with regressing CL on Day 3 tended (P less than 0.10) to have fewer small luteal cells and fewer (P less than 0.05) low-affinity PGF-2 alpha binding sites than sheep with normal CL. By Day 6, luteal integrity and cell viability were absent in ewes with prematurely regressed CL. These data demonstrate that (i) the incidence of premature luteal regression is highly correlated with the use of PGF-2 alpha; (ii) this abnormal luteal tissue is functionally competent for 2-3 days after ovulation, but deteriorates rapidly thereafter and (iii) luteal-dysfunctioning ewes experience a reduction in numbers of
Modeling and Simulation of DC Power Electronics Systems Using Harmonic State Space (HSS) Method
DEFF Research Database (Denmark)
Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth
2015-01-01
For the efficiency and simplicity of electric systems, dc based power electronics systems are widely used in a variety of applications such as electric vehicles, ships, aircrafts and also in homes. In these systems, there could be a number of dynamic interactions between loads and other dc-dc...... Although conventional models are based on state-space averaging and generalized averaging, these also have limitations in reproducing the same results as non-linear time domain simulations. This paper presents a modeling and simulation method for a large dc power electronic system by using Harmonic State Space (HSS) modeling...... Through this method, the required computation time and CPU memory for large dc power electronics systems can be reduced. Besides, the achieved results are the same as those of the non-linear time domain simulation, but with a faster simulation time, which is beneficial in a large network.
Electronic structure prediction via data-mining the empirical pseudopotential method
Energy Technology Data Exchange (ETDEWEB)
Zenasni, H; Aourag, H [LEPM, URMER, Departement of Physics, University Abou Bakr Belkaid, Tlemcen 13000 (Algeria); Broderick, S R; Rajan, K [Department of Materials Science and Engineering, Iowa State University, Ames, Iowa 50011-2230 (United States)
2010-01-15
We introduce a new approach for accelerating the calculation of the electronic structure of new materials by utilizing the empirical pseudopotential method combined with data mining tools. Combining data mining with the empirical pseudopotential method allows us to convert an empirical approach into a predictive approach. Here we consider tetrahedrally bonded III-V Bi semiconductors, and through the prediction of form factors based on basic elemental properties we can model the band structure and charge density of these semiconductors, for which limited results exist. This work represents a unique approach to modeling the electronic structure of a material which may be used to identify new promising semiconductors and is one of the few efforts utilizing data mining at an electronic level. (Abstract Copyright [2010], Wiley Periodicals, Inc.)
Application of the method of continued fractions for electron scattering by linear molecules
International Nuclear Information System (INIS)
Lee, M.-T.; Iga, I.; Fujimoto, M.M.; Lara, O.; Brasilia Univ., DF
1995-01-01
The method of continued fractions (MCF) of Horacek and Sasakawa is adapted for the first time to study low-energy electron scattering by linear molecules. In particular, we have calculated the reactance K-matrices for an electron scattered by the hydrogen molecule and the hydrogen molecular ion, as well as by the polar LiH molecule, at the static-exchange level. For all the applications studied herein, the calculated physical quantities converge rapidly to the correct values, even for a strongly polar molecule such as LiH, and in most cases the convergence is monotonic. Our study suggests that the MCF could be an efficient method for studying electron-molecule scattering and also photoionization of molecules. (Author)
Method for calculating ionic and electronic defect concentrations in Y-stabilised zirconia
Energy Technology Data Exchange (ETDEWEB)
Poulsen, F W [Risoe National Lab., Materials Research Dept., Roskilde (Denmark)
1997-10-01
A numerical (trial and error) method for calculating the concentrations of ions, vacancies and ionic and electronic defects in solids (Brouwer-type diagrams) is presented. No approximations or truncations of the set of equations describing the chemistry of the various defect regions are used. Doped zirconia and doped thoria with simultaneous presence of protonic and electronic defects are taken as examples: 7 concentrations as a function of oxygen partial pressure and/or water vapour partial pressure are determined. Realistic values for the equilibrium constants for equilibration with oxygen gas and water vapour, as well as for the internal equilibrium between holes and electrons, were taken from the literature. The present mathematical method is versatile - it has also been employed by the author to treat more complex systems, such as perovskite-structure oxides with over- and under-stoichiometry in oxygen, cation vacancies and simultaneous presence of protons. (au) 6 refs.
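A toy version of such a trial-and-error solution of the charge-neutrality condition (here by bisection) might look as follows. The defect model, equilibrium constants and acceptor concentration are arbitrary illustrative values, not those of the paper:

```python
import numpy as np

# Toy defect model for an acceptor-doped oxide (all constants are
# arbitrary illustrative values in dimensionless units):
Ki = 1e-12   # intrinsic electronic equilibrium:  n * p = Ki
Kr = 1e-30   # reduction equilibrium:  v * n^2 * pO2^(1/2) = Kr
A  = 1e-3    # fixed (fully ionized) acceptor concentration

def neutrality(n, pO2):
    """Charge neutrality residual p + 2v - n - A; zero at the solution."""
    p = Ki / n
    v = Kr / (n ** 2 * np.sqrt(pO2))
    return p + 2 * v - n - A

def solve_n(pO2, lo=1e-20, hi=1.0):
    # The residual is strictly decreasing in n, so bisect on log(n).
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if neutrality(mid, pO2) > 0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

for pO2 in (1e-20, 1e-10, 1.0):
    n = solve_n(pO2)
    print(f"pO2={pO2:8.0e}  n={n:.3e}  p={Ki / n:.3e}")
```

Sweeping the oxygen partial pressure and plotting the resulting log-concentrations against log(pO2) yields the Brouwer-type diagram, without any of the regime-by-regime approximations the paper's method also avoids.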
A method for ultrashort electron pulse-shape measurement using coherent synchrotron radiation
International Nuclear Information System (INIS)
Geloni, G.; Yurkov, M.V.
2003-03-01
In this paper we discuss a method for nondestructive measurements of the longitudinal profile of sub-picosecond electron bunches for X-ray free electron lasers (XFELs). The method is based on the detection of the coherent synchrotron radiation (CSR) spectrum produced by a bunch passing a dipole magnet system. This work also contains a systematic treatment of synchrotron radiation theory which lies at the basis of CSR. Standard theory of synchrotron radiation uses several approximations whose applicability limits are often forgotten: here we present a systematic discussion about these assumptions. Properties of coherent synchrotron radiation from an electron moving along an arc of a circle are then derived and discussed. We describe also an effective and practical diagnostic technique based on the utilization of an electromagnetic undulator to record the energy of the coherent radiation pulse into the central cone. This measurement must be repeated many times with different undulator resonant frequencies in order to reconstruct the modulus of the bunch form-factor. The retrieval of the bunch profile function from these data is performed by means of deconvolution techniques: for the present work we take advantage of a constrained deconvolution method. We illustrate with numerical examples the potential of the proposed method for electron beam diagnostics at the TESLA test facility (TTF) accelerator. Here we choose, for emphasis, experiments aimed at the measure of the strongly non-Gaussian electron bunch profile in the TTF femtosecond-mode operation. We demonstrate that a tandem combination of a picosecond streak camera and a CSR spectrometer can be used to extract shape information from electron bunches with a narrow leading peak and a long tail. (orig.)
Correlation and simple linear regression.
Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G
2003-06-01
In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
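The two coefficients can be computed directly: Spearman's rho is simply the Pearson coefficient applied to ranks. The sketch below uses simulated monotone but nonlinear data, for which rho exceeds the Pearson coefficient; all data shapes are illustrative, not the article's:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 3, 100)
y = np.exp(x) + 0.1 * rng.normal(size=100)   # monotone but nonlinear

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def spearman(a, b):
    # Spearman rho is the Pearson correlation of the ranks
    # (double argsort gives ranks; fine here, as there are no ties).
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(a), rank(b))

r, rho = pearson(x, y), spearman(x, y)
print(r, rho)
```

The gap between the two values is the practical signal the article discusses: Pearson measures only the linear part of the association, while Spearman captures any monotone relationship.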
Regression filter for signal resolution
International Nuclear Information System (INIS)
Matthes, W.
1975-01-01
The problem considered is that of resolving a measured pulse height spectrum of a material mixture, e.g. a gamma ray spectrum or Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to that of a multiple linear regression. A stepwise linear regression procedure was constructed. The efficiency of this method was then tested by implementing the procedure in a computer programme, which was used to unfold test spectra obtained by mixing some spectra from a library of arbitrarily chosen spectra and adding a noise component. (U.K.)
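The unfolding step reduces to a multiple linear regression of the measured spectrum on the library spectra. A minimal sketch with synthetic Gaussian line shapes (purely illustrative, not the report's spectra or its stepwise procedure) is:

```python
import numpy as np

rng = np.random.default_rng(3)
nchan = 128
chan = np.arange(nchan)

def peak(center, width):
    # Gaussian line shape standing in for a library spectrum.
    return np.exp(-0.5 * ((chan - center) / width) ** 2)

# Library of three constituent spectra (hypothetical shapes).
library = np.column_stack([peak(30, 5), peak(60, 8), peak(95, 4)])

# Measured spectrum: weighted sum of the library plus noise.
w_true = np.array([3.0, 1.5, 2.2])
measured = library @ w_true + 0.02 * rng.normal(size=nchan)

# Multiple linear regression: least-squares estimate of the weights.
w_hat, *_ = np.linalg.lstsq(library, measured, rcond=None)
print(w_hat)
```

The report's stepwise variant would add or remove library spectra one at a time based on their significance; the least-squares core of each step is the same `lstsq` fit shown here.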
Description of the electron-hydrogen collision by the Coulomb Fourier transform method
International Nuclear Information System (INIS)
Levin, S.B.
2005-01-01
A recently developed Coulomb Fourier transform method is applied to a system containing one heavy ion and two electrons. The transformed Hamiltonian is described, with controlled accuracy, in an effective finite basis set as a finite-dimensional operator matrix. The kernels of interaction are formulated in terms of the so-called Nordsieck integrals.
Yazdani, Ali; Ong, N. Phuan; Cava, Robert J.
2016-05-03
An interconnect is disclosed with enhanced immunity of electrical conductivity to defects. The interconnect includes a material with charge carriers having topological surface states. Also disclosed is a method for fabricating such interconnects. Also disclosed is an integrated circuit including such interconnects. Also disclosed is a gated electronic device including a material with charge carriers having topological surface states.
Miniworkshop on Methods of Electronic Structure Calculations and Working Group on Disordered Alloys
Andersen, O K; Mookerjee, A
1994-01-01
Developments in the density functional theory and the methods of electronic structure calculations have made it possible to carry out ab-initio studies of a variety of materials efficiently and at a predictable level. This book covers many of those state-of-the-art developments and their applications to ordered and disordered materials, surfaces and interfaces and clusters, etc.
A new method for detecting the contribution of high Rydberg states to electron-ion recombination
International Nuclear Information System (INIS)
Orban, I; Boehm, S; Fogle, M; Paal, A; Schuch, R
2007-01-01
A position sensitive detector for measuring field ionized electrons in the fringe field of a dipole magnet is presented. The detector provides a means to study, in a state selective fashion, recombination into high Rydberg states and offers a new method to investigate recombination enhancement effects. Several experimental considerations and possibilities are discussed in the text
Energy Technology Data Exchange (ETDEWEB)
Yazdani, Ali; Ong, N. Phuan; Cava, Robert J.
2017-04-04
An interconnect is disclosed with enhanced immunity of electrical conductivity to defects. The interconnect includes a material with charge carriers having topological surface states. Also disclosed is a method for fabricating such interconnects. Also disclosed is an integrated circuit including such interconnects. Also disclosed is a gated electronic device including a material with charge carriers having topological surface states.
Directory of Open Access Journals (Sweden)
Mustafa Kemal BAHAR
2010-06-01
Full Text Available In this study, the effects of an applied electric field on an isolated square quantum well were investigated by analytic and perturbative methods. The energy eigenvalues and wave functions in the quantum well were found by the perturbative method. The electric field effects were then investigated by the analytic method, and the results of the perturbative and analytic methods were compared. The two sets of results agree well, and it was observed that an externally applied electric field significantly changes the electronic properties of the system.
De Almeida, Wagner B.
2000-01-01
The determination of the molecular structure of molecules is of fundamental importance in chemistry. X-ray and electron diffraction methods constitute important tools for the elucidation of the molecular structure of systems in the solid state and gas phase, respectively. The use of quantum mechanical molecular orbital ab initio methods offers an alternative for conformational analysis studies. Comparison between theoretical results and those obtained experimentally in the gas phase can ma...
International Nuclear Information System (INIS)
Dragt, A.J.
1987-01-01
A review is given of elementary Lie algebraic methods for treating Hamiltonian systems. This review is followed by a brief exposition of advanced Lie algebraic methods, including resonance bases and conjugacy theorems. Finally, applications are made to the design of third-order achromats for use in accelerators, to the design of sub-ångström resolution electron microscopes, and to the classification and study of high-order aberrations in light optics. (orig.)
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
International Nuclear Information System (INIS)
Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-01-01
Highlights: • Propose three parallel orbital-updating based plane-wave basis methods for electronic structure calculations. • These new methods avoid generating large-scale eigenvalue problems and thereby reduce the computational cost. • These new methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. • Numerical experiments show that these new methods are reliable and efficient for large-scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in the real-space method, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to the traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.
Inventory of electronic money as method of its control: process approach
Directory of Open Access Journals (Sweden)
A.Р. Semenets
2016-09-01
Full Text Available The extent of legal regulation of the inventory of electronic money in a company is considered. The absence of developed techniques for the valuation of electronic money, as well as for its reflection on the accounts, is detected; this results in distortion of financial statement indicators. The author develops organizational and methodical provisions for the inventory of electronic money, staged so as to ensure the avoidance of misstatements in the financial statements and to provide users with more reliable information about the amount and balances of electronic money held by the company at the balance sheet date. The effects of accounting policies, provisions for the organization of accounting, and job descriptions on the control system for transactions with electronic money, including their inventory, are determined. The author identifies the typical violations that occur when transactions with electronic money are reflected in accounting; their early detection will enable appropriate adjustments to avoid misstatements of the information provided in the company's financial statements.
Method for pulse to pulse dose reproducibility applied to electron linear accelerators
International Nuclear Information System (INIS)
Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.
2002-01-01
An original method for obtaining programmed single beam shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single-pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. The method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. To implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A and 4 ms) and magnetron (45 kV, 100 A, and 4 ms). The existence of the accelerated electron beam is determined by the overlapping of the electron gun and magnetron pulses. The method consists in controlling the overlapping of the pulses in order to deliver the beam in the desired sequence. This control is implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS controls the electron gun and magnetron pulse overlapping and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method
2001-01-01
International Acer Incorporated, Hsin Chu, Taiwan; Aerospace Industrial Development Corporation, Taichung, Taiwan; American Institute of Taiwan, Taipei, Taiwan... Singapore and Malaysia. The largest market for semiconductor products is the high technology consumer electronics industry that consumes up... Singapore, and Malaysia. A new semiconductor facility costs around $3 billion to build and takes about two years to become operational
Logistic regression for dichotomized counts.
Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W
2016-12-01
Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
Saputro, Dewi Retno Sari; Widyaningsih, Purnami
2017-08-01
In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations that is difficult to solve exactly, so an approximate solution is needed. Two popular numerical methods are Newton's method and quasi-Newton (QN) methods. Newton's method requires considerable computation time because it evaluates the Jacobian matrix (derivatives) at each step. QN methods overcome this drawback by replacing explicit derivative computation with direct function evaluations, approximating the Hessian matrix using the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that shares the DFP formula's property of maintaining a positive definite Hessian approximation. Because BFGS requires large memory when executing the program, an algorithm with lower memory usage is needed, namely low-memory BFGS (LBFGS). The purpose of this research is to assess the efficiency of the LBFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. We find that the BFGS and LBFGS methods have arithmetic operation counts of O(n²) and O(nm), respectively.
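The BFGS versus low-memory BFGS trade-off can be tried directly with SciPy's generic optimizer. The sketch below minimizes the Rosenbrock test function rather than a GWOLR likelihood; the starting point and dimension are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize, rosen

# Minimize the Rosenbrock function with full BFGS (dense inverse-Hessian
# approximation, O(n^2) memory) and with L-BFGS-B (stores only the last m
# update pairs, O(n*m) memory), the low-memory variant discussed above.
x0 = np.full(10, 0.8)
res_bfgs = minimize(rosen, x0, method="BFGS")
res_lbfgs = minimize(rosen, x0, method="L-BFGS-B")

print(res_bfgs.x)   # both converge to the minimizer (1, 1, ..., 1)
print(res_lbfgs.x)
```

For a problem of this size the two give indistinguishable answers; the memory saving of L-BFGS only matters when the number of parameters n is large.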
DEFF Research Database (Denmark)
Kwon, Jun Bum; Wang, Xiongfei; Blaabjerg, Frede
2017-01-01
For efficiency and simplicity, dc power electronic systems are widely used in a variety of applications such as electric vehicles, ships, aircraft and homes. In these systems there can be a number of dynamic interactions and frequency couplings between network...... with different switching frequencies, or harmonics from ac-dc converters, means that harmonics and frequency coupling are problems for the ac system and challenges for the dc system. This paper presents a modeling and simulation method for a large dc power electronic system by using Harmonic State Space (HSS) modeling...
International Nuclear Information System (INIS)
Babenkov, M.I.; Zhdanov, V.S.; Ryzhikh, V.Yu.; Chubisov, M.A.
2001-01-01
At the Institute of Nuclear Physics of the National Nuclear Center of the Republic of Kazakhstan, the depth-selective conversion electron Mössbauer spectroscopy (DSCEMS) method was implemented on a facility based on a double-focusing magnetic sector beta-spectrometer equipped with a multi-ribbon non-equipotential electron source and a position-sensitive detector. In this work, model statistical calculations were carried out of the energy and angular distributions of electrons that have undergone only a few inelastic scattering events
METHOD FOR OBSERVATION OF DEEMBEDDED SECTIONS OF FISH GONAD BY SCANNING ELECTRON MICROSCOPY
Institute of Scientific and Technical Information of China (English)
无
2000-01-01
This article reports a method for examining the intracellular structure of fish gonads using a scanning electron microscope (SEM). The specimen preparation procedure is similar to that for transmission electron microscopy: samples cut into semi-thin sections are fixed and embedded in plastic. The embedment matrix is then removed by solvents, and the resin-free specimens can be observed by SEM. The morphology of mature sperm in the gonad was very clear, and the internal structures of oocytes appeared as three-dimensional images. Spheroidal nucleoli, yolk vesicles, and several bundles of filaments adhering to the nucleoli could be viewed by SEM for the first time.
Wu, Madeline; Davidson, Norman
1981-01-01
A transmission electron microscope method for gene mapping by in situ hybridization to Drosophila polytene chromosomes has been developed. As electron-opaque labels, we use colloidal gold spheres having a diameter of 25 nm. The spheres are coated with a layer of protein to which Escherichia coli single-stranded DNA is photochemically crosslinked. Poly(dT) tails are added to the 3' OH ends of these DNA strands, and poly(dA) tails are added to the 3' OH ends of a fragmented cloned Drosophila DN...
Wang, Yu; Chou, Chia-Chun
2018-05-01
The coupled complex quantum Hamilton-Jacobi equations for electronic nonadiabatic transitions are approximately solved by propagating individual quantum trajectories in real space. Equations of motion are derived through use of the derivative propagation method for the complex actions and their spatial derivatives for wave packets moving on each of the coupled electronic potential surfaces. These equations for two surfaces are converted into the moving frame with the same grid point velocities. Excellent wave functions can be obtained by making use of the superposition principle even when nodes develop in wave packet scattering.
Energy Technology Data Exchange (ETDEWEB)
Kudryavtsev, Anatoly A., E-mail: akud@ak2138.spb.edu [St. Petersburg State University, 7-9 Universitetskaya nab., 199034 St. Petersburg (Russian Federation); Stefanova, Margarita S.; Pramatarov, Petko M. [Institute of Solid State Physics, Bulgarian Academy of Sciences, 72 Tzarigradsko Chaussee blvd., 1784 Sofia (Bulgaria)
2015-10-15
The collisional electron spectroscopy (CES) method, which lays the ground for a new field for analytical detection of gas impurities at high pressures, has been verified. The CES method enables the identification of gas impurities in the collisional mode of electron movement, where the advantages of nonlocal formation of the electron energy distribution function (EEDF) are fulfilled. Important features of dc negative glow microplasma and probe method for plasma diagnostics are applied. A new microplasma gas analyzer design is proposed. Admixtures of 0.2% Ar, 0.6% Kr, 0.1% N{sub 2}, and 0.05% CO{sub 2} are used as examples of atomic and molecular impurities to prove the possibility for detecting and identifying their presence in high pressure He plasma (50–250 Torr). The identification of the particles under analysis is made from the measurements of the high energy part of the EEDF, where maxima appear, resulting from the characteristic electrons released in Penning reactions of He metastable atoms with impurity particles. Considerable progress in the development of a novel miniature gas analyzer for chemical sensing in gas phase environments has been made.
Quantile Regression With Measurement Error
Wei, Ying
2009-08-27
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
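For readers unfamiliar with regression quantiles, the estimator being corrected can be sketched by minimizing the check (pinball) loss directly. This is plain, uncorrected quantile regression on error-free simulated data, not the paper's measurement-error method; the data-generating line and noise scale are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated data: true line y = 1 + 2x with symmetric noise.
n = 1000
x = rng.uniform(0, 1, n)
y = 1 + 2 * x + rng.normal(scale=0.2, size=n)
X = np.column_stack([np.ones(n), x])

def pinball(beta, q):
    """Check (pinball) loss whose minimizer is the q-th conditional quantile."""
    r = y - X @ beta
    return np.mean(np.maximum(q * r, (q - 1) * r))

# Fit the median (q = 0.5) and an upper quantile (q = 0.9).
b50 = minimize(lambda b: pinball(b, 0.5), np.zeros(2), method="Nelder-Mead").x
b90 = minimize(lambda b: pinball(b, 0.9), np.zeros(2), method="Nelder-Mead").x
print(b50, b90)  # same slope, higher intercept for the upper quantile
```

When the covariate x is itself observed with error, both fits become biased, which is the problem the joint estimating equations in the paper are designed to fix.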
Directory of Open Access Journals (Sweden)
Nam Lyong Kang
2013-07-01
The projection-reduction method introduced by the present authors is known to give a validated theory for optical transitions in systems of electrons interacting with phonons. In this work, using this method, we derive the linear and first-order nonlinear optical conductivities for an electron-impurity system and examine whether the expressions faithfully satisfy the quantum mechanical philosophy, in the same way as for the electron-phonon systems. The result shows that the Fermi distribution function for electrons, the energy denominators, and the electron-impurity coupling factors are contained properly, in an organized manner, along with the absorption of photons for each electron transition process in the final expressions. Furthermore, the result can be represented by schematic diagrams, as in the formulation of the electron-phonon interaction. We therefore conclude that this method can be applied to modeling optical transitions of electrons interacting with both impurities and phonons.
Study on time of flight property of electron optical systems by differential algebraic method
International Nuclear Information System (INIS)
Cheng Min; Tang Tiantong; Yao Zhenhua
2002-01-01
The differential algebraic method is a powerful and promising technique in computer numerical analysis. When applied to nonlinear dynamical systems, the arbitrary high-order transfer properties of the systems can be computed directly with high precision. In this paper, the principle of differential algebra is applied to study the time-of-flight (TOF) properties of electron optical systems, whose arbitrary-order TOF transfer properties can then be calculated numerically. As an example, the TOF transfer properties of a uniform magnetic sector field analyzer have been studied by the differential algebraic method. Relative errors of the first-order and second-order TOF transfer coefficients of the magnetic sector field analyzer are of the order 10⁻¹¹ or smaller compared with the analytic solutions. It is shown that the differential algebraic TOF method is of high accuracy and very helpful for high-order TOF transfer property analysis of electron optical systems. (author)
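The core idea behind differential algebra, propagating truncated power series through a map so that derivatives come out exact to machine precision, can be illustrated at first order with dual numbers. This is a toy sketch of the principle only, not the arbitrary-order machinery or the electron-optics maps used in the paper.

```python
import math

class Dual:
    """Minimal first-order dual number: value + eps * derivative.
    Differential algebraic methods generalize this to arbitrary order."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule carried along with the value.
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def sin(d):
    # Chain rule for sin applied to a dual number.
    return Dual(math.sin(d.val), math.cos(d.val) * d.der)

# Example map f(x) = x * sin(x): seed x = 1.0 with unit derivative and
# obtain f(1) and f'(1) = sin(1) + cos(1) in a single evaluation.
x = Dual(1.0, 1.0)
f = x * sin(x)
print(f.val, f.der)
```

A differential algebra package extends exactly this bookkeeping to high orders and many variables, which is what makes the arbitrary-order TOF coefficients in the abstract computable directly.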
Kerosuo, E; Kolehmainen, L
1982-01-01
The susceptibility of a tooth to dental caries has been proposed to depend on tooth color. So far, however, there has been no reliable method for tooth color determination. The aims of this study were to evaluate the reliability of an opto-electronic method and to examine the relationship between tooth color and past caries experience. The color of the upper right central incisors of 64 schoolchildren was determined using an opto-electronic tri-stimulus color comparator. The intra- and inter-examiner reliability of the method was evaluated in vitro and in vivo, being 85% and 83%, respectively. To assess past caries experience, the DMFS index was calculated. Oral hygiene and dietary habits were also assessed. No significant difference in DMFS scores was found between the 'white teeth' group and the 'yellow teeth' group. The conclusion is that the practical importance of possible color-related differences in caries resistance is negligible due to the multifaceted nature of dental caries.
Application of the Green's function method to some nonlinear problems of an electron storage ring
International Nuclear Information System (INIS)
Kheifets, S.
1984-01-01
One of the most important characteristics of an electron storage ring is the size of the beam. However analytical calculations of beam size are beset with problems and the computational methods and programs which are used to overcome these are inadequate for all problems in which stochastic noise is an essential part. Two examples are, for an electron storage ring, beam-size evaluation including beam-beam interactions, and finding the beam size for a nonlinear machine. The method described should overcome some of the problems. It uses the Green's function method applied to the Fokker-Planck equation governing the distribution function in the phase space of particle motion. The new step is to consider the particle motion in two degrees of freedom rather than in one dimension. The technique is described fully and is then applied to a strong-focusing machine. (U.K.)
International Nuclear Information System (INIS)
Adrich, Przemysław
2016-01-01
In Part I of this work existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor intensive task as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry and using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automatized scan of the system performance in function of parameters of the foils. The new method, while being computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real life design problem, as described in Part II of this work.
Energy Technology Data Exchange (ETDEWEB)
Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)
2015-09-15
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
Multicollinearity and Regression Analysis
Daoud, Jamal I.
2017-12-01
In regression analysis it is expected that the response is correlated with the predictor(s), but correlation among the predictors themselves is undesirable. The number of predictors included in a regression model depends on many factors, among them historical data and experience; in the end, the selection of the most important predictors is somewhat subjective, being left to the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; when this happens, the standard errors of the coefficients increase [8]. Inflated standard errors mean that the coefficients for some or all independent variables may be found not to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its causes, and its consequences for the reliability of the regression model.
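A standard diagnostic for the problem described above is the variance inflation factor (VIF), the factor by which a coefficient's variance is inflated by collinearity. The NumPy sketch below is illustrative: the simulated predictors and the usual rule-of-thumb threshold (VIF above about 10 signals trouble) are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three predictors; x3 is nearly a copy of x1, creating multicollinearity.
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = x1 + rng.normal(scale=0.05, size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """Variance inflation factor: 1 / (1 - R^2) from regressing
    column j on the remaining columns (with intercept)."""
    y = X[:, j]
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
print(vifs)  # VIFs for x1 and x3 are large; x2 stays near 1
```

The inflated VIFs for x1 and x3 correspond exactly to the inflated standard errors the abstract warns about: either predictor alone would look significant, but together each can mask the other.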
Hershkowitz, Noah [Madison, WI; Longmier, Benjamin [Madison, WI; Baalrud, Scott [Madison, WI
2009-03-03
An electron generating device extracts electrons, through an electron sheath, from plasma produced using RF fields. The electron sheath is located near a grounded ring at one end of a negatively biased conducting surface, which is normally a cylinder. Extracted electrons pass through the grounded ring in the presence of a steady state axial magnetic field. Sufficiently large magnetic fields and/or RF power into the plasma allow for helicon plasma generation. The ion loss area is sufficiently large compared to the electron loss area to allow for total non-ambipolar extraction of all electrons leaving the plasma. Voids in the negatively-biased conducting surface allow the time-varying magnetic fields provided by the antenna to inductively couple to the plasma within the conducting surface. The conducting surface acts as a Faraday shield, which reduces any time-varying electric fields from entering the conductive surface, i.e. blocks capacitive coupling between the antenna and the plasma.
Free Electron Laser Induced Forward Transfer Method of Biomaterial for Marking
Suzuki, Kaoru
Biomaterials such as chitosan and polylactic acid containing a fluorescent agent were deposited onto hard biological tissue, such as teeth or the fingernails of dogs or cats, or onto sapphire substrates, by the free electron laser induced forward transfer method for direct-write marking. A spin-coated biomaterial target on a sapphire plate, containing the fluorescent agent rhodamine 6G or zinc phthalocyanine, was ablated by a free electron laser (resonance absorption wavelength of the biomaterial: 3380 nm). The influence of the spin-coating film-forming temperature on the hardness and adhesion strength of the biomaterial is studied in particular. The effect of resonant excitation of the biomaterial target, achieved by tuning the free electron laser, on damage to the biomaterial (rhodamine 6G or zinc phthalocyanine) for direct-write marking is discussed
1984-01-01
That there have been remarkable advances in the field of molecular electronic structure during the last decade is clear not only to those working in the field but also to anyone else who has used quantum chemical results to guide their own investigations. The progress in calculating the electronic structures of molecules has occurred through the truly ingenious theoretical and methodological developments that have made computationally tractable the underlying physics of electron distributions around a collection of nuclei. At the same time there has been considerable benefit from the great advances in computer technology. The growing sophistication, declining costs and increasing accessibility of computers have let theorists apply their methods to problems in virtually all areas of molecular science. Consequently, each year witnesses calculations on larger molecules than in the year before and calculations with greater accuracy and more complete information on molecular properties. We can surel...
Cameron, Isobel M; Scott, Neil W; Adler, Mats; Reid, Ian C
2014-12-01
It is important for clinical practice and research that measurement scales of well-being and quality of life exhibit only minimal differential item functioning (DIF). DIF occurs where different groups of people endorse items in a scale to different extents after being matched by the intended scale attribute. We investigate the equivalence or otherwise of common methods of assessing DIF. Three methods of measuring age- and sex-related DIF (ordinal logistic regression, Rasch analysis and the Mantel χ² procedure) were applied to Hospital Anxiety Depression Scale (HADS) data pertaining to a sample of 1,068 patients consulting primary care practitioners. Three items were flagged by all three approaches as having either age- or sex-related DIF with a consistent direction of effect; a further three items identified did not meet stricter criteria for important DIF using at least one method. When applying strict criteria for significant DIF, ordinal logistic regression was slightly less sensitive. Ordinal logistic regression, Rasch analysis and contingency table methods yielded consistent results when identifying DIF in the HADS depression and HADS anxiety scales. Regardless of the methods applied, investigators should use a combination of statistical significance, magnitude of the DIF effect and investigator judgement when interpreting the results.
DEFF Research Database (Denmark)
Bache, Stefan Holst
A new and alternative quantile regression estimator is developed and it is shown that the estimator is root n-consistent and asymptotically normal. The estimator is based on a minimax ‘deviance function’ and has asymptotically equivalent properties to the usual quantile regression estimator. It is......, however, a different and therefore new estimator. It allows for both linear- and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice but whether it has theoretical justification is still an open question....
DEFF Research Database (Denmark)
Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas
2017-01-01
In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface...... for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by...... functionals. The software presented here is implemented in the riskRegression package....
Second-principles method for materials simulations including electron and lattice degrees of freedom
García-Fernández, Pablo; Wojdeł, Jacek C.; Íñiguez, Jorge; Junquera, Javier
2016-05-01
We present a first-principles-based (second-principles) scheme that permits large-scale materials simulations including both atomic and electronic degrees of freedom on the same footing. The method is based on a predictive quantum-mechanical theory—e.g., density functional theory—and its accuracy can be systematically improved at a very modest computational cost. Our approach is based on dividing the electron density of the system into a reference part—typically corresponding to the system's neutral, geometry-dependent ground state—and a deformation part—defined as the difference between the actual and reference densities. We then take advantage of the fact that the bulk part of the system's energy depends on the reference density alone; this part can be efficiently and accurately described by a force field, thus avoiding explicit consideration of the electrons. Then, the effects associated to the difference density can be treated perturbatively with good precision by working in a suitably chosen Wannier function basis. Further, the electronic model can be restricted to the bands of interest. All these features combined yield a very flexible and computationally very efficient scheme. Here we present the basic formulation of this approach, as well as a practical strategy to compute model parameters for realistic materials. We illustrate the accuracy and scope of the proposed method with two case studies, namely, the relative stability of various spin arrangements in NiO (featuring complex magnetic interactions in a strongly-correlated oxide) and the formation of a two-dimensional electron gas at the interface between band insulators LaAlO3 and SrTiO3 (featuring subtle electron-lattice couplings and screening effects). We conclude by discussing ways to overcome the limitations of the present approach (most notably, the assumption of a fixed bonding topology), as well as its many envisioned possibilities and future extensions.
Regression with Sparse Approximations of Data
DEFF Research Database (Denmark)
Noorzad, Pardis; Sturm, Bob L.
2012-01-01
We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected...... by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based......
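The baseline that SPARROW generalizes, k-nearest-neighbors regression, is easy to sketch; SPARROW itself replaces the fixed-k neighborhood below with a sparse approximation of the query point, which is not reproduced here. The data and k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy samples of a smooth function; the local estimator averages
# the regressands of the k regressors closest to the query point.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

def knn_predict(x0, k=10):
    d = np.abs(X[:, 0] - x0)
    idx = np.argsort(d)[:k]   # indices of the k nearest regressors
    return y[idx].mean()      # local estimate: average their regressands

print(knn_predict(0.0), knn_predict(np.pi / 2))
```

The limitation SPARROW addresses is visible in the fixed `k=10`: a sparse approximation of the query point selects however many regressors it needs, rather than always exactly k.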
Discrimination of Rice with Different Pretreatment Methods by Using a Voltammetric Electronic Tongue
Directory of Open Access Journals (Sweden)
Li Wang
2015-07-01
In this study, the application of a voltammetric electronic tongue to the discrimination and prediction of different varieties of rice was investigated. Different pretreatment methods were selected and subsequently used for the discrimination of different varieties of rice and the prediction of unknown rice samples. To this end, a voltammetric array of sensors based on metallic electrodes was used as the sensing part. The samples were analyzed by cyclic voltammetry with two sample-pretreatment methods. Discriminant factorial analysis was used to visualize the different categories of rice samples, while a radial basis function (RBF) artificial neural network with leave-one-out cross-validation was employed for prediction modeling. The collected signal data were first compressed using the fast Fourier transform (FFT), and significant features were then extracted from the voltammetric signals. The experimental results indicated that sample solutions obtained with the non-crushed pretreatment method were sufficient for effective discrimination and recognition. Satisfactory prediction results for the voltammetric electronic tongue based on the RBF artificial neural network were obtained with less than five-fold dilution of the sample solution. The main objective of this study was to provide primary research on the application of an electronic tongue system to the discrimination and prediction of solid foods and to provide an objective assessment tool for the food industry.
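The FFT-compression step described above can be sketched in a few lines of NumPy: transform the signal and keep only the lowest-frequency coefficients as features. The synthetic two-tone signal and the number of retained coefficients are illustrative assumptions, not the paper's voltammetric data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for a voltammetric sweep: two low-frequency tones plus noise.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
signal = signal + rng.normal(scale=0.05, size=t.size)

# Compress with the real FFT, keeping the 16 lowest-frequency coefficients
# as the feature vector fed to the classifier.
spectrum = np.fft.rfft(signal)
k = 16
features = spectrum[:k]

# Reconstruct from the compressed features to check how much survives.
recon = np.fft.irfft(
    np.concatenate([features, np.zeros(len(spectrum) - k)]), n=signal.size)
err = np.sqrt(np.mean((signal - recon) ** 2))
print(err)  # small: most of the signal energy lives in the low-frequency bins
```

In the paper's pipeline, the `features` vector (rather than the raw sweep) would be the input to discriminant factorial analysis or the RBF network, which is why the compression step matters.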