Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Correlation and simple linear regression.
Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G
2003-06-01
In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
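To illustrate the distinction the authors draw, here is a minimal sketch (hypothetical data, plain Python) comparing the two coefficients on a monotone but nonlinear relationship:

```python
# Pearson vs Spearman on a monotone but nonlinear relationship.
# Data are hypothetical; the rank helper assumes no ties.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the ranks.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [v ** 3 for v in x]          # nonlinear but strictly increasing

r_pearson = pearson(x, y)        # below 1: the relationship is not linear
rho_spearman = spearman(x, y)    # exactly 1: the monotone association is perfect
```

The gap between the two values is the point of the comparison: Spearman's rho responds to any monotone association, Pearson's r only to linear association.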
A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants
Cooper, Paul D.
2010-01-01
A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
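A minimal numerical sketch of the least-squares mechanics reviewed here (hypothetical data; the closed-form slope and intercept, plus the residual properties they imply):

```python
# Ordinary least squares for a single predictor:
# b1 = Sxy / Sxx, b0 = ybar - b1 * xbar.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [3.1, 4.9, 7.2, 8.8, 11.0]   # roughly y = 1 + 2x

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((a - xbar) ** 2 for a in x)
sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))

b1 = sxy / sxx
b0 = ybar - b1 * xbar
residuals = [b - (b0 + b1 * a) for a, b in zip(x, y)]

# Two defining properties of the least-squares line:
sum_res = sum(residuals)                              # residuals sum to ~0
sum_res_x = sum(r * a for r, a in zip(residuals, x))  # residuals uncorrelated with x
```

The two near-zero sums are the normal equations in disguise; they are what "least squares" buys.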
Alternative Methods of Regression
Birkes, David
2011-01-01
Of related interest: Nonlinear Regression Analysis and its Applications, Douglas M. Bates and Donald G. Watts. "…an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models…highly recommend[ed]…for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics. This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis, with all data sets.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-02-01
A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
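A sketch of the Bland-Altman computation described above, on hypothetical paired assay values: the bias is the mean of the paired differences, and the 95% limits of agreement sit at ±1.96 SD around it.

```python
import math

# Paired quantitative outputs from two assays (hypothetical values).
assay_a = [10.0, 12.0, 15.0, 20.0, 25.0, 30.0]
assay_b = [10.4, 12.6, 15.5, 20.3, 25.6, 30.6]  # roughly a constant +0.5 shift

diffs = [b - a for a, b in zip(assay_a, assay_b)]
n = len(diffs)
bias = sum(diffs) / n                                   # constant (systematic) error
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement
```

A constant error shows up as a nonzero bias; a proportional error would show up as a trend in the differences across the measurement range, which is why the plot (differences against means) accompanies these numbers in practice.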
Neutrosophic Correlation and Simple Linear Regression
Directory of Open Access Journals (Sweden)
A. A. Salama
2014-09-01
Since the world is full of indeterminacy, the neutrosophics found their place in contemporary research. The fundamental concept of the neutrosophic set was introduced by Smarandache. Recently, Salama et al. introduced the concept of the correlation coefficient of neutrosophic data. In this paper, we introduce and study the concepts of correlation and the correlation coefficient of neutrosophic data in probability spaces and study some of their properties. We also introduce and study the neutrosophic simple linear regression model. Possible applications to data processing are touched upon.
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.
2016-01-04
This report documents the development of statistical tools used to quantify the hazard presented by the response of sea-level elevation to natural or anthropogenic changes in climate and ocean circulation. A hazard is a physical process (or processes) that, when combined with vulnerability (or susceptibility to the hazard), results in risk. This study presents the development and comparison of new and existing sea-level analysis methods, exploration of the strengths and weaknesses of the methods using synthetic time series, and when appropriate, synthesis of the application of the method to observed sea-level time series. These reports are intended to enhance material presented in peer-reviewed journal articles where it is not always possible to provide the level of detail that might be necessary to fully support or recreate published results.
DEFF Research Database (Denmark)
Fitzenberger, Bernd; Wilke, Ralf Andreas
2015-01-01
Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression.
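The principle can be illustrated with the check (pinball) loss that underlies quantile regression: the constant that minimizes the check loss at level τ is the sample τ-quantile. A sketch with hypothetical data:

```python
def check_loss(tau, y, q):
    """Sum of the quantile-regression check loss rho_tau(y_i - q)."""
    total = 0.0
    for v in y:
        e = v - q
        total += tau * e if e >= 0 else (tau - 1) * e
    return total

y = [1.0, 2.0, 3.0, 10.0, 50.0]   # a skewed sample

# Minimizing over the data points recovers the empirical quantiles.
best_median = min(y, key=lambda q: check_loss(0.5, y, q))   # the median
best_q90 = min(y, key=lambda q: check_loss(0.9, y, q))      # an upper quantile
```

Replacing the constant q with a linear function of covariates, and minimizing the same loss, gives linear quantile regression; different τ values then trace out different parts of the conditional distribution.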
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
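The mapping described above (a slope β and covariate SD translated into a two-group mean difference of β·2·SD(x)) can be sketched for the simplest, normal-outcome case. All numbers below are hypothetical, and the sample-size formula is the standard two-sample normal approximation rather than the paper's logistic/Cox extension:

```python
from statistics import NormalDist

def two_sample_n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample mean comparison."""
    z = NormalDist().inv_cdf
    z_alpha, z_beta = z(1 - alpha / 2), z(power)
    return 2.0 * ((z_alpha + z_beta) * sigma / delta) ** 2

slope = 0.5      # hypothetical regression slope
sd_x = 1.2       # hypothetical SD of the independent variable
sigma = 2.0      # hypothetical residual SD of the outcome

# Equivalent two-sample problem: two equally sized groups whose means
# differ by the slope times twice the SD of the covariate.
delta = slope * 2.0 * sd_x
n_per_group = two_sample_n_per_group(delta, sigma)
```

The appeal of the identification is exactly this: once δ is in hand, the familiar two-sample machinery answers the regression power question.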
Regression methods for medical research
Tai, Bee Choo
2013-01-01
Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to answer these research questions appropriately.
Teaching the Concept of Breakdown Point in Simple Linear Regression.
Chan, Wai-Sum
2001-01-01
Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
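The rank-based inverse normal transformation used for one of the simulated traits can be sketched as follows. Blom-type offsets are assumed here, which is one common choice; the data are hypothetical and the helper assumes no ties:

```python
from statistics import NormalDist

def rank_inverse_normal(values, c=0.375):
    """Rank-based inverse normal transform with Blom-type offset c (no ties assumed)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    inv = NormalDist().inv_cdf
    out = [0.0] * n
    # Map rank r to the normal quantile at (r - c) / (n - 2c + 1).
    for rank, i in enumerate(order, start=1):
        out[i] = inv((rank - c) / (n - 2 * c + 1))
    return out

skewed = [0.2, 0.5, 1.1, 2.9, 40.0]       # a heavily right-skewed trait
transformed = rank_inverse_normal(skewed)  # symmetric, approximately normal scores
```

The transform preserves the ordering of the observations while forcing the marginal distribution toward normality, which is why it is a standard device in studies of type I error for non-normal traits.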
Simple and multiple linear regression: sample size considerations.
Hanley, James A
2016-11-01
The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
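One of the closed-form variance formulae alluded to, the standard error of the estimated mean Y at a covariate profile x0 in simple linear regression, can be written out directly (a sketch; design points and residual SD are hypothetical):

```python
import math

def se_mean_response(x, x0, sigma):
    """SE of the fitted mean at x0: sigma * sqrt(1/n + (x0 - xbar)^2 / Sxx)."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((v - xbar) ** 2 for v in x)
    return sigma * math.sqrt(1.0 / n + (x0 - xbar) ** 2 / sxx)

x = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical design points
sigma = 1.5                      # hypothetical residual SD

se_center = se_mean_response(x, 3.0, sigma)   # at xbar this reduces to sigma / sqrt(n)
se_edge = se_mean_response(x, 5.0, sigma)     # precision degrades away from xbar
```

The formula makes the article's contrast concrete: etiological work cares about the slope's precision, while profile-based prediction must budget for the (x0 - x̄)² term, which grows for profiles far from the bulk of the data.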
Characteristics and Properties of a Simple Linear Regression Model
Directory of Open Access Journals (Sweden)
Kowal Robert
2016-12-01
A simple linear regression model is one of the pillars of classic econometrics. Despite the passage of time, it continues to raise interest both from the theoretical side and from the application side. One of the many fundamental questions about the model concerns determining its derived characteristics and studying their properties; this paper addresses the first of these aspects. The literature provides several classic solutions in that regard. In the paper, a completely new approach is proposed, based on the direct application of variance and its properties resulting from the non-correlation of certain estimators with the mean; within this framework some fundamental dependencies of the model characteristics are obtained in a much more compact manner. The apparatus allows for a simple, uniform and intuitive demonstration of multiple dependencies and fundamental properties of the model. The results were obtained in a classic, traditional area where everything, as it might seem, has already been thoroughly studied and discovered.
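The variance-based apparatus referred to rests on identities such as the decomposition SST = SSR + SSE, which follows from the non-correlation of the residuals with the fitted values. A quick numerical check on hypothetical data:

```python
# Verify SST = SSR + SSE for an ordinary least-squares fit.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.2, 2.9, 4.1, 4.8, 6.3, 6.9]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / sum((a - xbar) ** 2 for a in x)
b0 = ybar - b1 * xbar
fitted = [b0 + b1 * a for a in x]

sst = sum((b - ybar) ** 2 for b in y)               # total variation
ssr = sum((f - ybar) ** 2 for f in fitted)          # explained variation
sse = sum((b - f) ** 2 for b, f in zip(y, fitted))  # residual variation
```

The identity holds exactly (up to floating-point error) for any least-squares fit, because the cross term between residuals and centered fitted values vanishes.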
Linear regression methods according to objective functions
Yasemin Sisman; Sebahattin Bektas
2012-01-01
The aim of the study is to explain the parameter estimation methods and the regression analysis. The simple linear regression methods grouped according to the objective function are introduced. The numerical solution is achieved for the simple linear regression methods according to the objective functions of the Least Squares and the Least Absolute Value adjustment methods. The success of the applied methods is analyzed using their objective function values.
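The contrast between the two objective functions can be made concrete: with an outlier present, the least-squares line minimizes the squared-error objective but does badly on the absolute-error objective, which a robust candidate line wins. A sketch on hypothetical data:

```python
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 1.0, 2.0, 3.0, 4.0, 100.0]   # y = x except for one gross outlier

def sse(b0, b1):
    """Least Squares objective."""
    return sum((v - (b0 + b1 * u)) ** 2 for u, v in zip(x, y))

def sae(b0, b1):
    """Least Absolute Value objective."""
    return sum(abs(v - (b0 + b1 * u)) for u, v in zip(x, y))

# Least-squares fit (closed form): dragged toward the outlier.
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1_ls = sum((u - xbar) * (v - ybar) for u, v in zip(x, y)) / sum((u - xbar) ** 2 for u in x)
b0_ls = ybar - b1_ls * xbar

# Robust candidate: the line through the five clean points.
b0_r, b1_r = 0.0, 1.0

ls_wins_l2 = sse(b0_ls, b1_ls) <= sse(b0_r, b1_r)     # true by construction
robust_wins_l1 = sae(b0_r, b1_r) < sae(b0_ls, b1_ls)  # the L1 objective prefers it
```

Each method is "best" under its own objective function, which is precisely the grouping principle the abstract describes.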
A simple method for α determination
International Nuclear Information System (INIS)
Ho Manh Dung; Seung Yeon Cho
2003-01-01
The α term is a primary parameter that is used to indicate the deviation of the epithermal neutron distribution in the k0-standardization method of neutron activation analysis, k0-NAA. The calculation of α using a mathematical procedure is a challenge for some researchers. The calculation of α by the 'bare-triple monitor' method is possible using the dedicated commercial software KAYZERO®/SOLCOI®. However, when this software is not available in the laboratory it is possible to carry out the calculation of α by applying a simple iterative linear regression using any spreadsheet. This approach is described. The experimental data used in the example were obtained by the irradiation of a set of suitable monitors in the NAA no.1 irradiation channel of the HANARO research reactor (KAERI, Korea). The results obtained by this iterative linear regression method agree well with the results calculated by the validated mathematical method. (author)
Regression modeling methods, theory, and computation with SAS
Panik, Michael
2009-01-01
Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
Stochastic development regression using method of moments
DEFF Research Database (Denmark)
Kühnel, Line; Sommer, Stefan Horst
2017-01-01
This paper considers the estimation problem arising when inferring parameters in the stochastic development regression model for manifold-valued non-linear data. Stochastic development regression captures the relation between manifold-valued response and Euclidean covariate variables using the stochastic development construction. It is thereby able to incorporate several covariate variables and random effects. The model is intrinsically defined using the connection of the manifold, and the use of stochastic development avoids linearizing the geometry. We propose to infer parameters using the Method of Moments procedure that matches known constraints on moments of the observations conditional on the latent variables. The performance of the model is investigated in a simulation example using data on finite dimensional landmark manifolds.
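As a generic illustration of the Method of Moments step (not the manifold-valued setting of the paper), matching the first two sample moments of a gamma distribution gives closed-form estimates; the identities asserted below hold by construction:

```python
# Method of moments for a gamma(shape k, scale theta) model:
# mean = k * theta and variance = k * theta^2, so
# k = mean^2 / var and theta = var / mean.
sample = [1.2, 0.7, 2.5, 3.1, 0.9, 1.8, 2.2, 1.4]  # hypothetical observations

n = len(sample)
mean = sum(sample) / n
var = sum((v - mean) ** 2 for v in sample) / n      # second central moment

k_hat = mean ** 2 / var
theta_hat = var / mean
```

The same logic drives the paper's estimator: equate theoretical moments (here of the gamma law, there of the observations conditional on the latent variables) to their empirical counterparts and solve for the parameters.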
Methods of Detecting Outliers in A Regression Analysis Model ...
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
2013-06-01
Substoichiometric method in the simple radiometric analysis
International Nuclear Information System (INIS)
Ikeda, N.; Noguchi, K.
1979-01-01
The substoichiometric method is applied to simple radiometric analysis. Two methods - the standard reagent method and the standard sample method - are proposed. The validity of the principle of the methods is verified experimentally in the determination of silver by the precipitation method, or of zinc by the ion-exchange or solvent-extraction method. The proposed methods are simple and rapid compared with the conventional superstoichiometric method. (author)
Wrist arthrography: a simple method
Energy Technology Data Exchange (ETDEWEB)
Berna-Serna, Juan D.; Reus, Manuel; Alonso, Jose [Virgen de la Arrixaca University Hospital, Department of Radiology, El Palmar (Murcia) (Spain); Martinez, Francisco; Domenech-Ratto, Gines [University of Murcia, Department of Human Anatomy, Faculty of Medicine, Murcia (Spain)
2006-02-01
A technique of wrist arthrography is presented using an adhesive marker-plate with radiopaque coordinates to identify precisely sites for puncture arthrography of the wrist and to obviate the need for fluoroscopic guidance. Radiocarpal joint arthrography was performed successfully in all 24 cases, 14 in the cadaveric wrists and 10 in the live patients. The arthrographic procedure described in this study is simple, safe, and rapid, and has the advantage of precise localisation of the site for puncture without need for fluoroscopic guidance. (orig.)
Methadone radioimmunoassay: two simple methods
International Nuclear Information System (INIS)
Robinson, K.; Smith, R.N.
1983-01-01
Two simple and economical radioimmunoassays for methadone in blood or urine are described. Haemolysis, decomposition, common anticoagulants and sodium fluoride do not affect the results. One assay uses commercially-available [1-³H](−)-methadone hydrobromide as the label, while the other uses a radioiodinated conjugate of 4-dimethylamino-2,2-diphenylpentanoic acid and L-tyrosine methyl ester. A commercially-available antiserum is used in both assays. Normethadone and α-methadol cross-react to a small extent with the antiserum while methadone metabolites, dextropropoxyphene, dipipanone and phenadoxone have negligible cross-reactivities. The 'cut-offs' of the two assays as described are 30 and 33 ng ml⁻¹ for blood, and 24 and 21 ng ml⁻¹ for urine. The assay using the radioiodinated conjugate can be made more sensitive if required by increasing the specific activity of the label. (author)
Method for nonlinear exponential regression analysis
Junkin, B. G.
1972-01-01
Two computer programs developed according to two general types of exponential models for conducting nonlinear exponential regression analysis are described. Least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. Program is written in FORTRAN 5 for the Univac 1108 computer.
Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model
DEFF Research Database (Denmark)
Møller, Niels Framroze
This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its structure can be formulated within the CVAR framework. It is demonstrated how other controversial hypotheses such as Rational Expectations can be formulated directly as restrictions on the CVAR parameters. A simple example of a "Neoclassical synthetic" AS-AD model is also formulated. Finally, the partial vs. general equilibrium distinction is related to the CVAR as well. Further fundamental extensions and advances to more sophisticated theory models, such as those related to dynamics and expectations (in the structural relations), are left for future papers.
A Simple Preparation Method for Diphosphoimidazole
DEFF Research Database (Denmark)
Rosenberg, T.
1964-01-01
A simple method for the preparation of diphosphoimidazole is presented that involves direct phosphorylation of imidazole by phosphorus oxychloride in alkaline aqueous solution. Details are given on the use of diphosphoimidazole in preparing sodium phosphoramidate and certain phosphorylated amino...
A method for nonlinear exponential regression analysis
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
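The procedure described, a linear fit for initial nominal estimates followed by Taylor-series (Gauss-Newton) correction steps, can be sketched for the single-exponential model y = a·exp(b·t). The data below are synthetic and noiseless, and this is a plain-Python sketch, not the original FORTRAN implementation:

```python
import math

# Synthetic decay data from y = a * exp(b * t) with a = 2.0, b = -0.5.
t = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y = [2.0 * math.exp(-0.5 * ti) for ti in t]

# Step 1: initial nominal estimates from a linear fit of log(y) on t.
logy = [math.log(v) for v in y]
n = len(t)
tbar, lbar = sum(t) / n, sum(logy) / n
b = sum((u - tbar) * (c - lbar) for u, c in zip(t, logy)) / sum((u - tbar) ** 2 for u in t)
a = math.exp(lbar - b * tbar)

# Step 2: Gauss-Newton corrections (Taylor-series linearization of the model).
for _ in range(10):
    resid = [v - a * math.exp(b * ti) for ti, v in zip(t, y)]
    ja = [math.exp(b * ti) for ti in t]            # partial derivative wrt a
    jb = [a * ti * math.exp(b * ti) for ti in t]   # partial derivative wrt b
    # Normal equations (2x2 system), solved by Cramer's rule.
    aa = sum(u * u for u in ja)
    ab = sum(u * v for u, v in zip(ja, jb))
    bb = sum(u * u for u in jb)
    ra = sum(u * r for u, r in zip(ja, resid))
    rb = sum(u * r for u, r in zip(jb, resid))
    det = aa * bb - ab * ab
    a += (ra * bb - ab * rb) / det   # correction applied to the nominal estimate
    b += (aa * rb - ab * ra) / det
```

On noiseless data the log-linear step already lands on the true parameters, so the correction loop converges immediately; with noisy data the same loop iterates until a stopping criterion is met, as the abstract describes.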
Herminiati, A.; Rahman, T.; Turmala, E.; Fitriany, C. G.
2017-12-01
The purpose of this study was to determine the correlation between different concentrations of modified cassava flour and the properties of banana fritter flour. The research method consisted of two stages: (1) determining the different types of flour: cassava flour, modified cassava flour-A (using the lactic acid bacteria method), and modified cassava flour-B (using the autoclaving-cooling cycle method), which were then subjected to organoleptic testing and physicochemical analysis; and (2) determining the correlation of the concentration of modified cassava flour for banana fritter flour, using simple linear regression. The factors were different concentrations of modified cassava flour-B: (y1) 40%, (y2) 50%, and (y3) 60%. The responses in the study included physical analysis (whiteness of flour, water holding capacity-WHC, oil holding capacity-OHC), chemical analysis (moisture content, ash content, crude fiber content, starch content), and organoleptic properties (color, aroma, taste, texture). The results showed that the type of flour selected from the organoleptic test was modified cassava flour-B. Analysis of the modified cassava flour-B components gave whiteness of flour 60.42%; WHC 41.17%; OHC 21.15%; moisture content 4.4%; ash content 1.75%; crude fiber content 1.86%; starch content 67.31%. The different concentrations of modified cassava flour-B were correlated with the whiteness of flour, WHC, OHC, moisture content, ash content, crude fiber content, and starch content, but did not affect the color, aroma, taste, and texture.
Simple Calculation Programs for Biology Immunological Methods
Indian Academy of Sciences (India)
Simple Calculation Programs for Biology Immunological Methods. Computation of Ab/Ag Concentration from ELISA data. Graphical Method; Raghava et al., 1992, J. Immuno. Methods 153: 263. Determination of affinity of Monoclonal Antibody. Using non-competitive ...
Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C
2011-09-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
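The bias can be demonstrated directly by simulation: regressing the trait on genotype within a two-tail extreme-selected sample inflates the slope relative to the full-sample estimate. This is a sketch under assumed parameter values, showing the bias the authors set out to correct, not the correction itself:

```python
import random

random.seed(7)
beta = 0.3          # assumed true genetic effect
maf = 0.3           # assumed minor allele frequency
n = 20000

# Additive genotype coding 0/1/2 and a normal trait with a genetic effect.
geno = [sum(random.random() < maf for _ in range(2)) for _ in range(n)]
trait = [beta * g + random.gauss(0.0, 1.0) for g in geno]

def slope(xs, ys):
    m = len(xs)
    xb, yb = sum(xs) / m, sum(ys) / m
    return sum((u - xb) * (v - yb) for u, v in zip(xs, ys)) / sum((u - xb) ** 2 for u in xs)

slope_full = slope(geno, trait)   # close to the true beta

# Two-tail extreme selection: genotype only the top and bottom 10% of the trait.
order = sorted(range(n), key=lambda i: trait[i])
tails = order[: n // 10] + order[-(n // 10):]
slope_selected = slope([geno[i] for i in tails],
                       [trait[i] for i in tails])   # markedly inflated
```

Selecting on the dependent variable inflates its variance in the analyzed sample, and the naive slope inflates roughly in proportion, which is why a correction is needed.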
Simple Calculation Programs for Biology Other Methods
Indian Academy of Sciences (India)
Simple Calculation Programs for Biology Other Methods. Hemolytic potency of drugs. Raghava et al., (1994) Biotechniques 17: 1148. FPMAP: methods for classification and identification of microorganisms 16SrRNA. graphical display of restriction and fragment map of ...
Simple gas chromatographic method for furfural analysis.
Gaspar, Elvira M S M; Lopes, João F
2009-04-03
A new, simple gas chromatographic method was developed for the direct analysis of 5-hydroxymethylfurfural (5-HMF), 2-furfural (2-F) and 5-methylfurfural (5-MF) in liquid and water-soluble foods, using direct immersion SPME coupled to GC-FID and/or GC-TOF-MS. The fiber (DVB/CAR/PDMS) conditions were optimized: pH effect, temperature, adsorption and desorption times. The method is simple and accurate. The analysis of furfurals will contribute to characterising and quantifying their presence in the human diet.
Koeneman, Margot M; van Lint, Freyja H M; van Kuijk, Sander M J; Smits, Luc J M; Kooreman, Loes F S; Kruitwagen, Roy F P M; Kruse, Arnold J
2017-01-01
This study aims to develop a prediction model for spontaneous regression of cervical intraepithelial neoplasia grade 2 (CIN 2) lesions based on simple clinicopathological parameters. The study was conducted at Maastricht University Medical Center, the Netherlands. The prediction model was developed in a retrospective cohort of 129 women with a histologic diagnosis of CIN 2 who were managed by watchful waiting for 6 to 24 months. Five potential predictors for spontaneous regression were selected based on the literature and expert opinion and were analyzed in a multivariable logistic regression model, followed by backward stepwise deletion based on the Wald test. The prediction model was internally validated by the bootstrapping method. Discriminative capacity and accuracy were tested by assessing the area under the receiver operating characteristic curve (AUC) and a calibration plot. Disease regression within 24 months was seen in 91 (71%) of 129 patients. A prediction model was developed including the following variables: smoking, Papanicolaou test outcome before the CIN 2 diagnosis, concomitant CIN 1 diagnosis in the same biopsy, and more than 1 biopsy containing CIN 2. Not smoking and a low-grade Papanicolaou test outcome were predictive of disease regression. The AUC was 69.2% (95% confidence interval, 58.5%-79.9%), indicating a moderate discriminative ability of the model. The calibration plot indicated good calibration of the predicted probabilities. This prediction model for spontaneous regression of CIN 2 may aid physicians in the personalized management of these lesions. Copyright © 2016 Elsevier Inc. All rights reserved.
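The AUC reported as the model's discriminative measure has a simple rank interpretation: the probability that a randomly chosen regressing case receives a higher predicted probability than a non-regressing one. It can be computed directly (scores below are hypothetical):

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability P(pos > neg), ties counted as 1/2."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted regression probabilities for the two outcome groups.
regressed = [0.9, 0.8, 0.7]
persisted = [0.6, 0.75]

auc_value = auc(regressed, persisted)   # 5 of 6 pairs ordered correctly
```

An AUC of 0.5 means no discrimination and 1.0 perfect separation, which frames the study's 69.2% as "moderate".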
Simple method for calculating island widths
International Nuclear Information System (INIS)
Cary, J.R.; Hanson, J.D.; Carreras, B.A.; Lynch, V.E.
1989-01-01
A simple method for calculating magnetic island widths has been developed. This method uses only information obtained from integrating along the closed field line at the island center, and is thus computationally less intensive than the usual method of producing surfaces of section in sufficient detail to locate and resolve the island separatrix. The method has been implemented numerically and used to analyze the busswork islands of ATF. In this case the method proves to be accurate to within 30%. 7 refs
Simple method for quick estimation of aquifer hydrogeological parameters
Ma, C.; Li, Y. Y.
2017-08-01
Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters, and reliably identifies them from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
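The idea of turning an unsteady pumping test into a one-regressor linear regression parallels the classic Cooper-Jacob straight-line method, sketched below on noise-free synthetic drawdowns. The discharge, radius and parameter values are invented, and this is not the paper's actual fitting function for the Theis well function.

```python
import numpy as np

# Cooper-Jacob late-time approximation of the Theis solution:
#   s(t) = (2.3*Q/(4*pi*T)) * log10(2.25*T*t/(r**2*S))
# which is linear in log10(t), so T and S follow from a simple regression.
Q, r = 0.01, 50.0            # pumping rate (m^3/s) and observation distance (m), assumed
T_true, S_true = 5e-3, 2e-4  # transmissivity and storativity used to generate data
t = np.logspace(3, 5, 20)    # observation times (s)
s = (2.3 * Q / (4 * np.pi * T_true)) * np.log10(2.25 * T_true * t / (r**2 * S_true))

m, c = np.polyfit(np.log10(t), s, 1)   # unitary linear regression: s versus log10(t)
T_est = 2.3 * Q / (4 * np.pi * m)      # slope gives transmissivity
t0 = 10 ** (-c / m)                    # time where the fitted line crosses s = 0
S_est = 2.25 * T_est * t0 / r**2       # intercept gives storativity
print(T_est, S_est)                    # recovers T_true and S_true on noise-free data
```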
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Simple Synthesis Method for Alumina Nanoparticle
Directory of Open Access Journals (Sweden)
Daniel Damian
2017-11-01
Full Text Available Globally, the steady increase of the human population, the expansion of urban areas and excessive industrialization, including in agriculture, have caused not only the decrease, even depletion, of non-renewable resources, but also a rapid deterioration of the environment, with negative impacts on water quality, soil productivity and, of course, quality of life in general. This paper aims to prepare size-controlled nanoparticles of aluminum oxide using a simple synthesis method. The morphology and dimensions of the nanomaterial were investigated using modern analytical techniques: SEM/EDAX and XRD spectroscopy.
Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation
Directory of Open Access Journals (Sweden)
Sharad Damodar Gore
2009-10-01
Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
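For context, a minimal sketch of the ordinary ridge regression (ORR) estimator the abstract starts from, on a deliberately multicollinear synthetic design. The MUR estimator itself is not reproduced here; only the standard shrinkage form beta = (X'X + kI)^(-1) X'y is shown, with invented data.

```python
import numpy as np

def ridge(X, y, k):
    """Ordinary ridge regression: beta = (X'X + k*I)^(-1) X'y; k = 0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
z = rng.normal(size=100)
# two nearly collinear columns: the classic setting where OLS becomes unstable
X = np.column_stack([z + 0.01 * rng.normal(size=100),
                     z + 0.01 * rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)

b_ols = ridge(X, y, 0.0)   # OLS: coefficients inflate under multicollinearity
b_rr = ridge(X, y, 1.0)    # ridge: shrunken, more stable coefficients
print(b_ols, b_rr)
```

The ridge solution norm is monotonically nonincreasing in k, which is the stabilizing effect URR and MUR then modify to reduce bias.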
A multiple regression method for genomewide association studies ...
Indian Academy of Sciences (India)
Bujun Mei
2018-06-07
Jun 7, 2018 ... Similar to the typical genomewide association tests using LD ... new approach performed validly when the multiple regression based on linkage method was employed. .... the model, two groups of scenarios were simulated.
Kwan, Johnny S. H.; Kung, Annie W. C.; Sham, Pak C.
2011-01-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias. © The Author(s) 2011.
BOX-COX REGRESSION METHOD IN TIME SCALING
Directory of Open Access Journals (Sweden)
ATİLLA GÖKTAŞ
2013-06-01
Full Text Available The Box-Cox regression method, with power transformations λj for j = 1, 2, ..., k, can be used when the dependent variable and the error term of the linear regression model do not satisfy the continuity and normality assumptions. The choice of the optimum power transformation λj of Y, for j = 1, 2, ..., k, that yields the smallest mean square error is discussed. The Box-Cox regression method is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms in a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of the Box-Cox regression method are discussed for differentiation and differential analysis of the time scale concept.
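The core of the method — choosing the power λ that best normalizes the response before a linear fit — can be sketched with a grid search over the Box-Cox profile log-likelihood. The data below are synthetic (a log-linear truth, so the selected λ should land near 0); the grid and noise level are assumptions, not the paper's settings.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox power transform: (y^lam - 1)/lam, with the log limit at lam = 0."""
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1) / lam

def best_lambda(x, y, grid=np.linspace(-2, 2, 81)):
    """Grid search for the power maximizing the profile log-likelihood of a
    linear model on the transformed response (illustrative sketch)."""
    Xb = np.column_stack([np.ones(len(y)), x])
    n = len(y)
    best = None
    for lam in grid:
        z = boxcox(y, lam)
        resid = z - Xb @ np.linalg.lstsq(Xb, z, rcond=None)[0]
        # profile log-likelihood includes the Jacobian term (lam - 1) * sum(log y)
        ll = -n / 2 * np.log(resid @ resid / n) + (lam - 1) * np.log(y).sum()
        if best is None or ll > best[1]:
            best = (lam, ll)
    return best[0]

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, 200)
y = np.exp(0.3 + 0.2 * x + 0.05 * rng.normal(size=200))  # log-linear truth
lam = best_lambda(x, y)
print("selected lambda:", lam)  # near 0, i.e. a log transform
```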
On two flexible methods of 2-dimensional regression analysis
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2012-01-01
Roč. 18, č. 4 (2012), s. 154-164 ISSN 1803-9782 Grant - others:GA ČR(CZ) GAP209/10/2045 Institutional support: RVO:67985556 Keywords : regression analysis * Gordon surface * prediction error * projection pursuit Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/SI/volf-on two flexible methods of 2-dimensional regression analysis.pdf
Hassan, A K
2015-01-01
In this work, O/W emulsion sets were prepared using different concentrations of two nonionic surfactants. The two surfactants, Tween 80 (HLB = 15.0) and Span 80 (HLB = 4.3), were used in a fixed proportion of 0.55:0.45, respectively, so the HLB value of the surfactant blends was fixed at 10.185. The surfactant blend concentration ranged from 3% up to 19%. For each O/W emulsion set the conductivity was measured at room temperature (25±2°), 40, 50, 60, 70 and 80°. Applying simple linear regression least-squares analysis to the temperature-conductivity data determines the effective surfactant blend concentration required for preparing the most stable O/W emulsion. These results were confirmed by physical stability centrifugation testing and phase inversion temperature range measurements. The results indicated that the relation representing the most stable O/W emulsion has the strongest direct linear relationship between temperature and conductivity; this relationship is linear up to 80°. This work shows that the most stable O/W emulsion is identified via the maximum R² value obtained when the simple linear regression least-squares method is applied to the temperature-conductivity data up to 80°; in addition, the true maximum slope is given by the equation with the maximum R² value. Because conditions change in more complex formulations, the method for determining the effective surfactant blend concentration was verified by applying it to a more complex formulation of a 2% O/W miconazole nitrate cream, and the results indicate its reproducibility.
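The selection rule — fit conductivity against temperature for each blend concentration and keep the one with the largest R² — reduces to a few lines. The readings below are fabricated for illustration; only the "11%" series is constructed to lie close to a straight line.

```python
import numpy as np

def r_squared(x, y):
    """R^2 of a simple least-squares line y = a + b*x."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# hypothetical temperature-conductivity readings for three surfactant blends;
# the blend whose fit has the largest R^2 is taken as the most stable emulsion
temps = np.array([25.0, 40, 50, 60, 70, 80])
readings = {
    "3%":  2.0 + 0.010 * temps + np.array([0.15, -0.2, 0.1, -0.1, 0.2, -0.15]),
    "11%": 1.5 + 0.020 * temps + np.array([0.01, -0.01, 0.0, 0.01, -0.01, 0.0]),
    "19%": 1.0 + 0.015 * temps + np.array([-0.3, 0.3, -0.2, 0.2, -0.3, 0.3]),
}
best = max(readings, key=lambda k: r_squared(temps, readings[k]))
print(best)  # → 11%
```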
Fuzzy Linear Regression for the Time Series Data which is Fuzzified with SMRGT Method
Directory of Open Access Journals (Sweden)
Seçil YALAZ
2016-10-01
Full Text Available Our work on regression and classification provides a new contribution to the analysis of time series used in many areas for years. When convergence cannot be obtained with the methods used to correct autocorrelation in time series regression, success is not achieved, or one is forced to change the degree of the model, which may not be desirable in every situation. In our study, recommended for these situations, the time series data were fuzzified using the simple membership function and fuzzy rule generation technique (SMRGT), and an equation for future estimation was created by applying the fuzzy least squares regression (FLSR) method, a simple linear regression method, to these data. Although SMRGT is successful in determining flow discharge in open channels, and can be used confidently for flow discharge modeling in open canals as well as in pipe flow with some modifications, there is no evidence that the technique is successful in fuzzy linear regression modeling. Therefore, in order to address the lack of such modeling, a new hybrid model is described within this study. In conclusion, to demonstrate our method's efficiency, classical linear regression for time series data and linear regression for fuzzy time series data were applied to two different data sets, and the performances of these two approaches were compared using different measures.
Thermal Efficiency Degradation Diagnosis Method Using Regression Model
International Nuclear Information System (INIS)
Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol
2011-01-01
This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can inversely recover the intrinsic state associated with a superficial state observed from a power plant. The diagnosis method proposed herein comprises three processes: 1) simulations of degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix from the given superficial states, that is, the component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant.
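The three-step what-if / inverse what-if scheme can be sketched with a toy linear map: intrinsic degradation states d produce superficial measured states m = A d; A is identified by regression on simulated cases, then inverted to diagnose d from plant data. The sensitivity matrix and states below are invented, not turbine-cycle values.

```python
import numpy as np

rng = np.random.default_rng(3)
A_true = np.array([[1.0, 0.2], [0.1, 0.8], [0.3, 0.5]])  # hypothetical sensitivities

# step 1, "what-if": simulate measured states for many assumed degradation conditions
D = rng.uniform(0, 1, size=(50, 2))   # 50 simulated degradation cases (intrinsic)
M = D @ A_true.T                      # corresponding simulated measurements

# step 2: identify the linear model column-by-column via least squares
A_est, *_ = np.linalg.lstsq(D, M, rcond=None)

# step 3, "inverse what-if": recover the intrinsic state behind an observation
m_obs = A_true @ np.array([0.7, 0.3])               # a plant measurement
d_est, *_ = np.linalg.lstsq(A_est.T, m_obs, rcond=None)
print(d_est)   # ≈ [0.7, 0.3] on this noise-free example
```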
International Nuclear Information System (INIS)
Shuke, Noriyuki
1991-01-01
In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters such as the hepatic extraction or excretion rate, has been used for quantitative evaluation of liver function. In this analysis, unknown model parameters are usually determined using the nonlinear least squares regression method (NLS method), which requires iterative calculation and initial estimates for the unknown parameters. As a simple alternative to the NLS method, the direct integral linear least squares regression method (DILS method), which can determine model parameters by a simple calculation without initial estimates, is proposed, and its applicability to the analysis of hepatobiliary scintigraphy is tested. In order to see whether the DILS method could determine model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a 1-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values used for data generation. The effect of various weights on the error of the parameter estimates was examined, and the inverse of time was found to be the best weight for minimizing the error. With this weight, the DILS method gave parameter values close to those obtained by the NLS method, and both sets of parameter values were very close to the prefixed values. With appropriate weighting, the DILS method provides reliable parameter estimates that are relatively insensitive to data noise. In conclusion, the DILS method can be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)
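The direct-integral idea can be sketched for the simplest one-compartment model dy/dt = -k*y: integrating both sides gives y(t) = y0 - k * ∫₀ᵗ y dτ, which is linear in (y0, k), so both parameters come from one weighted least-squares solve with no initial guesses. The parameter values and sampling grid are invented; the abstract's preferred 1/t weighting is used.

```python
import numpy as np

k_true, y0_true = 0.3, 10.0
t = np.linspace(0, 10, 101)
y = y0_true * np.exp(-k_true * t)          # noise-free simulated curve

# cumulative trapezoid integral of y from 0 to each time point
cumint = np.concatenate([[0.0], np.cumsum(0.5 * np.diff(t) * (y[1:] + y[:-1]))])

# weighted linear least squares: y(t) = y0 - k * cumint(t), weight 1/t (skip t = 0)
w = np.sqrt(1.0 / t[1:])
X = np.column_stack([np.ones(len(t) - 1), cumint[1:]])
beta, *_ = np.linalg.lstsq(X * w[:, None], y[1:] * w, rcond=None)
y0_est, k_est = beta[0], -beta[1]
print(y0_est, k_est)   # ≈ 10.0 and 0.3, no iteration or initial estimate needed
```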
Statistical approach for selection of regression model during validation of bioanalytical method
Directory of Open Access Journals (Sweden)
Natalija Nakov
2014-06-01
Full Text Available The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration ranges frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models, because only slight differences between the errors (presented through the relative residuals, RR) were obtained. Estimation of the significance of the differences in the RR was achieved using the Wilcoxon signed rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple non-parametric statistical test provides statistical confirmation of the choice of an adequate regression model.
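The comparison of curve fits by relative residuals can be sketched as follows: fit weighted (1/x²) linear and quadratic models over a wide calibration range and compare the RR of each. The concentrations and response function are fabricated; the paper's Wilcoxon signed rank step on the two RR groups is noted but not reproduced here.

```python
import numpy as np

x = np.array([0.5, 1, 5, 10, 50, 100, 250, 500.0])   # hypothetical calibration levels
y = 0.02 + 0.11 * x + 1e-5 * x**2                    # slightly curved, noise-free response
w = 1.0 / x**2                                       # typical weight for heteroscedastic assays

def rel_residuals(deg):
    """Weighted polynomial fit of the given degree; returns relative residuals."""
    V = np.vander(x, deg + 1)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(V * sw[:, None], y * sw, rcond=None)
    return (V @ beta - y) / y

rr_lin, rr_quad = rel_residuals(1), rel_residuals(2)
print(np.abs(rr_lin).mean(), np.abs(rr_quad).mean())
# the quadratic model fits this curved range essentially perfectly,
# so its relative residuals are orders of magnitude smaller
```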
Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
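The two calibration strategies the abstract contrasts can be sketched side by side on synthetic standards: the classical approach fits readings on standards and algebraically inverts the line, while reverse regression fits standards on readings directly. The intercept, slope and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
std = np.linspace(1, 10, 20)                          # known standards
obs = 0.5 + 2.0 * std + 0.1 * rng.normal(size=20)     # instrument readings

# classical approach: forward regression obs = a + b*std, then invert the fit
b, a = np.polyfit(std, obs, 1)
new_reading = 12.3
x_classical = (new_reading - a) / b

# reverse regression: treat standards as the response, readings as the regressor
d, c = np.polyfit(obs, std, 1)
x_reverse = c + d * new_reading
print(x_classical, x_reverse)   # close, but not identical, estimates
```

The two answers coincide only when the fit is perfect; the gap between them is one face of the assumption violation the paper discusses.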
Comparing parametric and nonparametric regression methods for panel data
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard; Henningsen, Arne
We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test ...
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
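The correction the abstract describes — dividing the attenuated slope by a reliability ratio estimated from a repeated measurement — can be sketched on synthetic data. The effect size, error variance and sample size below are invented; the supplied software tools are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
x_true = rng.normal(0, 1, n)             # true risk factor (unobserved)
y = 1.0 + 0.5 * x_true + rng.normal(0, 1, n)
x1 = x_true + rng.normal(0, 0.8, n)      # error-prone main-study measurement
x2 = x_true + rng.normal(0, 0.8, n)      # repeat measurement (reliability study)

b_naive = np.polyfit(x1, y, 1)[0]        # attenuated slope (regression dilution)
# reliability ratio: covariance of the two replicates over the variance of one
lam = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)
b_corr = b_naive / lam                   # corrected slope
print(b_naive, b_corr)  # naive ≈ 0.5/(1 + 0.64) ≈ 0.30, corrected ≈ 0.5
```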
Simple method for the estimation of glomerular filtration rate
Energy Technology Data Exchange (ETDEWEB)
Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden); Tengstroem, B [District General Hospital, Skoevde (Sweden)
1977-02-01
A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (¹²⁵I)sodium iothalamate (10 μCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas, with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of the glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method, and to more sophisticated methods using curve-fitting techniques, with regard to predictive force and clinical applicability.
FATAL, General Experiment Fitting Program by Nonlinear Regression Method
International Nuclear Information System (INIS)
Salmon, L.; Budd, T.; Marshall, M.
1982-01-01
1 - Description of problem or function: A generalized fitting program with a free-format keyword interface to the user. It permits experimental data to be fitted by non-linear regression methods to any function describable by the user. The user requires a minimum of computer experience but needs to provide a subroutine to define the function. Some statistical output is included, as well as 'best' estimates of the function's parameters. 2 - Method of solution: The regression method used is based on a minimization technique devised by Powell (Harwell Subroutine Library VA05A, 1972) which does not require the use of analytical derivatives. The method employs a quasi-Newton procedure balanced with a steepest-descent correction. Experience shows this to be efficient for a very wide range of applications. 3 - Restrictions on the complexity of the problem: The current version of the program permits functions to be defined with up to 20 parameters. The function may be fitted to a maximum of 400 points, preferably with estimated weights supplied.
USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES
Directory of Open Access Journals (Sweden)
Constantin ANGHELACHE
2011-10-01
Full Text Available The article presents the fundamental aspects of linear regression as a toolbox for macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and possible interpretations that can be drawn at this level.
Mapping urban environmental noise: a land use regression method.
Xie, Dan; Liu, Yi; Chen, Jining
2011-09-01
Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, however, have limited capacity to explain the association between urban growth and corresponding noise change. Therefore, these conventional methods can hardly forecast urban noise for a given outlook of development layout. This paper, for the first time, introduces a land use regression method, which has been applied to simulating urban air quality for a decade, to construct an urban noise model (LUNOS) in Dalian Municipality, northeast China. The LUNOS model describes noise as a dependent variable of the surrounding land areas via a regressive function. The results suggest that a linear model performs better in fitting the monitoring data, and there is no significant difference in the LUNOS outputs when applied at different spatial scales. As the LUNOS facilitates a better understanding of the association between land use and urban environmental noise in comparison to conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and an aid to smart decision-making.
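The land use regression structure — noise level at a monitoring site regressed on the surrounding land-use areas — can be sketched in a few lines. The land-use categories, coefficients and sites below are invented, not the LUNOS model or the Dalian data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 60  # hypothetical monitoring sites
# surrounding land-use areas within a buffer: road, commercial, green space
road, comm, green = rng.uniform(0, 1, (3, n))
noise = 55 + 8 * road + 4 * comm - 5 * green + rng.normal(0, 1, n)  # dB(A)

X = np.column_stack([np.ones(n), road, comm, green])
beta, *_ = np.linalg.lstsq(X, noise, rcond=None)
print(np.round(beta, 1))  # noise rises with road/commercial area, falls with green space
```

Applied to a planned land-use layout, the fitted equation gives the forecast the abstract argues conventional noise mapping cannot provide.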
Directory of Open Access Journals (Sweden)
Abdul Ghafoor Memon
2014-03-01
Full Text Available In this study, thermodynamic and statistical analyses were performed on a gas turbine system to assess the impact of some important operating parameters, namely CIT (Compressor Inlet Temperature), PR (Pressure Ratio) and TIT (Turbine Inlet Temperature), on its performance characteristics such as net power output, energy efficiency, exergy efficiency and fuel consumption. Each performance characteristic was enunciated as a function of the operating parameters, followed by a parametric study and optimization. The results showed that the performance characteristics increase with an increase in the TIT and a decrease in the CIT, except fuel consumption, which behaves oppositely. The net power output and efficiencies increase with the PR up to certain initial values and then start to decrease, whereas the fuel consumption always decreases with an increase in the PR. The results of exergy analysis showed the combustion chamber to be the major contributor to exergy destruction, followed by the stack gas. Subsequently, multiple regression models were developed to correlate each of the response variables (performance characteristics) with the predictor variables (operating parameters). The regression model equations showed a significant statistical relationship between the predictor and response variables.
Action Research Methods: Plain and Simple
Klein, Sheri R., Ed.
2012-01-01
Among the plethora of action research books on the market, there is no one text exclusively devoted to understanding how to acquire and interpret research data. Action Research Methods provides a balanced overview of the quantitative and qualitative methodologies and methods for conducting action research within a variety of educational…
A Simple Microsoft Excel Method to Predict Antibiotic Outbreaks and Underutilization.
Miglis, Cristina; Rhodes, Nathaniel J; Avedissian, Sean N; Zembower, Teresa R; Postelnick, Michael; Wunderink, Richard G; Sutton, Sarah H; Scheetz, Marc H
2017-07-01
Benchmarking strategies are needed to promote the appropriate use of antibiotics. We have adapted a simple regressive method in Microsoft Excel that is easily implementable and creates predictive indices. This method trends consumption over time and can identify periods of over- and underuse at the hospital level. Infect Control Hosp Epidemiol 2017;38:860-862.
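The spreadsheet approach the abstract adapts — regress consumption on time, then flag months outside a prediction band — can be sketched directly. The monthly figures below are fabricated, and the ±2s band is a rough stand-in for a proper regression prediction interval.

```python
import numpy as np

# hypothetical monthly antibiotic use (e.g. DDD per 1000 patient-days)
use = np.array([102, 98, 105, 99, 103, 101, 97, 104, 100, 96, 131, 70.0])
t = np.arange(len(use))

b, a = np.polyfit(t[:10], use[:10], 1)      # fit the baseline period only
pred = a + b * t
resid = use[:10] - pred[:10]
s = resid.std(ddof=2)                       # residual spread of the baseline fit
hi, lo = pred + 2 * s, pred - 2 * s         # approximate 95% band
flags = ["over" if u > h else "under" if u < l else "ok"
         for u, h, l in zip(use, hi, lo)]
print(flags)  # the last two months are flagged as over- and underuse
```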
Dimension Reduction and Discretization in Stochastic Problems by Regression Method
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
1996-01-01
The chapter mainly deals with dimension reduction and field discretizations based directly on the concept of linear regression. Several examples of interesting applications in stochastic mechanics are also given.Keywords: Random fields discretization, Linear regression, Stochastic interpolation, ...
Analyzing Big Data with the Hybrid Interval Regression Methods
Directory of Open Access Journals (Sweden)
Chia-Hui Huang
2014-01-01
Full Text Available Big data is a new trend at present, forcing significant impacts on information technologies. In big data applications, one of the most concerning issues is dealing with large-scale data sets, which often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. Recently, the SSVM was proposed as an alternative to the standard SVM, and it has been proved more efficient than the traditional SVM in processing large-scale data. In addition, the soft margin method is proposed to modify the excursion of the separation margin and to be effective in the gray zone, where the distribution of the data is hard to describe and the separation margin between classes is unclear.
DEFF Research Database (Denmark)
Sharifzadeh, Sara; Skytte, Jacob Lercke; Nielsen, Otto Højager Attermann
2012-01-01
Statistical solutions find widespread use in food and medicine quality control. We investigate the effect of different regression and sparse regression methods for a viscosity estimation problem using the spectro-temporal features from a new Sub-Surface Laser Scattering (SLS) vision system. From ... with sparse LAR, lasso and Elastic Net (EN) sparse regression methods. Due to the inconsistent measurement condition, Locally Weighted Scatterplot Smoothing (Loess) has been employed to alleviate the undesired variation in the estimated viscosity. The experimental results of applying different methods show ...
Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga
2006-08-01
A quantitative structure-activity relationship was obtained by applying Multiple Linear Regression Analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio) thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and predictive stages.
A simple method for human peripheral blood monocyte Isolation
Directory of Open Access Journals (Sweden)
Marcos C de Almeida
2000-04-01
Full Text Available We describe a simple method using a Percoll gradient for the isolation of highly enriched human monocytes. High numbers of fully functional cells are obtained from whole blood or buffy coat cells. The use of simple laboratory equipment and a relatively cheap reagent makes the described method a convenient approach to obtaining human monocytes.
A simple and rapid method to estimate radiocesium in man
International Nuclear Information System (INIS)
Kindl, P.; Steger, F.
1990-09-01
A simple and rapid method for monitoring internal contamination of radiocesium in man was developed. This method is based on measurements of the γ-rays emitted from the muscular parts between the thighs by a simple NaI(Tl) system. The experimental procedure, the calibration, the estimation of the body activity and results are explained and discussed. (Authors)
A simple method to estimate interwell autocorrelation
Energy Technology Data Exchange (ETDEWEB)
Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
Sun, Jianguo; Feng, Yanqin; Zhao, Hui
2015-01-01
Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
Simple, miniaturized blood plasma extraction method.
Kim, Jin-Hee; Woenker, Timothy; Adamec, Jiri; Regnier, Fred E
2013-12-03
A rapid plasma extraction technology that collects a 2.5 μL aliquot of plasma within three minutes from a finger-stick derived drop of blood was evaluated. The utility of the plasma extraction cards used was that a paper collection disc bearing plasma was produced that could be air-dried in fifteen minutes and placed in a mailing envelope for transport to an analytical laboratory. This circumvents the need for venipuncture and blood collection in specialized vials by a phlebotomist, along with centrifugation and refrigerated storage. Plasma extraction was achieved by applying a blood drop to a membrane stack through which plasma was drawn by capillary action. During the course of plasma migration to a collection disc at the bottom of the membrane stack, blood cells were removed by a combination of adsorption and filtration. After the collection disc filled with an aliquot of plasma, the upper membranes were stripped from the collection card and the collection disc was air-dried. Intercard differences in the volume of plasma collected varied by approximately 1%, while volume variations of less than 2% were seen with hematocrit levels ranging from 20% to 71%. Dried samples bearing metabolites and proteins were then extracted from the disc and analyzed. 25-Hydroxy vitamin D was quantified by LC-MS/MS analysis following derivatization with a secosteroid signal-enhancing tag that imparted a permanent positive charge to the vitamin and reduced the limit of quantification (LOQ) to 1 pg of collected vitamin on the disc; comparable to values observed with liquid-liquid extraction (LLE) of a venipuncture sample. A similar study using conventional proteomics methods and spectral counting for quantification was conducted with yeast enolase added to serum as an internal standard. The LOQ with extracted serum samples for enolase was 1 μM, linear from 1 to 40 μM, the highest concentration examined. In all respects protein quantification with extracted serum samples was comparable to
Solution of the Schrödinger equation in one dimension by a simple method for a simple step potential
International Nuclear Information System (INIS)
Ertik, H.
2005-01-01
The transmission and reflection coefficients for the simple step-barrier potential were calculated by a simple method. Their values were entirely different from those often encountered in the literature. In particular, in the case where the total energy is equal to the barrier potential, a value of 0.20 was obtained for the reflection coefficient, whereas this is zero in the literature. This may be considered an interesting point
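For comparison, the textbook plane-wave treatment of the step potential can be sketched in a few lines. The formulas below are the standard continuity-matching results for a step of height V0 with E > V0; natural units (ħ = m = 1 by default) are assumed here for illustration.

```python
import math

def step_coefficients(E, V0, m=1.0, hbar=1.0):
    """Textbook reflection and transmission coefficients for a
    one-dimensional step potential, valid for E > V0."""
    k1 = math.sqrt(2 * m * E) / hbar          # wavenumber before the step
    k2 = math.sqrt(2 * m * (E - V0)) / hbar   # wavenumber above the step
    R = ((k1 - k2) / (k1 + k2)) ** 2          # reflection coefficient
    T = 4 * k1 * k2 / (k1 + k2) ** 2          # transmission coefficient
    return R, T

R, T = step_coefficients(E=2.0, V0=1.0)
```

Note that in this standard treatment R + T = 1 for every E > V0, and R tends to 1 (not 0.20) as E approaches V0 from above, which is the point of contention raised in the record above.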
A Simple UV Spectrophotometric Method for the Determination of ...
African Journals Online (AJOL)
The method was also used in the determination of the content of levofloxacin in two commercial brands of levofloxacin in the Nigerian market. Results: The regression data for the calibration plots exhibited good linear relationship (r = 0.999) over a concentration range of 0.25 – 12.0 μg/ml and the linear regression equation ...
A simple and efficient electrochemical reductive method for ...
Indian Academy of Sciences (India)
Administrator
This approach opens up a new, practical and green reducing method to prepare large-scale graphene. ... has the following significant advantages: (1) It is simple to operate. ... The authors thank the National High Technology Research.
A simple flow-concentration modelling method for integrating water ...
African Journals Online (AJOL)
A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...
Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.
Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo
2017-04-01
To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneous recombination of amino acid positions for focused mutant library generation. A general restriction enzyme-free and ligase-free MSSM method (Simple-MSSM) based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques was developed. As a proof of principle of Simple-MSSM, the gene of eGFP (enhanced green fluorescent protein) was used as a template gene for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 of the 48 clones had mutations at all five codons. The obtained diversities at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81 and 69% of the theoretical diversity offered by NNK degeneracy (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, and therefore avoids missing interactions between residues in interacting amino acid networks.
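The reported coverage percentages follow directly from the 32-codon NNK alphabet; a quick check, using the observed diversities quoted in the abstract:

```python
# Observed codon diversities at the five saturated positions (from the abstract)
observed = [27, 24, 26, 26, 22]
NNK_CODONS = 32  # NNK degeneracy: 4 x 4 x 2 = 32 codons

# Percent of the theoretical NNK diversity recovered at each position
coverage = [round(100 * d / NNK_CODONS) for d in observed]
```

This reproduces the 84, 75, 81, 81, 69% figures reported for the five positions.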
A simple finite element method for linear hyperbolic problems
International Nuclear Information System (INIS)
Mu, Lin; Ye, Xiu
2017-01-01
Here, we introduce a simple finite element method for solving first-order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of polygons/polyhedra of arbitrary shape. An error estimate is established. Extensive numerical examples are tested that demonstrate the robustness and flexibility of the method.
A simple approximation method for dilute Ising systems
International Nuclear Information System (INIS)
Saber, M.
1996-10-01
We describe a simple approximate method to analyze dilute Ising systems. The method takes into consideration the fluctuations of the effective field, and is based on a probability distribution of random variables which correctly accounts for all the single site kinematic relations. It is shown that the simplest approximation gives satisfactory results when compared with other methods. (author). 12 refs, 2 tabs
Simple and inexpensive method for CT-guided stereotaxy
Energy Technology Data Exchange (ETDEWEB)
Wester, K; Sortland, O; Hauglie-Hanssen, E
1981-01-01
A simple and inexpensive method for CT-guided stereotaxy is described. The method requires neither sophisticated computer programs nor additional stereotactic equipment, such as special head holders for the CT, and can be easily obtained without technical assistance. The method is designed to yield the vertical coordinates.
A simple method for generating exactly solvable quantum mechanical potentials
Williams, B W
1993-01-01
A simple transformation method permitting the generation of exactly solvable quantum mechanical potentials from special functions solving second-order differential equations is reviewed. This method is applied to Gegenbauer polynomials to generate an attractive radial potential. The relationship of this method to the determination of supersymmetric quantum mechanical superpotentials is discussed, and the superpotential for the radial potential is also derived. (author)
Analysis of some methods for reduced rank Gaussian process regression
DEFF Research Database (Denmark)
Quinonero-Candela, J.; Rasmussen, Carl Edward
2005-01-01
While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training RRGPs consists both in learning...
Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula
2011-01-01
Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.
DEFF Research Database (Denmark)
Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin
2017-01-01
the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most tested scenarios these new methods perform similarly to or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models...
Helmreich, James E.; Krog, K. Peter
2018-01-01
We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…
A Simple HPLC Bioanalytical Method for the Determination of ...
African Journals Online (AJOL)
Purpose: To develop a simple, accurate, and precise high-performance liquid chromatography (HPLC) method with spectrophotometric detection for the determination of doxorubicin hydrochloride in rat plasma. Methods: Doxorubicin hydrochloride and daunorubicin hydrochloride (internal standard, IS) were separated on a C18 ...
Gallium determination with Rodamina B: a simple method
International Nuclear Information System (INIS)
Queiroz, R.R.U. de.
1981-01-01
A simple method is described for determining gallium with Rhodamine B, based on a modification of the method proposed by Onishi and Sandell. The complex (RH)GaCl4 is extracted with a benzene-ethyl acetate mixture (3:1 v/v) from an aqueous medium 6 M in hydrochloric acid. The interference of foreign ions is studied. (C.G.C.)
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
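A minimal illustration of the fixed-knot idea (not the authors' SAS/S-plus implementation, and without the mixed-model random effects): with a truncated-power basis, the continuity constraint at the knot is built into the basis itself, so an ordinary least-squares fit yields a continuous piecewise-linear spline. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 200))
# True curve: slope 1.0 before the knot at 5, slope 0.2 after, plus noise
y = np.where(x < 5, 1.0 * x, 5 + 0.2 * (x - 5)) + rng.normal(0, 0.2, 200)

# Truncated-power basis: intercept, x, and (x - knot)_+ ; the hinge term
# changes the slope at the knot while keeping the fit continuous there
knot = 5.0
A = np.column_stack([np.ones_like(x), x, np.clip(x - knot, 0, None)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
# coef[1] ~ slope before the knot; coef[1] + coef[2] ~ slope after it
```

Higher-order segments and smoothness constraints extend this by adding powers of x and higher-order hinge terms, which is the reparameterization idea the abstract describes.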
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
International Nuclear Information System (INIS)
Gupta, N
2008-01-01
3013 containers are designed in accordance with DOE-STD-3013-2004. These containers are qualified to store plutonium (Pu)-bearing materials such as PuO2 for 50 years. DOT shipping packages such as the 9975 are used to store the 3013 containers in the K-Area Material Storage (KAMS) facility at the Savannah River Site (SRS). DOE-STD-3013-2004 requires that a comprehensive surveillance program be set up to ensure that the 3013 container design parameters are not violated during long-term storage. To ensure structural integrity of the 3013 containers, thermal analyses using finite element models were performed to predict the contents and component temperatures for different but well-defined parameters such as storage ambient temperature, PuO2 density, fill heights, weights, and thermal loading. Interpolation is normally used to calculate temperatures if the actual parameter values differ from the analyzed values. A statistical analysis technique using regression methods is proposed to develop simple polynomial relations to predict temperatures for the actual parameter values found in the containers. The analysis shows that regression analysis is a powerful tool for developing simple relations to assess component temperatures.
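The idea of replacing table interpolation with a fitted polynomial can be sketched as follows. The parameter names, ranges, and coefficients below are invented for illustration; they are not values from the SRS analyses.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "analyzed cases": ambient temperature (C) and thermal load (W)
X = rng.uniform([20, 5], [45, 19], size=(40, 2))
# Assumed response surface for component temperature, plus model noise
true = 1.2 * X[:, 0] + 3.5 * X[:, 1] + 0.05 * X[:, 1] ** 2 + 30
y = true + rng.normal(0, 0.5, 40)

# Design matrix for a simple polynomial relation: 1, Ta, Q, Q^2
A = np.column_stack([np.ones(40), X[:, 0], X[:, 1], X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict at an unanalyzed parameter combination instead of interpolating
Ta, Q = 35.0, 12.0
T_pred = coef @ [1.0, Ta, Q, Q ** 2]
```

Once the polynomial is fitted to the finite element results, any parameter combination inside the analyzed range can be evaluated directly, which is the convenience the abstract describes.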
Kim, Yoonsang; Emery, Sherry
2013-01-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415
Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs
Directory of Open Access Journals (Sweden)
Faqir Muhammad
2007-01-01
Full Text Available In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSU). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSU), i.e., households, are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). Jackknife and bootstrap are used for variance replication. Simple random sampling with sample sizes (462 to 561) gave moderate variances both by jackknife and bootstrap. By applying systematic sampling, we obtained moderate variance with sample size (467). In jackknife with systematic sampling, we obtained a variance of the regression estimator greater than that of the ratio estimator for sample sizes (467 to 631). At a sample size of (952) the variance of the ratio estimator becomes greater than that of the regression estimator. The most efficient design turns out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. Multi-stage sampling applied by HIES gave large variances, especially if used with a single study variable.
Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E
2017-07-01
High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved by 70%, on average, from multivariate pedigree and genomic models when including secondary traits in both training and test populations. Additionally, (i) predictive abilities slightly varied for MT, RR, or SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was the best in severe drought, and (iii) the RR model was slightly better than SR and MT models under drought environment. Copyright © 2017 Crop Science Society of America.
The modified simple equation method for solving some fractional ...
Indian Academy of Sciences (India)
... and processes in various areas of natural science. Thus, many effective and powerful methods have been established and improved. In this study, we establish exact solutions of the time-fractional biological population model equation and the nonlinear fractional Klein–Gordon equation by using the modified simple equation ...
Using container weights to determine irrigation needs: A simple method
R. Kasten Dumroese; Mark E. Montville; Jeremiah R. Pinto
2015-01-01
Proper irrigation can reduce water use, water waste, and incidence of disease. Knowing when to irrigate plants in container nurseries can be determined by weighing containers. This simple method is quantifiable, which is a benefit when more than one worker is responsible for irrigation. Irrigation is necessary when the container weighs some target as a proportion of...
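The weighing rule described above can be expressed as a one-line check. The 75% target proportion and the weights in the example are hypothetical, not values from the paper.

```python
def needs_irrigation(current_wt, saturated_wt, dry_wt, target=0.75):
    """True when the remaining water falls below the target proportion
    of the container's water-holding capacity (all weights in kg)."""
    water = current_wt - dry_wt          # water currently held
    capacity = saturated_wt - dry_wt     # water held at full saturation
    return water / capacity < target

# e.g. a container weighing 4.8 kg (6.0 kg saturated, 2.0 kg dry)
# holds 2.8 / 4.0 = 70% of capacity, so it is due for irrigation
due = needs_irrigation(4.8, 6.0, 2.0)
```

Because the rule reduces to a single number, any worker can apply it consistently, which is the quantifiability benefit the abstract notes.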
Simple Calculation Programs for Biology Methods in Molecular ...
Indian Academy of Sciences (India)
Simple Calculation Programs for Biology Methods in Molecular Biology. GMAP: A program for mapping potential restriction sites. RE sites in ambiguous and non-ambiguous DNA sequence; Minimum number of silent mutations required for introducing an RE site; Set ...
A simple method for estimating thermal response of building ...
African Journals Online (AJOL)
This paper develops a simple method for estimating the thermal response of building materials in the tropical climatic zone using the basic heat equation. The efficacy of the developed model has been tested with data from three West African cities, namely Kano (lat. 12.1 ºN) Nigeria, Ibadan (lat. 7.4 ºN) Nigeria and Cotonou ...
A simple method of dosimetry for E-beam radiation
International Nuclear Information System (INIS)
Spencer, D.S.; Thalacker, V.P.; Chasman, J.N.; Siegel, S.
1985-01-01
A simple method utilizing a photochromic 'intensity label' for monitoring electron-beam sources was evaluated. The labels exhibit a color change upon exposure to UV or e-beam radiation. A correlation was found between absorbed energy and Gardner Color Index at low electron-beam doses. (author)
Simple and convenient method for culturing anaerobic bacteria.
Behbehani, M J; Jordan, H V; Santoro, D L
1982-01-01
A simple and convenient method for culturing anaerobic bacteria is described. Cultures can be grown in commercially available flasks normally used for preparation of sterile external solutions. A special disposable rubber flask closure maintains anaerobic conditions in the flask after autoclaving. Growth of a variety of anaerobic oral bacteria was comparable to that obtained after anaerobic incubation of broth cultures in Brewer Anaerobic Jars.
A Simple Method for Determination of Critical Swimming Velocity in Swimming Flume
高橋, 繁浩; 若吉, 浩二; Shigehiro, TAKAHASHI; Kohji, WAKAYOSHI; 中京大学; 奈良教育大学教育学部
2001-01-01
The purpose of this study was to investigate a simple method for determination of critical swimming velocity (Vcri). Vcri is defined by Wakayoshi et al. (1992) as the swimming speed which could theoretically be maintained forever without exhaustion, and is expressed as the slope of a regression line between swimming distance (D) and swimming time (T) obtained at various swimming speeds. To determine Vcri, 20 well-trained swimmers were measured at several swimming speeds ranging from 1.25 m/se...
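The slope definition of Vcri lends itself to a few lines of code. The distance-time pairs below are invented for illustration, not measurements from the study.

```python
# Hypothetical swim-to-exhaustion results: distance (m) and time (s)
# recorded at several constant swimming speeds
D = [200, 400, 800, 1500]
T = [130, 280, 590, 1125]

# Critical velocity = slope of the least-squares line of D against T
n = len(D)
mean_T, mean_D = sum(T) / n, sum(D) / n
slope = sum((t - mean_T) * (d - mean_D) for t, d in zip(T, D)) / \
        sum((t - mean_T) ** 2 for t in T)
v_cri = slope  # m/s
```

The slope has units of m/s, so Vcri reads directly as the theoretically sustainable swimming speed.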
A simple three dimensional wide-angle beam propagation method
Ma, Changbao; van Keuren, Edward
2006-05-01
The development of three dimensional (3-D) waveguide structures for chip scale planar lightwave circuits (PLCs) is hampered by the lack of effective 3-D wide-angle (WA) beam propagation methods (BPMs). We present a simple 3-D wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme along with a new 3-D wave equation splitting method. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation and comparing them with analytical solutions.
A simple method for multiday imaging of slice cultures.
Seidl, Armin H; Rubel, Edwin W
2010-01-01
The organotypic slice culture (Stoppini et al. A simple method for organotypic cultures of nervous tissue. 1991;37:173-182) has become the method of choice to answer a variety of questions in neuroscience. For many experiments, however, it would be beneficial to image or manipulate a slice culture repeatedly, for example, over the course of many days. We prepared organotypic slice cultures of the auditory brainstem of P3 and P4 mice and kept them in vitro for up to 4 weeks. Single cells in the auditory brainstem were transfected with plasmids expressing fluorescent proteins by way of electroporation (Haas et al. Single-cell electroporation for gene transfer in vivo. 2001;29:583-591). The culture was then placed in a chamber perfused with oxygenated ACSF and the labeled cell imaged with an inverted wide-field microscope repeatedly for multiple days, recording several time-points per day, before returning the slice to the incubator. We describe a simple method to image a slice culture preparation during the course of multiple days and over many continuous hours, without noticeable damage to the tissue or photobleaching. Our method uses a simple, inexpensive custom-built insulator constructed around the microscope to maintain controlled temperature and uses a perfusion chamber as used for in vitro slice recordings. (c) 2009 Wiley-Liss, Inc.
A simple method for DNA isolation from Xanthomonas spp.
Directory of Open Access Journals (Sweden)
Gomes Luiz Humberto
2000-01-01
Full Text Available A simple DNA isolation method was developed with routine chemicals that yields high-quality, intact preparations when compared to some of the best-known protocols. The method described does not require the use of lysing enzymes or a water bath, and the DNA was obtained within 40 minutes. The amount of nucleic acid extracted (measured in terms of absorbance at 260 nm) from strains of Xanthomonas spp., Pseudomonas spp. and Erwinia spp. was two to five times higher than that of the most commonly used method.
A simple statistical method for catch comparison studies
DEFF Research Database (Denmark)
Holst, René; Revill, Andrew
2009-01-01
For analysing catch comparison data, we propose a simple method based on Generalised Linear Mixed Models (GLMM) and use polynomial approximations to fit the proportions caught in the test codend. The method provides comparisons of fish catch at length by the two gears through a continuous curve with a realistic confidence band. We demonstrate the versatility of this method on field data obtained from the first known testing in European waters of the Rhode Island (USA) 'Eliminator' trawl. These data are interesting as they include a range of species with different selective patterns. Crown Copyright (C...
Simple statistical methods for software engineering data and patterns
Pandian, C Ravindranath
2015-01-01
Although there are countless books on statistics, few are dedicated to the application of statistical methods to software engineering. Simple Statistical Methods for Software Engineering: Data and Patterns fills that void. Instead of delving into overly complex statistics, the book details simpler solutions that are just as effective and connect with the intuition of problem solvers. Sharing valuable insights into software engineering problems and solutions, the book not only explains the required statistical methods, but also provides many examples, review questions, and case studies that prov
Finding-equal regression method and its application in prediction of U resources
International Nuclear Information System (INIS)
Cao Huimo
1995-03-01
The commonly adopted deposit-model method in mineral resource prediction has two main parts: one is the model data that express the geological mineralization law for a deposit; the other is a statistical prediction method that accords with the character of those data, namely an appropriate regression method. This kind of regression method may be called finding-equal regression, which consists of linear regression combined with a distribution finding-equal method. Because the distribution finding-equal method is a data pretreatment that satisfies an advanced mathematical precondition for linear regression, namely the equal-distribution theory, this kind of pretreatment is practical to realize. Therefore finding-equal regression can not only overcome the nonlinear limitations that commonly occur in traditional linear regression or other regressions and often leave them without a solution, but can also distinguish outliers and eliminate their undue influence, which usually appears when robust regression encounters outliers in the independent variables. Thus this new finding-equal regression holds the strongest position among all kinds of regression methods. Finally, two good examples of quantitative prediction of U resources are provided
Regression analysis by example
Chatterjee, Samprit
2012-01-01
Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." -Journal of the American Statistical Association. Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded
Multi-step polynomial regression method to model and forecast malaria incidence.
Directory of Open Access Journals (Sweden)
Chandrajit Chatterjee
Full Text Available Malaria is one of the most severe problems faced by the world even today. Understanding the causative factors, such as age, sex, social factors and environmental variability, as well as the underlying transmission dynamics of the disease, is important for epidemiological research on malaria and its eradication. Thus, development of a suitable modeling approach and methodology, based on the available data on the incidence of the disease and other related factors, is of utmost importance. In this study, we developed a simple non-linear regression methodology for modeling and forecasting malaria incidence in Chennai city, India, and predicted future disease incidence with a high confidence level. We considered three types of data to develop the regression methodology: a longer time series of Slide Positivity Rates (SPR) of malaria; a shorter time series (deaths due to Plasmodium vivax) of one year; and spatial data (zonal distribution of P. vivax deaths) for the city, along with the climatic factors, population and previous incidence of the disease. We performed variable selection by a simple correlation study, identified the initial relationship between variables through non-linear curve fitting, and used multi-step methods for induction of variables in the non-linear regression analysis, along with Gauss-Markov models and ANOVA for testing the prediction, validity and construction of the confidence intervals. The results demonstrate the applicability of our method to different types of data and the autoregressive nature of the forecasting, and show high prediction power for both SPR and P. vivax deaths, where the one-lag SPR values play an influential role and prove useful for better prediction. Different climatic factors are identified as playing a crucial role in shaping the disease curve. Further, disease incidence at the zonal level and the effect of causative factors on different zonal clusters indicate the pattern of malaria prevalence in the city.
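The one-lag autoregressive polynomial idea can be sketched on synthetic data. The generating coefficients, noise level, and series length below are invented for illustration; this is not the Chennai model itself.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical monthly slide positivity rate with a one-lag dependence
n = 120
spr = np.empty(n)
spr[0] = 5.0
for t in range(1, n):
    spr[t] = 1.0 + 0.8 * spr[t - 1] + 0.005 * spr[t - 1] ** 2 \
             + rng.normal(0, 0.3)

# One-lag quadratic regression: spr_t ~ 1 + spr_{t-1} + spr_{t-1}^2
x, y = spr[:-1], spr[1:]
A = np.column_stack([np.ones(n - 1), x, x ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Autoregressive forecast for the next month from the last observation
forecast = coef @ [1.0, spr[-1], spr[-1] ** 2]
```

Because the forecast feeds on the previous observation, multi-step prediction simply iterates this one-lag step, which mirrors the autoregressive forecasting described above.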
Logistic Regression and Path Analysis Method to Analyze Factors influencing Students’ Achievement
Noeryanti, N.; Suryowati, K.; Setyawan, Y.; Aulia, R. R.
2018-04-01
Students' academic achievement cannot be separated from the influence of two kinds of factors, namely internal and external factors. The internal factors of the student consist of intelligence (X1), health (X2), interest (X3), and motivation (X4). The external factors consist of family environment (X5), school environment (X6), and society environment (X7). The objects of this research are eighth-grade students of the 2016/2017 school year at SMPN 1 Jiwan Madiun, sampled by using simple random sampling. Primary data were obtained by distributing questionnaires. The method used in this study is binary logistic regression analysis, which aims to identify the internal and external factors that affect student achievement and their trends. Path analysis was used to determine the factors that influence student achievement directly, indirectly or totally. Based on the results of the binary logistic regression, the variables that affect student achievement are interest and motivation. Based on the results obtained by path analysis, the factors that have a direct impact on student achievement are students' interest (59%) and students' motivation (27%), while the factors that have indirect influences on student achievement are family environment (97%) and school environment (37%).
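A bare-bones version of the binary logistic model can be sketched as follows. The data are synthetic, and the coefficients 0.6 and 0.9 are invented, not the study's estimates; maximum likelihood is approximated here by plain gradient ascent rather than the usual iteratively reweighted least squares.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
# Synthetic standardized predictors standing in for interest and motivation
interest = rng.normal(0, 1, n)
motivation = rng.normal(0, 1, n)
# Assumed true model: log-odds of passing as a linear combination
logit = 0.6 * interest + 0.9 * motivation - 0.2
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # binary achievement outcome

# Fit by gradient ascent on the mean log-likelihood
X = np.column_stack([np.ones(n), interest, motivation])
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))          # predicted probabilities
    w += 0.1 * X.T @ (y - p) / n          # score function step
```

The fitted weights `w[1]` and `w[2]` play the role of the interest and motivation effects that the study reports as significant.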
A simple method for affinity purification of radiolabeled monoclonal antibodies
Energy Technology Data Exchange (ETDEWEB)
Juweid, M; Sato, J; Paik, C; Onay-Basaran, S; Weinstein, J N; Neumann, R D [National Cancer Inst., Bethesda, MD (United States)
1993-04-01
A simple method is described for affinity purification of radiolabeled antibodies using glutaraldehyde-fixed tumor target cells. The cell-bound antibody fraction is removed from the cells by an acid wash and then immediately subjected to buffer-exchange chromatography. The method was applied to the D3 murine monoclonal antibody which binds to a 290 kDa antigen on the surface of Line 10 guinea pig carcinoma cells. No alteration in the molecular size profile was detected after acid washing. Purification resulted in a significant increase in immunoreactivity, by an average of 14 ± 47% (SD; range 4-30%). (author).
Process control and optimization with simple interval calculation method
DEFF Research Database (Denmark)
Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar
2006-01-01
Methods of process control and optimization are presented and illustrated with a real world example. The optimization methods are based on the PLS block modeling as well as on the simple interval calculation (SIC) methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC), as it also employs the historical process data.
Directory of Open Access Journals (Sweden)
Kowal Robert
2016-12-01
A simple linear regression model is one of the pillars of classic econometrics, and multiple areas of research function within its scope. One of the model's fundamental questions concerns proving the efficiency of the most commonly used OLS estimators and examining their properties. The literature offers certain solutions in this regard, methodically borrowed from the multiple regression model or from a boundary partial model; not everything, however, is complete and consistent. In the paper a completely new scheme is proposed, based on applying the Cauchy-Schwarz inequality to a constraint aggregated from appropriately calibrated secondary unbiasedness constraints; choosing the appropriate calibrator for each variable then leads directly to showing this property. The choice of such a calibrator is a separate matter. These deliberations, on account of the volume and kinds of the calibration, were divided into a few parts. In this one, the efficiency of OLS estimators is proven in a mixed scheme of calibration by averages, that is, a preliminary scheme within the most basic frames of the proposed methodology. Within these frames the outlines and general premises underlying more distant generalizations are created.
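The efficiency claim can be made concrete with the standard Gauss-Markov argument for the slope in simple linear regression, which is exactly where the Cauchy-Schwarz inequality enters; this is textbook background, not the paper's new calibration scheme:

```latex
% For y_i = \beta_0 + \beta_1 x_i + \varepsilon_i with uncorrelated errors of
% variance \sigma^2, a linear unbiased estimator \tilde\beta_1 = \sum_i c_i y_i
% must satisfy \sum_i c_i = 0 and \sum_i c_i x_i = 1, hence
% \sum_i c_i (x_i - \bar x) = 1. Cauchy-Schwarz then gives
\[
  1 = \Big(\sum_i c_i (x_i - \bar x)\Big)^{2}
    \le \sum_i c_i^{2}\,\sum_i (x_i - \bar x)^{2},
\]
% so the variance of any such estimator is bounded below by that of OLS:
\[
  \operatorname{Var}(\tilde\beta_1) = \sigma^{2} \sum_i c_i^{2}
    \ge \frac{\sigma^{2}}{\sum_i (x_i - \bar x)^{2}}
    = \operatorname{Var}\big(\hat\beta_1^{\mathrm{OLS}}\big),
\]
% with equality exactly when c_i \propto (x_i - \bar x), i.e. the OLS weights.
```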
A Simple and Automatic Method for Locating Surgical Guide Hole
Li, Xun; Chen, Ming; Tang, Kai
2017-12-01
Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method of automatically locating the surgical guide hole, which can reduce the reliance on operator experience and improve the design efficiency and quality of surgical guides. Few publications can be found on this topic, and this paper proposes a novel and simple method to solve the problem. A local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system well represents dental anatomical features, and the center axis of the objective tooth (coinciding with the corresponding guide hole axis) can be quickly evaluated in this coordinate system, completing the location of the guide hole. The proposed method has been verified against two benchmarks: manual operation by one skilled doctor with over 15 years of experience (used in most hospitals) and the automatic approach of the popular commercial package Simplant (used in a few hospitals). Both the benchmarks and the proposed method are analyzed for their stress distribution during chewing and biting. The stress distribution is visually shown and plotted as a graph. The results show that the proposed method gives a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.
A simple transformation independent method for outlier definition.
Johansen, Martin Berg; Christensen, Peter Astrup
2018-04-10
Definition and elimination of outliers is a key element for medical laboratories establishing or verifying reference intervals (RIs), especially as inclusion of just a few outlying observations may seriously affect the determination of the reference limits. Many methods have been developed for the definition of outliers. Several of these methods assume a normal distribution, and data often require transformation before outlier elimination. We have developed a non-parametric, transformation-independent outlier definition. The new method relies on drawing reproducible histograms, using defined bin sizes above and below the median. The method is compared to the method recommended by CLSI/IFCC, which uses Box-Cox transformation (BCT) and Tukey's fences for outlier definition. The comparison is done on eight simulated distributions and an indirect clinical dataset. The comparison on simulated distributions shows that, without outliers added, the recommended method in general defines fewer outliers. However, when outliers are added on one side, the proposed method often produces better results. With outliers on both sides the methods are equally good. Furthermore, it is found that the presence of outliers affects the BCT, and subsequently the limits determined by the currently recommended method; this is especially seen in skewed distributions. The proposed outlier definition reproduced current RI limits on clinical data containing outliers. We find our simple transformation-independent outlier detection method to be as good as or better than the currently recommended methods.
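The comparison baseline, Tukey's fences, is easy to state in code. A minimal sketch (assuming linear-interpolation quartiles; this is the baseline only, not the paper's new histogram-based method):

```python
# Tukey's fences: observations outside [Q1 - k*IQR, Q3 + k*IQR] are flagged
# as outliers. k = 1.5 is the conventional choice.
def quartiles(data):
    """Lower and upper quartile via linear interpolation on sorted data."""
    s = sorted(data)
    def q(p):
        i = p * (len(s) - 1)
        lo = int(i)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (i - lo) * (s[hi] - s[lo])
    return q(0.25), q(0.75)

def tukey_fences(data, k=1.5):
    """Return the observations lying outside the fences."""
    q1, q3 = quartiles(data)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lo or x > hi]

# Invented example: one clearly aberrant measurement among plausible values.
values = [4.1, 4.3, 4.4, 4.6, 4.8, 5.0, 5.1, 5.2, 12.0]
flagged = tukey_fences(values)  # flags 12.0
```

Note that, as the abstract points out, applying such fences to skewed laboratory data usually presupposes a Box-Cox or similar transformation first, which is exactly the dependency the proposed method removes.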
Simple method to generate and fabricate stochastic porous scaffolds
Energy Technology Data Exchange (ETDEWEB)
Yang, Nan, E-mail: y79nzw@163.com; Gao, Lilan; Zhou, Kuntao
2015-11-01
Considerable effort has been made to generate regular porous structures (RPSs) using function-based methods, although little effort has been made to construct stochastic porous structures (SPSs) using the same methods. In this short communication, we propose a straightforward method for SPS construction that is simple in terms of methodology and the operations used. Using our method, we can obtain an SPS with functionally graded, heterogeneous and interconnected pores and with target pore size and porosity distributions, which is useful for applications in tissue engineering. The resulting SPS models can be directly fabricated using additive manufacturing (AM) techniques. - Highlights: • Random porous structures are constructed based on their regular counterparts. • Functionally graded random pores can be constructed easily. • The scaffolds can be directly fabricated using additive manufacturing techniques.
Simple Screening Methods for Drought and Heat Tolerance in Cowpea
International Nuclear Information System (INIS)
Singh, B. B.
2000-10-01
Success in breeding for drought tolerance has not been as pronounced as for other traits. This is partly due to the lack of simple, cheap and reliable screening methods to select drought tolerant plants/progenies from segregating populations, and partly due to the complexity of the factors involved in drought tolerance. Measuring drought tolerance through physiological parameters is expensive, time consuming and difficult to use for screening large numbers of lines and segregating populations. Since several factors/mechanisms (in shoot and root) operate independently and/or jointly to enable plants to cope with drought stress, drought tolerance appears as a complex trait. However, if these factors/mechanisms can be separated and studied individually, the components leading to drought tolerance will appear less complex and may be easier to manipulate by breeders. We have developed a simple box screening method for shoot drought tolerance in cowpea, which eliminates the effects of roots and permits non-destructive visual identification of shoot dehydration tolerance. We have also developed a 'root-box pin-board' method to study the two-dimensional root architecture of individual plants. Using these methods, we have identified two mechanisms of shoot drought tolerance in cowpea, each controlled by a single dominant gene, as well as major differences in root architecture among cowpea varieties. Combining a deep and dense root system with shoot dehydration tolerance results in highly drought tolerant plants.
Directory of Open Access Journals (Sweden)
Hailun Wang
2017-01-01
Support vector regression is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the parameters of the state vector; thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
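The fused kernel itself is simple to write down. A minimal sketch (the fusion coefficient `lam` is fixed here, whereas the paper estimates it adaptively with a cubature Kalman filter; all names and parameter values are illustrative):

```python
import math

# Mixed kernel: a convex combination of an RBF kernel and a polynomial
# kernel. Any nonnegative combination of valid kernels is itself a valid
# (positive semi-definite) kernel.
def rbf(x, z, gamma=0.5):
    """Gaussian RBF kernel exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def poly(x, z, degree=2, c=1.0):
    """Polynomial kernel (x.z + c)^degree."""
    return (sum(a * b for a, b in zip(x, z)) + c) ** degree

def mixed_kernel(x, z, lam=0.7, gamma=0.5, degree=2, c=1.0):
    """K = lam * K_rbf + (1 - lam) * K_poly, with lam in [0, 1]."""
    return lam * rbf(x, z, gamma) + (1 - lam) * poly(x, z, degree, c)

x = (1.0, 2.0)
k_self = mixed_kernel(x, x)  # lam * 1 + (1 - lam) * poly(x, x)
```

In the paper's scheme, `lam`, `gamma`, `degree`/`c` and the SVR regularization parameters would all sit in the state vector that the cubature Kalman filter updates.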
A simple method for estimating the entropy of neural activity
International Nuclear Information System (INIS)
Berry II, Michael J; Tkačik, Gašper; Dubuis, Julien; Marre, Olivier; Da Silveira, Rava Azeredo
2013-01-01
The number of possible activity patterns in a population of neurons grows exponentially with the size of the population. Typical experiments explore only a tiny fraction of the large space of possible activity patterns in the case of populations with more than 10 or 20 neurons. It is thus impossible, in this undersampled regime, to estimate the probabilities with which most of the activity patterns occur. As a result, the corresponding entropy—which is a measure of the computational power of the neural population—cannot be estimated directly. We propose a simple scheme for estimating the entropy in the undersampled regime, which bounds its value from both below and above. The lower bound is the usual ‘naive’ entropy of the experimental frequencies. The upper bound results from a hybrid approximation of the entropy which makes use of the naive estimate, a maximum entropy fit, and a coverage adjustment. We apply our simple scheme to artificial data in order to check its accuracy; we also compare its performance to those of several previously defined entropy estimators. We then apply it to actual measurements of neural activity in populations with up to 100 cells. Finally, we discuss the similarities and differences between the proposed simple estimation scheme and various earlier methods. (paper)
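The lower bound is straightforward to compute. A minimal sketch of the naive plug-in estimator only (the paper's upper bound additionally requires a maximum entropy fit and a coverage adjustment, which are not shown; the data are invented):

```python
import math
from collections import Counter

# Naive plug-in entropy of observed activity-pattern frequencies, in bits.
# In the undersampled regime this systematically underestimates the true
# entropy, which is why it serves as the lower bound.
def naive_entropy(patterns):
    counts = Counter(patterns)
    n = len(patterns)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy recording: binary words from 2 neurons, all 4 patterns equally often,
# so the plug-in entropy is exactly 2 bits.
obs = ["00", "01", "10", "11"] * 25
h_lower = naive_entropy(obs)
```

For a population of N neurons the pattern space has 2^N elements, so for N of 100 (as in the recordings described) the observed frequencies can only ever touch a vanishing fraction of it, and the gap between this lower bound and the hybrid upper bound becomes the quantity of interest.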
Simple method for correct enumeration of Staphylococcus aureus
DEFF Research Database (Denmark)
Haaber, J.; Cohn, M. T.; Petersen, A.
2016-01-01
When grown in liquid culture, the human pathogen Staphylococcus aureus is characterized by its aggregation of single cells into clusters of variable size. Here, we show that aggregation during growth in the laboratory standard medium tryptic soy broth (TSB) is common among clinical and laboratory S. aureus isolates, and that aggregation may introduce significant bias when applying standard enumeration methods to S. aureus growing in laboratory batch cultures. We provide a simple and efficient sonication procedure, which can be applied prior to optical density measurements to give a correct enumeration of cells.
A new and simple gravimetric method for determination of uranium
International Nuclear Information System (INIS)
Saxena, A.K.
1994-01-01
A new and simple gravimetric method for determining uranium has been described. Using a known quantity of uranyl nitrate as the test solution, an alcoholic solution of 2-amino-2-methyl-1,3-propanediol (AMP) was added slowly. A yellow precipitate was obtained, which was filtered through ashless filter paper, washed with alcohol, dried and ignited at 800 °C for 4 h. This gave a black powder as a product, which was shown by X-ray diffraction to be U3O8. The percentage error was found to be in the range -0.09 to +0.89. (author). 8 refs., 1 tab
A simple method suitable to study de novo root organogenesis
Directory of Open Access Journals (Sweden)
Xiaodong eChen
2014-05-01
De novo root organogenesis is the process in which adventitious roots regenerate from detached or wounded plant tissues or organs. In tissue culture, appropriate types and concentrations of plant hormones in the medium are critical for inducing adventitious roots. However, in natural conditions, regeneration from detached organs is likely to rely on endogenous hormones. To investigate the actions of endogenous hormones and the molecular mechanisms guiding de novo root organogenesis, we developed a simple method to imitate natural conditions for adventitious root formation by culturing Arabidopsis thaliana leaf explants on B5 medium without additive hormones. Here we show that the ability of the leaf explants to regenerate roots depends on the age of the leaf and on certain nutrients in the medium. Based on these observations, we provide examples of how this method can be used in different situations, and how it can be optimized. This simple method could be used to investigate the effects of various physiological and molecular changes on the regeneration of adventitious roots. It is also useful for tracing cell lineage during the regeneration process by differential interference contrast observation of β-glucuronidase staining, and by live imaging of proteins labeled with fluorescent tags.
Percutaneous Method of Management of Simple Bone Cyst
Directory of Open Access Journals (Sweden)
O. P. Lakhwani
2013-01-01
Introduction. Simple bone cysts or unicameral bone cysts are benign osteolytic lesions seen in the metadiaphysis of long bones in growing children. Various treatment modalities with variable outcomes have been described in the literature. This case report illustrates the surgical technique of a minimally invasive method of treatment. Case Study. A 14-year-old boy was diagnosed with an active simple bone cyst of the proximal humerus with pathological fracture. The patient was treated by minimally invasive percutaneous curettage with a titanium elastic nail (TENS) and allogenic bone grafting mixed with bone marrow under image intensifier guidance. Results. The pathological fracture healed and the allograft filling the cavity was well taken up. The patient achieved a full range of motion with a successful outcome. Conclusion. The minimally invasive percutaneous method using an elastic intramedullary nail gives the benefit of curettage, cyst decompression and stabilization of the fracture. Allogenic bone graft fills the cavity, with healing of the lesion by osteointegration. This method may be considered with the advantage of a minimally invasive technique in the treatment of benign cystic lesions of bone; the level of evidence was therapeutic level V.
A Simple Combinatorial Codon Mutagenesis Method for Targeted Protein Engineering.
Belsare, Ketaki D; Andorfer, Mary C; Cardenas, Frida S; Chael, Julia R; Park, Hyun June; Lewis, Jared C
2017-03-17
Directed evolution is a powerful tool for optimizing enzymes, and mutagenesis methods that improve enzyme library quality can significantly expedite the evolution process. Here, we report a simple method for targeted combinatorial codon mutagenesis (CCM). To demonstrate the utility of this method for protein engineering, CCM libraries were constructed for cytochrome P450 BM3, Pfu prolyl oligopeptidase, and the flavin-dependent halogenase RebH; 10-26 sites were targeted for codon mutagenesis in each of these enzymes, and libraries with a tunable average of 1-7 codon mutations per gene were generated. Each of these libraries provided improved enzymes for their respective transformations, which highlights the generality, simplicity, and tunability of CCM for targeted protein engineering.
A simple method of screening for metabolic bone disease
International Nuclear Information System (INIS)
Broughton, R.B.K.; Evans, W.D.
1982-01-01
The purpose of this investigation was to find a simple method, to be used as an adjunct to the conventional bone scintigram, that could differentiate among decreased bone metabolism or mass (i.e., osteoporosis), normal bone, and the group of conditions of increased bone metabolism or mass, namely osteomalacia, renal osteodystrophy, hyperparathyroidism and Paget's disease. Fogelman's method, using the bone to soft tissue ratios from region-of-interest analysis at 4 hours post injection, was adopted. Initial experience in measuring a value for the count rate density in lumbar vertebrae at 1 hour post injection during conventional bone scintigraphy appears to give a clear indication of the overall rate of bone metabolism. The advantage over whole body retention methods is that the scan performed at the end of the metabolic study will reveal localized bone disease that might otherwise not be anticipated.
The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard
This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression function. The first paper compares parametric and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric regression methods are a useful alternative in econometric production analysis.
A simple and accurate onset detection method for a measured bell-shaped speed profile
Directory of Open Access Journals (Sweden)
Lior Botzer
2009-06-01
Motor control neuroscientists measure limb trajectories and extract the onset of the movement for a variety of purposes. Such trajectories are often aligned relative to the onset of individual movement before the features of that movement are extracted and their properties are inspected. Onset detection is performed either manually or automatically, typically by selecting a velocity threshold. Here, we present a simple onset detection algorithm that is more accurate than the conventional velocity threshold technique. The proposed method is based on a simple regression and follows the minimum acceleration with constraints model, in which the initial phase of the bell-shaped movement is modeled by a cubic power of the time. We demonstrate the performance of the suggested method and compare it to the velocity threshold technique and to manual onset detection by a group of motor control experts. The database for this comparison consists of simulated minimum jerk trajectories and recorded reaching movements.
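The regression idea can be sketched as follows. This is an illustration under a stated assumption rather than the authors' exact algorithm: if position grows as a cubic power of time after onset, the speed profile rises as v(t) = c(t - t0)^2, so sqrt(v) is linear in t and the onset t0 is the zero crossing of a fitted line.

```python
import math

# Sketch: estimate movement onset t0 by regressing sqrt(speed) on time
# over the rising phase, then taking the zero crossing of the fitted line.
# Assumes v(t) = c*(t - t0)^2 early in the movement (cubic position).
def onset_by_regression(t, v):
    """Fit sqrt(v) = a + b*t on strictly positive samples; return -a/b."""
    pts = [(ti, math.sqrt(vi)) for ti, vi in zip(t, v) if vi > 0]
    n = len(pts)
    mt = sum(p[0] for p in pts) / n
    ms = sum(p[1] for p in pts) / n
    b = (sum((p[0] - mt) * (p[1] - ms) for p in pts)
         / sum((p[0] - mt) ** 2 for p in pts))
    a = ms - b * mt
    return -a / b  # time at which the fitted sqrt-speed line crosses zero

# Simulated rising phase: true onset t0 = 0.20 s, sampled at 100 Hz.
t0_true, c = 0.20, 3.0
times = [i / 100 for i in range(21, 41)]            # 0.21 .. 0.40 s
speeds = [c * (ti - t0_true) ** 2 for ti in times]
t0_est = onset_by_regression(times, speeds)          # ~0.20
```

Unlike a velocity threshold, this extrapolates back to where the speed actually leaves zero, so it is not biased late by the threshold level; on noisy recordings one would restrict the fit to the early rising samples.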
A simple headspace equilibration method for measuring dissolved methane
Magen, C; Lapham, L.L.; Pohlman, John W.; Marshall, Kristin N.; Bosman, S.; Casso, Michael; Chanton, J.P.
2014-01-01
Dissolved methane concentrations in the ocean are close to equilibrium with the atmosphere. Because methane is only sparingly soluble in seawater, measuring it without contamination is challenging for samples collected and processed in the presence of air. Several methods for analyzing dissolved methane are described in the literature, yet none has conducted a thorough assessment of the method yield, contamination issues during collection, transport and storage, and the effect of temperature changes and preservative. Previous extraction methods transfer methane from water to gas by either a "sparge and trap" or a "headspace equilibration" technique. The gas is then analyzed for methane by gas chromatography. Here, we revisit the headspace equilibration technique and describe a simple, inexpensive, and reliable method to measure methane in fresh and seawater, regardless of concentration. Within the range of concentrations typically found in surface seawaters (2-1000 nmol L-1), the yield of the method nears 100% of what is expected from solubility calculation following the addition of known amount of methane. In addition to being sensitive (detection limit of 0.1 ppmv, or 0.74 nmol L-1), this method requires less than 10 min per sample, and does not use highly toxic chemicals. It can be conducted with minimum materials and does not require the use of a gas chromatograph at the collection site. It can therefore be used in various remote working environments and conditions.
A simple encapsulation method for organic optoelectronic devices
International Nuclear Information System (INIS)
Sun Qian-Qian; An Qiao-Shi; Zhang Fu-Jun
2014-01-01
The performance of organic optoelectronic devices, such as organic light emitting diodes and polymer solar cells, has rapidly improved in the past decade, and the stability of organic optoelectronic devices has become a key problem for further development. In this paper, we report a simple encapsulation method for organic optoelectronic devices using a parafilm, based on ternary polymer solar cells (PSCs). The power conversion efficiencies (PCEs) of PSCs with and without encapsulation decrease from 2.93% to 2.17% and from 2.87% to 1.16%, respectively, after 168 hours of degradation under an ambient environment. The stability of PSCs can thus be enhanced by encapsulation with a parafilm. The encapsulation method is a competitive choice for organic optoelectronic devices, owing to its low cost and compatibility with flexible devices. (atomic and molecular physics)
A simple method for percutaneous resection of osteoid osteoma
International Nuclear Information System (INIS)
Kamrani, Reza S.; Kiani, K.; Mazlouman, Shahriar J.
2007-01-01
To introduce a method that can be performed with the minimal equipment available to most orthopedic surgeons and precludes extensive anesthetic and ablative requirements. A percutaneous lead tunnel was first established in the cortex next to the nidus under computerized tomography guidance with local anesthesia; the nidus was then curetted in the operating room through the lead tunnel. The study was performed in Shariati Hospital in Tehran, Iran, from September 2002 to December 2005. Nineteen patients were treated with this method, with a 94.7% cure rate. The diagnosis was histologically confirmed in 16 cases (84.2%). Failure occurred in one patient. The patients had a mean follow-up of 13.5 months with no recurrence of symptoms, and a mean hospitalization time of 1.6 days. This technique is simple, minimally invasive and effective. It needs no special equipment and provides the material for tissue diagnosis. (author)
Simple Stacking Methods for Silicon Micro Fuel Cells
Directory of Open Access Journals (Sweden)
Gianmario Scotti
2014-08-01
We present two simple methods, with parallel and serial gas flows, for the stacking of microfabricated silicon fuel cells with integrated current collectors, flow fields and gas diffusion layers. The gas diffusion layer is implemented using black silicon. In the two stacking methods proposed in this work, the fluidic apertures and gas flow topology are rotationally symmetric and enable us to stack fuel cells without an increase in the number of electrical or fluidic ports or interconnects. Thanks to this simplicity and the structural compactness of each cell, the obtained stacks are very thin (~1.6 mm for a two-cell stack). We have fabricated two-cell stacks with two different gas flow topologies and obtained an open-circuit voltage (OCV) of 1.6 V and a power density of 63 mW·cm−2, proving the viability of the design.
A simple scintigraphic method for continuous monitoring of gastric emptying
Energy Technology Data Exchange (ETDEWEB)
Lipp, R.W.; Hammer, H.F.; Schnedl, W.; Dobnig, H.; Passath, A.; Leb, G.; Krejs, G.J. (Graz Univ. (Austria). Div. of Nuclear Medicine and Endocrinology)
1993-03-01
A new and simple scintigraphic method for the measurement of gastric emptying was developed and validated. The test meal consists of 200 g potato mash mixed with 0.5 g Dowex 2X8 particles (mesh 20-50) labelled with 37 MBq (1 mCi) technetium-99m. After ingestion of the meal, sequential dynamic 15-s anteroposterior exposures in the supine position are obtained for 90 min. A second recording sequence of 20 min is added after a 30-min interval. The results can be displayed as immediate cine-replay, as time-activity diagrams and/or as activity retention values. Complicated mathematical fittings are not necessary. The method lends itself equally to the testing of in- and outpatients. (orig.).
A simple and inexpensive method for genomic restriction mapping analysis
International Nuclear Information System (INIS)
Huang, C.H.; Lam, V.M.S.; Tam, J.W.O.
1988-01-01
The Southern blotting procedure for the transfer of DNA fragments from agarose gels to nitrocellulose membranes has revolutionized nucleic acid detection methods, and it forms the cornerstone of research in molecular biology. Basically, the method involves the denaturation of DNA fragments that have been separated on an agarose gel, the immobilization of the fragments by transfer to a nitrocellulose membrane, and the identification of the fragments of interest through hybridization to ³²P-labeled probes and autoradiography. While the method is sensitive and applicable to both genomic and cloned DNA, it suffers from the disadvantages of being time consuming and expensive, and fragments of greater than 15 kb are difficult to transfer. Moreover, although theoretically the nitrocellulose membrane can be washed and hybridized repeatedly using different probes, in practice the membrane becomes brittle and difficult to handle after a few cycles. A direct hybridization method for pure DNA clones was developed in 1975 but has not been widely exploited. The authors report here a modification of that procedure as applied to genomic DNA. The method is simple, rapid, and inexpensive, and it does not involve transfer to nitrocellulose membranes.
Aniela Balacescu; Marian Zaharia
2011-01-01
This paper aims to examine the causal relationship between GDP and final consumption. The authors used a linear regression model in which GDP is considered the outcome variable and final consumption the explanatory variable. In drafting the article we used the Excel software application, a modern tool for computing and statistical data analysis.
Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel
Directory of Open Access Journals (Sweden)
Roland Pfister
2013-10-01
Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.
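A Python equivalent of the per-participant slope extraction (the article itself covers SPSS, R, and Excel; this sketch, its data, and its function names are ours):

```python
# Compute one OLS slope per participant, e.g. before running a one-sample
# test on the individual slopes across participants.
def slope(x, y):
    """OLS slope of y on x for one participant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def individual_slopes(data):
    """data maps participant id -> (x values, y values)."""
    return {pid: slope(x, y) for pid, (x, y) in data.items()}

# Invented data: reaction time (y) against trial difficulty (x), per person.
data = {
    "p01": ([1, 2, 3, 4], [300, 320, 340, 360]),   # slope 20
    "p02": ([1, 2, 3, 4], [400, 395, 390, 385]),   # slope -5
}
slopes = individual_slopes(data)
```

This is the same two-step logic the tutorial automates in each package: fit a regression within each participant, then treat the collected coefficients as the data for the group-level analysis.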
A simple and efficient method to enhance audiovisual binding tendencies
Directory of Open Access Journals (Sweden)
Brian Odegaard
2017-04-01
Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. Thus, while this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood, and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain's tendency to bind in spatial perception is plastic, (2) it can change following brief exposure to simple audiovisual stimuli, and (3) exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies.
A simple gel electrophoresis method for separating polyhedral gold nanoparticles
Kim, Suhee; Lee, Hye Jin
2015-07-01
In this paper, a simple approach to separate differently shaped and sized polyhedral gold nanoparticles (NPs) within colloidal solutions via gel electrophoresis is described. Gel running parameters for efficiently separating gold NPs, including gel composition, added surfactant types and applied voltage, were investigated. The plasmonic properties and physical structure of the separated NPs extracted from the gel matrix were then investigated using UV-vis spectrophotometry and transmission electron microscopy (TEM), respectively. Data analysis revealed that gel electrophoresis conditions of a 1.5% agarose gel with 0.1% sodium dodecyl sulfate (SDS) surfactant under an applied voltage of 100 V resulted in the selective isolation of ~50 nm polyhedral-shaped gold nanoparticles. Further efforts are underway to apply the method to purify biomolecule-conjugated polyhedral Au NPs that can be readily used for NP-enhanced biosensing platforms.
A simple method for rapidly processing HEU from weapons returns
Energy Technology Data Exchange (ETDEWEB)
McLean, W. II; Miller, P.E.
1994-01-01
A method based on the use of a high temperature fluidized bed for rapidly oxidizing, homogenizing and down-blending Highly Enriched Uranium (HEU) from dismantled nuclear weapons is presented. This technology directly addresses many of the most important issues that inhibit progress in international commerce in HEU; viz., transaction verification, materials accountability, transportation and environmental safety. The equipment used to carry out the oxidation and blending is simple, inexpensive and highly portable. Mobile facilities to be used for point-of-sale blending and analysis of the product material are presented along with a phased implementation plan that addresses the conversion of HEU derived from domestic weapons and related waste streams as well as material from possible foreign sources such as South Africa or the former Soviet Union.
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
International Nuclear Information System (INIS)
Tsushima, Motoo; Fujii, Shigeki; Yutani, Chikao; Yamamoto, Akira; Naitoh, Hiroaki.
1990-01-01
We evaluated the wall thickening and stenosis rate (ASI), the calcification rate (ACI), and the wall thickening and calcification stenosis rate (SCI) of the lower abdominal aorta, calculated by the 12-sector method from plain or enhanced computed tomography. The intra-observer variation of the calculation of ASI was 5.7% and that of ACI was 2.4%. In 9 patients who underwent an autopsy examination, ACI was significantly correlated with the ratio of the calcification area to the whole objective area of the abdominal aorta (r=0.856, p<0.01). However, there were no correlations between ASI and the surface involvement or the atherosclerotic index obtained by the point-counting method of the autopsy materials. In the analysis of 40 patients with atherosclerotic vascular diseases, ASI and ACI were also highly correlated with the percentage volume of the arterial wall in relation to the whole volume of the observed artery (r=0.852, p<0.0001) and with the percentage calcification volume (r=0.913, p<0.0001) calculated by the computed method, respectively. The percentage of atherosclerotic vascular diseases increased in the group with both high ASI (over 10%) and high ACI (over 20%). We used SCI as a reliable index when considering the progression and regression of atherosclerosis. Among 21 hypercholesterolemia patients, consisting of 15 with familial hypercholesterolemia (FH) and 6 non-FH patients, the change of SCI (d-SCI) was significantly correlated with the change of total cholesterol concentration (d-TC) after treatment (r=0.466, p<0.05), and the change of right Achilles tendon thickening (d-ATT) was also correlated with d-TC (r=0.634, p<0.005). However, no correlation between d-SCI and d-ATT was observed. In conclusion, CT indices of atherosclerosis were useful as a noninvasive quantitative diagnostic method, and we were able to use them to assess the progression and regression of atherosclerosis. (author)
New simple method for fast and accurate measurement of volumes
International Nuclear Information System (INIS)
Frattolillo, Antonio
2006-01-01
A new simple method is presented, which allows us to measure in just a few minutes, but with reasonable accuracy (within 1%), the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. To this purpose an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. The variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, carefully measured by means of a high-precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, contracting the variable volume until the pressure exactly recovers its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively gets rid of the problem of dead spaces. The proposed method does not actually require the test gas to be rigorously held at a constant temperature, a huge simplification compared with the complex arrangements commonly used in metrology (the gas expansion method), which can grant extremely accurate measurements but requires rather expensive equipment and time-consuming procedures, and is therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready again immediately at the end of the process.
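The pressure-restoration cycle lends itself to a quick sanity check with the ideal gas law. Below is a minimal sketch with my own illustrative numbers, not the paper's apparatus: under an isothermal, ideal-gas assumption, the contraction of the variable volume needed to restore the initial pressure equals the unknown volume exactly.

```python
# Sketch of the pressure-restoration principle (isothermal, ideal gas).
# All numbers are illustrative; they are not taken from the paper.
p0 = 100.0        # initial pressure in the variable volume (kPa)
v_var = 2.0       # initial setting of the variable volume (L)
v_unknown = 0.5   # the volume we pretend not to know (L)

# Step 1: opening the valve expands the gas into the evacuated unknown
# volume; with n*R*T constant, p*V is conserved (Boyle's law).
p_drop = p0 * v_var / (v_var + v_unknown)

# Step 2: the feedback loop contracts the variable volume by delta_v
# until the pressure returns to p0:
#   p0 * v_var = p0 * (v_var - delta_v + v_unknown)  =>  delta_v = v_unknown
delta_v = v_var + v_unknown - p_drop * (v_var + v_unknown) / p0

print(delta_v)  # 0.5, i.e. exactly the unknown volume
```

Note how the result is independent of the test-gas temperature, as long as it stays constant over the cycle; the paper's contribution is precisely to relax that requirement further.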
Sensitive and simple method for measuring wire tensions
International Nuclear Information System (INIS)
Atac, M.; Mishina, M.
1982-08-01
Measuring the tension of wires in drift chambers and multiwire proportional chambers after construction is an important process, because wires sometimes get loose after soldering, crimping or gluing. One needs to sort out wires whose tensions fall below a required minimum value, to prevent electrostatic instabilities. Several methods have been reported on this subject, in which the wires were excited either with a sinusoidal current in a magnetic field or with a sinusoidal voltage electrostatically coupled to the wire, searching for a resonant frequency at which the wires vibrate mechanically. The vibration is then detected visually, optically, or with a magnetic pick-up directly touching the wires. All of these are applicable only to the usual multiwire chamber with open access to the wire plane. They also need fairly large excitation currents to induce a detectable vibration in the wires. Here we report a very simple method that can be used with any type of wire chamber or proportional tube system for measuring wire tension. Only a very small current is required for the wire excitation to obtain a large enough signal, because the method detects the induced emf voltage across a wire. A sine-wave oscillator and a digital voltmeter are sufficient devices, aside from a permanent magnet to provide the magnetic field around the wire. A useful application of this method to a large system is suggested.
A simple micro-photometric method for urinary iodine determination.
Grimm, Gabriele; Lindorfer, Heidelinde; Kieweg, Heidi; Marculescu, Rodrig; Hoffmann, Martha; Gessl, Alois; Sager, Manfred; Bieglmayer, Christian
2011-10-01
Urinary iodide concentration (UIC) is useful for evaluating nutritional iodine status. In clinical settings, UIC helps to exclude blocking of the thyroid gland by excessive endogenous iodine if diagnostic or therapeutic administration of radio-iodine is indicated. Therefore, this study established a simple test for the measurement of UIC. UIC was analyzed in urine samples of 200 patients. Samples were pre-treated at 95°C for 45 min with ammonium persulfate in a thermal cycler, followed by a photometric Sandell-Kolthoff reaction (SK) carried out in microtiter plates. For method comparison, UIC was analyzed in 30 samples by inductively coupled plasma mass spectrometry (ICP-MS) as a reference method. Incubation conditions were optimized with respect to recovery. The photometric test correlated well with the reference method (SK=0.91*ICP-MS+1, r=0.962) and presented a functional sensitivity of 20 μg/L. UIC of patient samples ranged from … The photometric test provides satisfactory results and can be performed with the basic equipment of a clinical laboratory.
A simple method for the measurement of reflective foil emissivity
International Nuclear Information System (INIS)
Ballico, M. J.; Ham, E. W. M. van der
2013-01-01
Reflective metal foil is widely used to reduce radiative heat transfer within the roof space of buildings. Such foils are typically mass-produced by vapor-deposition of a thin metallic coating onto a variety of substrates, ranging from plastic-coated reinforced paper to 'bubble-wrap'. Although the emissivity of such surfaces is almost negligible in the thermal infrared, typically less than 0.03, an insufficiently thick metal coating, or organic contamination of the surface, can significantly increase this value. To ensure that the quality of the installed insulation is satisfactory, Australian building code AS/NZS 4201.5:1994 requires a practical agreed method for measurement of the emissivity, and the standard ASTM-E408 is implied. Unfortunately this standard is not a 'primary method' and requires the use of specified expensive apparatus and calibrated reference materials. At NMIA we have developed a simple primary technique, based on an apparatus to thermally modulate the sample and record the apparent modulation in infra-red radiance with commercially available radiation thermometers. The method achieves an absolute accuracy in the emissivity of approximately 0.004 (k=2). This paper theoretically analyses the equivalence between the thermal emissivity measured in this manner, the effective thermal emissivity in application, and the apparent emissivity measured in accordance with ASTM-E408
Simple method of measuring pulmonary extravascular water using heavy water
Energy Technology Data Exchange (ETDEWEB)
Basset, G; Moreau, F; Scaringella, M; Tistchenko, S; Botter, F; Marsac, J
1975-11-20
The field of application of the multiple-indicator dilution method in human pathology, already used to study pulmonary edema, can be extended to cover the identification and testing of all conditions leading to increased lung water. To be really practical the method must be simple, fast, sensitive, inexpensive and repeatable; the use of non-radioactive tracers is implied. Indocyanine green and heavy water were chosen as the vascular and diffusible indicators, respectively. Original methods have been developed for the treatment and isotopic analysis of blood: mass spectrometric analysis of aqueous blood extracts after deproteinization by zinc sulphate, then rapid distillation of the supernatant under helium; and infrared analysis either of acetone extracts from small blood samples (100 µl) or of the blood itself in a continuous measurement. The infrared technique adopted has been used on rats and on humans in normal and pathological situations. The results show that the method proposed for the determination of pulmonary extravascular water meets the requirements of clinicians while respecting the patients' safety, and could be generalized to other organs.
Gusriani, N.; Firdaniza
2018-03-01
The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least-squares method is nonetheless applied to such data, it produces a model that cannot represent most of the data. A regression method that is robust against outliers is therefore needed. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contain outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
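To see why outliers break the least-squares setup, consider a minimal sketch with made-up numbers (not the phytoplankton data): a single high-leverage outlier is enough to pull an ordinary least-squares fit far away from the trend described by the other ten points.

```python
import numpy as np

def ols_slope(x, y):
    # Ordinary least-squares fit of y = a + b*x; returns the slope b.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Clean data on the line y = 2x + 1.
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0

# The same data with one gross, high-leverage outlier appended.
x_out = np.append(x, 20.0)
y_out = np.append(y, 0.0)

print(ols_slope(x, y))          # 2.0 on the clean data
print(ols_slope(x_out, y_out))  # far below 2.0 with a single outlier
```

Robust estimators such as MCD-based regression are designed to downweight exactly this kind of point; the sketch only shows the failure mode that motivates them.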
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify chromatographic peaks accurately, but reference substances are costly. The relative retention (RR) method has therefore been widely adopted in pharmacopoeias and in the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns, owing to the error between measured and predicted retention times (tR) in some cases. It is therefore useful to develop an alternative, simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: a two-point prediction procedure, a validation procedure by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
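The two-point prediction step can be sketched as a linear mapping anchored by the retention times of the two reference substances on each column. This is a schematic reading of LCTRS with hypothetical retention times, not the paper's procedure in full (which also involves the multi-point regression validation and sequential matching).

```python
def lctrs_predict(t_ref, anchors_ref, anchors_target):
    """Predict a compound's retention time on a target column from its
    retention time on a reference column, assuming a linear relationship
    anchored by two reference substances measured on both columns."""
    r1, r2 = anchors_ref       # the two reference substances, reference column
    g1, g2 = anchors_target    # the same substances, target column
    slope = (g2 - g1) / (r2 - r1)
    return g1 + slope * (t_ref - r1)

# Hypothetical retention times in minutes.
anchors_ref = (5.0, 15.0)
anchors_target = (6.2, 17.8)
print(lctrs_predict(10.0, anchors_ref, anchors_target))  # 12.0
```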
Di Legge, A; Testa, A C; Ameye, L; Van Calster, B; Lissoni, A A; Leone, F P G; Savelli, L; Franchi, D; Czekierdowski, A; Trio, D; Van Holsbeke, C; Ferrazzi, E; Scambia, G; Timmerman, D; Valentin, L
2012-09-01
To estimate the ability to discriminate between benign and malignant adnexal masses of different size using: subjective assessment, two International Ovarian Tumor Analysis (IOTA) logistic regression models (LR1 and LR2), the IOTA simple rules and the risk of malignancy index (RMI). We used a multicenter IOTA database of 2445 patients with at least one adnexal mass, i.e. the database previously used to prospectively validate the diagnostic performance of LR1 and LR2. The masses were categorized into three subgroups (small, medium-sized and large) according to their largest diameter (cutoff values …). Subjective assessment, LR1 and LR2, the IOTA simple rules and the RMI were applied to each of the three groups. Sensitivity, specificity, positive and negative likelihood ratios (LR+, LR-), the diagnostic odds ratio (DOR) and the area under the receiver-operating characteristics curve (AUC) were used to describe diagnostic performance. A moving-window technique was applied to estimate the effect of tumor size as a continuous variable on the AUC. The reference standard was the histological diagnosis of the surgically removed adnexal mass. The frequency of invasive malignancy was 10% in small tumors, 19% in medium-sized tumors and 40% in large tumors; 11% of the large tumors were borderline tumors vs 3% and 4%, respectively, of the small and medium-sized tumors. The type of benign histology also differed among the three subgroups. For all methods, sensitivity with regard to malignancy was lowest in small tumors (56-84%, vs 67-93% in medium-sized tumors and 74-95% in large tumors), while specificity was lowest in large tumors (60-87%, vs 83-95% in medium-sized tumors and 83-96% in small tumors). The DOR and the AUC value were highest in medium-sized tumors, and the AUC was largest in tumors with a largest diameter of 7-11 cm. Tumor size affects the performance of subjective assessment, LR1 and LR2, the IOTA simple rules and the RMI in discriminating correctly between benign and malignant adnexal masses. The likely explanation, at least in part, is
Borodachev, S. M.
2016-06-01
A simple derivation of the recursive least squares (RLS) method equations is given, as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
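A minimal RLS iteration (a standard textbook sketch, not the paper's Kalman filter derivation) looks as follows; with noiseless streaming data the estimate converges to the true coefficients.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least squares step.
    theta: current coefficient estimate; P: inverse correlation matrix;
    x: regressor vector; y: new observation; lam: forgetting factor."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + (x.T @ P @ x).item())      # gain vector
    theta = theta + k.ravel() * (y - (x.T @ theta).item())
    P = (P - k @ x.T @ P) / lam                   # update inverse correlation
    return theta, P

# Recover y = 1.5*x0 - 2.0*x1 from a stream of noiseless observations.
rng = np.random.default_rng(1)
true_coeffs = np.array([1.5, -2.0])
theta, P = np.zeros(2), np.eye(2) * 1e3   # vague prior: large initial P
for _ in range(200):
    x = rng.normal(size=2)
    theta, P = rls_update(theta, P, x, x @ true_coeffs)
print(theta)  # close to [1.5, -2.0]
```

The large initial `P` plays the role of a vague prior on the state, mirroring the Kalman-filter view the abstract describes.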
Simple method for generating adjustable trains of picosecond electron bunches
Directory of Open Access Journals (Sweden)
P. Muggli
2010-05-01
Full Text Available A simple, passive method for producing an adjustable train of picosecond electron bunches is demonstrated. The key component of this method is an electron beam mask consisting of an array of parallel wires that selectively spoils the beam emittance. This mask is positioned in a high magnetic dispersion, low beta-function region of the beam line. The incoming electron beam striking the mask has a time/energy correlation that corresponds to a time/position correlation at the mask location. The mask pattern is transformed into a time pattern or train of bunches when the dispersion is brought back to zero downstream of the mask. Results are presented of a proof-of-principle experiment demonstrating this novel technique that was performed at the Brookhaven National Laboratory Accelerator Test Facility. This technique allows for easy tailoring of the bunch train for a particular application, including varying the bunch width and spacing, and enabling the generation of a trailing witness bunch.
A Simple Method for High-Lift Propeller Conceptual Design
Patterson, Michael; Borer, Nick; German, Brian
2016-01-01
In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.
A simple method for improving predictions of nuclear masses
International Nuclear Information System (INIS)
Yamada, Masami; Tsuchiya, Susumu; Tachibana, Takahiro
1991-01-01
No formula for atomic masses exactly reproduces all nuclides, and none can be expected for the time being. At present the masses of many nuclides are known experimentally with good accuracy, but the values given by any mass formula differ more or less from these experimental values, apart from a small number of accidental coincidences. Under such circumstances, how should the mass of an unknown nuclide best be predicted? Generally speaking, taking the value of a mass formula as it stands is not the best approach. It is better to take the difference between the mass-formula value and experiment for nuclides close to the one under consideration, and to use it to correct the value predicted by the mass formula. In this report, a simple method for this correction is proposed. A formula is proposed that interpolates between two extreme cases: one in which the difference between the true mass and the mass-formula value is the sum of a proton part and a neutron part, and one in which the difference is distributed randomly around zero. The procedure for its concrete application is explained. The method can also be applied to physical quantities other than mass, for example the beta-decay half-life. (K.I.)
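The separable extreme case can be illustrated with a toy residual table. If the error of a mass formula splits as δ(Z,N) ≈ f(Z) + g(N), the residual of an unmeasured nuclide follows from three measured neighbours; the nuclide labels and residual values below are hypothetical, not the report's data.

```python
def predict_residual(residuals, Z, N):
    """Estimate (true mass - formula mass) for nuclide (Z, N), assuming the
    residual separates into a proton part and a neutron part:
        delta(Z, N) ~ delta(Z, N-1) + delta(Z-1, N) - delta(Z-1, N-1)."""
    return residuals[(Z, N - 1)] + residuals[(Z - 1, N)] - residuals[(Z - 1, N - 1)]

# Toy residuals built to be exactly separable: f(Z) = 0.1*Z, g(N) = -0.05*N (MeV).
res = {(z, n): 0.1 * z - 0.05 * n for z in range(20, 23) for n in range(25, 28)}
print(predict_residual(res, 22, 27))  # 0.1*22 - 0.05*27 = 0.85
```

The report's actual formula interpolates between this separable case and the purely random case; the sketch shows only the former.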
Energy Technology Data Exchange (ETDEWEB)
Lopez Fontan, J.L.; Costa, J.; Ruso, J.M.; Prieto, G. [Dept. of Applied Physics, Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Sarmiento, F. [Dept. of Mathematics, Faculty of Informatics, Univ. of A Coruna, A Coruna (Spain)
2004-02-01
The application of a statistical method, the local polynomial regression method (LPRM), based on nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the underlying structure of the data, but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and by other methods were found. (orig.)
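The idea can be sketched with a kernel-weighted polynomial fit. This is a generic local polynomial smoother with kernel and bandwidth choices of my own (the paper's exact estimator may differ); the abrupt change in a property-vs-concentration curve, i.e. the cmc, then shows up as the extremum of the smoothed curve's second derivative.

```python
import numpy as np

def local_poly_fit(x, y, x0, bandwidth, degree=2):
    """Evaluate a locally weighted polynomial fit at x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    # polyfit's w multiplies residuals, so pass sqrt of the kernel weights.
    coeffs = np.polyfit(x, y, degree, w=np.sqrt(w))
    return np.polyval(coeffs, x0)

# Toy property-vs-concentration data with a kink (the "cmc") at x = 1.0.
x = np.linspace(0.0, 2.0, 81)
y = np.where(x < 1.0, 2.0 * x, 2.0 + 0.5 * (x - 1.0))

smooth = np.array([local_poly_fit(x, y, xi, bandwidth=0.15) for xi in x])
curvature = np.gradient(np.gradient(smooth, x), x)
cmc_estimate = x[np.argmax(np.abs(curvature))]
print(cmc_estimate)  # near 1.0
```

No parametric form for the two branches is assumed anywhere, which is the flexibility the abstract emphasizes.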
Directory of Open Access Journals (Sweden)
Jason W. Osborne
2012-06-01
Full Text Available Logistic regression is slowly gaining acceptance in the social sciences, and fills an important niche in the researcher's toolkit: being able to predict important outcomes that are not continuous in nature. While OLS regression is a valuable tool, it cannot routinely be used to predict outcomes that are binary or categorical in nature. These outcomes represent important social science lines of research: retention in, or dropout from, school; using illicit drugs; underage alcohol consumption; antisocial behavior; purchasing decisions; voting patterns; risky behavior; and so on. The goal of this paper is to briefly lead the reader through the surprisingly simple mathematics that underpins logistic regression: probabilities, odds, odds ratios, and logits. Anyone with spreadsheet software or a scientific calculator can follow along, and in turn, this knowledge can be used to make much more interesting, clear, and accurate presentations of results (especially to non-technical audiences). In particular, I share an example of an interaction in logistic regression, how it was originally graphed, and how the graph was made substantially more user-friendly by converting the original metric (logits) to a more readily interpretable metric (probability) through three simple steps.
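The logit-odds-probability chain the paper walks through can be sketched in a few lines (the numeric values are mine, not the paper's example):

```python
import math

def probability_to_logit(p):
    """Probability -> odds -> log-odds (logit)."""
    odds = p / (1.0 - p)
    return math.log(odds)

def logit_to_probability(logit):
    """Invert the logit: p = 1 / (1 + exp(-logit))."""
    return 1.0 / (1.0 + math.exp(-logit))

# Converting model output (logits) back to probabilities for presentation.
for logit in (-2.0, 0.0, 2.0):
    print(round(logit_to_probability(logit), 3))  # 0.119, 0.5, 0.881

# The two conversions are inverses of each other.
print(round(logit_to_probability(probability_to_logit(0.25)), 3))  # 0.25
```

This is exactly the kind of conversion that turns a logit-scale interaction plot into a probability-scale plot a non-technical audience can read.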
Laserspritzer: a simple method for optogenetic investigation with subcellular resolutions.
Directory of Open Access Journals (Sweden)
Qian-Quan Sun
Full Text Available To build a detailed circuit diagram of the brain, one needs to measure functional synaptic connections between specific types of neurons. A high-resolution circuit diagram should provide detailed information at subcellular levels, such as the soma and the distal and basal dendrites. However, a limitation lies in the difficulty of studying long-range connections between brain areas separated by millimeters. Brain slice preparations have been widely used to help understand circuit wiring within specific brain regions, but long-range connections are likely to be cut in a brain slice. The optogenetic approach overcomes these limitations, as channelrhodopsin-2 (ChR2) is efficiently transported to axon terminals, which can be stimulated in brain slices. Here, we developed a novel, simple fiber-optic-based method of optogenetic stimulation: the laserspritzer approach. This method facilitates the study of both long-range and local circuits within brain slice preparations. It is a convenient and low-cost approach that can be easily integrated with a slice electrophysiology setup and used repeatedly after initial validation. Our data with direct ChR2-mediated current recordings demonstrate that the spatial resolution of the laserspritzer is correlated with its size, and lies within the 30 µm range for the 5 µm laserspritzer. Using olfactory cortical slices, we demonstrated that the laserspritzer approach can be applied to selectively activate monosynaptic perisomatic GABAergic basket synapses, or long-range intracortical glutamatergic inputs formed on different subcellular domains within the same cell (e.g., distal and proximal dendrites). We discuss significant advantages of the laserspritzer approach over the widely used collimated LED whole-field illumination method in brain slice electrophysiological research.
An NCME Instructional Module on Data Mining Methods for Classification and Regression
Sinharay, Sandip
2016-01-01
Data mining methods for classification and regression are becoming increasingly popular in various scientific fields. However, these methods have not been explored much in educational measurement. This module first provides a review, which should be accessible to a wide audience in educational measurement, of some of these methods. The module then…
Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti
2010-01-01
In their seminal paper, Edwards and Parry (1993) presented polynomial regression as a better alternative to applying difference scores in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
A SIMPLE AND EFFECTIVE CURSIVE WORD SEGMENTATION METHOD
nicchiotti, G.; Rimassa, S.; Scagliola, C.
2004-01-01
A simple procedure for cursive word oversegmentation is presented, which is based on the analysis of the handwritten profiles and on the extraction of "white holes". It follows the policy of using simple rules on complex data and sophisticated rules on simpler data. Experimental results show
A simple graphical method for measuring inherent safety
International Nuclear Information System (INIS)
Gupta, J.P.; Edwards, David W.
2003-01-01
Inherently safer design (ISD) concepts have been with us for over two decades since their elaboration by Kletz [Chem. Ind. 9 (1978) 124]. Interest has really taken off globally since the early nineties, after several major mishaps occurred during the eighties (Bhopal, Mexico City, Piper Alpha and Phillips Petroleum, to name a few). Academic and industrial research personnel have been actively involved in devising inherently safer ways of production. The regulatory bodies have also shown deep interest, since ISD makes production safer and hence their tasks easier. Research funding has also been forthcoming, for new developments as well as for demonstration projects. A natural question that arises is how to measure the ISD characteristics of a process. Several researchers have worked on this [Trans. IChemE, Process Safety Environ. Protect. B 71 (4) (1993) 252; Inherent safety in process plant design, Ph.D. Thesis, VTT Publication Number 384, Helsinki University of Technology, Espoo, Finland, 1999; Proceedings of the Mary Kay O'Connor Process Safety Center Symposium, 2001, p. 509]. Many of the proposed methods are very elegant, yet too involved for easy adoption by an industry that is wary of yet another safety-analysis regime. In a recent survey [Trans. IChemE, Process Safety Environ. Prog. B 80 (2002) 115], companies desired a rather simple method to measure ISD. Simplification is also an important characteristic of ISD. It is therefore desirable to have a simple ISD measurement procedure. The ISD measurement procedure proposed in this paper can be used to differentiate between two or more processes for the same end product. The salient steps are: consider each of the important parameters affecting safety (e.g., temperature, pressure, toxicity, flammability, etc.) and the range of possible values these parameters can have for all the process routes under consideration for an end product; plot these values for each step in each process route and compare.
Directory of Open Access Journals (Sweden)
Dwi Marisa Efendi
2018-04-01
Full Text Available Cassava is one type of plant that can be grown in tropical climates, and it is one of the leading commodities of the plantation sub-sector. Cassava is the main raw material of sago (tapioca) flour, whose price is now in decline. The oversupply of sago or tapioca flour is due to increased cassava planting by individual farmers. With more cassava planted on farmers' plantations, the price farmers receive for cassava has become unsuitable, and factories making sago or tapioca flour often buy an excess of raw cassava; as a result, much cassava rots and factories buy it at a low price. Based on this problem, this research applies data mining, modeled with a multiple linear regression algorithm, to estimate the amount of sago or tapioca flour that can be produced, so that in the future the balance between the amount of cassava supplied and the amount of tapioca produced can be improved. The variables used in the linear regression analysis are a dependent variable and an independent variable. From the data obtained, the dependent variable is the amount of tapioca in kg (denoted Y), while the independent variable is milled cassava (denoted X). With a 95% confidence level, the coefficient of determination (R²) obtained is 1.00, and the estimates lie close to the actual data values, with an average error of 0.00.
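The fitted model described above, tapioca output (Y) regressed on milled cassava (X), can be sketched with ordinary least squares; the figures below are hypothetical stand-ins for the plantation data.

```python
import numpy as np

# Hypothetical quantities in kg (the paper's plantation data are not reproduced).
cassava = np.array([100.0, 150.0, 200.0, 250.0, 300.0])  # X: milled cassava
tapioca = np.array([25.0, 38.0, 50.0, 63.0, 75.0])       # Y: flour produced

# Least-squares fit of Y = a + b*X.
A = np.column_stack([np.ones_like(cassava), cassava])
(a, b), *_ = np.linalg.lstsq(A, tapioca, rcond=None)

# Coefficient of determination, and a projection for a new supply level.
r2 = np.corrcoef(cassava, tapioca)[0, 1] ** 2
estimate = a + b * 180.0  # projected flour for 180 kg of cassava

print(round(b, 2), round(r2, 3))  # slope 0.25, R^2 ~ 1.0
```

A near-perfect R² like the paper's 1.00 simply means the flour yield is almost exactly proportional to the cassava milled, as in this toy data.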
A simple method for identification of irradiated spices
International Nuclear Information System (INIS)
Behere, A.; Desai, S.R.P.; Nair, P.M.; Rao, S.M.D.
1992-01-01
Thermoluminescence (TL) properties of curry powder, a salt-containing spice mixture, and three different ground spices, viz. chilli, turmeric and pepper, were compared with the TL of table salt. The spices other than curry powder did not exhibit characteristic TL in the absence of salt. Therefore, studies were initiated to develop a simple and reliable method using common salt for distinguishing irradiated spices (10 kGy) from unirradiated ones under normal conditions of storage. Common salt exhibited a characteristic TL glow at 170°C. However, when present in curry powder, the TL glow of salt showed a shift to 208°C. It was further observed that upon storage for up to 6 months, the TL of irradiated curry powder retained about 10% of the original intensity and could still be distinguished from the untreated samples. From our results it is evident that common salt could be used as an indicator, either internally or externally in small sachets, for incorporation into prepacked spices. (author)
Simple method for culture of peripheral blood lymphocytes of Testudinidae.
Silva, T L; Silva, M I A; Venancio, L P R; Zago, C E S; Moscheta, V A G; Lima, A V B; Vizotto, L D; Santos, J R; Bonini-Domingos, C R; Azeredo-Oliveira, M T V
2011-12-06
We developed and optimized a simple, efficient and inexpensive method for in vitro culture of peripheral blood lymphocytes from the Brazilian tortoise Chelonoidis carbonaria (Testudinidae), testing various parameters, including culture medium, mitogen concentration, mitotic index, culture volume, incubation time, and mitotic arrest. Peripheral blood samples were obtained from the costal vein of four couples. The conditions that gave a good mitotic index were lymphocytes cultured at 37°C in minimum essential medium (7.5 mL), with phytohemagglutinin as a mitogen (0.375 mL), plus streptomycin/penicillin (0.1 mL), and an incubation period of 72 h. Mitotic arrest was induced by 2-h exposure to colchicine (0.1 mL), 70 h after establishing the culture. After mitotic arrest, the cells were hypotonized with 0.075 M KCl for 2 h and fixed with methanol/acetic acid (3:1). The non-banded mitotic chromosomes were visualized by Giemsa staining. The diploid chromosome number of C. carbonaria was found to be 52 in females and males, and sex chromosomes were not observed. We were able to culture peripheral blood lymphocytes of a Brazilian tortoise in vitro, for the preparation of mitotic chromosomes.
Simple method of generating and distributing frequency-entangled qudits
Jin, Rui-Bo; Shimizu, Ryosuke; Fujiwara, Mikio; Takeoka, Masahiro; Wakabayashi, Ryota; Yamashita, Taro; Miki, Shigehito; Terai, Hirotaka; Gerrits, Thomas; Sasaki, Masahide
2016-11-01
High-dimensional, frequency-entangled photonic quantum bits (qudits for d dimensions) are promising resources for quantum information processing in an optical fiber network and can also be used to improve channel capacity and security for quantum communication. However, up to now it has remained challenging to prepare high-dimensional frequency-entangled qudits in experiments, due to technical limitations. Here we propose and experimentally implement a novel method for the simple generation of frequency-entangled qudits with d > 10 without the use of any spectral filters or cavities. The generated state is distributed over 15 km in total length. This scheme combines the technique of spectral engineering of biphotons generated by spontaneous parametric down-conversion with the technique of spectrally resolved Hong-Ou-Mandel interference. Our frequency-entangled qudits will enable quantum cryptographic experiments with enhanced performance. This distribution of distinct entangled frequency modes may also be useful for improved metrology, quantum remote synchronization, and fundamental tests of stronger violations of local realism.
Directory of Open Access Journals (Sweden)
ELİF BULUT
2013-06-01
Partial Least Squares Regression (PLSR) is a multivariate statistical method that combines partial least squares and multiple linear regression analysis. Explanatory variables X that exhibit multicollinearity are reduced to components that explain a large amount of the covariance between the explanatory and response variables. These components are few in number and do not suffer from multicollinearity. Multiple linear regression analysis is then applied to those components to model the response variable Y. There are various PLSR algorithms. In this study the NIPALS and PLS-Kernel algorithms are studied and illustrated on a real data set.
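The NIPALS algorithm named in the abstract can be sketched for a single response (PLS1): extract a weight vector from the X-y covariance, deflate, repeat, then back out regression coefficients. This is an illustrative NumPy sketch on synthetic collinear data, not the study's data or code:

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """PLS1 via NIPALS for a single response (illustrative sketch)."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xk, yk = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)           # weight vector
        t = Xk @ w                       # component scores
        p = Xk.T @ t / (t @ t)           # X loadings
        q = (yk @ t) / (t @ t)           # y loading
        Xk = Xk - np.outer(t, p)         # deflate X
        yk = yk - q * t                  # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # Coefficients in terms of the (centered) original X.
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
# Second column is almost a copy of the first -> severe multicollinearity.
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=200), rng.normal(size=200)])
y = 3.0 * x1 + X[:, 2] + 0.05 * rng.normal(size=200)

B = pls1_nipals(X, y, n_components=2)
y_hat = (X - X.mean(axis=0)) @ B + y.mean()
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 with 2 components: {r2:.3f}")
```

Two components suffice here because the collinear columns collapse into one latent direction, which is exactly the behavior the abstract describes.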
The Bland-Altman Method Should Not Be Used in Regression Cross-Validation Studies
O'Connor, Daniel P.; Mahar, Matthew T.; Laughlin, Mitzi S.; Jackson, Andrew S.
2011-01-01
The purpose of this study was to demonstrate the bias in the Bland-Altman (BA) limits of agreement method when it is used to validate regression models. Data from 1,158 men were used to develop three regression equations to estimate maximum oxygen uptake (R[superscript 2] = 0.40, 0.61, and 0.82, respectively). The equations were evaluated in a…
Sparling, D.W.; Barzen, J.A.; Lovvorn, J.R.; Serie, J.R.
1992-01-01
Regression equations that use mensural data to estimate body condition have been developed for several water birds. These equations often have been based on data that represent different sexes, age classes, or seasons, without being adequately tested for intergroup differences. We used proximate carcass analysis of 538 adult and juvenile canvasbacks (Aythya valisineria ) collected during fall migration, winter, and spring migrations in 1975-76 and 1982-85 to test regression methods for estimating body condition.
Treating experimental data of inverse kinetic method by unitary linear regression analysis
International Nuclear Information System (INIS)
Zhao Yusen; Chen Xiaoliang
2009-01-01
The theory of treating experimental data of the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity but also the effective neutron source intensity can be calculated by this method. A computer code was compiled based on the inverse kinetic method and unitary linear regression analysis. Data from the zero-power facility BFS-1 in Russia were processed and the results compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data of the inverse kinetic method with unitary linear regression analysis, and that the precision of reactivity measurement is improved. The central element efficiency can be calculated using the reactivity. The results also show that the effect on reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)
Regression Methods for Virtual Metrology of Layer Thickness in Chemical Vapor Deposition
DEFF Research Database (Denmark)
Purwins, Hendrik; Barak, Bernd; Nagi, Ahmed
2014-01-01
The quality of wafer production in semiconductor manufacturing cannot always be monitored by a costly physical measurement. Instead of measuring a quantity directly, it can be predicted by a regression method (Virtual Metrology). In this paper, a survey on regression methods is given to predict...... average Silicon Nitride cap layer thickness for the Plasma Enhanced Chemical Vapor Deposition (PECVD) dual-layer metal passivation stack process. Process and production equipment Fault Detection and Classification (FDC) data are used as predictor variables. Various variable sets are compared: one most...... algorithm, and Support Vector Regression (SVR). On a test set, SVR outperforms the other methods by a large margin, being more robust towards changes in the production conditions. The method performs better on high-dimensional multivariate input data than on the most predictive variables alone. Process...
Statistical methods in regression and calibration analysis of chromosome aberration data
International Nuclear Information System (INIS)
Merkle, W.
1983-01-01
The method of iteratively reweighted least squares for the regression analysis of Poisson distributed chromosome aberration data is reviewed in the context of other fit procedures used in the cytogenetic literature. As an application of the resulting regression curves methods for calculating confidence intervals on dose from aberration yield are described and compared, and, for the linear quadratic model a confidence interval is given. Emphasis is placed on the rational interpretation and the limitations of various methods from a statistical point of view. (orig./MG)
A SIMPLE METHOD FOR THE EXTRACTION AND QUANTIFICATION OF PHOTOPIGMENTS FROM SYMBIODINIUM SPP.
John E. Rogers and Dragoslav Marcovich. Submitted. Simple Method for the Extraction and Quantification of Photopigments from Symbiodinium spp.. Limnol. Oceanogr. Methods. 19 p. (ERL,GB 1192). We have developed a simple, mild extraction procedure using methanol which, when...
siMS Score: Simple Method for Quantifying Metabolic Syndrome.
Soldatovic, Ivan; Vukovic, Rade; Culafic, Djordje; Gajic, Milan; Dimitrijevic-Sreckovic, Vesna
2016-01-01
To evaluate the siMS score and siMS risk score, novel continuous metabolic syndrome scores, as methods for quantification of metabolic status and risk. The siMS score was calculated using the formula: siMS score = 2*Waist/Height + Gly/5.6 + Tg/1.7 + TAsystolic/130 − HDL/1.02 (for male subjects) or HDL/1.28 (for female subjects). The siMS risk score was calculated using the formula: siMS risk score = siMS score * age/45 (male) or age/50 (female) * family history of cardio/cerebro-vascular events (event = 1.2, no event = 1). A sample of 528 obese and non-obese participants was used to validate the siMS score and siMS risk score. Scores calculated as a sum of z-scores (each component of metabolic syndrome regressed with age and gender) and as a sum of scores derived from principal component analysis (PCA) were used for evaluation of the siMS score. Variants were made by replacing glucose with HOMA in the calculations. The Framingham score was used for evaluation of the siMS risk score. Correlation between the siMS score and both the sum of z-scores and the weighted sum of PCA factors was high (r = 0.866 and r = 0.822, respectively). Correlation between the siMS risk score and the log-transformed Framingham score was medium to high for age groups 18+, 30+ and 35+ (0.835, 0.707 and 0.667, respectively). The siMS score and siMS risk score showed high correlation with more complex scores. The demonstrated accuracy, together with superior simplicity and the ability to evaluate and follow up individual patients, makes the siMS and siMS risk scores very convenient for use in clinical practice and research.
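The two formulas quoted in the abstract transcribe directly into code. The subject values below are invented for illustration; units follow the formula as given (waist/height in the same unit, glucose/triglycerides/HDL in mmol/L, systolic pressure in mmHg):

```python
def sims_score(waist_cm, height_cm, glucose_mmol, tg_mmol,
               systolic_bp, hdl_mmol, male):
    """siMS score as given in the abstract:
    2*Waist/Height + Gly/5.6 + Tg/1.7 + TAsystolic/130
    - HDL/1.02 (male) or - HDL/1.28 (female)."""
    hdl_ref = 1.02 if male else 1.28
    return (2 * waist_cm / height_cm + glucose_mmol / 5.6
            + tg_mmol / 1.7 + systolic_bp / 130 - hdl_mmol / hdl_ref)

def sims_risk_score(score, age, male, family_history):
    """siMS risk score = siMS score * age/45 (male) or age/50 (female),
    multiplied by 1.2 for a family history of cardio/cerebro-vascular
    events, else by 1."""
    age_ref = 45 if male else 50
    return score * (age / age_ref) * (1.2 if family_history else 1.0)

# Hypothetical subject (values illustrative, not from the study).
s = sims_score(waist_cm=94, height_cm=178, glucose_mmol=5.2,
               tg_mmol=1.4, systolic_bp=128, hdl_mmol=1.1, male=True)
print(round(s, 3))  # → 2.714
```

For a 45-year-old male with no family history the risk score equals the siMS score itself, since both multipliers are 1.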
A simple objective method for determining a dynamic journal collection.
Bastille, J D; Mankin, C J
1980-10-01
In order to determine the content of a journal collection responsive to both user needs and space and dollar constraints, quantitative measures of the use of a 647-title collection have been related to space and cost requirements to develop objective criteria for a dynamic collection for the Treadwell Library at the Massachusetts General Hospital, a large medical research center. Data were collected for one calendar year (1977) and stored with the elements for each title's profile in a computerized file. To account for the effect of the bulk of the journal runs on the number of uses, raw use data have been adjusted using linear shelf space required for each title to produce a factor called density of use. Titles have been ranked by raw use and by density of use with space and cost requirements for each. Data have also been analyzed for five special categories of use. Given automated means of collecting and storing data, use measures should be collected continuously. Using raw use frequency ranking to relate use to space and costs seems sensible since a decision point cutoff can be chosen in terms of the potential interlibrary loans generated. But it places new titles at risk while protecting titles with long, little used runs. Basing decisions on density of use frequency ranking seems to produce a larger yield of titles with fewer potential interlibrary loans and to identify titles with overlong runs which may be pruned or converted to microform. The method developed is simple and practical. Its design will be improved to apply to data collected in 1980 for a continuous study of journal use. The problem addressed is essentially one of inventory control. Viewed as such it makes good financial sense to measure use as part of the routine operation of the library to provide information for effective management decisions.
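The density-of-use adjustment described above is easy to sketch: divide each title's raw use count by the shelf space its run occupies and rank on the quotient. A minimal illustration, with invented titles and numbers rather than the Treadwell Library data:

```python
# "Density of use" sketch: raw use adjusted by linear shelf space,
# so long, little-used runs fall in rank. All figures are made up.
titles = [
    # (title, uses per year, shelf metres occupied)
    ("J. Hypothetical Med.", 240, 1.2),
    ("Ann. Illustrative Surg.", 240, 4.8),
    ("Arch. Example Radiol.", 60, 0.5),
]

by_raw_use = sorted(titles, key=lambda t: t[1], reverse=True)
by_density = sorted(titles, key=lambda t: t[1] / t[2], reverse=True)

print([t[0] for t in by_density])
# → ['J. Hypothetical Med.', 'Arch. Example Radiol.', 'Ann. Illustrative Surg.']
```

The two heavily used titles tie on raw use, but the one with the long run drops to last place by density, which is the pruning signal the abstract describes.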
Thompson, Russel L.
Homoscedasticity is an important assumption of linear regression. This paper explains what it is and why it is important to the researcher. Graphical and mathematical methods for testing the homoscedasticity assumption are demonstrated. Sources of homoscedasticity and types of homoscedasticity are discussed, and methods for correction are…
Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method
International Nuclear Information System (INIS)
Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui
1991-01-01
A multiple linear regression method was used to analyze γ spectra of uranium ore samples and to calculate the contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed to obtain the response coefficients.
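The idea behind such an analysis can be sketched as a least-squares unmixing problem: counts in several spectral windows are modelled as a linear combination of the responses of U, Ra, Th and K. The response matrix and contents below are invented for illustration and are not the paper's calibration data:

```python
import numpy as np

# Invented 4x4 response matrix: counts in each spectral window per unit
# content of (U, Ra, Th, K). Not real calibration data.
R = np.array([[0.9, 0.3, 0.1, 0.0],
              [0.2, 0.8, 0.2, 0.1],
              [0.1, 0.2, 0.9, 0.1],
              [0.0, 0.1, 0.2, 0.7]])
true_contents = np.array([5.0, 3.0, 2.0, 10.0])
counts = R @ true_contents          # simulated window counts (noise-free)

# Least-squares estimate of the contents from the observed counts.
est, *_ = np.linalg.lstsq(R, counts, rcond=None)
print(np.round(est, 3))
```

With noise-free counts the regression recovers the contents exactly; with real spectra the same fit gives a least-squares estimate without requiring pure standards for each element.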
Martens, Edwin P; de Boer, Anthonius; Pestman, Wiebe R; Belitser, Svetlana V; Stricker, Bruno H Ch; Klungel, Olaf H
PURPOSE: To compare adjusted effects of drug treatment for hypertension on the risk of stroke from propensity score (PS) methods with a multivariable Cox proportional hazards (Cox PH) regression in an observational study with censored data. METHODS: From two prospective population-based cohort
Directory of Open Access Journals (Sweden)
Xiaoyan Yang
2018-04-01
The Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land-use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; and (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.
Boudghene Stambouli, Ahmed; Zendagui, Djawad; Bard, Pierre-Yves; Derras, Boumédiène
2017-07-01
Most modern seismic codes account for site effects using an amplification factor (AF) that modifies the rock acceleration response spectra in relation to a "site condition proxy," i.e., a parameter related to the velocity profile at the site under consideration. Therefore, for practical purposes, it is interesting to identify the site parameters that best control the frequency-dependent shape of the AF. The goal of the present study is to provide a quantitative assessment of the performance of various site condition proxies to predict the main AF features, including the often used short- and mid-period amplification factors, Fa and Fv, proposed by Borcherdt (in Earthq Spectra 10:617-653, 1994). In this context, the linear, viscoelastic responses of a set of 858 actual soil columns from Japan, the USA, and Europe are computed for a set of 14 real accelerograms with varying frequency contents. The correlation between the corresponding site-specific average amplification factors and several site proxies (considered alone or as multiple combinations) is analyzed using the generalized regression neural network (GRNN). The performance of each site proxy combination is assessed through the variance reduction with respect to the initial amplification factor variability of the 858 profiles. Both the whole period range and specific short- and mid-period ranges associated with the Borcherdt factors Fa and Fv are considered. The actual amplification factor of an arbitrary soil profile is found to be satisfactorily approximated with a limited number of site proxies (4-6). As the usual code practice implies a lower number of site proxies (generally one, sometimes two), a sensitivity analysis is conducted to identify the "best performing" site parameters. The best one is the overall velocity contrast between underlying bedrock and minimum velocity in the soil column. Because these are the most difficult and expensive parameters to measure, especially for thick deposits, other
Whole-Genome Regression and Prediction Methods Applied to Plant and Animal Breeding
de los Campos, Gustavo; Hickey, John M.; Pong-Wong, Ricardo; Daetwyler, Hans D.; Calus, Mario P. L.
2013-01-01
Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade. PMID:22745228
An improved partial least-squares regression method for Raman spectroscopy
Momenpour Tehran Monfared, Ali; Anis, Hanan
2017-10-01
It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, after which the importance of each variable in the sorted list is evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our Improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed similar or better performance compared to the genetic algorithm.
Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui
2018-01-01
Monthly electricity sales forecasting is fundamental to ensuring the safe operation of the power system. This paper presents a monthly electricity sales forecasting method that comprehensively considers the coupled multiple factors of temperature, economic growth, electric power replacement and business expansion. The mathematical model is constructed using a regression method. The simulation results show that the proposed method is accurate and effective.
A simple time-delayed method to control chaotic systems
International Nuclear Information System (INIS)
Chen Maoyin; Zhou Donghua; Shang Yun
2004-01-01
Based on the adaptive iterative learning strategy, a simple time-delayed controller is proposed to stabilize unstable periodic orbits (UPOs) embedded in chaotic attractors. This controller includes two parts: one is a linear feedback part; the other is an adaptive iterative learning estimation part. Theoretical analysis and numerical simulation show the effectiveness of this controller
A different approach to estimate nonlinear regression model using numerical methods
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the steepest descent or steepest ascent algorithm, the method of scoring, and the method of quadratic hill-climbing), based on numerical analysis, for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article, however, takes an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
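The Gauss-Newton iteration discussed above can be sketched on a small example. This is a generic textbook-style implementation with an invented exponential model and step-halving damping added for robustness, not the paper's own algorithm:

```python
import numpy as np

def gauss_newton(x, y, beta0, n_iter=50):
    """Damped Gauss-Newton for the illustrative model y ~ b0 * exp(b1 * x)."""
    beta = np.array(beta0, dtype=float)

    def sse(b):
        return np.sum((y - b[0] * np.exp(b[1] * x)) ** 2)

    for _ in range(n_iter):
        f = beta[0] * np.exp(beta[1] * x)
        r = y - f                                   # residuals
        # Jacobian of f with respect to (b0, b1).
        J = np.column_stack([np.exp(beta[1] * x),
                             beta[0] * x * np.exp(beta[1] * x)])
        # Normal-equations step: solve (J^T J) delta = J^T r.
        step = np.linalg.solve(J.T @ J, J.T @ r)
        # Damping: halve the step until the fit improves.
        lam = 1.0
        while sse(beta + lam * step) > sse(beta) and lam > 1e-8:
            lam /= 2
        beta = beta + lam * step
    return beta

x = np.linspace(0.0, 2.0, 30)
y = 2.5 * np.exp(1.3 * x)            # noise-free data with known parameters
beta = gauss_newton(x, y, beta0=[1.0, 1.0])
print(np.round(beta, 4))
```

On this well-posed, noise-free problem the damped iteration recovers the generating parameters (2.5, 1.3) from a deliberately poor start.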
Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming
2011-06-01
Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.
Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2009-01-01
Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
Cox regression with missing covariate data using a modified partial likelihood method
DEFF Research Database (Denmark)
Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.
2016-01-01
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits that the observed hazard function is multiplicative in the baseline hazard...
Convert a low-cost sensor to a colorimeter using an improved regression method
Wu, Yifeng
2008-01-01
Closed loop color calibration is a process for maintaining consistent color reproduction on color printers. To perform closed loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to obtain accurate colorimetric measurements from the data measured by the low-cost sensor. To achieve high accuracy, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method for estimating color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and output data. In this paper, we propose using 1D pre-linearization tables to improve the linearity between the input sensor measurements and the output colorimetric data. Using this method, we can increase the accuracy of the regression, and thereby the accuracy of the color conversion.
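The pre-linearization idea can be sketched in two steps: linearize each sensor channel with a 1D curve, then fit the channel-to-colorimetry map by least squares. The gamma value, conversion matrix, and readings below are all invented, not the paper's sensor model:

```python
import numpy as np

rng = np.random.default_rng(2)
raw = rng.uniform(0.05, 1.0, size=(50, 3))      # raw sensor RGB readings

# Pretend the sensor has a gamma of 2.2; the 1D pre-linearization table
# then amounts to evaluating raw**2.2 per channel.
lin = raw ** 2.2

# Hypothetical ground-truth conversion from linearized RGB to XYZ.
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
xyz = lin @ M_true.T

# Linear regression recovers the conversion matrix from linearized data.
M_est, *_ = np.linalg.lstsq(lin, xyz, rcond=None)
print(np.round(M_est.T, 3))
```

Because the channels were linearized first, a purely linear regression suffices; fitting the raw (gamma-distorted) readings with the same linear model would leave large residuals, which is the motivation the abstract gives for the 1D tables.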
Sidik, S. M.
1975-01-01
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.
Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem
2018-06-12
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
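Of the special cases unified above, two-step kernel ridge regression is the easiest to sketch: run an ordinary kernel ridge smoother over the row objects, then another over the column objects. The RBF kernels and smooth pairwise labels below are synthetic illustrations, not the paper's formulation in full generality:

```python
import numpy as np

def two_step_krr(K, G, Y, lam_row, lam_col):
    """Two-step kernel ridge regression for pairwise data (sketch).
    K: kernel between row objects, G: kernel between column objects,
    Y: pairwise label matrix. Each step is an ordinary KRR smoother."""
    n, m = Y.shape
    A = np.linalg.solve(K + lam_row * np.eye(n), Y)        # step 1: rows
    A = np.linalg.solve(G + lam_col * np.eye(m), A.T).T    # step 2: columns
    return K @ A @ G                                        # fitted labels

rng = np.random.default_rng(4)
n, m = 20, 15
U = rng.normal(size=(n, 3))                     # row-object features
V = rng.normal(size=(m, 3))                     # column-object features
K = np.exp(-0.5 * ((U[:, None] - U[None]) ** 2).sum(-1))   # RBF row kernel
G = np.exp(-0.5 * ((V[:, None] - V[None]) ** 2).sum(-1))   # RBF column kernel
Y = U @ rng.normal(size=(3, 3)) @ V.T           # smooth pairwise labels

F = two_step_krr(K, G, Y, lam_row=0.1, lam_col=0.1)
corr = np.corrcoef(F.ravel(), Y.ravel())[0, 1]
print(round(corr, 3))
```

With mild regularization the fitted matrix tracks the smooth label matrix closely; each solve is a closed-form squared-loss minimization, consistent with the paper's observation that these methods implicitly minimize a squared loss.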
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
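The claimed agreement under a correct distributional assumption can be checked on a toy example using the closed-form CRPS of a Gaussian forecast. The data, grid ranges, and coarse grid search below are illustrative choices, not the study's estimation setup:

```python
import numpy as np
from math import erf, sqrt, pi

def crps_normal(mu, sigma, y):
    """Mean closed-form CRPS of a N(mu, sigma^2) forecast over observations y:
    CRPS = sigma * [z*(2*Phi(z)-1) + 2*phi(z) - 1/sqrt(pi)], z=(y-mu)/sigma."""
    z = (y - mu) / sigma
    Phi = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
    phi = np.exp(-0.5 * z * z) / sqrt(2 * pi)
    return float(np.mean(sigma * (z * (2 * Phi - 1) + 2 * phi - 1 / sqrt(pi))))

rng = np.random.default_rng(3)
obs = rng.normal(10.0, 2.0, size=500)           # Gaussian "truth"

mu_ml, sigma_ml = obs.mean(), obs.std()         # Gaussian maximum likelihood

# Minimum-CRPS estimate by a coarse grid search (illustration only).
grid = [(crps_normal(m, s, obs), m, s)
        for m in np.linspace(9.5, 10.5, 21)
        for s in np.linspace(1.6, 2.4, 17)]
_, mu_crps, sigma_crps = min(grid)

print(round(mu_ml, 2), round(sigma_ml, 2), round(mu_crps, 2), round(sigma_crps, 2))
```

With a correctly specified Gaussian family both estimators land near the same (mu, sigma), up to the grid resolution, matching the synthetic-case finding in the abstract.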
A Simple Introduction to Gröbner Basis Methods in String Phenomenology
International Nuclear Information System (INIS)
Gray, J.
2011-01-01
I give an elementary introduction to the key algorithm used in recent applications of computational algebraic geometry to the subject of string phenomenology. I begin with a simple description of the algorithm itself and then give three examples of its use in physics. I describe how it can be used to obtain constraints on flux parameters, how it can simplify the equations describing vacua in 4D string models, and lastly how it can be used to compute the vacuum space of the electroweak sector of the MSSM.
A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary
Gillis, Nicolas; Luce, Robert
2018-01-01
A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
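The separability setting can be made concrete with a much simpler baseline than the paper's fast-gradient self-dictionary model: the successive projection algorithm (SPA), a standard greedy method for identifying the conic basis columns. This is a hedged sketch of that baseline, not the authors' algorithm.

```python
import numpy as np

def spa(M, r):
    """Greedily pick r columns of M that (approximately) generate the rest:
    repeatedly take the column with largest residual norm and project it out."""
    R = M.astype(float).copy()
    idx = []
    for _ in range(r):
        j = np.argmax((R * R).sum(axis=0))   # column with largest residual norm
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)           # remove the chosen direction
        idx.append(j)
    return sorted(idx)

rng = np.random.default_rng(8)
W = rng.random((30, 4))                      # 4 hidden basis columns
H = rng.dirichlet(np.ones(4), size=40).T     # remaining columns: convex mixtures
M = np.hstack([W, W @ H])                    # columns 0..3 generate all others
print(spa(M, 4))                             # recovers the basis columns
```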
Simple methods for the 3' biotinylation of RNA.
Moritz, Bodo; Wahle, Elmar
2014-03-01
Biotinylation of RNA allows its tight coupling to streptavidin and is thus useful for many types of experiments, e.g., pull-downs. Here we describe three simple techniques for biotinylating the 3' ends of RNA molecules generated by chemical or enzymatic synthesis. First, extension with either the Schizosaccharomyces pombe noncanonical poly(A) polymerase Cid1 or Escherichia coli poly(A) polymerase and N6-biotin-ATP is simple, efficient, and generally applicable independently of the 3'-end sequences of the RNA molecule to be labeled. However, depending on the enzyme and the reaction conditions, several or many biotinylated nucleotides are incorporated. Second, conditions are reported under which splint-dependent ligation by T4 DNA ligase can be used to join biotinylated and, presumably, other chemically modified DNA oligonucleotides to RNA 3' ends even if these are heterogeneous as is typical for products of enzymatic synthesis. Third, we describe the use of φ29 DNA polymerase for a template-directed fill-in reaction that uses biotin-dUTP and, thanks to the enzyme's proofreading activity, can cope with more extended 3' heterogeneities.
Seber, George A F
2012-01-01
Concise, mathematically clear, and comprehensive treatment of the subject. Expanded coverage of diagnostics and methods of model fitting. Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models. More than 200 problems throughout the book plus outline solutions for the exercises. This revision has been extensively class-tested.
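The straight-line regression and matrix algebra the book presumes fit in a few lines. A generic sketch (not from the book): the least-squares coefficients are β = (XᵀX)⁻¹Xᵀy, and the error variance is estimated from the residual degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 3.0 + 0.5 * x + rng.normal(0, 0.2, 100)

X = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]
beta = np.linalg.solve(X.T @ X, X.T @ y)    # (X'X)^-1 X'y
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 2)           # unbiased residual variance
print(np.round(beta, 2), round(s2, 3))      # beta near (3.0, 0.5)
```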
Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K
2011-10-01
To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In the second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategies, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparing the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques over classical approaches for survival data. We compare svm-based survival models based on ranking constraints, based on regression constraints, and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints over models based only on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints. Our experiments show a comparable performance for methods
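The concordance index used as the first performance measure can be computed directly. A minimal generic sketch (not the paper's implementation): only pairs whose earlier time is an observed event are usable, and the index is the fraction of those pairs ordered correctly by the risk score.

```python
import numpy as np

def concordance_index(time, event, score):
    """Fraction of usable pairs ordered correctly by a risk score
    (higher score = shorter predicted survival); ties count half."""
    n, conc, usable = len(time), 0.0, 0
    for i in range(n):
        for j in range(n):
            # a pair is usable only if the earlier time is an observed event
            if time[i] < time[j] and event[i] == 1:
                usable += 1
                if score[i] > score[j]:
                    conc += 1
                elif score[i] == score[j]:
                    conc += 0.5
    return conc / usable

time  = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
event = np.array([1,   1,   0,   1,   0])      # 0 = censored observation
score = np.array([5.0, 4.0, 3.0, 2.0, 1.0])    # risks perfectly anti-ordered in time
print(concordance_index(time, event, score))   # -> 1.0
```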
Directory of Open Access Journals (Sweden)
Giuliano de Oliveira Freitas
2013-10-01
PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the post- to preoperative Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression we found between APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings
International Nuclear Information System (INIS)
Chung, William
2012-01-01
Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.
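Fuzzy linear regression can be made concrete with the classic Tanaka formulation: minimize the total spread of the fuzzy coefficients subject to every observation lying inside the predicted fuzzy band, solved as a linear program. This is a hedged toy sketch with invented data, not the paper's benchmarking model.

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])
h = 0.0                                    # membership threshold

X = np.column_stack([np.ones_like(x), x])  # centers: a0 + a1*x
A = np.abs(X)                              # spreads: c0 + c1*|x|
k = 1.0 - h

# variables: [a0, a1, c0, c1]; minimize the total spread sum_i (A @ c)_i
obj = np.concatenate([np.zeros(2), A.sum(axis=0)])
A_ub = np.vstack([np.hstack([-X, -k * A]),    # center + k*spread >= y
                  np.hstack([ X, -k * A])])   # center - k*spread <= y
b_ub = np.concatenate([-y, y])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2 + [(0, None)] * 2)
a, c = res.x[:2], res.x[2:]
lower, upper = X @ a - k * (A @ c), X @ a + k * (A @ c)
print(res.success, np.all(lower <= y + 1e-8), np.all(y <= upper + 1e-8))
```

Every observation is covered by the fitted fuzzy band, which is what distinguishes this fit from an ordinary least-squares line.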
A simple method for solving the inverse scattering problem
International Nuclear Information System (INIS)
Melnikov, V.N.; Rudyak, B.V.; Zakhariev, V.N.
1977-01-01
A new method is proposed for approximate reconstruction of a potential as a step function from scattering data using the completeness relation of solutions of the Schroedinger equation. The suggested method allows one to take into account exactly the additional centrifugal barrier for partial waves with angular momentum l>0, and also the Coulomb potential. The method admits different generalizations. Numerical calculations for checking the method have been performed
A simple method for stem cell labeling with fluorine 18
International Nuclear Information System (INIS)
Ma Bing; Hankenson, Kurt D.; Dennis, James E.; Caplan, Arnold I.; Goldstein, Steven A.; Kilbourn, Michael R.
2005-01-01
Hexadecyl-4-[¹⁸F]fluorobenzoate ([¹⁸F]HFB), a long-chain fluorinated benzoic acid ester, was prepared in a one-step synthesis by aromatic nucleophilic substitution of [¹⁸F]fluoride ion on hexadecyl-4-(N,N,N-trimethylammonio)benzoate. The radiolabeled ester was obtained in good yields (52% decay corrected) and high purity (97%). [¹⁸F]HFB was used to radiolabel rat mesenchymal stem cells (MSCs) by absorption into cell membranes. MicroPET imaging of [¹⁸F]HFB-labeled MSCs following intravenous injection into the rat showed the expected high and persistent accumulation of radioactivity in the lungs. [¹⁸F]HFB is thus simple to prepare and use as a labeling agent for short-term distribution studies of injected stem cells.
A simple method for stem cell labeling with fluorine 18
Energy Technology Data Exchange (ETDEWEB)
Ma Bing [Department of Radiology, Division of Nuclear Medicine, University of Michigan Medical School, Ann Arbor, MI 48109 (United States); Hankenson, Kurt D. [Department of Biology, Case Western Reserve University, Cleveland, OH 44106 (United States); Dennis, James E. [Department of Biology, Case Western Reserve University, Cleveland, OH 44106 (United States); Caplan, Arnold I. [Department of Biology, Case Western Reserve University, Cleveland, OH 44106 (United States); Goldstein, Steven A. [Department of Orthopaedic Surgery, University of Michigan Medical School, Ann Arbor, MI 48109 (United States); Kilbourn, Michael R. [Department of Radiology, Division of Nuclear Medicine, University of Michigan Medical School, Ann Arbor, MI 48109 (United States)
2005-10-01
Hexadecyl-4-[¹⁸F]fluorobenzoate ([¹⁸F]HFB), a long-chain fluorinated benzoic acid ester, was prepared in a one-step synthesis by aromatic nucleophilic substitution of [¹⁸F]fluoride ion on hexadecyl-4-(N,N,N-trimethylammonio)benzoate. The radiolabeled ester was obtained in good yields (52% decay corrected) and high purity (97%). [¹⁸F]HFB was used to radiolabel rat mesenchymal stem cells (MSCs) by absorption into cell membranes. MicroPET imaging of [¹⁸F]HFB-labeled MSCs following intravenous injection into the rat showed the expected high and persistent accumulation of radioactivity in the lungs. [¹⁸F]HFB is thus simple to prepare and use as a labeling agent for short-term distribution studies of injected stem cells.
A Rapid and Simple Bioassay Method for Herbicide Detection
Directory of Open Access Journals (Sweden)
Xiu-Qing Li
2008-01-01
Chlamydomonas reinhardtii, a unicellular green alga, has been used in bioassay detection of a variety of toxic compounds such as pesticides and toxic metals, but mainly using liquid culture systems. In this study, an algal lawn–agar system for semi-quantitative bioassay of herbicidal activities has been developed. Sixteen different herbicides belonging to 11 different categories were applied to paper disks and placed on green alga lawns in Petri dishes. Presence of herbicide activities was indicated by clearing zones around the paper disks on the lawn 2-3 days after application. The different groups of herbicides induced clearing zones of variable size that depended on the amount, mode of action, and chemical properties of the herbicides applied to the paper disks. This simple paper-disk-algal system may be used to detect the presence of herbicides in water samples and act as a quick and inexpensive semi-quantitative screen for assessing herbicide contamination.
Simple, rapid method for the preparation of isotopically labeled formaldehyde
Hooker, Jacob Matthew [Port Jefferson, NY; Schonberger, Matthias [Mains, DE; Schieferstein, Hanno [Aabergen, DE; Fowler, Joanna S [Bellport, NY
2011-10-04
Isotopically labeled formaldehyde (*C§H₂O) is prepared from labeled methyl iodide (*C§H₃I) by reaction with an oxygen nucleophile having a pendant leaving group. The mild and efficient reaction conditions result in good yields of *C§H₂O with little or no *C isotopic dilution. The simple, efficient production of ¹¹CH₂O is described. The use of the ¹¹CH₂O for the formation of positron emission tomography tracer compounds is described. The reaction can be incorporated into automated equipment available to radiochemistry laboratories. The isotopically labeled formaldehyde can be used in a variety of reactions to provide radiotracer compounds for imaging studies as well as for scintillation counting and autoradiography.
Directory of Open Access Journals (Sweden)
Massoud Tabesh
2011-07-01
Optimum operation of water distribution networks is one of the priorities of sustainable development of water resources, considering the issues of increasing efficiency and decreasing water losses. One of the key subjects in optimum operational management of water distribution systems is preparing rehabilitation and replacement schemes, predicting pipe break rates and evaluating their reliability. Several approaches have been presented in recent years regarding prediction of pipe failure rates, each of which requires a particular data set. Deterministic models based on age, deterministic multi-variable models and stochastic group modeling are examples of the solutions which relate pipe break rates to parameters like age, material and diameter. In this paper, besides the mentioned parameters, further factors such as pipe depth and hydraulic pressure are considered as well. Then pipe burst rates are predicted using the multi-variable regression method, intelligent approaches (artificial neural network and neuro-fuzzy models) and the evolutionary polynomial regression (EPR) method. To evaluate the results of the different approaches, a case study is carried out in a part of the Mashhad water distribution network. The results show the capability and advantages of the ANN and EPR methods to predict pipe break rates, in comparison with the neuro-fuzzy and multi-variable regression methods.
Kaneko, Hiromasa
2018-02-26
To develop a new ensemble learning method and construct highly predictive regression models in chemoinformatics and chemometrics, applicability domains (ADs) are introduced into the ensemble learning process of prediction. When estimating values of an objective variable using subregression models, only the submodels with ADs that cover a query sample, i.e., the sample is inside the model's AD, are used. By constructing submodels and changing a list of selected explanatory variables, the union of the submodels' ADs, which defines the overall AD, becomes large, and the prediction performance is enhanced for diverse compounds. By analyzing a quantitative structure-activity relationship data set and a quantitative structure-property relationship data set, it is confirmed that the ADs can be enlarged and the estimation performance of regression models is improved compared with traditional methods.
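The AD mechanism can be sketched in a few lines. In this illustration (assumed details, not the paper's code) each linear submodel is trained on a random subsample, its AD is a crude ball around its own training data, and a query is predicted only by the submodels whose AD covers it; the paper additionally varies the list of selected explanatory variables to enlarge the union of the ADs.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (200, 3))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 200)

submodels = []
for _ in range(20):
    idx = rng.choice(200, 80, replace=False)           # random training subsample
    Xd = np.column_stack([np.ones(80), X[idx]])
    beta = np.linalg.lstsq(Xd, y[idx], rcond=None)[0]  # linear submodel
    center = X[idx].mean(axis=0)
    radius = np.linalg.norm(X[idx] - center, axis=1).max()  # crude ball-shaped AD
    submodels.append((beta, center, radius))

def predict(xq):
    preds = [b[0] + b[1:] @ xq for b, c, r in submodels
             if np.linalg.norm(xq - c) <= r]           # AD-covering submodels only
    return np.mean(preds) if preds else None           # None: outside every AD

print(predict(np.array([0.5, -0.5, 1.0])))    # inside the ADs; true value is 2.0
print(predict(np.array([50.0, 50.0, 50.0])))  # outside every AD -> None
```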
Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method
Energy Technology Data Exchange (ETDEWEB)
Lee, Myoung Keon [Agency for Defense Development, Daejeon (Korea, Republic of); Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)
2016-10-15
This paper provides the compressive failure strength value of composite laminate developed by using the regression analysis method. The composite material in this document is a carbon/epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature is -60°F to +200°F (-55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up by standard angle layers (0°, +45°, -45° and 90°). The ASTM D6484 standard was used for the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being two ply orientations (0° and ±45°).
Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method
International Nuclear Information System (INIS)
Lee, Myoung Keon; Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon
2016-01-01
This paper provides the compressive failure strength value of composite laminate developed by using the regression analysis method. The composite material in this document is a carbon/epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature is -60°F to +200°F (-55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up by standard angle layers (0°, +45°, -45° and 90°). The ASTM D6484 standard was used for the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being two ply orientations (0° and ±45°).
James W. Hardin; Henrik Schmeidiche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
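The core step of regression calibration can be sketched concretely. This hedged illustration is a textbook special case, not the paper's GLM treatment: the error-prone proxy W is replaced by the best linear estimate of E[X | W] before fitting, which undoes the attenuation of the naive slope. The measurement-error variance is assumed known here, whereas the paper covers its estimation from replicates or instrumental variables.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x = rng.normal(0, 1, n)             # true covariate, never observed
w = x + rng.normal(0, 1, n)         # error-prone proxy, error variance = 1
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a)

naive = slope(w, y)                        # attenuated towards 0 (here by 1/2)
lam = (np.var(w) - 1.0) / np.var(w)        # reliability ratio, var_u = 1 known
x_hat = w.mean() + lam * (w - w.mean())    # linear estimate of E[X | W]
calibrated = slope(x_hat, y)
print(round(naive, 2), round(calibrated, 2))  # roughly 1.0 vs 2.0
```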
Assessing the performance of variational methods for mixed logistic regression models
Czech Academy of Sciences Publication Activity Database
Rijmen, F.; Vomlel, Jiří
2008-01-01
Roč. 78, č. 8 (2008), s. 765-779 ISSN 0094-9655 R&D Projects: GA MŠk 1M0572 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Mixed models * Logistic regression * Variational methods * Lower bound approximation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.353, year: 2008
Regression to fuzziness method for estimation of remaining useful life in power plant components
Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.
2014-10-01
Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value range. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then synergistically used with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data is not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify the effectiveness of the methodology, it was benchmarked against a data-based simple linear regression model used for predictions, which was shown to perform equally well or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
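The regression half of the idea reduces to extrapolating a fitted degradation trend to a known failure point. A minimal sketch follows; the threshold and data are invented for illustration, and the paper additionally weights the measurements through expert-defined fuzzy sets.

```python
import numpy as np

t = np.arange(0, 10.0)                        # inspection times
wear = 0.1 + 0.05 * t + np.array([0.003, -0.002, 0.001, 0.004, -0.003,
                                  0.002, -0.001, 0.003, -0.002, 0.001])
FAILURE_LEVEL = 1.0                           # assumed failure threshold

slope, intercept = np.polyfit(t, wear, 1)     # fitted degradation trend
t_fail = (FAILURE_LEVEL - intercept) / slope  # time when wear hits the threshold
rul = t_fail - t[-1]                          # remaining useful life now
print(round(t_fail, 1), round(rul, 1))        # roughly 18 and 9
```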
Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting
Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD
2018-01-01
Heavy rainfall can cause disasters, so a forecast is needed to predict rainfall intensity. The main factor that causes flooding is high rainfall intensity, which pushes a river beyond its capacity and floods the surrounding area. Rainfall is a dynamic factor, so it is very interesting to study. To support rainfall forecasting, there are methods that can be used ranging from artificial intelligence (AI) to statistics. In this research, we used Adaline as the AI method and regression as the statistical method. The more accurate forecast result shows which method is better for forecasting rainfall here.
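Adaline itself is a small algorithm: a linear neuron trained by the least-mean-squares (delta) rule. A hedged sketch on synthetic stand-in data follows; the study's rainfall series and predictors are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (300, 2))         # two stand-in predictors
y = 10 + 40 * X[:, 0] + 25 * X[:, 1]    # synthetic rainfall intensity target

w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):                   # batch delta-rule (LMS) updates
    err = y - (X @ w + b)               # linear activation, no threshold
    w += lr * X.T @ err / len(y)
    b += lr * err.mean()

print(np.round(w, 2), round(b, 2))      # converges to the true weights
```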
Introducing a simple and economical method to purify Giardia ...
African Journals Online (AJOL)
Jane
2011-08-08
Aug 8, 2011 ... two-phase method was 1.5 × 10⁴ cysts for each two grams of fecal sample. In this ... is a mélange of them with some changes. ... MATERIALS AND METHODS ... than 8 cysts in each microscopic field with the magnification of ×.
A Simple and Accurate Method for Measuring Enzyme Activity.
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
A simple method for purification of herpesvirus DNA
DEFF Research Database (Denmark)
Christensen, Laurids Siig; Normann, Preben
1992-01-01
A rapid and reliable method for purification of herpesvirus DNA from cell cultures is described. The method is based on the isolation of virus particles and/or nucleocapsids by differential centrifugation and exploits the solubilizing and denaturing capabilities of cesium trifluoroacetate during...
Hassanzadeh, S.; Hosseinibalam, F.; Omidvari, M.
2008-04-01
Data of seven meteorological variables (relative humidity, wet temperature, dry temperature, maximum temperature, minimum temperature, ground temperature and sun radiation time) and ozone values have been used for statistical analysis. Meteorological variables and ozone values were analyzed using both multiple linear regression and principal component methods. Data for the period 1999-2004 are analyzed jointly using both methods. For all periods, temperature-dependent variables were highly correlated, but were all negatively correlated with relative humidity. Multiple regression analysis was used to fit the ozone values, using the meteorological variables as predictors. A variable selection method based on high loading of varimax rotated principal components was used to obtain subsets of the predictor variables to be included in the linear regression model of the meteorological variables. In 1999, 2001 and 2002 one of the meteorological variables was weakly influenced predominantly by the ozone concentrations. However, for the year 2000 the model did not show such a predominant influence, which points to variation in sun radiation. This could be due to other factors that were not explicitly considered in this study.
A simple eigenfunction convergence acceleration method for Monte Carlo
International Nuclear Information System (INIS)
Booth, Thomas E.
2011-01-01
Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k_2/k_1. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a ± weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and cancelling particles of ± weights. Instead, only positive weights are used in the acceleration method. (author)
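The convergence behavior the abstract describes is easy to see on a toy operator. A generic power-iteration sketch (not a transport code): the error in the iterate shrinks by a factor of roughly k₂/k₁ per step, so a dominance ratio near one means slow convergence.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])              # eigenvalues 5 and 2, so k2/k1 = 0.4

psi = np.array([1.0, 0.0])              # initial guess of the eigenfunction
for n in range(50):
    psi = A @ psi
    k = np.linalg.norm(psi)             # running eigenvalue estimate
    psi = psi / k                       # renormalize each iteration

print(round(k, 6), np.round(psi, 4))    # -> 5.0 [0.7071 0.7071]
```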
A rapid, simple method for obtaining radiochemically pure hepatic heme
International Nuclear Information System (INIS)
Bonkowski, H.L.; Bement, W.J.; Erny, R.
1978-01-01
Radioactively-labelled heme has usually been isolated from liver to which unlabelled carrier has been added by long, laborious techniques involving organic solvent extraction followed by crystallization. A simpler, rapid method is devised for obtaining radiochemically-pure heme synthesized in vivo in rat liver from δ-amino[4-¹⁴C]levulinate. This method, in which the heme is extracted into ethyl acetate/glacial acetic acid and in which porphyrins are removed from the heme-containing organic phase with HCl washes, does not require addition of carrier heme. The new method gives better heme recoveries than, and heme specific activities identical to, those obtained using the crystallization method. In this new method heme must be synthesized from δ-amino[4-¹⁴C]levulinate; it is not satisfactory to use [2-¹⁴C]glycine substrate because non-heme counts are isolated in the heme fraction. (Auth.)
A Simple Method for Identifying the Acromioclavicular Joint During Arthroscopic Procedures
Javed, Saqib; Heasley, Richard; Ravenscroft, Matt
2013-01-01
Arthroscopic acromioclavicular joint excision is performed via an anterior portal and is technically demanding. We present a simple method for identifying the acromioclavicular joint during arthroscopic procedures.
Correcting for cryptic relatedness by a regression-based genomic control method
Directory of Open Access Journals (Sweden)
Yang Yaning
2009-12-01
Abstract. Background: The genomic control (GC) method is a useful tool to correct for cryptic relatedness in population-based association studies. It was originally proposed for correcting the variance inflation of the Cochran-Armitage additive trend test by using information from unlinked null markers, and was later generalized to be applicable to other tests with the additional requirement that the null markers be matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus limits the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby aggravate the effect of GC correction. Results: In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation of the allele frequencies of the null markers is adjusted by a regression method. Conclusion: The proposed method can be readily applied to the Cochran-Armitage trend tests other than the additive trend test, the Pearson chi-square test and other robust efficiency tests. Simulation results show that the proposed method is effective in controlling type I error in the presence of population substructure.
A subagging regression method for estimating the qualitative and quantitative state of groundwater
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young
2017-08-01
A subsample aggregating (subagging) regression (SBR) method for the analysis of groundwater data pertaining to trend-estimation-associated uncertainty is proposed. The SBR method is validated against synthetic data in comparison with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of the other methods, and the uncertainties are reasonably estimated; the others have no uncertainty analysis option. For further validation, actual groundwater data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR regardless of Gaussian or non-Gaussian skewed data. However, it is expected that GPR has a limitation in applications to data severely corrupted by outliers owing to its non-robustness. From the implementations, it is determined that the SBR method has the potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data such as the groundwater level and contaminant concentration.
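Subagging itself is simple to state: fit the same estimator on many random subsamples and aggregate. A minimal sketch on an invented groundwater-level-like series (the paper's actual estimator and data differ), where the spread across subsamples serves as the uncertainty estimate the abstract mentions:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 120)
level = 20 - 0.3 * t + rng.normal(0, 0.5, 120)   # declining water level

slopes = []
for _ in range(200):
    idx = rng.choice(120, 60, replace=False)     # subsample without replacement
    s, _ = np.polyfit(t[idx], level[idx], 1)     # trend on the subsample
    slopes.append(s)

trend = np.mean(slopes)                          # aggregated trend estimate
uncert = np.std(slopes)                          # subagging spread = uncertainty
print(round(trend, 2), round(uncert, 3))         # trend near -0.3
```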
A simple and rapid molecular method for Leptospira species identification
Ahmed, Ahmed; Anthony, Richard M.; Hartskeerl, Rudy A.
2010-01-01
Serological and DNA-based classification systems show little correlation. Currently, serological and molecular methods for characterizing Leptospira are complex and costly, restricting their world-wide distribution and use. Ligation-mediated amplification combined with microarray analysis
A simple method for potential flow simulation of cascades
Indian Academy of Sciences (India)
vortex panel method to simulate potential flow in cascades is presented. The cascade ... The fluid loading on the blades, such as the normal force and pitching moment, may ... of such discrete infinite array singularities along the blade surface.
A simple method for estimating the convection- dispersion equation ...
African Journals Online (AJOL)
Jane
2011-08-31
Aug 31, 2011 ... approach of modeling solute transport in porous media uses the deterministic ... Methods of estimating CDE transport parameters can be divided into statistical ..... diffusion-type model for longitudinal mixing of fluids in flow.
Impact of regression methods on improved effects of soil structure on soil water retention estimates
Nguyen, Phuong Minh; De Pue, Jan; Le, Khoa Van; Cornelis, Wim
2015-06-01
Increasing the accuracy of pedotransfer functions (PTFs), an indirect method for predicting non-readily available soil features such as soil water retention characteristics (SWRC), is of crucial importance for large-scale agro-hydrological modeling. Adding significant predictors (i.e., soil structure) and implementing more flexible regression algorithms are among the main strategies of PTF improvement. The aim of this study was to investigate whether the improved effect of categorical soil structure information on estimating soil-water content at various matric potentials, which has been reported in the literature, could be enduringly captured by regression techniques other than the usually applied linear regression. Two data mining techniques, i.e., Support Vector Machines (SVM) and k-Nearest Neighbors (kNN), which have recently been introduced as promising tools for PTF development, were utilized to test if the incorporation of soil structure would improve PTF accuracy under a context of rather limited training data. The results show that incorporating descriptive soil structure information, i.e., massive, structured and structureless, as a grouping criterion can improve the accuracy of PTFs derived by the SVM approach in the range of matric potential of -6 to -33 kPa (average RMSE decreased by up to 0.005 m³ m⁻³ after grouping, depending on matric potential). The improvement was primarily attributed to the outperformance of SVM-PTFs calibrated on structureless soils. No improvement was obtained with the kNN technique, at least not in our study in which the data set became limited in size after grouping. Since there is an impact of regression techniques on the improved effect of incorporating qualitative soil structure information, selecting a proper technique will help to maximize the combined influence of flexible regression algorithms and soil structure information on PTF accuracy.
A Simple Deep Learning Method for Neuronal Spike Sorting
Yang, Kai; Wu, Haifeng; Zeng, Yu
2017-10-01
Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology, recent multi-electrode technologies can record the activity of thousands of neuronal spikes simultaneously, which increases the computational complexity of conventional sorting algorithms. In this paper, we focus on how to reduce this complexity and introduce a deep learning algorithm, the principal component analysis network (PCANet), to spike sorting. The introduced method starts from a conventional model and establishes a Toeplitz matrix. From the column vectors of this matrix, we train a PCANet that extracts eigenvalue vectors of the spikes. Finally, a support vector machine (SVM) is used to sort the spikes. In experiments, we choose two groups of simulated data from publicly available databases and compare the introduced method with conventional methods. The results indicate that the introduced method indeed has lower complexity while achieving the same sorting errors as the conventional methods.
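The Toeplitz construction mentioned above amounts to stacking overlapping windows of the recorded signal, each window serving as one column vector for the subsequent PCA stage. A minimal sketch (the window width and data are invented for illustration):

```python
def toeplitz_windows(signal, width):
    """Overlapping windows of the signal; each window is one column
    of the Toeplitz-like matrix fed to PCA in a PCANet-style pipeline."""
    return [signal[i:i + width] for i in range(len(signal) - width + 1)]

cols = toeplitz_windows([1, 2, 3, 4, 5], 3)
# cols == [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```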
Prastuti, M.; Suhartono; Salehah, NA
2018-04-01
The need for energy supply, especially electricity, in Indonesia has been increasing in recent years. Furthermore, high electricity usage by people at different times of day leads to heteroscedasticity, which often makes electricity forecasting difficult, even though estimating the electricity supply needed to fulfill the community's demand is very important. An accurate forecast of electricity consumption is one of the key challenges for an energy provider seeking better resource and service planning and control actions that balance electricity supply and demand. In this paper, a hybrid ARIMAX Quantile Regression (ARIMAX-QR) approach is proposed to predict short-term electricity consumption in East Java, and it is compared to time series regression using the RMSE, MAPE, and MdAPE criteria. The data used in this research were half-hourly electricity consumption records for the period September 2015 to April 2016. The results show that the proposed approach is a competitive alternative for short-term electricity forecasting in East Java. ARIMAX-QR using lag values and dummy variables as predictors yields more accurate predictions on both in-sample and out-of-sample data. Moreover, both the time series regression and ARIMAX-QR methods with additional lag-value predictors capture the patterns in the data accurately, producing better predictions than models without additional lag variables.
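Quantile regression, the QR part of ARIMAX-QR, minimizes the pinball (check) loss rather than squared error, which is what makes it robust to heteroscedasticity. A minimal sketch of that loss (the quantile level and sample values are illustrative):

```python
def pinball_loss(y_true, y_pred, tau):
    """Pinball loss: penalizes under-prediction by tau and over-prediction
    by (1 - tau), so its expected-loss minimizer is the tau-quantile."""
    u = y_true - y_pred
    return tau * u if u >= 0 else (tau - 1.0) * u

# At tau = 0.9, under-predicting demand by 1 unit costs 0.9,
# while over-predicting by 1 unit costs only 0.1.
under = pinball_loss(10.0, 9.0, 0.9)
over = pinball_loss(9.0, 10.0, 0.9)
```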
The simple method of determination peaks areas in multiplets
International Nuclear Information System (INIS)
Loska, L.; Ptasinski, J.
1991-01-01
Semiconductor germanium detectors used in γ-spectrometry give spectra with well-separated peaks. In some cases, however, the energies of γ-lines are too close to produce resolved, undisturbed peaks, and a mathematical separation must then be performed. The method proposed here is based on the assumption that the areas of the peaks composing the analysed multiplet are proportional to their heights. The method can be applied to any number of interfering peaks, provided that the background function under the multiplet is accurately determined. The results of test calculations performed on a simulated spectrum are given. The method works successfully in a computer program used for neutron activation analysis data processing. (author). 9 refs, 1 fig, 1 tab
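Under the stated assumption, the background-subtracted total area of the multiplet splits among the component peaks in proportion to their heights. A minimal sketch (the counts and heights below are invented for illustration):

```python
def split_multiplet_area(total_area, peak_heights):
    """Distribute the background-subtracted area of a multiplet among its
    component peaks in proportion to their heights (the method's assumption)."""
    total_height = sum(peak_heights)
    if total_height <= 0:
        raise ValueError("peak heights must sum to a positive value")
    return [total_area * h / total_height for h in peak_heights]

# Example: a doublet of total area 1200 counts with peak heights 300 and 100
areas = split_multiplet_area(1200.0, [300.0, 100.0])
# areas == [900.0, 300.0]
```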
A simple method of injecting tumescent fluid for liposuction
Directory of Open Access Journals (Sweden)
Arindam Sarkar
2011-01-01
Injection of tumescent fluid is essential to obtain painless and relatively bloodless liposuction. There are many methods of injecting the tumescent fluid, such as power pumps, syringes and pressure cuffs. Our method consists of applying air pressure within the plastic transfusion-fluid bottle by pricking it with a wide-bore needle and connecting it to a sphygmomanometer balloon pump. By inflating the balloon pump, and thus increasing the pressure inside the plastic bottle, the rate and volume of infusion can be controlled. If the cuff is instead applied outside the bottle, visibility inside is impaired and the bottle collapses, preventing continued pressure and thereby limiting both the quantity and the rate of infusion. Power pumps are expensive. This method is inexpensive, the infused volume of fluid is visible, and the rate of infusion is controllable.
Simple design of slanted grating with simplified modal method.
Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun
2014-02-15
A simplified modal method (SMM) is presented that offers a clear physical picture of subwavelength slanted gratings. The diffraction characteristics of a slanted grating under the Littrow configuration are revealed by the SMM to be those of an equivalent rectangular grating, in good agreement with rigorous coupled-wave analysis. Based on this equivalence, we obtain an effective analytic solution that simplifies the design and optimization of slanted gratings; for example, a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted grating devices.
siMS Score: Simple Method for Quantifying Metabolic Syndrome
Soldatovic, Ivan; Vukovic, Rade; Culafic, Djordje; Gajic, Milan; Dimitrijevic-Sreckovic, Vesna
2016-01-01
Objective To evaluate the siMS score and siMS risk score, novel continuous metabolic syndrome scores, as methods for quantification of metabolic status and risk. Materials and Methods The developed siMS score was calculated using the formula: siMS score = 2*Waist/Height + Gly/5.6 + Tg/1.7 + TAsystolic/130 − HDL/1.02 or 1.28 (for male or female subjects, respectively). The siMS risk score was calculated using the formula: siMS risk score = siMS score * age/45 or 50 (for male or female subjects, respectively) * famil...
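The reported formulas translate directly into code. The variable names and units (mmol/L for glycemia, triglycerides and HDL; mmHg for systolic pressure; waist and height in the same length unit) are our reading of the abstract, and the truncated family-history term of the risk score is omitted:

```python
def sims_score(waist, height, gly, tg, ta_systolic, hdl, male):
    """siMS score = 2*Waist/Height + Gly/5.6 + Tg/1.7 + TAsystolic/130
    - HDL/1.02 (male) or HDL/1.28 (female)."""
    hdl_ref = 1.02 if male else 1.28
    return (2.0 * waist / height + gly / 5.6 + tg / 1.7
            + ta_systolic / 130.0 - hdl / hdl_ref)

def sims_risk_score(score, age, male):
    """siMS risk score = siMS score * age/45 (male) or age/50 (female),
    without the truncated family-history factor."""
    return score * age / (45.0 if male else 50.0)

# A male subject sitting exactly at every reference value except waist/height:
s = sims_score(waist=100, height=180, gly=5.6, tg=1.7,
               ta_systolic=130, hdl=1.02, male=True)
# each reference-value term contributes 1 (or -1 for HDL), so s ~= 2 + 200/180
```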
A robust and efficient stepwise regression method for building sparse polynomial chaos expansions
Energy Technology Data Exchange (ETDEWEB)
Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium); Raisee, Mehrdad [School of Mechanical Engineering, College of Engineering, University of Tehran, P.O. Box: 11155-4563, Tehran (Iran, Islamic Republic of); Ghorbaniasl, Ghader; Contino, Francesco; Lacor, Chris [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium)
2017-03-01
Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes becomes unaffordable as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools from probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method when the analyses are repeated with random experimental designs.
Simple picrate method for the determination of cyanide in cassava ...
African Journals Online (AJOL)
The red coloured complex on the strips was extracted with 50% ethanol solution and the absorbance of the extract was measured at 510nm using a spectrophotometer. The method was reproducible and cyanide as low as 1 microgram could be determined. Cyanide levels of all the cassava varieties tested were higher than ...
A simple reliability block diagram method for safety integrity verification
International Nuclear Information System (INIS)
Guo Haitao; Yang Xianhui
2007-01-01
IEC 61508 requires safety integrity verification for safety-related systems as a necessary procedure in the safety life cycle. PFDavg (average probability of failure on demand) must be calculated to verify the safety integrity level (SIL). Since IEC 61508-6 does not give detailed explanations of the definitions and PFDavg calculations for its examples, it is difficult for reliability or safety engineers to understand them when using the standard as guidance in practice. A method using reliability block diagrams is investigated in this study in order to provide a clear and feasible way of calculating PFDavg and to help those who take IEC 61508-6 as their guidance. The method first finds the mean down times (MDTs) of both the channel and the voted group, and then PFDavg. The calculated results for various voted groups are compared with those in IEC 61508-6 and in Ref. [Zhang T, Long W, Sato Y. Availability of systems with self-diagnostic components-applying Markov model to IEC 61508-6. Reliab Eng Syst Saf 2003;80(2):133-41], and the comparison yields an interesting outcome: although differences in the MDT of voted groups exist between IEC 61508-6 and this paper, the PFDavg values of the voted groups are comparatively close. With its detailed description, the RBD method presented can be applied to quantitative SIL verification, in a way similar to the method in IEC 61508-6.
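For orientation, the widely quoted simplified expression for a single (1oo1) channel links PFDavg to the dangerous undetected failure rate, the proof-test interval and the mean down time; the standard's full equations carry more terms, and the numbers below are illustrative rather than taken from its worked examples:

```python
def pfd_avg_1oo1(lambda_du, proof_interval_h, mdt_h=0.0):
    """Simplified 1oo1 channel: PFDavg ~= lambda_DU * (T1/2 + MDT),
    with lambda_DU in failures/hour and times in hours."""
    return lambda_du * (proof_interval_h / 2.0 + mdt_h)

# lambda_DU = 1e-6 /h with an annual proof test (8760 h), repair time ignored
pfd = pfd_avg_1oo1(lambda_du=1e-6, proof_interval_h=8760)
# pfd == 4.38e-3, which falls in the SIL 2 band (1e-3 <= PFDavg < 1e-2)
```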
Assay of spent fuel by a simple reactivity method
International Nuclear Information System (INIS)
Lee, D.M.; Lindquist, L.O.
1982-01-01
A new method for the assay of spent-fuel assemblies has been developed that eliminates the need for external isotopic neutron sources, yet retains the advantages of an active interrogation system. The assay is accomplished by changing the reactivity of the system and correlating the measurements to burnup. 7 figures
A Simple Alternative Method for the Synthesis of Aromatic Dialdehydes
KOZ, Gamze; ASTLEY, Demet; ASTLEY, Stephen
2011-01-01
Aromatic dialdehydes were synthesized from 5-t-butylsalicylaldehyde and o-vanillin in good yields using paraformaldehyde, hydrobromic acid and catalytic amounts of sulfuric acid in one step, a transformation previously unavailable with existing methods. Key Words: aromatic dialdehydes, bromomethylation, 5-t-butylsalicylaldehyde, o-vanillin.
Simple and efficient methods for isolation and activity measurement ...
African Journals Online (AJOL)
Jane
2011-06-29
Key words: Hirudin, thrombin titration method, chromatography, purification. …
Simple method to calculate percolation, Ising and Potts clusters
International Nuclear Information System (INIS)
Tsallis, C.
1981-01-01
A procedure (the 'break-collapse method') is introduced which considerably simplifies the calculation of two- or multirooted clusters like those commonly appearing in real-space renormalization group (RG) treatments of bond percolation and of pure and random Ising and Potts problems. The method is illustrated through two applications to the q-state Potts ferromagnet. The first of them concerns an RG calculation of the critical exponent ν for the isotropic square lattice: numerical consistency is obtained (particularly for q→0) with the den Nijs conjecture. The second application is a compact reformulation of the standard star-triangle and duality transformations, which provide the exact critical temperature for the anisotropic triangular and honeycomb lattices. (Author) [pt
Captive solvent methods for fast, simple carbon-11 radioalkylations
International Nuclear Information System (INIS)
Jewett, D.M.; Mangner, T.J.; Watkins, G.L.
1991-01-01
Carbon-11-labeled radiopharmaceuticals for receptor studies usually require final purification by high-performance liquid chromatography (HPLC). A significant simplification of the apparatus is possible if the radiolabeling reaction can be done directly in the HPLC injection circuit. Captive solvent methods, in which the reaction is done in a small volume of solvent absorbed in a porous solid matrix, are a general approach to this problem. For N-methylations with [11C]methyl iodide, a basic catalyst may be incorporated in the polymeric or alumina solid phase. Reaction volumes are from 20 to 100 µL. Often no heating or cooling of the reaction column is necessary. The syntheses of [11C]PK11195 and [11C]flumazenil are described to illustrate some of the advantages and limitations of captive solvent methods
A method of solving simple harmonic oscillator Schroedinger equation
Maury, Juan Carlos F.
1995-01-01
A usual step in solving the full Schrödinger equation is to try first the case in which the dimensionless position variable w is large. In this case the harmonic oscillator equation takes the form (d²/dw² − w²)F = 0, and following the W.K.B. method it gives the intermediate solution F = exp(−w²/2), which actually satisfies exactly another equation, (d²/dw² + 1 − w²)F = 0. We apply a different method, useful in anharmonic oscillator equations, similar to that of Rampal and Datta; although it is slightly more complicated, it is also more general and systematic.
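The exact-solution claim can be checked by direct differentiation of the intermediate solution:

```latex
F(w) = e^{-w^{2}/2}, \qquad
F'(w) = -w\, e^{-w^{2}/2}, \qquad
F''(w) = (w^{2} - 1)\, e^{-w^{2}/2},
```

so that F'' + (1 − w²)F = (w² − 1)e^{−w²/2} + (1 − w²)e^{−w²/2} = 0, confirming that F = exp(−w²/2) satisfies the second equation exactly, while for large w the −w²F term dominates and the first, asymptotic equation is recovered.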
Standardized methods for photography in procedural dermatology using simple equipment.
Hexsel, Doris; Hexsel, Camile L; Dal'Forno, Taciana; Schilling de Souza, Juliana; Silva, Aline F; Siega, Carolina
2017-04-01
Photography is an important tool in dermatology. Reproducing the settings of before photos after interventions allows more accurate evaluation of treatment outcomes. In this article, we describe standardized methods and tips to obtain photographs, both for clinical practice and research procedural dermatology, using common equipment. Standards for the studio, cameras, photographer, patients, and framing are presented in this article. © 2017 The International Society of Dermatology.
Simple Room Temperature Method for Polymer Optical Fibre Cleaving
DEFF Research Database (Denmark)
Saez-Rodriguez, David; Nielsen, Kristian; Bang, Ole
2015-01-01
In this paper, we report on a new method to cleave polymer optical fibre. The most common way to cut a polymer optical fibre is chopping it with a razor blade; however, in this approach both the fibre and the blade must be preheated in order to make the material ductile and thus prevent crazing … of similar quality to those produced by more complex and expensive heated systems.
Directory of Open Access Journals (Sweden)
Shipra Singh
2012-01-01
The present study was undertaken to develop a validated, rapid, simple, and low-cost ultraviolet (UV) spectrophotometric method for estimating etoricoxib (ETX) in pharmaceutical formulations. The analysis was performed at λmax 233 nm using 0.1 M HCl as blank/diluent. The proposed method was validated according to International Conference on Harmonization (ICH) guidelines, including linearity, accuracy, precision, reproducibility, and specificity. The proposed method was also used to assess the ETX content of two commercial brands on the Indian market. Beer's law was obeyed in the concentration range of 0.1-0.5 μg/ml, and the regression equation was Y = 0.418x + 0.018. The mean accuracy values for 0.1 μg/ml and 0.2 μg/ml concentrations of ETX were found to be 99.76 ± 0.52% and 99.12 ± 0.84%, respectively, and the relative standard deviation (RSD) of interday and intraday measurements was less than 2%. The developed method was suitable and specific for the analysis of ETX even in the presence of common excipients. The method was applied to two different marketed brands, whose ETX contents were 98.5 ± 0.56% and 99.33 ± 0.44% of the labeled claim, respectively. The proposed method was validated as per ICH guidelines and statistically good results were obtained. This method can be employed for routine analysis of ETX in bulk and commercial formulations.
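The reported calibration line is used in the usual way to read a concentration back from a measured absorbance; a minimal sketch with the abstract's regression coefficients (the sample absorbance below is invented):

```python
SLOPE, INTERCEPT = 0.418, 0.018   # Y = 0.418x + 0.018 from the calibration

def etx_concentration(absorbance):
    """Invert the calibration line: x = (Y - intercept) / slope, in ug/ml."""
    return (absorbance - INTERCEPT) / SLOPE

c = etx_concentration(0.1434)   # hypothetical measured absorbance
# c ~= 0.30 ug/ml, inside the validated 0.1-0.5 ug/ml Beer's-law range
```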
Directory of Open Access Journals (Sweden)
Guan Lian
2018-01-01
Accurate prediction of taxi-out time is a significant precondition for improving the operational efficiency of the departure process at an airport, as well as for reducing long taxi-out times, congestion, and excessive greenhouse gas emissions. Unfortunately, several traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three such conventional methods, the Generalized Linear Model, the Softmax Regression Model, and an Artificial Neural Network, together with two improved Support Vector Regression (SVR) approaches based on swarm intelligence optimization: Particle Swarm Optimization (PSO) and the Firefly Algorithm. In order to improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as a significant factor at congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the two proposed SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only achieve the best model fit and accuracy among the representative forecast models, but also achieve better predictive performance when dealing with abnormal taxi-out time states.
da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues
2015-01-01
This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
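Weighted least squares for a straight-line calibration has a closed form, with weights typically taken as the inverse variance of each calibration point so that noisy high-concentration points do not dominate the fit. A minimal, library-free sketch (the data and weights below are synthetic):

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y = intercept + slope * x,
    minimizing sum(w_i * (y_i - intercept - slope * x_i)^2)."""
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    intercept = (swy - slope * swx) / sw
    return slope, intercept

# Exactly linear data with heteroscedastic (decreasing) weights:
slope, intercept = wls_line([1, 2, 3, 4], [3, 5, 7, 9],
                            [1.0, 0.5, 0.25, 0.125])
# slope == 2.0, intercept == 1.0: any valid weighting recovers an exact line
```

With real heteroscedastic data the weighted and unweighted fits differ, which is the point the paper makes about choosing the calibration model.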
Zhu, Xiaofeng; Suk, Heung-Il; Wang, Li; Lee, Seong-Whan; Shen, Dinggang
2017-05-01
In this paper, we focus on joint regression and classification for Alzheimer's disease diagnosis and propose a new feature selection method by embedding the relational information inherent in the observations into a sparse multi-task learning framework. Specifically, the relational information includes three kinds of relationships (feature-feature, response-response, and sample-sample relations), preserving the similarity among the features, the response variables, and the samples, respectively. To conduct feature selection, we first formulate the objective function by imposing these three relational characteristics along with an ℓ2,1-norm regularization term, and further propose a computationally efficient algorithm to optimize it. With the dimension-reduced data, we train two support vector regression models to predict the clinical scores of ADAS-Cog and MMSE, respectively, and a support vector classification model to determine the clinical label. We conducted extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to validate the effectiveness of the proposed method. Our experimental results showed the efficacy of the proposed method in enhancing the performance of both clinical score prediction and disease status identification, compared to state-of-the-art methods. Copyright © 2015 Elsevier B.V. All rights reserved.
A simple method for calculation of Glauber's amplitude
International Nuclear Information System (INIS)
Omboo, Z.
1983-01-01
A method of calculating the terms of the Glauber series expansion for elastic scattering of composite systems is presented. The inclusion of a general scattering diagram essentially simplifies the calculation procedure. In this case the complicated combinatorial problem of reducing similar terms in the Glauber series is solved easily, and the order of the determinant corresponding to the various terms of the series decreases at least by a factor of two if the numbers of constituents of the scattered systems are equal. If these numbers are not equal, the determinant order is equal to the smaller one
Simple method for measuring reflectance of optical coatings
International Nuclear Information System (INIS)
Wen Gui Wang; Yi Sheng Chen
1995-01-01
The quality of optical coatings has an important effect on the performance of optical instruments. In the last few years, the requirements for super-low-loss dielectric mirror coatings used in low-gain laser systems, such as free-electron lasers and ring lasers, have given an impetus to the development of precise reflectance measurement of optical coatings. A reliable and workable technique is to measure the light-intensity decay time of an optical resonant cavity. This paper describes a measuring method based on direct measurement of the light-intensity decay time of a resonant cavity comprised of low-loss optical components. From the evolution of the luminous flux stored inside the cavity, this method provides quick and precise reflectance measurements of low-loss, highly reflecting mirror coatings as well as transmittance measurements of low-loss antireflection coatings, and is especially effective with super-low-loss, highly reflecting mirrors. From the round-trip path length of the cavity and the speed of light, the exponential decay time of the light intensity in the cavity is easy to obtain, and the cavity losses can be deduced. The optical reflectance of low-loss, highly reflecting mirror coatings and antireflection coatings is thus precisely measured. This is highly significant for the characterization of coating surfaces, the improvement of optical instrument performance, and the development of high technology
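The decay-time relation underlying such measurements is the standard cavity ring-down formula for a symmetric two-mirror cavity, τ = L / (c(1 − R)), which inverts to give the mirror reflectance; this sketch is a generic illustration of that relation, not the authors' specific apparatus:

```python
C = 299_792_458.0  # speed of light, m/s

def mirror_reflectance(cavity_length_m, decay_time_s):
    """Invert tau = L / (c * (1 - R)) for a symmetric two-mirror
    ring-down cavity: R = 1 - L / (c * tau)."""
    return 1.0 - cavity_length_m / (C * decay_time_s)

# A 0.5 m cavity whose intensity decays in ~167 us implies ~10 ppm loss
# per mirror reflection, i.e. R ~= 0.99999.
r = mirror_reflectance(0.5, 0.5 / (C * 1e-5))
```

The longer the measured decay time for a given cavity length, the lower the loss, which is why the technique resolves super-low-loss mirrors that transmission measurements cannot.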
Simple measuring rod method for the coaxiality of serial holes
Wang, Lei; Yang, Tongyu; Wang, Zhong; Ji, Yuchen; Liu, Changjie; Fu, Luhua
2017-11-01
Aiming at the rapid coaxiality measurement of serial-hole parts with small diameters, a coaxiality measuring rod with a single laser displacement sensor (LDS) for each layer hole is proposed. This method does not require the rotation-angle information of the rod, and the coaxiality of the serial holes can be calculated from the values measured by the LDSs after randomly rotating the measuring rod several times. With the mathematical model of the coaxiality measuring rod, each factor affecting the accuracy of coaxiality measurement is analyzed by simulation, and the installation accuracy requirements of the measuring rod and LDSs are presented. Within the tolerance of a certain installation error of the measuring rod, the relative center of each hole is calculated by solving the over-determined nonlinear equations of the fitting circles of the multi-layer holes. In experiments, coaxiality measurement accuracy is realized with a 16 μm precision LDS, and the validity of the measurement method is verified. The manufacturing and measurement requirements of the coaxiality measuring rod are low; by changing the position of the LDSs in the measuring rod, serial holes of different sizes and numbers can be measured. Rapid coaxiality measurement of parts can thus be easily implemented at industrial sites.
DEFF Research Database (Denmark)
Gundersen, H J; Bendtsen, T F; Korbo, L
1988-01-01
Stereology is a set of simple and efficient methods for the quantitation of three-dimensional microscopic structures which is specifically tuned to provide reliable data from sections. Within the last few years, a number of new methods have been developed which are of special interest to pathologists … are invariably simple and easy.
Directory of Open Access Journals (Sweden)
Gholam Reza Sheykhzadeh
2017-02-01
Introduction: Penetration resistance is one of the criteria for evaluating soil compaction. It correlates with several soil properties such as vehicle trafficability, resistance to root penetration, seedling emergence, and soil compaction by farm machinery. Direct measurement of penetration resistance is time consuming and difficult because of its high temporal and spatial variability. Therefore, many different regression and artificial neural network pedotransfer functions have been proposed to estimate penetration resistance from readily available soil variables such as particle size distribution, bulk density (Db) and gravimetric water content (θm). The lands of Ardabil Province are one of the main potato production regions of Iran; thus, obtaining the soil penetration resistance in these regions helps with the management of potato production. The objective of this research was to derive pedotransfer functions, using regression and artificial neural networks, to predict penetration resistance from some soil variables in the agricultural soils of the Ardabil plain, and to compare the performance of artificial neural networks with regression models. Materials and methods: Disturbed and undisturbed soil samples (n = 105) were systematically taken from 0-10 cm soil depth at roughly 3000 m spacing in the agricultural lands of the Ardabil plain (lat 38°15' to 38°40' N, long 48°16' to 48°61' E). The contents of sand, silt and clay (hydrometer method), CaCO3 (titration method), bulk density (cylinder method), particle density (Dp, pycnometer method), organic carbon (wet oxidation method), total porosity (calculated from Db and Dp), and saturated (θs) and field (θf) soil water (gravimetric method) were measured in the laboratory. The mean geometric diameter (dg) and standard deviation (σg) of soil particles were computed from the percentages of sand, silt and clay. Penetration resistance was measured in situ using a cone penetrometer (analog model) at 10
Landslide susceptibility mapping on a global scale using the method of logistic regression
Directory of Open Access Journals (Sweden)
L. Lin
2017-08-01
This paper proposes a statistical model for mapping global landslide susceptibility based on logistic regression. After investigating explanatory factors for landslides in the existing literature, five factors were selected to model landslide susceptibility: relative relief, extreme precipitation, lithology, ground motion and soil moisture. When building the model, 70% of the landslide and non-landslide points were randomly selected for the logistic regression and the others were used for model validation. To evaluate the accuracy of the predictive models, this paper adopts several criteria, including the receiver operating characteristic (ROC) curve method. The logistic regression experiments found all five factors to be significant in explaining landslide occurrence on a global scale. During the modeling process, the percentage correct in the confusion matrix of landslide classification was approximately 80% and the area under the curve (AUC) was nearly 0.87. During the validation process, the corresponding statistics were about 81% and 0.88, respectively. These results indicate that the model has strong robustness and stable performance. The model found that, at a global scale, soil moisture can be dominant in the occurrence of landslides while topographic factors may be secondary.
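Logistic regression maps a weighted sum of the explanatory factors to a susceptibility probability through the sigmoid function; a minimal sketch (the coefficient values below are invented for illustration, not the paper's fitted model):

```python
import math

def landslide_probability(factors, coefs, intercept):
    """Logistic model: P(landslide) = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = intercept + sum(b * x for b, x in zip(coefs, factors))
    return 1.0 / (1.0 + math.exp(-z))

# Five standardized factors: relief, precipitation, lithology, motion, moisture
p = landslide_probability([0.0, 0.0, 0.0, 0.0, 0.0],
                          [0.8, 0.6, 0.4, 0.5, 0.9], -1.0)
# with every factor at its mean (0), p = 1 / (1 + e) ~= 0.27
```

Thresholding such probabilities over a grid of cells, and comparing against held-out landslide points, is what produces the confusion matrix and ROC/AUC statistics quoted above.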
Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin
2015-02-09
The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not capture all the useful information in the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. The SPA-MLR method therefore risks the loss of useful information. To make full use of the information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein, combining a consensus strategy with the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected iteratively from the remaining variables. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near-infrared (NIR) spectra of corn and diesel. The C-SPA-MLR method showed better prediction performance than the SPA-MLR and full-spectrum PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and other spectroscopic data. Copyright © 2014 Elsevier B.V. All rights reserved.
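The consensus step itself is simple: member models built on different variable subsets are combined by averaging their predictions. This sketch uses toy linear members in place of the SPA-selected MLR models:

```python
def consensus_predict(members, x):
    """Average the member-model predictions (the consensus step of C-SPA-MLR)."""
    preds = [m(x) for m in members]
    return sum(preds) / len(preds)

# Toy members standing in for SPA-MLR models on different variable subsets
m1 = lambda x: 2.0 * x[0] + 1.0   # uses variable 0 only
m2 = lambda x: 3.0 * x[1]         # uses variable 1 only
y = consensus_predict([m1, m2], (1.0, 2.0))
# y == (3.0 + 6.0) / 2 == 4.5
```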
Development of K-Nearest Neighbour Regression Method in Forecasting River Stream Flow
Directory of Open Access Journals (Sweden)
Mohammad Azmi
2012-07-01
Full Text Available Different statistical, non-statistical and black-box methods have been used in forecasting processes. Among statistical methods, the K-nearest neighbour non-parametric regression method (K-NN), owing to its natural simplicity and mathematical basis, is one of the recommended methods for forecasting processes. In this study, the K-NN method is explained in full. In addition, development and improvement approaches such as best-neighbour estimation, data transformation functions, distance functions and a proposed extrapolation method are described. The K-NN method, together with its development approaches, is used in streamflow forecasting for the Zayandeh-Rud Dam upper basin. A comparison between the final results of the classic K-NN method and the modified K-NN (number of neighbours 5, Range Scaling transformation function, Mahalanobis distance function and the proposed extrapolation method) shows that the modified K-NN improved performance by 45%, 59% and 17%, respectively, in the criteria of goodness of fit, root mean square error, percentage of volume of error and correlation. These results confirm the necessity of applying the mentioned approaches to derive more accurate forecasts.
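A minimal K-NN regression forecaster in the spirit described above (lag-vector states, k nearest neighbours, inverse-distance weighting). The series here is a synthetic periodic "flow", not the Zayandeh-Rud record, and the lag and k values are illustrative:

```python
import numpy as np

def knn_forecast(series, k=5, lag=3):
    """One-step-ahead K-NN regression forecast of a time series."""
    series = np.asarray(series, float)
    # Embed the series into lag vectors; y holds each vector's successor.
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    state = series[-lag:]                      # the current state vector
    dist = np.linalg.norm(X - state, axis=1)
    nearest = np.argsort(dist)[:k]
    # Inverse-distance weights (a common refinement over a plain mean).
    w = 1.0 / (dist[nearest] + 1e-9)
    return float(np.sum(w * y[nearest]) / np.sum(w))

# A noiseless periodic "flow" series: the forecast should continue the cycle.
t = np.arange(200)
flow = 10.0 + 3.0 * np.sin(2 * np.pi * t / 12)
pred = knn_forecast(flow, k=5, lag=3)
```

Because the toy series is exactly periodic, the nearest past states have near-zero distance and the weighted average reproduces the next value of the cycle almost exactly.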
Simple discretization method for autoionization widths. III. Molecules
International Nuclear Information System (INIS)
Macías, A.; Martín, F.; Riera, A.; Yáñez, M.
1987-01-01
We apply a new method to calculate widths of two-electron Feshbach resonances, which was described in detail and applied to atomic systems in preceding articles (this issue), to molecular and quasimolecular autoionizing states. For simplicity in the programming effort, we restrict our calculations to the small-R region where one-centered expansions are sufficiently accurate to describe the wave functions. As test cases, positions and widths for the H2, He2^(2+), HeH^+, and LiHe^(3+) resonances of lowest energy are computed for R<0.6 a.u. The advantage of using block-diagonalization techniques to define diabatic resonant states instead of generalizing the Feshbach formalism is pointed out.
Simple Methods to Approximate CPC Shape to Preserve Collection Efficiency
Directory of Open Access Journals (Sweden)
David Jafrancesco
2012-01-01
Full Text Available The compound parabolic concentrator (CPC) is the most efficient reflective geometry for collecting light to an exit port. However, to allow its actual use in solar plants or photovoltaic concentration systems, a tradeoff between system efficiency and cost reduction, the two key issues for sunlight exploitation, must be found. In this work, we analyze various methods to model an approximated CPC intended to be simpler and more cost-effective than the ideal one, while preserving the system efficiency. The ease of manufacturing arises from the use of truncated conic surfaces only, which can be realized by cheap machining techniques. We compare different configurations on the basis of their collection efficiency, evaluated by means of nonsequential ray-tracing software. Moreover, because some configurations are beam dependent and for a closer approximation of a real case, the input beam is simulated as nonsymmetric, with a nonconstant irradiance on the CPC internal surface.
A simple method for estimation of phosphorous in urine
International Nuclear Information System (INIS)
Chaudhary, Seema; Gondane, Sonali; Sawant, Pramilla D.; Rao, D.D.
2016-01-01
Following internal contamination with 32P, it is preferentially eliminated from the body in urine. It is estimated by in-situ precipitation of ammonium molybdo-phosphate (AMP) in urine followed by gross beta counting. The amount of AMP formed in-situ depends on the amount of stable phosphorus (P) present in the urine and hence it was essential to generate information regarding the urinary excretion of stable P. If the amount of P excreted is significant, then the amount of AMP formed would correspondingly increase, leading to absorption of some of the β particles. The present study was taken up to estimate the daily urinary excretion of P using the phospho-molybdate spectrophotometry method. A few urine samples received from radiation workers were analyzed and, based on the observed range of stable P in urine, the sample volume required for 32P estimation was finalized.
A simple method for quantifying jump loads in volleyball athletes.
Charlton, Paula C; Kenneally-Dabrowski, Claire; Sheppard, Jeremy; Spratford, Wayne
2017-03-01
Evaluate the validity of a commercially available wearable device, the Vert, for measuring vertical displacement and jump count in volleyball athletes. Propose a potential method of quantifying external load during training and match play within this population. Validation study. The ability of the Vert device to measure vertical displacement in male, junior elite volleyball athletes was assessed against reference-standard laboratory motion analysis. The ability of the Vert device to count jumps during training and match play was assessed via comparison with retrospective video analysis to determine precision and recall. A method of quantifying external load, known as the load index (LdIx) algorithm, was proposed using the product of the jump count and average kinetic energy. Correlations between two separate Vert devices and three-dimensional trajectory data were good to excellent for all jump types performed (r=0.83-0.97), with a mean bias of 3.57-4.28 cm. When matched against jumps identified through video analysis, the Vert demonstrated excellent precision (0.995-1.000), evidenced by a low number of false positives. The number of false negatives identified with the Vert was higher, resulting in lower recall values (0.814-0.930). The Vert is a commercially available tool that has potential for measuring vertical displacement and jump count in elite junior volleyball athletes without the need for time-consuming analysis and bespoke software, subsequently allowing the collected data to better quantify load using the proposed algorithm (LdIx). Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Comparing the index-flood and multiple-regression methods using L-moments
Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.
In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward's cluster and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was done using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward's clustering method. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust of the five candidate distributions for all proposed sub-regions of the study area; in general, the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve fitting (plotting position) method. In general, the index-flood method gives more reliable estimates for flood magnitudes of different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin.
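Sample L-moments, the building blocks of the homogeneity and distribution-selection measures used above, are simple to compute. A sketch of the first two (l1, location; l2, scale) using the standard unbiased order-statistic estimators:

```python
import numpy as np

def l_moments(x):
    """First two sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    b0 = x.mean()
    # b1 = (1/n) * sum over i of ((i-1)/(n-1)) * x_(i), i = 1..n
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n
    return b0, 2.0 * b1 - b0          # l1 (location), l2 (scale)

# For the integers 1..11, l1 = (n+1)/2 = 6 and l2 = (n+1)/6 = 2.
l1, l2 = l_moments(np.arange(1, 12))
```

Higher L-moment ratios (L-CV, L-skewness, L-kurtosis), which feed the heterogeneity and Z-statistic tests, are built from further weighted sums of the same sorted sample.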
International Nuclear Information System (INIS)
Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao
2013-01-01
Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by the Kendall τ correlation test. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by quartile value comparison. ► A paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted by employing information from past days that are similar to the forecast day. As in many nonlinear systems, seasonal and trend items coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation test. Then, in the belief that forecasting the seasonal and trend items separately would improve accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed to forecast the seasonal and trend items respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique significantly improves accuracy across all eleven models applied to the trend item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests.
Directory of Open Access Journals (Sweden)
Bangyong Sun
2014-01-01
Full Text Available The polynomial regression method is employed to model the relationship between device color space and CIE color space for color characterization, and the performance of different expressions with specific parameters is evaluated. First, the polynomial equation for color conversion is established and the computation of the polynomial coefficients is analysed. Then, different forms of polynomial equations are used to calculate the CIE color values of RGB and CMYK devices, and the corresponding color errors are compared. Finally, an optimal polynomial expression is obtained by analysing several parameters relevant to color conversion, including the number of polynomial terms, the degree of the polynomial terms, the selection of CIE visual spaces, and the linearization.
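The core of such a characterization, building a polynomial term matrix and solving for its coefficients by least squares, can be sketched as follows. The device model and data are hypothetical, and the 10-term quadratic is just one of the many candidate expressions the abstract compares:

```python
import numpy as np

def poly_terms(rgb):
    """10-term quadratic polynomial: 1, R, G, B, RG, RB, GB, R^2, G^2, B^2."""
    r, g, b = rgb.T
    one = np.ones_like(r)
    return np.stack([one, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b], axis=1)

rng = np.random.default_rng(1)
rgb = rng.uniform(0.0, 1.0, size=(200, 3))       # training patches

# A made-up "device" that maps RGB to XYZ with a mild quadratic nonlinearity.
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
xyz = rgb @ M.T + 0.1 * (rgb ** 2) @ M.T

# Solve the polynomial coefficients by least squares, one column per channel.
A = poly_terms(rgb)
coef, *_ = np.linalg.lstsq(A, xyz, rcond=None)
max_err = np.max(np.abs(A @ coef - xyz))
```

Because this toy device is exactly quadratic, the fit is essentially exact; with a real printer or display the residual color errors are what drive the choice among the candidate expressions.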
Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method
Asavaskulkiet, Krissada
2018-04-01
In this paper, we propose a new face hallucination technique that reconstructs face images in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, which is demonstrated by extensive experiments with high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.
Real-time prediction of respiratory motion based on local regression methods
International Nuclear Information System (INIS)
Ruan, D; Fessler, J A; Balter, J M
2007-01-01
Recent developments in modulation techniques enable conformal delivery of radiation doses to small, localized target volumes. One of the challenges in using these techniques is real-time tracking and prediction of target motion, which is necessary to accommodate system latencies. For image-guided-radiotherapy systems, it is also desirable to minimize sampling rates to reduce imaging dose. This study focuses on predicting respiratory motion, which significantly affects lung tumours. Predicting respiratory motion in real time is challenging due to the complexity of breathing patterns and the many sources of variability. We propose a prediction method based on local regression. There are three major ingredients of this approach: (1) forming an augmented state space to capture system dynamics, (2) local regression in the augmented space to train the predictor from previous observation data using the semi-periodicity of respiratory motion, and (3) local weighting adjustment to incorporate fading temporal correlations. To evaluate prediction accuracy, we computed the root mean square error between predicted tumour motion and its observed location for ten patients. For comparison, we also applied commonly used predictive methods, namely linear prediction, neural networks and Kalman filtering, to the same data. The proposed method reduced the prediction error for all imaging rates and latency lengths, particularly for long prediction lengths.
Yang, H; Li, A K; Yin, Y L; Li, T J; Wang, Z R; Wu, G; Huang, R L; Kong, X F; Yang, C B; Kang, P; Deng, J; Wang, S X; Tan, B E; Hu, Q; Xing, F F; Wu, X; He, Q H; Yao, K; Liu, Z J; Tang, Z R; Yin, F G; Deng, Z Y; Xie, M Y; Fan, M Z
2007-03-01
The objectives of this study were to determine the true phosphorus (P) digestibility, the degradability of the phytate-P complex and the endogenous P outputs associated with brown rice feeding in weanling pigs by using the simple linear regression analysis technique. Six barrows with an average initial body weight of 12.5 kg were fitted with a T-cannula and fed six diets according to a 6 × 6 Latin-square design. Six maize starch-based diets, containing six levels of P at 0.80, 1.36, 1.93, 2.49, 3.04, and 3.61 g per kg dry-matter (DM) intake (DMI), were formulated with brown rice. Each experimental period lasted 10 days. After a 7-day adaptation, all faecal samples were collected on days 8 and 9. Ileal digesta samples were collected for a total of 24 h on day 10. The apparent ileal and faecal P digestibility values of brown rice were affected by the dietary P level. Linear relationships between P intake and output allowed true P digestibility and endogenous P outputs to be estimated by the simple regression analysis technique. There were no differences (P>0.05) in true P digestibility values (57.7 ± 5.4 v. 58.2 ± 5.9%), phytate P degradability (76.4 ± 6.7 v. 79.0 ± 4.4%) or the endogenous P outputs (0.812 ± 0.096 v. 0.725 ± 0.083 g/kg DMI) between the ileal and the faecal levels. The endogenous faecal P output represented 14 and 25% of the National Research Council (1998) recommended daily total and available P requirements in the weanling pig, respectively. About 58% of the total P in brown rice could be digested and absorbed by the weanling pig. Our results suggest that the large intestine of the weanling pig does not play a significant role in the digestion of P in brown rice. Diet formulation on the basis of total or apparent P digestibility with brown rice may lead to P overfeeding and excessive P excretion in pigs.
Local regression type methods applied to the study of geophysics and high frequency financial data
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit where the data are dependent on time.
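A minimal Lowess-style local regression at a single point, using tricube weights over the nearest fraction of the data. This is illustrative only, not the authors' implementation, and the smoothing fraction is an arbitrary choice:

```python
import numpy as np

def lowess_at(x0, x, y, frac=0.3):
    """Estimate y at x0 by a tricube-weighted local linear fit."""
    k = max(2, int(frac * len(x)))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]                          # the k nearest points
    w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3    # tricube kernel
    A = np.column_stack([np.ones(k), x[idx]])
    AW = A * w[:, None]                              # weighted design matrix
    beta = np.linalg.solve(A.T @ AW, AW.T @ y[idx])  # weighted least squares
    return beta[0] + beta[1] * x0

# Sanity check on a noiseless line: the local linear fit recovers it exactly.
x = np.linspace(0.0, 1.0, 101)
y = 2.0 * x + 1.0
est = lowess_at(0.5, x, y)
```

Sweeping `x0` over a grid yields the familiar Lowess curve; robustness iterations (downweighting large residuals) are the usual next refinement.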
Thinking Inside the Box: Simple Methods to Evaluate Complex Treatments
Directory of Open Access Journals (Sweden)
J. Michael Menke
2011-10-01
Full Text Available We risk ignoring cheaper and safer medical treatments because they cannot be patented, lack profit potential, require too much patient-contact time, or do not have scientific results. Novel medical treatments may be difficult to evaluate for a variety of reasons, such as patient selection bias, the effect of the package of care, or failure to identify the active elements of treatment. Whole Systems Research (WSR) is an approach designed to assess the performance of complete packages of clinical management. While the WSR method is compelling, there is no standard procedure for WSR, and its implementation may be intimidating. In truth, WSR methodological tools are neither new nor complicated. There are two sequential steps, or boxes, that guide WSR methodology: establishing system predictability, followed by an audit of system element effectiveness. We describe the implementation of WSR with particular attention to threats to validity (Shadish, Cook, & Campbell, 2002; Shadish & Heinsman, 1997). DOI: 10.2458/azu_jmmss.v2i1.12365
A simple method to evaluate linac beam homogeneity
International Nuclear Information System (INIS)
Monti, A.F.; Ostinelli, A.; Gelosa, S.; Frigerio, M.
1995-01-01
Quality Control (QC) tests in Radiotherapy represent a basic requirement for assessing treatment unit performance and treatment quality. Since they are generally time consuming, it is worthwhile to introduce procedures and methods which can be carried out more easily and quickly. Since 1994, the Radiotherapy Department of S. Anna Hospital has employed a commercially available solid phantom (PRECITRON) with a 10-diode array to investigate beam homogeneity (symmetry and flatness). In particular, global symmetry percentage indexes were defined which consider pairs of corresponding points along each axis (x and y) and compare the readings of the respective diodes, following the formula: Igs = [(Xd + X-d) - (Yd + Y-d)] / [(Xd + X-d) + (Yd + Y-d)] * 200, where Xd and X-d are readings at points 8 or 10 cm equally spaced from the beam centre along the x axis, and likewise Yd and Y-d along the y axis. Even if it does not support international protocol requirements as a whole, this parameter gives important information about beam homogeneity when only a few measurement points are available in a plane, and it can be determined daily, thus fulfilling the aim of immediately highlighting any situation capable of compromising treatment accuracy and effectiveness. In this poster we report the results concerning this parameter for a linear accelerator (Varian Clinac 1800) from September 1994 to September 1995.
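Reading the index as a normalized difference of the opposed-diode sums on the two axes, scaled to a percentage by the factor 200, it can be computed as follows (a sketch with hypothetical diode readings):

```python
def global_symmetry_index(x_d, x_md, y_d, y_md):
    """Igs from paired diode readings at +d/-d along the x and y axes."""
    sx = x_d + x_md          # sum of the opposed x-axis diodes
    sy = y_d + y_md          # sum of the opposed y-axis diodes
    return (sx - sy) / (sx + sy) * 200.0

# A perfectly homogeneous beam gives a zero index ...
flat = global_symmetry_index(1.00, 1.00, 1.00, 1.00)
# ... while an x axis reading 2% hot gives a small positive index.
skew = global_symmetry_index(1.02, 1.02, 1.00, 1.00)
```

A daily QC check would then compare the computed index against a tolerance band around zero.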
A simple method for particle tracking with coherent synchrotron radiation
International Nuclear Information System (INIS)
Borland, M.
2001-01-01
Coherent synchrotron radiation (CSR) is of great interest to those designing accelerators as drivers for free-electron lasers (FELs). Although experimental evidence is incomplete, CSR is predicted to have potentially severe effects on the emittance of high-brightness electron beams. The performance of an FEL depends critically on the emittance, current, and energy spread of the beam. Attempts to increase the current through magnetic bunch compression can lead to increased emittance and energy spread due to CSR in the dipoles of such a compressor. The code elegant [1] was used for design and simulation of the bunch compressor [2] for the Low-Energy Undulator Test Line (LEUTL) FEL [3] at the Advanced Photon Source (APS). In order to facilitate this design, a fast algorithm was developed based on the 1-D formalism of Saldin and coworkers [4]. In addition, a plausible method of including CSR effects in drift spaces following the chicane magnets was developed and implemented. The algorithm is fast enough to permit running hundreds of tolerance simulations including CSR for 50 thousand particles. This article describes the details of the implementation and shows results for the APS bunch compressor
Consistency analysis of subspace identification methods based on a linear regression approach
DEFF Research Database (Denmark)
Knudsen, Torben
2001-01-01
In the literature, results can be found which claim consistency for the subspace method under certain quite weak assumptions. Unfortunately, a new result gives a counterexample showing inconsistency under these assumptions and then gives new, more strict sufficient assumptions which, however, do not include important model structures such as Box-Jenkins. Based on a simple least squares approach, this paper shows the possible inconsistency under the weak assumptions and develops only slightly stricter assumptions which are sufficient for consistency and include any model structure.
Geographically weighted regression based methods for merging satellite and gauge precipitation
Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo
2018-03-01
Real-time precipitation data with high spatiotemporal resolutions are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale coarser satellite precipitation to a finer resolution (PS); (2) the (mixed) geographically weighted regression methods coupled with a weighting function are then used to estimate biases of PS as functions of gauge observations (PO) and PS; and (3) biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, a combination of two geographically weighted regression methods and four weighting functions, are developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. The geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of the MGWR and BI-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but also is easy to implement, which is valuable for hydrological modeling and other applications.
Directory of Open Access Journals (Sweden)
Adi Syahputra
2014-03-01
Full Text Available The quantitative structure-activity relationship (QSAR) for 21 insecticides of phthalamides containing hydrazone (PCH) was studied using multiple linear regression (MLR), principal component regression (PCR) and an artificial neural network (ANN). Five descriptors were included in the model for MLR and ANN analysis, and five latent variables obtained from principal component analysis (PCA) were used in PCR analysis. Calculation of descriptors was performed using the semi-empirical PM6 method. ANN analysis was found to be the superior statistical technique compared with the other methods and gave a good correlation between descriptors and activity (r2 = 0.84). Based on the obtained model, we have successfully designed some new insecticides with higher predicted activity than those of previously synthesized compounds, e.g. 2-(decalinecarbamoyl)-5-chloro-N’-((5-methylthiophen-2-yl)methylene)benzohydrazide, 2-(decalinecarbamoyl)-5-chloro-N’-((thiophen-2-yl)methylene)benzohydrazide and 2-(decalinecarbamoyl)-N’-(4-fluorobenzylidene)-5-chlorobenzohydrazide, with predicted log LC50 values of 1.640, 1.672, and 1.769 respectively.
Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three
Steinhardt, Charles L.; Jermyn, Adam S.
2018-02-01
Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
An Application of Robust Method in Multiple Linear Regression Model toward Credit Card Debt
Amira Azmi, Nur; Saifullah Rusiman, Mohd; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Salleh, Rohayu Mohd; Hamzah, Nur Shamsidah Amir
2018-04-01
A credit card is a convenient alternative to cash or cheques, and an essential component of electronic and internet commerce. In this study, the researchers attempt to determine the relationship, and the significant variables, between credit card debt and demographic variables such as age, household income, education level, years with current employer, years at current address, debt-to-income ratio and other debt. The data cover information on 850 customers. Three methods were applied to the credit card debt data: multiple linear regression (MLR) models, MLR models with the least quartile difference (LQD) method, and MLR models with the mean absolute deviation method. After comparing the three methods, the MLR model with the LQD method was found to be the best model, with the lowest mean square error (MSE). According to the final model, the years with current employer, years at current address, household income in thousands and debt-to-income ratio are positively associated with the amount of credit debt, while age, level of education and other debt are negatively associated with it. This study may serve as a reference for bank companies using robust methods, so that they can better understand their options and choose what is best aligned with their goals for inference regarding credit card debt.
Regression Analysis by Example. 5th Edition
Chatterjee, Samprit; Hadi, Ali S.
2012-01-01
Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
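The second robust approach named above, M-estimation with Huber-type weights, is commonly fitted by iteratively reweighted least squares (IRLS). A single-level sketch on synthetic data with gross outliers (the article's model is two-level; this only illustrates the weighting idea):

```python
import numpy as np

def huber_irls(X, y, c=1.345, iters=50):
    """M-estimation with Huber weights via IRLS, MAD-rescaled each pass."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS starting point
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / np.maximum(u, 1e-12))  # Huber weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)     # weighted normal eqns
    return beta

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 100)
y = 1.0 + 2.0 * x + 0.01 * rng.normal(size=100)
y[:5] += 20.0                                          # five gross outliers
X = np.column_stack([np.ones(100), x])
beta = huber_irls(X, y)
```

Ordinary least squares on these data is pulled noticeably toward the outliers; the Huber weights cap their influence, so the estimates stay close to the true intercept 1.0 and slope 2.0.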
Applications of Monte Carlo method to nonlinear regression of rheological data
Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo
2018-02-01
In rheological studies, one often needs to determine the parameters of rheological models from experimental data. Since both the rheological data and the parameter values vary on a logarithmic scale, and the number of parameters is quite large, conventional nonlinear regression methods such as the Levenberg-Marquardt (LM) method are usually ineffective. Gradient-based methods such as LM are apt to be caught in local minima, which give unphysical parameter values whenever the initial guess is far from the global optimum. Although this problem can be solved by simulated annealing (SA), that Monte Carlo (MC) method needs an adjustable parameter which must be determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective values of the parameters of the most complicated rheological models, such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and zero-shear viscosity as a function of concentration and molecular weight.
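The key idea, a random-walk Monte Carlo search over log-scaled parameters, can be sketched on a toy power-law model. This is not the Carreau-Yasuda fit itself, and the greedy acceptance used here corresponds to SA in the zero-temperature limit:

```python
import numpy as np

def mc_fit(x, y, model, log_p0, steps=6000, scale=0.05, seed=3):
    """Greedy Monte Carlo fit; the walk perturbs log10 of the parameters."""
    rng = np.random.default_rng(seed)
    log_p = np.array(log_p0, float)

    def cost(lp):
        # Squared residuals of log-values, since the data span decades.
        return np.mean((np.log(model(x, 10.0 ** lp)) - np.log(y)) ** 2)

    best = cost(log_p)
    for _ in range(steps):
        cand = log_p + rng.normal(0.0, scale, size=log_p.size)
        c = cost(cand)
        if c < best:                  # greedy acceptance (SA at T = 0)
            log_p, best = cand, c
    return 10.0 ** log_p

def power_law(x, p):
    return p[0] * x ** p[1]

x = np.logspace(-1, 2, 30)
y = power_law(x, [50.0, 0.7])         # synthetic, noise-free toy data
params = mc_fit(x, y, power_law, log_p0=[0.0, 0.0])
```

Working in log10 of the parameters keeps step sizes meaningful across decades, which is exactly the difficulty the abstract raises for rheological models.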
Method of Factor Extraction and Simple Structure of Data from Diverse Scientific Areas.
Thorndike, Robert M.
To study the applicability of simple structure logic for factorial data from scientific disciplines outside psychology, four correlation matrices from each of six scientific areas were factor analyzed by five factoring methods. Resulting factor matrices were compared on two objective criteria of simple structure before and after rotation.…
A Simple DTC-SVM method for Matrix Converter Drives Using a Deadbeat Scheme
DEFF Research Database (Denmark)
Lee, Kyo-Beum; Blaabjerg, Frede; Lee, Kwang-Won
2005-01-01
In this paper, a simple direct torque control (DTC) method for sensorless matrix converter drives is proposed, which is characterized by a simple structure, minimal torque ripple and unity input power factor. A good sensorless speed-control performance in low-speed operation is also obtained.
Wulandari, S. P.; Salamah, M.; Rositawati, A. F. D.
2018-04-01
Food security is the condition in which food fulfilment is managed well, from the national level down to the individual. Indonesia is one of the countries committed to making food security a main priority. However, food needs are often met as a matter of routine, without regard to nutrient standards or the health of family members; the fulfilment of food needs therefore also has to take into account diseases suffered by family members, one of which is pulmonary tuberculosis. For these reasons, this research was conducted to identify the factors that influence the food security status of households with pulmonary tuberculosis sufferers in the coastal area of Surabaya, using the binary logistic regression method. The binary logistic regression analysis shows that the wife's latest education, house density and house ventilation area significantly affect the food security status of these households. Households in which the wife's education level is university/equivalent, the house density is eligible (8 m2/person) and the house ventilation area is 10% of the floor area have a probability of 0.911089 of being food secure, and a probability of 0.088911 of being food insecure. The fitted model of household food security status for households with pulmonary tuberculosis sufferers in the coastal area of Surabaya is conformable, and the overall classification percentage is 71.8%.
A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections
Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.
2014-01-01
A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates, and two different error analyses of the model weight prediction, are also discussed in the appendices of the paper.
Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won
2014-08-01
As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR or ensemble genotyping based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to the filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
A dynamic particle filter-support vector regression method for reliability prediction
International Nuclear Information System (INIS)
Wei, Zhao; Tao, Tao; ZhuoShu, Ding; Zio, Enrico
2013-01-01
Support vector regression (SVR) has been applied to time series prediction and some works have demonstrated the feasibility of its use to forecast system reliability. For accurate reliability forecasting, the selection of SVR's parameters is important. The existing research works on SVR parameter selection divide the example dataset into training and test subsets, and tune the parameters on the training data. However, these fixed parameters can lead to poor prediction capabilities if the data of the test subset differ significantly from those of training. In contrast, the novel method proposed in this paper uses particle filtering to estimate the SVR model parameters according to the whole measurement sequence up to the last observation instance. By treating the SVR training model as the observation equation of a particle filter, our method allows updating the SVR model parameters dynamically when a new observation comes. Because of the adaptability of the parameters to the dynamic data pattern, the new PF–SVR method has superior prediction performance over that of standard SVR. Four application results show that PF–SVR is more robust than SVR to a decrease in the number of training data and to changes in initial SVR parameter values. Also, even if there are trends in the test data different from those in the training data, the method can capture the changes, correct the SVR parameters and obtain good predictions.
Highlights:
• A dynamic PF–SVR method is proposed to predict system reliability.
• The method can adjust the SVR parameters according to changes in the data.
• The method is robust to the size of the training data and to initial parameter values.
• Case studies based on both artificial and real data are presented.
• PF–SVR shows superior prediction performance over standard SVR.
Ahmed, Rehan
2014-11-01
A simple and convenient method was developed for the simultaneous determination of metformin HCl and glimepiride in tablet dosage forms from different pharmaceutical companies. The method was validated and proved applicable for assay determination at intermediate and finished stages. Moreover, a single-medium dissolution of metformin HCl and glimepiride was established, and the medium was evaluated in comparative studies of different formulations. Reverse-phase HPLC equipped with a UV detector was used for the determination of metformin HCl and glimepiride. A mixture of acetonitrile and 0.05 M ammonium acetate buffer (pH 3.0) was used as mobile phase at a flow rate of 1.0 mL/min. A Promocil C18 5 µ 100 Å 4.6 x 100 mm silica column was used, and detection was carried out at 270 nm. The method was found to be linear over the range of 4 ppm to 16 ppm for glimepiride and 170 ppm to 680 ppm for metformin HCl. Regression coefficients were found to be 0.9949 and 0.9864 for glimepiride and metformin HCl, respectively. Dissolution was performed in 500 mL of 0.2% sodium lauryl sulfate at 37°C for 45 min using the paddle apparatus. Dissolution of glimepiride was found to be 98.60% and 101.08% in Orinase Met 1 and Amaryl M tablets, respectively, whereas metformin was found to be 99.41% and 98.59% in Orinase Met 1 and Amaryl M tablets. RSD for all dissolutions was less than 2.0% after completion.
Statistical learning method in regression analysis of simulated positron spectral data
International Nuclear Information System (INIS)
Avdic, S. Dz.
2005-01-01
Positron lifetime spectroscopy is a non-destructive tool for detection of radiation-induced defects in nuclear reactor materials. This work concerns the applicability of the support vector machines method for input data compression in the neural network analysis of positron lifetime spectra. It has been demonstrated that the SVM technique can be successfully applied to regression analysis of positron spectra. A substantial data compression, to about 50% and 8% of the whole training set with two and three spectral components respectively, has been achieved while maintaining a high accuracy of the spectra approximation. However, some parameters in the SVM approach, such as the insensitivity zone ε and the penalty parameter C, have to be chosen carefully to obtain good performance. (author)
The crux of the method: assumptions in ordinary least squares and logistic regression.
Long, Rebecca G
2008-10-01
Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
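The contrast drawn above can be seen numerically: fitting ordinary least squares to a binary dependent variable (a "linear probability model") can produce fitted values outside [0, 1], which a logistic model cannot. A minimal sketch with invented data:

```python
def ols_fit(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return my - b1 * mx, b1

xs = list(range(1, 11))
ys = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]   # a binary dependent variable

b0, b1 = ols_fit(xs, ys)
# Extrapolating the OLS line gives a "probability" greater than 1,
# one of the assumption violations logistic regression avoids.
pred_at_20 = b0 + b1 * 20
```

Here pred_at_20 exceeds 1, illustrating why OLS assumptions break down for binary outcomes.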
A simple method for the prevention of endometrial autolysis in hysterectomy specimens
Houghton, J P; Roddy, S; Carroll, S; McCluggage, W G
2004-01-01
Aims: Uteri are among the most common surgical pathology specimens. Assessment of the endometrium is often difficult because of pronounced tissue autolysis. This study describes a simple method to prevent endometrial autolysis and aid in interpretation of the endometrium.
A simple method of fitting ill-conditioned polynomials to data
International Nuclear Information System (INIS)
Buckler, A.N.; Lawrence, J.
1979-04-01
A very simple transformation of the independent variable x is shown to cure the ill-conditioning when some polynomial series are fitted to given Y values. Numerical examples are given to illustrate the power of the method. (author)
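The abstract does not spell out the transformation, but a standard cure of this kind is to shift the independent variable (e.g. by its mean) before fitting, so that the powers of x in the normal equations stop spanning many orders of magnitude. A hedged sketch of that idea, with invented data:

```python
def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_quadratic(xs, ys):
    """Least-squares quadratic fit via the 3x3 normal equations."""
    cols = [[1.0] * len(xs), list(xs), [x * x for x in xs]]
    A = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    rhs = [sum(c * y for c, y in zip(ci, ys)) for ci in cols]
    return solve3(A, rhs)

# Invented data: x clustered far from the origin, so the raw powers
# 1, x, x^2 make the normal equations nearly singular.
xs = [1000.0 + i for i in range(11)]
ys = [2.0 + 0.5 * (x - 1005.0) + 0.25 * (x - 1005.0) ** 2 for x in xs]

# The simple transformation: shift x by its mean before fitting.
xbar = sum(xs) / len(xs)           # 1005.0
t = [x - xbar for x in xs]
c0, c1, c2 = fit_quadratic(t, ys)  # recovers 2.0, 0.5, 0.25
```

On the shifted variable the normal-equation entries stay of comparable size and the known coefficients are recovered to machine precision; fitting on the raw x values would require inverting a nearly singular matrix.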
Development of the simple evaluation method of the soil biomass by the ATP measurement
Czech Academy of Sciences Publication Activity Database
Urashima, Y.; Nakajima, M.; Kaneda, Satoshi; Murakami, T.
2007-01-01
Vol. 78, No. 2 (2007), pp. 187-190, ISSN 0029-0610. Institutional research plan: CEZ:AV0Z60660521. Keywords: simple evaluation method; soil biomass; ATP measurement. Subject RIV: EH - Ecology, Behaviour
[Analysis on the accuracy of simple selection method of Fengshi (GB 31)].
Li, Zhixing; Zhang, Haihua; Li, Suhe
2015-12-01
To explore the accuracy of the simple selection method for Fengshi (GB 31). Through study of ancient and modern data, analysis and integration of acupuncture books, comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties, and integration with modern anatomy, the modern simple selection method for Fengshi (GB 31) is made definite, and it is the same as the traditional way. It is believed that the simple selection method accords with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. It is also proposed that Fengshi (GB 31) should be located by combining the simple method with body surface anatomical marks.
Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V
2012-01-01
In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru
2010-08-01
The sole use of pyridoxine hydrochloride during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage form in tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising binary mixtures of PYR and ISO, consisting of 20 different combinations, was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data set (concentration data matrix) and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and the precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were between 100.0 and 100.7%. Satisfactory results were obtained by applying the PLS and PCR methods to both artificial and commercial samples. These results strongly encourage the use of the methods for quality control and routine analysis of marketed tablets containing the PYR and ISO drugs. Copyright © 2010 John Wiley & Sons, Ltd.
The efficiency of the centroid method compared to a simple average
DEFF Research Database (Denmark)
Eskildsen, Jacob Kjær; Kristensen, Kai; Nielsen, Rikke
Based on empirical data as well as a simulation study, this paper gives recommendations with respect to situations where a simple average of the manifest indicators can be used as a close proxy for the centroid method and when it cannot.
A simple and secure method to fix laparoscopic trocars in children.
Yip, K F; Tam, P K H; Li, M K W
2006-04-01
We introduce a simple method of fixing trocars to the abdominal wall in children. Before anchoring the trocar, a piece of Tegaderm polyurethane adhesive (3M Healthcare, St. Paul, Minnesota) is attached to the trocar. A silk stitch is anchored to neighboring skin, and then transfixed over the shaft of the trocar through the adhesive. Both inward and outward movement of the trocar can be restrained. This method is simple, fast, secure, and can be applied to trocars of any size.
Reflexion on linear regression trip production modelling method for ensuring good model quality
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, and a good trip production model is then essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be capable of representing the population characteristics and of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood or applied in trip production modelling. Therefore, it is necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The research results are as follows. Statistics provides a method to calculate the span of a predicted value at a certain confidence level for linear regression, called the Confidence Interval of Predicted Value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that sample composition can significantly change the model. Hence a good R2 value does not always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good Confidence Interval of Predicted Value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
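The quantity the abstract calls the Confidence Interval of Predicted Value follows from textbook simple-regression formulas. A sketch in pure Python (the trip-production-style numbers and the hard-coded t value are illustrative assumptions, not the paper's data):

```python
import math

def mean_response_ci(xs, ys, x0, t_crit=2.306):
    """Confidence interval for the mean response at x0 in simple linear
    regression: yhat +/- t * s * sqrt(1/n + (x0 - xbar)^2 / Sxx).
    t_crit = 2.306 is the two-sided 95% t value for 8 degrees of freedom
    (i.e. n = 10); it must be looked up for other sample sizes. A
    prediction interval for a single new observation would use
    sqrt(1 + 1/n + ...) instead."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b0 = my - b1 * mx
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    s = math.sqrt(sse / (n - 2))          # residual standard error
    half = t_crit * s * math.sqrt(1.0 / n + (x0 - mx) ** 2 / sxx)
    yhat = b0 + b1 * x0
    return yhat - half, yhat + half

# Invented trip-production-style data: household size vs. trips per day.
xs = list(range(1, 11))
ys = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.2, 15.9, 18.1, 20.0]

lo_mid, hi_mid = mean_response_ci(xs, ys, 5.5)    # near the mean of x
lo_far, hi_far = mean_response_ci(xs, ys, 10.0)   # at the edge of the data
```

The interval widens away from the mean of x, which is exactly why a high R2 alone cannot certify prediction quality across the whole range of the predictor.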
Directory of Open Access Journals (Sweden)
Nina L. Timofeeva
2014-01-01
The article presents the methodological and technical bases for the creation of regression models that adequately reflect reality. The focus is on methods for removing residual autocorrelation in models. Algorithms for eliminating heteroscedasticity and autocorrelation of the regression model residuals are given: the reweighted least squares method and the Cochrane–Orcutt method. A model of "pure" regression is built, as well as a standardized form of the regression equation, used to compare the effects on the dependent variable of different explanatory variables when the latter are expressed in different units. A scheme of techniques for mitigating heteroskedasticity and autocorrelation in regression models specific to the social and cultural sphere is developed.
2017-12-01
Fig. 2 Simulation method: the process for one iteration of the simulation, repeated 250 times per combination of HR and FAR. Simulations show that this regression method results in an unbiased and accurate estimate of target detection performance.
Isa, Zakiah Mohd; Tawfiq, Omar Farouq; Noor, Norliza Mohd; Shamsudheen, Mohd Iqbal; Rijal, Omar Mohd
2010-03-01
In rehabilitating edentulous patients, selecting appropriately sized teeth in the absence of preextraction records is problematic. The purpose of this study was to investigate the relationships between some facial dimensions and widths of the maxillary anterior teeth to potentially provide a guide for tooth selection. Sixty full dentate Malaysian adults (18-36 years) representing 2 ethnic groups (Malay and Chinese), with well aligned maxillary anterior teeth and minimal attrition, participated in this study. Standardized digital images of the face, viewed frontally, were recorded. Using image analyzing software, the images were used to determine the interpupillary distance (IPD), inner canthal distance (ICD), and interalar width (IA). Widths of the 6 maxillary anterior teeth were measured directly from casts of the subjects using digital calipers. Regression analyses were conducted to measure the strength of the associations between the variables (alpha=.10). The means (standard deviations) of IPD, IA, and ICD of the subjects were 62.28 (2.47), 39.36 (3.12), and 34.36 (2.15) mm, respectively. The mesiodistal diameters of the maxillary central incisors, lateral incisors, and canines were 8.54 (0.50), 7.09 (0.48), and 7.94 (0.40) mm, respectively. The width of the central incisors was highly correlated to the IPD (r=0.99), while the widths of the lateral incisors and canines were highly correlated to a combination of IPD and IA (r=0.99 and 0.94, respectively). Using regression methods, the widths of the anterior teeth within the population tested may be predicted by a combination of the facial dimensions studied. (c) 2010 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
A simple method for validation and verification of pipettes mounted on automated liquid handlers
DEFF Research Database (Denmark)
Stangegaard, Michael; Hansen, Anders Johannes; Frøslev, Tobias Guldberg
We have implemented a simple method for validation and verification of the performance of pipettes mounted on automated liquid handlers, as necessary for laboratories accredited under ISO 17025. An 8-step serial dilution of Orange G was prepared in quadruplicate in a flat-bottom 96-well microtiter plate. ... In conclusion, we have set up a simple solution for the continuous validation of automated liquid handlers used for accredited work. The method is cheap, simple and easy to use for aqueous solutions, but requires a spectrophotometer that can read microtiter plates.
Delwiche, Stephen R; Reeves, James B
2010-01-01
In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an over-reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R(2)) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocess functions and various
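The Savitzky-Golay smoothing mentioned above replaces each point with a local polynomial fit; for the 5-point quadratic case this reduces to the classical fixed convolution weights (-3, 12, 17, 12, -3)/35. A small sketch (endpoint handling is simplified; not the study's implementation):

```python
def savgol5(y):
    """5-point quadratic Savitzky-Golay smoother using the classical
    convolution weights (-3, 12, 17, 12, -3)/35; the two points at each
    end are left unsmoothed for simplicity."""
    w = [-3.0, 12.0, 17.0, 12.0, -3.0]
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(wj * y[i + j - 2] for j, wj in enumerate(w)) / 35.0
    return out

# Defining property of the quadratic filter: it reproduces any
# quadratic signal exactly, attenuating only higher-order wiggle.
signal = [float(i * i) for i in range(10)]
smoothed = savgol5(signal)
```

That exactness on low-order polynomials is what makes the filter attractive as a pretreatment, and also why its effect on a weak analyte signal is easy to overlook.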
Energy Technology Data Exchange (ETDEWEB)
Lee, Sang Dae; Lohumi, Santosh; Cho, Byoung Kwan [Dept. of Biosystems Machinery Engineering, Chungnam National University, Daejeon (Korea, Republic of); Kim, Moon Sung [United States Department of Agriculture Agricultural Research Service, Washington (United States); Lee, Soo Hee [Life and Technology Co.,Ltd., Hwasung (Korea, Republic of)
2014-08-15
This study was conducted to develop a non-destructive detection method for adulterated powder products using Raman spectroscopy and partial least squares regression (PLSR). Garlic and ginger powder, which are used as natural seasoning and in health supplement foods, were selected for this experiment. Samples were adulterated with corn starch in concentrations of 5-35%. PLSR models for adulterated garlic and ginger powders were developed and their performances evaluated using cross validation. The R²c and SEC of an optimal PLSR model were 0.99 and 2.16 for the garlic powder samples, and 0.99 and 0.84 for the ginger samples, respectively. The variable importance in projection (VIP) score is a useful and simple tool for the evaluation of the importance of each variable in a PLSR model. After pre-selection using the VIP scores, the Raman spectral data were reduced by one third. New PLSR models, based on a reduced number of wavelengths selected by the VIP scores technique, gave good predictions for the adulterated garlic and ginger powder samples.
International Nuclear Information System (INIS)
Lee, Sang Dae; Lohumi, Santosh; Cho, Byoung Kwan; Kim, Moon Sung; Lee, Soo Hee
2014-01-01
This study was conducted to develop a non-destructive detection method for adulterated powder products using Raman spectroscopy and partial least squares regression (PLSR). Garlic and ginger powder, which are used as natural seasoning and in health supplement foods, were selected for this experiment. Samples were adulterated with corn starch in concentrations of 5-35%. PLSR models for adulterated garlic and ginger powders were developed and their performances evaluated using cross validation. The R²c and SEC of an optimal PLSR model were 0.99 and 2.16 for the garlic powder samples, and 0.99 and 0.84 for the ginger samples, respectively. The variable importance in projection (VIP) score is a useful and simple tool for the evaluation of the importance of each variable in a PLSR model. After pre-selection using the VIP scores, the Raman spectral data were reduced by one third. New PLSR models, based on a reduced number of wavelengths selected by the VIP scores technique, gave good predictions for the adulterated garlic and ginger powder samples.
The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures
Stephenson, W. Kirk
2009-01-01
A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes. (Contains 4 notes.)
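The two boxing rules behind the method translate directly into code. A sketch (assumes the input string contains at least one nonzero digit; scientific notation is not handled):

```python
def sig_figs(s):
    """Count significant figures in a numeric string by the two
    box-and-dot rules: with a decimal point present, box from the first
    nonzero digit to the last digit; without one, box from the first
    nonzero digit to the last nonzero digit."""
    s = s.lstrip('+-')
    digits = s.replace('.', '')
    start = next(i for i, ch in enumerate(digits) if ch != '0')
    if '.' in s:
        return len(digits) - start
    end = max(i for i, ch in enumerate(digits) if ch != '0')
    return end - start + 1
```

For example, "0.00340" has three significant figures, "1200" has two, and the trailing decimal point in "1200." promotes all four digits to significant.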
Whole-genome regression and prediction methods applied to plant and animal breeding
Los Campos, De G.; Hickey, J.M.; Pong-Wong, R.; Daetwyler, H.D.; Calus, M.P.L.
2013-01-01
Genomic-enabled prediction is becoming increasingly important in animal and plant breeding, and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of
Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method
Prahutama, Alan; Sudarno
2018-05-01
The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of that area during the same year. This problem needs to be addressed because it is an important element of a country's economic development: a high infant mortality rate disrupts a country's stability, since it relates to the sustainability of the country's population. One regression model that can be used to analyze the relationship between a discrete dependent variable Y and independent variables X is the Poisson regression model. Regression models recently used for discrete dependent variables include Poisson regression, negative binomial regression and generalized Poisson regression. In this research, the generalized Poisson regression model gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the greatest influence on the infant mortality rate is average breastfeeding (X9).
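A quick diagnostic motivating the move from plain Poisson to generalized Poisson regression is the variance-to-mean (dispersion) ratio of the counts: Poisson assumes equidispersion (ratio near 1), while overdispersion favors generalized Poisson or negative binomial models. A sketch with invented counts (not the paper's data):

```python
def dispersion_ratio(counts):
    """Sample variance-to-mean ratio of count data. Poisson regression
    assumes the ratio is about 1; values well above 1 (overdispersion)
    are the usual reason to prefer generalized Poisson or negative
    binomial models."""
    n = len(counts)
    m = sum(counts) / n
    var = sum((c - m) ** 2 for c in counts) / (n - 1)
    return var / m

# Invented counts, e.g. infant deaths per district -- illustrative only.
overdispersed = [0, 0, 1, 2, 15, 0, 1, 20, 0, 2]
near_poisson = [3, 4, 3, 5, 4, 3, 4, 5, 3, 4]
```

The first sample's ratio is far above 1 (a generalized Poisson candidate), while the second sits below 1.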
Selecting minimum dataset soil variables using PLSR as a regressive multivariate method
Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.
2017-04-01
Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection to extract relevant information deriving from multivariate datasets. Principal component analysis (PCA) has been mainly used, however supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean centered and variance scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.
Lian, Yao; Ge, Meng; Pan, Xian-Ming
2014-12-19
B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server EPMLR has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.
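The AUC reported above can be computed directly from positive- and negative-class scores via the rank-sum identity. A minimal sketch (the scores are toy values, not the paper's predictions):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the rank-sum identity: the fraction
    of (positive, negative) pairs the classifier orders correctly, with
    ties counted as half."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores: 3 epitope residues vs. 3 non-epitope residues.
a = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
```

An AUC of 0.728, as reported, means a randomly chosen epitope residue outranks a randomly chosen non-epitope residue about 73% of the time.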
Osborne, Jason W.
2012-01-01
Logistic regression is slowly gaining acceptance in the social sciences, and fills an important niche in the researcher's toolkit: being able to predict important outcomes that are not continuous in nature. While OLS regression is a valuable tool, it cannot routinely be used to predict outcomes that are binary or categorical in nature. These…
A simple and reliable method reducing sulfate to sulfide for multiple sulfur isotope analysis.
Geng, Lei; Savarino, Joel; Savarino, Clara A; Caillon, Nicolas; Cartigny, Pierre; Hattori, Shohei; Ishino, Sakiko; Yoshida, Naohiro
2018-02-28
Precise analysis of the four sulfur isotopes of sulfate in geological and environmental samples provides the means to extract unique information in wide geological contexts. Reduction of sulfate to sulfide is the first step to access such information. The conventional reduction method suffers from a cumbersome distillation system, long reaction time and large volume of the reducing solution. We present a new and simple method enabling the processing of multiple samples at one time with a much reduced volume of reducing solution. One mL of reducing solution made of HI and NaH2PO2 was added to a septum glass tube with dry sulfate. The tube was heated at 124°C and the produced H2S was purged with inert gas (He or N2) through gas-washing tubes and then collected in NaOH solution. The collected H2S was converted into Ag2S by adding AgNO3 solution, and the co-precipitated Ag2O was removed by adding a few drops of concentrated HNO3. Within 2-3 h, a 100% yield was observed for samples with 0.2-2.5 μmol Na2SO4. The reduction rate was much slower for BaSO4 and a complete reduction was not observed. International sulfur reference materials NBS-127, SO-5 and SO-6 were processed with this method, and the measured against accepted δ34S values yielded a linear regression line with a slope of 0.99 ± 0.01 and an R² value of 0.998. The new methodology is easy to handle and allows us to process multiple samples at a time. It has also demonstrated good reproducibility in terms of H2S yield and for further isotope analysis. It is thus a good alternative to the conventional manual method, especially when processing samples with a limited amount of sulfate available. © 2017 The Authors. Rapid Communications in Mass Spectrometry Published by John Wiley & Sons Ltd.
Simple equation method for nonlinear partial differential equations and its applications
Directory of Open Access Journals (Sweden)
Taher A. Nofal
2016-04-01
In this article, we focus on exact solutions of some nonlinear partial differential equations (NLPDEs), such as the Kadomtsev–Petviashvili (KP) equation, the (2 + 1)-dimensional breaking soliton equation and the modified generalized Vakhnenko equation, by using the simple equation method. In the simple equation method the trial condition is the Bernoulli equation or the Riccati equation. It has been shown that the method provides a powerful mathematical tool for solving nonlinear wave equations in mathematical physics and engineering problems.
International Nuclear Information System (INIS)
Kobayashi, K.
2009-01-01
In 2001, an international cooperation on the 3D radiation transport benchmarks for simple geometries with void region was performed under the leadership of E. Sartori of OECD/NEA. There were contributions from eight institutions, where 6 contributions were by the discrete ordinate method and only two were by the spherical harmonics method. The 3D spherical harmonics program FFT3 by the finite Fourier transformation method has been improved for this presentation, and benchmark solutions for the 2D and 3D simple geometries with void region by the FFT2 and FFT3 are given showing fairly good accuracy. (authors)
Simple PVT quantitative method of Kr under high pure N2 condition
International Nuclear Information System (INIS)
Li Xuesong; Zhang Zibin; Wei Guanyi; Chen Liyun; Zhai Lihua
2005-01-01
A simple PVT quantitative method for Kr in high-purity N2 was studied. Pressure, volume and temperature of the sample gas were measured by three individual methods to obtain the total sample amount with good uncertainty. The Kr/N2 ratio could be measured by a GAM 400 quadrupole mass spectrometer, so the quantity of Kr could be calculated from the two measurements above. This method is suited to the quantitative analysis of other simply composed noble gas samples in a high-purity carrier gas. (authors)
Comparison of methods for the analysis of relatively simple mediation models
Rijnhart, Judith J.M.; Twisk, Jos W.R.; Chinapaw, Mai J.M.; de Boer, Michiel R.; Heymans, Martijn W.
2017-01-01
Background/aims Statistical mediation analysis is an often-used method in trials to unravel the pathways underlying the effect of an intervention on a particular outcome variable. Throughout the years, several methods have been proposed, such as ordinary least squares (OLS) regression, structural
Energy Technology Data Exchange (ETDEWEB)
Wesolowski, Michal J.; Watson, Gage; Wanasundara, Surajith N.; Babyn, Paul [University of Saskatchewan, Department of Medical Imaging, Saskatoon, SK (Canada); Conrad, Gary R. [University of Kentucky College of Medicine, Department of Radiology, Lexington, KY (United States); Samal, Martin [Charles University Prague and the General University Hospital in Prague, Department of Nuclear Medicine, First Faculty of Medicine, Praha 2 (Czech Republic); Wesolowski, Carl A. [University of Saskatchewan, Department of Medical Imaging, Saskatoon, SK (Canada); Memorial University of Newfoundland, Department of Radiology, St. John' s, NL (Canada)
2016-03-15
Commonly used methods for determining split renal function (SRF) from dynamic scintigraphic data require extrarenal background subtraction and additional correction for intrarenal vascular activity. The use of these additional regions of interest (ROIs) can produce inaccurate results and be challenging, e.g. if the heart is out of the camera field of view. The purpose of this study was to evaluate a new method for determining SRF called the blood pool compensation (BPC) technique, which is simple to implement, does not require extrarenal background correction and intrinsically corrects for intrarenal vascular activity. In the BPC method SRF is derived from a parametric plot of the curves generated by one blood-pool and two renal ROIs. Data from 107 patients who underwent 99mTc-MAG3 scintigraphy were used to determine SRF values. Values calculated using the BPC method were compared to those obtained with the integral (IN) and Patlak-Rutland (PR) techniques using Bland-Altman plotting and Passing-Bablok regression. The interobserver variability of the BPC technique was also assessed for two observers. The SRF values obtained with the BPC method did not differ significantly from those obtained with the PR method and showed no consistent bias, while SRF values obtained with the IN method showed significant differences with some bias in comparison to those obtained with either the PR or BPC method. No significant interobserver variability was found between two observers calculating SRF using the BPC method. The BPC method requires only three ROIs to produce reliable estimates of SRF, was simple to implement, and in this study yielded statistically equivalent results to the PR method with appreciable interobserver agreement. As such, it adds a new reliable method for quality control of monitoring relative kidney function. (orig.)
Bawaneh, Ali Khalid Ali; Nurulazam Md Zain, Ahmad; Salmiza, Saleh
2011-01-01
The purpose of this study was to investigate the effect of the Herrmann Whole Brain Teaching Method over the conventional teaching method on eighth graders' understanding of simple electric circuits in Jordan. Participants (N = 273 students; M = 139, F = 134) were randomly selected from the Bani Kenanah region in the north of Jordan and randomly assigned to…
Simple methods of aligning four-circle diffractometers with crystal reflections
Energy Technology Data Exchange (ETDEWEB)
Mitsui, Y [Tokyo Univ. (Japan). Faculty of Pharmaceutical Sciences
1979-08-01
Simple methods of aligning four-circle diffractometers with crystal reflections are devised. They provide the methods to check (1) perpendicularity of chi plane to the incident beam, (2) zero point of 2theta and linearity of focus-chi center-receiving aperture and (3) zero point of chi.
A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System
Žumer, Viljem; Brest, Janez
2002-01-01
A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes, depending on the current load of each machine in the heterogeneous computing system.
A simple immunoblotting method after separation of proteins in agarose gel
DEFF Research Database (Denmark)
Koch, C; Skjødt, K; Laursen, I
1985-01-01
A simple and sensitive method for immunoblotting of proteins after separation in agarose gels is described. It involves transfer of proteins onto nitrocellulose paper simply by diffusion through pressure, a transfer which only takes about 10 min. By this method we have demonstrated the existence ...
12 CFR 334.25 - Reasonable and simple methods of opting out.
2010-01-01
... STATEMENTS OF GENERAL POLICY FAIR CREDIT REPORTING Affiliate Marketing § 334.25 Reasonable and simple methods... or processed at an Internet Web site, if the consumer agrees to the electronic delivery of... opt-out under the Act, and the affiliate marketing opt-out under the Act, by a single method, such as...
Comparison of methods for the analysis of relatively simple mediation models.
Rijnhart, Judith J M; Twisk, Jos W R; Chinapaw, Mai J M; de Boer, Michiel R; Heymans, Martijn W
2017-09-01
Statistical mediation analysis is an often-used method in trials to unravel the pathways underlying the effect of an intervention on a particular outcome variable. Throughout the years, several methods have been proposed, such as ordinary least squares (OLS) regression, structural equation modeling (SEM), and the potential outcomes framework. Most applied researchers do not know that these methods are mathematically equivalent when applied to mediation models with a continuous mediator and outcome variable. Therefore, the aim of this paper was to demonstrate the similarities between OLS regression, SEM, and the potential outcomes framework in three mediation models: 1) a crude model, 2) a confounder-adjusted model, and 3) a model with an interaction term for exposure-mediator interaction. Secondary data analysis of a randomized controlled trial that included 546 schoolchildren was performed. In our data example, the mediator and outcome variable were both continuous. We compared the estimates of the total, direct and indirect effects, the proportion mediated, and the 95% confidence intervals (CIs) for the indirect effect across OLS regression, SEM, and the potential outcomes framework. OLS regression, SEM, and the potential outcomes framework yielded the same effect estimates in the crude mediation model, the confounder-adjusted mediation model, and the mediation model with an interaction term for exposure-mediator interaction. Since OLS regression, SEM, and the potential outcomes framework yield the same results in three mediation models with a continuous mediator and outcome variable, researchers can continue using the method that is most convenient to them.
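The equivalence reported above can be seen directly in ordinary least squares: with a continuous mediator and outcome, the product-of-coefficients indirect effect and the total-minus-direct effect coincide exactly. A minimal sketch on simulated data (variable names and effect sizes are illustrative, not taken from the trial):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 546  # sample size matching the trial; the data itself is simulated
x = rng.normal(size=n)                      # exposure
m = 0.5 * x + rng.normal(size=n)            # mediator
y = 0.3 * x + 0.4 * m + rng.normal(size=n)  # outcome

def ols(columns, target):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(target))] + list(columns))
    return np.linalg.lstsq(X, target, rcond=None)[0]

a = ols([x], m)[1]              # exposure -> mediator path
total = ols([x], y)[1]          # crude (total) effect of exposure
direct, b = ols([x, m], y)[1:]  # direct effect and mediator -> outcome path

indirect = a * b
# For OLS with continuous mediator and outcome, the identity
# total == direct + indirect holds exactly (up to floating point).
gap = abs(total - (direct + indirect))
```

This algebraic identity is why the crude, product-of-coefficients, and difference-in-coefficients estimates agree across the frameworks compared in the paper.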
Chen, X.; Vierling, Lee; Deering, D.
2005-01-01
Satellite data offer unrivaled utility in monitoring and quantifying large scale land cover change over time. Radiometric consistency among collocated multi-temporal imagery is difficult to maintain, however, due to variations in sensor characteristics, atmospheric conditions, solar angle, and sensor view angle that can obscure surface change detection. To detect accurate landscape change using multi-temporal images, we developed a variation of the pseudoinvariant feature (PIF) normalization scheme: the temporally invariant cluster (TIC) method. Image data were acquired on June 9, 1990 (Landsat 4), June 20, 2000 (Landsat 7), and August 26, 2001 (Landsat 7) to analyze boreal forests near the Siberian city of Krasnoyarsk using the normalized difference vegetation index (NDVI), enhanced vegetation index (EVI), and reduced simple ratio (RSR). The temporally invariant cluster (TIC) centers were identified via a point density map of collocated pixel VIs from the base image and the target image, and a normalization regression line was created to intersect all TIC centers. Target image VI values were then recalculated using the regression function so that these two images could be compared using the resulting common radiometric scale. We found that EVI was very indicative of vegetation structure because of its sensitivity to shadowing effects and could thus be used to separate conifer forests from deciduous forests and grass/crop lands. Conversely, because NDVI reduced the radiometric influence of shadow, it did not allow for distinctions among these vegetation types. After normalization, correlations of NDVI and EVI with forest leaf area index (LAI) field measurements combined for 2000 and 2001 were significantly improved; the r2 values in these regressions rose from 0.49 to 0.69 and from 0.46 to 0.61, respectively. An EVI "cancellation effect" where EVI was positively related to understory greenness but negatively related to forest canopy coverage was evident across a
Simple method of obtaining the band strengths in the electronic spectra of diatomic molecules
International Nuclear Information System (INIS)
Gowda, L.S.; Balaji, V.N.
1977-01-01
It is shown that the relative band strengths of diatomic molecules, for which the product of the Franck-Condon factor and the r-centroid is approximately equal to 1 for the (0,0) band, can be determined by a simple method which is in good agreement with the smoothed array of experimental values. Such values for the Swan bands of the C2 molecule are compared with the band strengths from the simple method. It is noted that the Swan bands are one of the outstanding features of R- and N-type stars and of the heads of comets
Yuan, Jin-Peng; Ji, Zhong-Hua; Zhao, Yan-Ting; Chang, Xue-Fang; Xiao, Lian-Tuan; Jia, Suo-Tang
2013-09-01
We present a simple, reliable, and nondestructive method for the measurement of vacuum pressure in a magneto-optical trap. The vacuum pressure is verified to be proportional to the collision rate constant between cold atoms and the background gas with a coefficient k, which can be calculated by means of the simple ideal gas law. The rate constant for loss due to collisions with all background gases can be derived from the total collision loss rate by a series of loading curves of cold atoms under different trapping laser intensities. The presented method is also applicable for other cold atomic systems and meets the miniaturization requirement of commercial applications.
DEFF Research Database (Denmark)
Xu, Jing; Ding, Yunhong; Peucheret, Christophe
2011-01-01
Although patterning effects (PEs) are known to be a limiting factor of ultrafast photonic switches based on semiconductor optical amplifiers (SOAs), a simple approach for their evaluation in numerical simulations and experiments is missing. In this work, we experimentally investigate and verify...... as well as the operation bit rate. Furthermore, a simple and effective method for probing the maximum PEs is demonstrated, which may relieve the computational effort or the experimental difficulties associated with the use of long PRBSs for the simulation or characterization of SOA-based switches. Good...... agreement with conventional PRBS characterization is obtained. The method is suitable for quick and systematic estimation and optimization of the switching performance....
Functional regression method for whole genome eQTL epistasis analysis with sequencing data.
Xu, Kelin; Jin, Li; Xiong, Momiao
2017-05-18
Epistasis plays an essential role in understanding the regulation mechanisms and is an essential component of the genetic architecture of gene expression. However, interaction analysis of gene expressions remains fundamentally unexplored due to great computational challenges and data availability. Due to variation in splicing, transcription start sites, polyadenylation sites, post-transcriptional RNA editing across the entire gene, and transcription rates of the cells, RNA-seq measurements generate large expression variability and collectively create the observed position-level read count curves. A single number for measuring gene expression, which is widely used for microarray-measured gene expression analysis, is highly unlikely to sufficiently account for large expression variation across the gene. Simultaneously analyzing epistatic architecture using the RNA-seq and whole genome sequencing (WGS) data poses enormous challenges. We develop a nonlinear functional regression model (FRGM) with functional responses, where the position-level read counts within a gene are taken as a function of genomic position, and functional predictors, where genotype profiles are viewed as a function of genomic position, for epistasis analysis with RNA-seq data. Instead of testing the interaction of all possible pair-wise SNPs, the FRGM takes a gene as a basic unit for epistasis analysis, which tests for the interaction of all possible pairs of genes and uses all the information that can be accessed to collectively test interaction between all possible pairs of SNPs within two genome regions. By large-scale simulations, we demonstrate that the proposed FRGM for epistasis analysis can achieve the correct type 1 error and has higher power to detect the interactions between genes than the existing methods. The proposed methods are applied to the RNA-seq and WGS data from the 1000 Genomes Project. The numbers of pairs of significantly interacting genes after Bonferroni correction
Multiscale methods coupling atomistic and continuum mechanics: analysis of a simple case
Blanc , Xavier; Le Bris , Claude; Legoll , Frédéric
2007-01-01
International audience; The description and computation of fine scale localized phenomena arising in a material (during nanoindentation, for instance) is a challenging problem that has given birth to many multiscale methods. In this work, we propose an analysis of a simple one-dimensional method that couples two scales, the atomistic one and the continuum mechanics one. The method includes an adaptive criterion in order to split the computational domain into two subdomains, that are described...
A simple method to adapt time sampling of the analog signal
International Nuclear Information System (INIS)
Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.
2004-01-01
In this paper we briefly describe the time sampling method, which is adapted to the speed of the signal change. Principally, this method is based on a simple idea--the combination of discrete integration with differentiation of the analog signal. This method can be used in nuclear electronics research into the characteristics of detectors and the shape of the pulse signal, pulse and transitive characteristics of inertial systems of processing of signals, etc
Steganalysis using logistic regression
Lubenko, Ivans; Ker, Andrew D.
2011-02-01
We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-the-art 686-dimensional SPAM feature set, in three image sets.
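As a sketch of the classifier the paper advocates (not its 686-dimensional SPAM features; a toy two-feature stand-in), logistic regression fitted by plain gradient descent already yields class probabilities alongside the hard decision:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for steganalysis features: two Gaussian clouds,
# "cover" images near the origin, "stego" images shifted.
cover = rng.normal(0.0, 1.0, size=(200, 2))
stego = rng.normal(1.5, 1.0, size=(200, 2))
X = np.vstack([cover, stego])
y = np.concatenate([np.zeros(200), np.ones(200)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression fitted by batch gradient descent.
Xb = np.column_stack([np.ones(len(X)), X])  # prepend bias column
w = np.zeros(3)
for _ in range(2000):
    p = sigmoid(Xb @ w)
    w -= 0.1 * Xb.T @ (p - y) / len(y)

probs = sigmoid(Xb @ w)            # class probabilities, unlike a bare SVM
accuracy = np.mean((probs > 0.5) == y)
```

The probabilities in `probs` are the extra output LR provides over a margin-only SVM decision; thresholding them at 0.5 recovers the usual binary classification.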
Directory of Open Access Journals (Sweden)
Sergei Vladimirovich Varaksin
2017-06-01
Full Text Available Purpose. Construction of a mathematical model of the dynamics of childbearing change in the Altai region in 2000–2016, analysis of the dynamics of changes in birth rates for multiple age categories of women of childbearing age. Methodology. A auxiliary analysis element is the construction of linear mathematical models of the dynamics of childbearing by using fuzzy linear regression method based on fuzzy numbers. Fuzzy linear regression is considered as an alternative to standard statistical linear regression for short time series and unknown distribution law. The parameters of fuzzy linear and standard statistical regressions for childbearing time series were defined with using the built in language MatLab algorithm. Method of fuzzy linear regression is not used in sociological researches yet. Results. There are made the conclusions about the socio-demographic changes in society, the high efficiency of the demographic policy of the leadership of the region and the country, and the applicability of the method of fuzzy linear regression for sociological analysis.
Lee, Chong Suh; Chung, Sung Soo; Park, Se Jun; Kim, Dong Min; Shin, Seong Kee
2014-01-01
This study aimed to derive a predictive equation for lordosis using the pelvic incidence and to establish a simple method of predicting lumbar lordosis for planning lumbar corrective surgery in Asians. Eighty-six asymptomatic volunteers were enrolled in the study. The maximal lumbar lordosis (MLL), lower lumbar lordosis (LLL), pelvic incidence (PI), and sacral slope (SS) were measured. The correlations between the parameters were analyzed using Pearson correlation analysis. Predictive equations for lumbar lordosis were derived through simple regression analysis of the parameters, along with simple predictive values of lumbar lordosis based on PI. The PI strongly correlated with the SS (r = 0.78), and a strong correlation was found between the SS and LLL (r = 0.89), and between the SS and MLL (r = 0.83). Based on these correlations, the predictive equations of lumbar lordosis were found to be SS = 0.80 + 0.74 PI (r = 0.78, R2 = 0.61), LLL = 5.20 + 0.87 SS (r = 0.89, R2 = 0.80), and MLL = 17.41 + 0.96 SS (r = 0.83, R2 = 0.68). When PI was between 30° and 35°, 40° and 50°, and 55° and 60°, the equations predicted that MLL would be PI + 10°, PI + 5° and PI, and that LLL would be PI - 5°, PI - 10° and PI - 15°, respectively. This simple calculation method can provide a more appropriate and simpler prediction of lumbar lordosis for Asian populations. The prediction of lumbar lordosis should be used as a reference for surgeons planning to restore the lumbar lordosis in lumbar corrective surgery.
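The regression chain above lends itself to direct computation. A sketch (the function names are ours) that applies the published equations and shows how closely the simplified PI-based rule tracks them in the mid-range:

```python
def sacral_slope(pi):
    """SS from pelvic incidence: SS = 0.80 + 0.74 PI."""
    return 0.80 + 0.74 * pi

def lower_lumbar_lordosis(pi):
    """LLL from the chained regressions: LLL = 5.20 + 0.87 SS."""
    return 5.20 + 0.87 * sacral_slope(pi)

def maximal_lumbar_lordosis(pi):
    """MLL from the chained regressions: MLL = 17.41 + 0.96 SS."""
    return 17.41 + 0.96 * sacral_slope(pi)

# For PI between 40 and 50 degrees the paper's shortcut is
# MLL ~ PI + 5 and LLL ~ PI - 10; the regression chain agrees closely.
pi = 45.0
mll = maximal_lumbar_lordosis(pi)  # close to pi + 5 = 50
lll = lower_lumbar_lordosis(pi)    # close to pi - 10 = 35
```

At PI = 45° the chained regressions give MLL ≈ 50.1° and LLL ≈ 34.9°, within a fraction of a degree of the simple rule.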
International Nuclear Information System (INIS)
Wang Weida; Xia Junding; Zhou Zhixin; Leung, P.L.
2001-01-01
Thermoluminescence (TL) dating using a saturating-exponential regression method in the pre-dose technique is described. 23 porcelain samples from past dynasties of China were dated by this method. The results show that the TL ages are in reasonable agreement with archaeological dates, within a standard deviation of 27%. Such error can be accepted in porcelain dating
Standardization and validation of a novel and simple method to assess lumbar dural sac size
International Nuclear Information System (INIS)
Daniels, M.L.A.; Lowe, J.R.; Roy, P.; Patrone, M.V.; Conyers, J.M.; Fine, J.P.; Knowles, M.R.; Birchard, K.R.
2015-01-01
Aim: To develop and validate a simple, reproducible method to assess dural sac size using standard imaging technology. Materials and methods: This study was institutional review board-approved. Two readers, blinded to the diagnoses, measured anterior–posterior (AP) and transverse (TR) dural sac diameter (DSD), and AP vertebral body diameter (VBD) of the lumbar vertebrae using MRI images from 53 control patients with pre-existing MRI examinations, 19 prospectively MRI-imaged healthy controls, and 24 patients with Marfan syndrome with prior MRI or CT lumbar spine imaging. Statistical analysis utilized linear and logistic regression, Pearson correlation, and receiver operating characteristic (ROC) curves. Results: AP-DSD and TR-DSD measurements were reproducible between two readers (r = 0.91 and 0.87, respectively). DSD (L1–L5) was not different between male and female controls in the AP or TR plane (p = 0.43; p = 0.40, respectively), and did not vary by age (p = 0.62; p = 0.25) or height (p = 0.64; p = 0.32). AP-VBD was greater in males versus females (p = 1.5 × 10−8), resulting in a smaller dural sac ratio (DSR) (DSD/VBD) in males (p = 5.8 × 10−6). Marfan patients had larger AP-DSDs and TR-DSDs than controls (p = 5.9 × 10−9; p = 6.5 × 10−9, respectively). Compared to DSR, AP-DSD and TR-DSD better discriminate Marfan from control subjects based on area under the curve (AUC) values from unadjusted ROCs (AP-DSD p < 0.01; TR-DSD p = 0.04). Conclusion: Individual vertebrae and L1–L5 (average) AP-DSD and TR-DSD measurements are simple, reliable, and reproducible for quantitating dural sac size without needing to control for gender, age, or height. - Highlights: • DSD (L1–L5) does not differ in the AP or TR plane by gender, height, or age. • AP- and TR-DSD measures correlate well between readers with different experience. • Height is positively correlated to AP-VBD in both males and females. • Varying
The analysis of survival data in nephrology: basic concepts and methods of Cox regression
van Dijk, Paul C.; Jager, Kitty J.; Zwinderman, Aeilko H.; Zoccali, Carmine; Dekker, Friedo W.
2008-01-01
How much does the survival of one group differ from the survival of another group? How do differences in age in these two groups affect such a comparison? To obtain a quantity to compare the survival of different patient groups and to account for confounding effects, a multiple regression technique
A new method to study simple shear processing of wheat gluten-starch mixtures
Peighambardoust, S.H.; Goot, A.J. van der; Hamer, R.J.; Boom, R.M.
2004-01-01
This article introduces a new method that uses a shearing device to study the effect of simple shear on the overall properties of pasta-like products made from commercial wheat gluten-starch (GS) blends. The shear-processed GS samples had a lower cooking loss (CL) and a higher swelling index (SI)
The simple modelling method for storm- and grey-water quality ...
African Journals Online (AJOL)
The simple modelling method for storm- and grey-water quality management applied to Alexandra settlement. ... objectives optimally consist of educational programmes, erosion and sediment control, street sweeping, removal of sanitation system overflows, impervious cover reduction, downspout disconnections, removal of ...
International Nuclear Information System (INIS)
Breskin, A.; Zwang, N.
1977-01-01
A simple method for bidimensional position read-out of Parallel Plate Avalanche Counters (PPAC) has been developed, using the induced charge technique. An accuracy better than 0.5 mm (FWHM) has been achieved for both coordinates with 5.5 MeV α-particles at gas pressures of 10-40 torr. (author)
International Nuclear Information System (INIS)
Ko, P J; Takahashi, H; Sakai, H; Thu, T V; Okada, H; Sandhu, A; Koide, S
2013-01-01
Graphene shows promise for applications in flexible electronics. Here, we describe our procedure to transfer graphene grown on copper substrates by chemical vapor deposition to polydimethylsiloxane (PDMS) and SiO2/Si surfaces. The transfer of graphene was achieved by a simple, etching-free method onto flexible PDMS substrates.
12 CFR 717.25 - Reasonable and simple methods of opting out.
2010-01-01
... simple methods for exercising an opt-out right do not include— (i) Requiring the consumer to write his or... out. (a) In general. You must not use eligibility information about a consumer that you receive from an affiliate to make a solicitation to the consumer about your products or services, unless the...
A Simple Method to Determine if a Music Information Retrieval System is a "Horse"
DEFF Research Database (Denmark)
Sturm, Bob L.
2014-01-01
We propose and demonstrate a simple method to determine if a music information retrieval (MIR) system is using factors irrelevant to the task for which it is designed. This is of critical importance to certain use cases, but cannot be accomplished using standard approaches to evaluation in MIR...
A simple enzymic method for the synthesis of [32P]phosphoenolpyruvate
International Nuclear Information System (INIS)
Parra, F.
1982-01-01
A rapid and simple enzymic method is described for the synthesis of [32P]phosphoenolpyruvate from [32P]Pi, with a reproducible yield of 74%. The final product was shown to be a good substrate for pyruvate kinase (EC 2.7.1.40). (author)
A simple method of fabricating mask-free microfluidic devices for biological analysis.
Yi, Xin; Kodzius, Rimantas; Gong, Xiuqing; Xiao, Kang; Wen, Weijia
2010-01-01
We report a simple, low-cost, rapid, and mask-free method to fabricate two-dimensional (2D) and three-dimensional (3D) microfluidic chip for biological analysis researches. In this fabrication process, a laser system is used to cut through paper
12 CFR 571.25 - Reasonable and simple methods of opting out.
2010-01-01
... CREDIT REPORTING Affiliate Marketing § 571.25 Reasonable and simple methods of opting out. (a) In general... out, such as a form that can be electronically mailed or processed at an Internet Web site, if the... (15 U.S.C. 6801 et seq.), the affiliate sharing opt-out under the Act, and the affiliate marketing opt...
16 CFR 680.25 - Reasonable and simple methods of opting out.
2010-01-01
... AFFILIATE MARKETING § 680.25 Reasonable and simple methods of opting out. (a) In general. You must not use... a form that can be electronically mailed or processed at an Internet Web site, if the consumer..., 15 U.S.C. 6801 et seq., the affiliate sharing opt-out under the Act, and the affiliate marketing opt...
A simple red-ox titrimetric method for the evaluation of photo ...
Indian Academy of Sciences (India)
Unknown
tal conditions in a relatively short duration in R&D laboratories having basic analytical facilities. The method suggested here could also be adopted to study the photocatalytic activity of other transition metal oxide based catalysts. For establishing this technique, we have monitored a simple one-electron transfer red-ox ...
Estimating traffic volume on Wyoming low volume roads using linear and logistic regression methods
Directory of Open Access Journals (Sweden)
Dick Apronti
2016-12-01
Traffic volume is an important parameter in most transportation planning applications. Low volume roads make up about 69% of road miles in the United States. Estimating traffic on low volume roads is a cost-effective alternative to taking traffic counts, because traditional traffic counts are expensive and impractical for low priority roads. The purpose of this paper is to present the development of two alternative means of cost-effectively estimating traffic volumes for low volume roads in Wyoming and to make recommendations for their implementation. The study methodology involves reviewing existing studies, identifying data sources, and carrying out the model development. The utility of the models developed was then verified by comparing actual traffic volumes to those predicted by the models. The study resulted in two regression models that are inexpensive and easy to implement. The first was a linear regression model that utilized pavement type, access to highways, predominant land use types, and population to estimate traffic volume. In verifying the model, an R2 value of 0.64 and a root mean square error of 73.4% were obtained. The second was a logistic regression model that identified the level of traffic on roads using five thresholds or levels. The logistic regression model was verified by estimating traffic volume thresholds and determining the percentage of roads that were accurately classified as belonging to the given thresholds. For the five thresholds, the percentage of roads classified correctly ranged from 79% to 88%. In conclusion, the verification of the models indicated both model types to be useful for accurate and cost-effective estimation of traffic volumes for low volume Wyoming roads. The models were recommended for use in traffic volume estimation for low volume roads in pavement management and environmental impact assessment studies.
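The verification metrics reported for the linear model (R2 and root mean square error against observed counts) can be computed in a few lines. A sketch with made-up predictor and count data, not the Wyoming dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical low-volume-road records: pavement type (0/1), highway
# access (0/1), population served; the relationship below is invented
# purely for illustration.
n = 120
X = np.column_stack([
    rng.integers(0, 2, n).astype(float),  # pavement type
    rng.integers(0, 2, n).astype(float),  # access to a highway
    rng.uniform(50, 2000, n),             # population
])
signal = 30 + 80 * X[:, 0] + 60 * X[:, 1] + 0.05 * X[:, 2]
counts = signal + rng.normal(0, 25, n)    # noisy "observed" traffic counts

# Fit OLS and score it the way the study does: R^2 and RMSE.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
pred = A @ coef

ss_res = np.sum((counts - pred) ** 2)
r2 = 1 - ss_res / np.sum((counts - counts.mean()) ** 2)
rmse = np.sqrt(ss_res / n)
```

The same scoring loop, swapped to a logistic model with volume thresholds as classes, would reproduce the study's second verification (percentage of roads classified into the correct threshold).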
A simple method of chaos control for a class of chaotic discrete-time systems
International Nuclear Information System (INIS)
Jiang Guoping; Zheng Weixing
2005-01-01
In this paper, a simple method of chaos control is proposed for a class of discrete-time chaotic systems. The proposed method is built upon state feedback control and the ergodicity characteristic of chaos. The feedback gain matrix of the controller is designed using a simple criterion, so that the control parameters can be selected via the pole placement technique of linear control theory. The new controller has the feature that it uses only the state variable for control and does not require the target equilibrium point in the feedback path. Moreover, the proposed control method can not only overcome the so-called 'odd eigenvalues number limitation' of delayed feedback control, but also control the chaotic systems to the specified equilibrium points. The effectiveness of the proposed method is demonstrated by a two-dimensional discrete-time chaotic system
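A minimal sketch of the idea (our one-dimensional illustration, not the authors' controller) on the logistic map: ergodicity eventually brings the chaotic orbit near the target fixed point, and once it is close, state feedback with a gain chosen by pole placement (closed-loop eigenvalue placed at 0) captures and stabilizes it:

```python
r = 3.9                         # chaotic regime of the logistic map
x_star = 1 - 1 / r              # unstable fixed point to stabilize
fprime = r * (1 - 2 * x_star)   # local eigenvalue at x_star (= 2 - r, |.| > 1)
K = -fprime                     # pole placement: closed-loop eigenvalue 0

def f(x):
    return r * x * (1 - x)

x = 0.3
for _ in range(5000):
    # Ergodicity brings the orbit into this window sooner or later;
    # only then is the feedback switched on (small control effort).
    if abs(x - x_star) < 0.05:
        x = f(x) + K * (x - x_star)
    else:
        x = f(x)

converged = abs(x - x_star) < 1e-9
```

Near the fixed point the linear term cancels exactly, so the error contracts quadratically once the window is entered; the same recipe generalizes to the matrix gain and pole placement described in the abstract.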
A simple method for one-loop renormalization in curved space-time
Energy Technology Data Exchange (ETDEWEB)
Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, P.O. Box 64, FI-00014, University of Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@uis.no [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)
2013-08-01
We present a simple method for deriving the renormalization counterterms from the components of the energy-momentum tensor in curved space-time. This method allows control over the finite parts of the counterterms and provides explicit expressions for each term separately. As an example, the method is used for the self-interacting scalar field in a Friedmann-Robertson-Walker metric in the adiabatic approximation, where we calculate the renormalized equation of motion for the field and the renormalized components of the energy-momentum tensor to fourth adiabatic order while including interactions to one-loop order. Within this formalism the trace anomaly, including contributions from interactions, is shown to have a simple derivation. We compare our results to those obtained by two standard methods, finding agreement with the Schwinger-DeWitt expansion but disagreement with adiabatic subtractions for interacting theories.
Fill rate estimation in periodic review policies with lost sales using simple methods
Energy Technology Data Exchange (ETDEWEB)
Cardós, M.; Guijarro Tarradellas, E.; Babiloni Griñón, E.
2016-07-01
Purpose: The exact estimation of the fill rate in the lost-sales case is complex and time consuming, so simple and suitable methods that inventory managers can use are needed. Design/methodology/approach: Instead of trying to compute the fill rate in one step, this paper first estimates the probabilities of the different on-hand stock levels, from which the fill rate is then computed. Findings: The proposed method outperforms the other methods and is relatively simple to compute. Originality/value: Existing methods for estimating stock levels are examined, new procedures are proposed and their performance is assessed.
A simple method for plasma total vitamin C analysis suitable for routine clinical laboratory use
Robitaille, Line; Hoffer, L. John
2016-01-01
Background In-hospital hypovitaminosis C is highly prevalent but almost completely unrecognized. Medical awareness of this potentially important disorder is hindered by the inability of most hospital laboratories to determine plasma vitamin C concentrations. The availability of a simple, reliable method for analyzing plasma vitamin C could increase opportunities for routine plasma vitamin C analysis in clinical medicine. Methods Plasma vitamin C can be analyzed by high performance liquid chromatography...
A simple method to evaluate the composition of tissue-equivalent phantom materials
International Nuclear Information System (INIS)
Geske, G.
1977-01-01
A method is described for calculating the composition of phantom materials with a given density and given radiation-physics parameters, mixed from components whose chemical composition and effective specific volumes are known. The method is illustrated with the example of a simple three-component composition. The results of this example and some experimental details that must be considered are discussed.
A simple optical method for measuring the vibration amplitude of a speaker
UEDA, Masahiro; YAMAGUCHI, Toshihiko; KAKIUCHI, Hiroki; SUGA, Hiroshi
1999-01-01
A simple optical method has been proposed for measuring the vibration amplitude of a speaker vibrating at a frequency of approximately 10 kHz. The method is based on multiple reflection between the vibrating speaker plane and a mirror parallel to that plane. The multiple reflection magnifies the dispersion of the laser beam caused by the vibration and makes measurement of the amplitude straightforward. The measuring sensitivity ranges between sub-microns and 1 mm. A preliminary experiment...
Simple and effective method for nuclear tellurium isomers separation from antimony cyclotron targets
International Nuclear Information System (INIS)
Bondarevskij, S.I.; Eremin, V.V.
1999-01-01
A simple and effective method is suggested for generating tellurium nuclear isomers from cyclotron-irradiated metallic antimony. The method exploits the large difference in volatility between the metallic forms of antimony, tin and tellurium. Heating the tin-antimony alloy at 1200 K permits separation of about 90% of the produced quantity of ¹²¹ᵐTe and ¹²³ᵐTe; in this case the impurity of antimony radionuclides is not more than 1% by activity.
A simple and fast method for extraction and quantification of cryptophyte phycoerythrin
Thoisen, Christina; Hansen, Benni Winding; Nielsen, Søren Laurentius
2017-01-01
The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically involve various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment...
A simple method to downscale daily wind statistics to hourly wind data
Guo, Zhongling
2013-01-01
Wind is the principal driver in wind erosion models, and hourly wind speed data are generally required for precise wind erosion modeling. In this study, a simple method was established to generate hourly wind speed data from daily wind statistics (daily average and maximum wind speeds together, or daily average wind speed only). Hourly wind speed data measured over 3285 days (9 years) at a typical windy location were used to validate the downscaling method. The results showed that the over...
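The daily-to-hourly idea can be illustrated with a toy scheme (invented here for illustration; the abstract does not give the paper's actual formula): a cosine diurnal cycle peaking in mid-afternoon that preserves both the daily mean and the daily maximum exactly:

```python
import numpy as np

def downscale_daily(w_avg, w_max, peak_hour=14):
    """Generate 24 hourly wind speeds from a daily average and maximum.

    Hypothetical cosine scheme: hourly speed = daily mean plus a diurnal
    cosine with amplitude (w_max - w_avg), peaking at `peak_hour`.  A full
    cosine cycle averages to zero over 24 equispaced hours, so the daily
    mean is preserved exactly; the maximum w_max occurs at peak_hour.
    """
    hours = np.arange(24)
    return w_avg + (w_max - w_avg) * np.cos(2 * np.pi * (hours - peak_hour) / 24)

hourly = downscale_daily(w_avg=5.0, w_max=9.0)
print(hourly.mean(), hourly.max())  # daily mean and maximum recovered
```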
Simulation Opportunity Index, A Simple and Effective Method to Boost the Hydrocarbon Recovery
Saputra, Wardana
2016-01-01
This paper describes how the SOI software serves as a simple, fast, and accurate way to obtain higher hydrocarbon production than the trial-and-error method and previous studies in two different fields located offshore Indonesia. On one hand, the proposed method could save money by minimizing the required number of wells; on the other hand, it could maximize profit by maximizing recovery.
Energy Technology Data Exchange (ETDEWEB)
Keilacker, H; Becker, G; Ziegler, M; Gottschling, H D [Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic)
1980-10-01
In order to handle all types of radioimmunoassay (RIA) calibration curves obtained in the authors' laboratory in the same way, they sought a non-linear expression for the regression which allows calibration curves with different degrees of curvature to be fitted. Considering the two boundary cases of the incubation protocol they derived a hyperbolic inverse regression function: x = a₁y + a₀ + a₋₁y⁻¹, where x is the total concentration of antigen, the aᵢ are constants, and y is the specifically bound radioactivity. An RIA evaluation procedure based on this function is described, providing a fitted inverse RIA calibration curve and some statistical quality parameters. The latter are of an order which is normal for RIA systems. There is excellent agreement between fitted and experimentally obtained calibration curves having different degrees of curvature.
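Because the hyperbolic inverse regression function is linear in its coefficients, it can be fitted by ordinary least squares on the basis functions y, 1, and 1/y. A minimal sketch with synthetic calibration data (the coefficient values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# True calibration: x = a1*y + a0 + a_(-1)/y  (illustrative coefficients)
a1, a0, am1 = 0.05, 2.0, 40.0
y = np.linspace(5.0, 100.0, 30)               # specifically bound radioactivity
x = a1 * y + a0 + am1 / y + rng.normal(0, 0.05, y.size)  # total antigen conc.

# Design matrix for the three basis functions; solve by linear least squares
D = np.column_stack([y, np.ones_like(y), 1.0 / y])
coef, *_ = np.linalg.lstsq(D, x, rcond=None)
print(coef)  # close to [0.05, 2.0, 40.0]
```

Inverting the fitted curve (reading x off from a measured y) then needs no iteration at all, which is the practical appeal of the inverse formulation.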
A simple and efficient method for isolating small RNAs from different plant species
Directory of Open Access Journals (Sweden)
de Folter Stefan
2011-02-01
Background Small RNAs emerged over the last decade as key regulators of diverse biological processes in eukaryotic organisms. To identify and study small RNAs, good and efficient isolation protocols are necessary, which can be challenging for specific tissues of certain plant species. Here we describe a simple and efficient method to isolate small RNAs from different plant species. Results We developed the method by first comparing different total RNA extraction protocols and then streamlining the best one, resulting in a small RNA extraction method that requires no prior total RNA extraction and is not based on the commercially available TRIzol® Reagent or columns. This small RNA extraction method works well not only for plant tissues with high polysaccharide content, like cactus, agave, banana, and tomato, but also for plant species like Arabidopsis or tobacco. Furthermore, the obtained small RNA samples were successfully used in northern blot assays. Conclusion We provide a simple and efficient method to isolate small RNAs from different plant species, such as cactus, agave, banana, tomato, Arabidopsis, and tobacco, and the small RNAs from this simplified and low-cost method are suitable for downstream applications such as northern blot assays.
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2017-06-01
Standalone photovoltaic (SAPV) systems are seen as a promising means of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and exhibits good performance over its lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with a conventional design using monthly average daily load and insolation.
A simple method for validation and verification of pipettes mounted on automated liquid handlers
DEFF Research Database (Denmark)
Stangegaard, Michael; Hansen, Anders Johannes; Frøslev, Tobias G
2011-01-01
We have implemented a simple, inexpensive, and fast procedure for validation and verification of the performance of pipettes mounted on automated liquid handlers (ALHs), as necessary for laboratories accredited under ISO 17025. A six- or seven-step serial dilution of Orange G was prepared... are freely available. In conclusion, we have set up a simple, inexpensive, and fast solution for the continuous validation of ALHs used for accredited work according to the ISO 17025 standard. The method is easy to use for aqueous solutions but requires a spectrophotometer that can read microtiter plates.
Directory of Open Access Journals (Sweden)
Mok Tik
2014-06-01
This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (the dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of the independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
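The complex-number representation of vector regression can be sketched directly with NumPy, whose least-squares solver accepts complex matrices, so the coefficients of the independent vector variables come out as complex numbers (i.e., as vectors). Synthetic data only, for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Independent vector variables encoded as complex numbers (x + iy)
z1 = rng.normal(size=n) + 1j * rng.normal(size=n)
z2 = rng.normal(size=n) + 1j * rng.normal(size=n)

# True vector coefficients: a complex coefficient rotates and scales,
# which is exactly the "coefficients as vectors" property described above
b1, b2 = 1.5 - 0.5j, -0.3 + 2.0j
noise = 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))
w = b1 * z1 + b2 * z2 + noise              # dependent vector variable

# Complex least squares: np.linalg.lstsq works on complex arrays
Z = np.column_stack([z1, z2])
beta, *_ = np.linalg.lstsq(Z, w, rcond=None)
print(beta)  # close to [1.5-0.5j, -0.3+2.0j]
```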
A simple and fast method for extraction and quantification of cryptophyte phycoerythrin.
Thoisen, Christina; Hansen, Benni Winding; Nielsen, Søren Laurentius
2017-01-01
The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically involve various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making extraction procedures more complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment. The cryptophyte cells on the filters were disrupted at -80 °C and phosphate buffer was added for extraction at 4 °C, followed by absorbance measurement. The cryptophyte Rhodomonas salina was used as a model organism. •Simple method for extraction and quantification of phycoerythrin from cryptophytes.•Minimal usage of equipment and chemicals, and low labor costs.•Applicable for industrial and biological purposes.
Directory of Open Access Journals (Sweden)
Jun Bi
2018-04-01
Battery electric vehicles (BEVs) reduce energy consumption and air pollution compared with conventional vehicles. However, the limited driving range and potentially long charging time of BEVs create new problems. Accurate charging time prediction for BEVs helps drivers determine travel plans and alleviates their range anxiety during trips. This study proposed a combined model for charging time prediction based on regression and time-series methods, using actual data from BEVs operating in Beijing, China. After data analysis, a regression model was established that considers the charged amount for charging time prediction. Furthermore, a time-series method was adopted to calibrate the regression model, which significantly improved the fitting accuracy of the model. The parameters of the model were determined from the actual data. Verification results confirmed the accuracy of the model and showed that the model errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.
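The regress-then-calibrate idea can be sketched as follows. The data are synthetic and the calibration is modeled as a lag-1 (AR(1)) correction of the regression residuals, which is an assumption for illustration; the abstract does not state the paper's exact time-series specification:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Synthetic data: charging time grows linearly with charged amount (kWh),
# plus a temporally correlated disturbance (e.g. battery/ambient effects)
amount = rng.uniform(5, 40, n)
eps = np.zeros(n)
for t in range(1, n):                        # AR(1) disturbance, phi = 0.8
    eps[t] = 0.8 * eps[t - 1] + rng.normal(0, 0.5)
time = 2.0 + 1.1 * amount + eps              # minutes (illustrative)

# Step 1: ordinary regression on the charged amount
D = np.column_stack([np.ones(n), amount])
coef, *_ = np.linalg.lstsq(D, time, rcond=None)
resid = time - D @ coef

# Step 2: calibrate with a lag-1 model of the residuals
phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
pred_base = (D @ coef)[1:]
pred_cal = pred_base + phi * resid[:-1]      # one-step-ahead correction

rmse = lambda e: np.sqrt(np.mean(e ** 2))
print(rmse(time[1:] - pred_base), rmse(time[1:] - pred_cal))
```

When the residuals are autocorrelated, the calibrated predictor's error variance drops toward the innovation variance, mirroring the "significantly improved fitting accuracy" reported above.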
Validity of a Simple Method for Measuring Force-Velocity-Power Profile in Countermovement Jump.
Jiménez-Reyes, Pedro; Samozino, Pierre; Pareja-Blanco, Fernando; Conceição, Filipe; Cuadrado-Peñafiel, Víctor; González-Badillo, Juan José; Morin, Jean-Benoît
2017-01-01
To analyze the reliability and validity of a simple computation method to evaluate force (F), velocity (v), and power (P) output during a countermovement jump (CMJ) suitable for use in field conditions, and to verify the validity of this computation method for the CMJ force-velocity (F-v) profile (including unloaded and loaded jumps) in trained athletes. Sixteen high-level male sprinters and jumpers performed maximal CMJs under 6 different load conditions (0-87 kg). A force plate sampling at 1000 Hz was used to record vertical ground-reaction force and derive vertical-displacement data during CMJ trials. For each condition, mean F, v, and P of the push-off phase were determined both from force-plate data (reference method) and from simple computation measures based on body mass, jump height (from flight time), and push-off distance, and were used to establish the linear F-v relationship for each individual. Mean absolute bias values were 0.9% (± 1.6%), 4.7% (± 6.2%), 3.7% (± 4.8%), and 5% (± 6.8%) for F, v, P, and the slope of the F-v relationship (S Fv ), respectively. Both methods showed high correlations for F-v-profile-related variables (r = .985-.991). Finally, all variables computed from the simple method showed high reliability, with ICC > .980 and low CV, provided that body mass, push-off distance, and jump height are known.
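The simple computation referred to above needs only body mass, flight time, and push-off distance. The sketch below uses the widely cited simple-method equations (jump height from flight time, mean force from body mass and push-off distance); these formulas are an assumption here, reproduced from the general literature rather than quoted from this paper:

```python
import math

g = 9.81

def simple_fvp(mass, flight_time, push_off_dist):
    """Mean force, velocity and power of the CMJ push-off from three inputs.

    Assumed simple-method equations:
      h = g * t_f^2 / 8          jump height from flight time
      F = m * g * (h/h_po + 1)   mean vertical force during push-off
      v = sqrt(g * h / 2)        mean push-off velocity
      P = F * v                  mean power
    """
    h = g * flight_time ** 2 / 8
    F = mass * g * (h / push_off_dist + 1)
    v = math.sqrt(g * h / 2)
    return h, F, v, F * v

# Illustrative athlete: 75 kg, 0.60 s flight time, 0.40 m push-off distance
h, F, v, P = simple_fvp(mass=75.0, flight_time=0.60, push_off_dist=0.40)
print(round(h, 3), round(F, 1), round(v, 2), round(P, 1))
```

Repeating this over loaded jumps and regressing F on v yields the individual linear F-v profile discussed in the abstract.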
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated using the maximum likelihood estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly predict the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to compare the chance of separation occurring in the binary probit regression model between the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both comparisons are performed by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. For larger sample sizes, the probability decreases and is nearly identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE's, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
Amini, Payam; Maroufizadeh, Saman; Samani, Reza Omani; Hamidi, Omid; Sepidarkish, Mahdi
2017-06-01
Preterm birth (PTB) is a leading cause of neonatal death and the second biggest cause of death in children under five years of age. The objective of this study was to determine the prevalence of PTB and its associated factors using logistic regression and decision tree classification methods. This cross-sectional study was conducted on 4,415 pregnant women in Tehran, Iran, from July 6-21, 2015. Data were collected by a researcher-developed questionnaire through interviews with mothers and review of their medical records. To evaluate the accuracy of the logistic regression and decision tree methods, several indices such as sensitivity, specificity, and the area under the curve were used. The PTB rate was 5.5% in this study. The logistic regression outperformed the decision tree for the classification of PTB based on risk factors. Logistic regression showed that multiple pregnancies, mothers with preeclampsia, and those who conceived with assisted reproductive technology had a significantly increased risk for PTB. These findings support the logistic regression model for the classification of risk groups for PTB.
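The evaluation pipeline — fit a logistic model on binary risk factors, then score it with an index such as the AUC — can be sketched from scratch. The data below are synthetic stand-ins (the prevalences and coefficients are invented), not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

# Two synthetic binary risk factors (e.g. multiple pregnancy, preeclampsia)
X = rng.binomial(1, [0.05, 0.08], size=(n, 2)).astype(float)
logit = -3.0 + 2.0 * X[:, 0] + 1.5 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit logistic regression by Newton-Raphson (iteratively reweighted LS)
D = np.column_stack([np.ones(n), X])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-D @ beta))
    W = p * (1 - p)                           # IRLS weights
    beta += np.linalg.solve((D.T * W) @ D, D.T @ (y - p))

# AUC via the rank-sum (Mann-Whitney) identity, with a tie correction
scores = D @ beta
pos, neg = scores[y == 1], scores[y == 0]
auc = (np.mean(pos[:, None] > neg[None, :])
       + 0.5 * np.mean(pos[:, None] == neg[None, :]))
print(beta, auc)
```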
Al-Harrasi, Ahmed; Rehman, Najeeb Ur; Mabood, Fazal; Albroumi, Muhammaed; Ali, Liaqat; Hussain, Javid; Hussain, Hidayat; Csuk, René; Khan, Abdul Latif; Alam, Tanveer; Alameri, Saif
2017-09-01
In the present study, for the first time, NIR spectroscopy coupled with PLS regression was developed as a rapid alternative method to quantify the amount of keto-β-boswellic acid (KBA) in different plant parts of Boswellia sacra and in the resin exudates of the trunk. NIR spectroscopy was used to measure KBA standards and B. sacra samples in absorption mode in the wavelength range from 700-2500 nm. A PLS regression model was built from the obtained spectral data using 70% of the KBA standards (training set) in the range from 0.1 ppm to 100 ppm. The PLS regression model had an R-squared value of 98% with a correlation of 0.99, and good predictive ability with an RMSEP of 3.2 and a prediction correlation of 0.99. It was then used to quantify the amount of KBA in the samples of B. sacra. The results indicated that the MeOH extract of the resin has the highest concentration of KBA (0.6%), followed by the essential oil (0.1%); no KBA was found in the aqueous extract. The MeOH extract of the resin was subjected to column chromatography to obtain sub-fractions at different polarities of organic solvents. The sub-fraction at 4% MeOH/CHCl3 (4.1% KBA) contained the highest percentage of KBA, followed by the sub-fraction at 2% MeOH/CHCl3 (2.2% KBA). The present results also indicated that KBA is present only in the gum-resin of the trunk and not in all parts of the plant. These results were further confirmed by HPLC analysis, and it is therefore concluded that NIRS coupled with PLS regression is a rapid alternative method for quantification of KBA in Boswellia sacra. It is non-destructive, rapid, sensitive and uses simple methods of sample preparation.
Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.
2017-01-01
Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
Koneczny, Jarosław; Czekierdowski, Artur; Florczak, Marek; Poziemski, Paweł; Stachowicz, Norbert; Borowski, Dariusz
2017-01-01
Sonography-based methods combined with various tumor markers are currently used to discriminate the type of adnexal masses. The aim was to compare the predictive value of selected sonography-based models, along with subjective assessment, in ovarian cancer prediction. We analyzed data from 271 women operated on for adnexal masses. All masses were verified by histological examination. Preoperative sonography was performed in all patients, and various predictive models including the IOTA group logistic regression model LR1 (LR1), the IOTA simple ultrasound-based rules (SR), GI-RADS and the risk of malignancy index (RMI3) were used. ROC curves were constructed and the respective AUCs with 95% CIs were compared. Of the 271 masses, 78 proved to be malignant, including 6 borderline tumors. LR1 had a sensitivity of 91.0%, specificity of 91.2%, AUC = 0.95 (95% CI: 0.92-0.98). Sensitivity of GI-RADS for the 271 patients was 88.5%, with a specificity of 85% and AUC = 0.91 (95% CI: 0.88-0.95). Subjective assessment yielded sensitivity and specificity of 85.9% and 96.9%, respectively, with AUC = 0.97 (95% CI: 0.94-0.99). SR were applicable in 236 masses and had a sensitivity of 90.6% with a specificity of 95.3% and AUC = 0.93 (95% CI: 0.89-0.97). RMI3 was calculated only in the 104 women who had CA125 available and had a sensitivity of 55.3%, specificity of 94% and AUC = 0.85 (95% CI: 0.77-0.93). Although subjective assessment by an ultrasound expert remains the best current method of preoperative discrimination of adnexal tumors, its simplicity and high predictive value favor the IOTA SR method and, when that is not applicable, the IOTA LR1 or GI-RADS models for primary and effective use.
Determination of benzo(a)pyrene content in PM10 using regression methods
Directory of Open Access Journals (Sweden)
Jacek Gębicki
2015-12-01
The paper presents an application of multidimensional linear regression to the estimation of an empirical model describing the factors influencing B(a)P content in suspended dust PM10 in the Olsztyn and Elbląg city regions between 2010 and 2013. During this period the annual average concentration of B(a)P in PM10 exceeded the admissible level 1.5-3 times. The investigations confirm that the reasons for the increase in B(a)P concentration are low-efficiency individual home heating stations or low-temperature heat sources, which are responsible for so-called low emission during the heating period. Dependences between the following quantities were analysed: concentration of PM10 dust in air, air temperature, wind velocity, and air humidity. The measure of model fit to the actual B(a)P concentration in PM10 was the coefficient of determination. Application of multidimensional linear regression yielded equations characterized by high values of the coefficient of determination, especially during the heating season: this parameter ranged from 0.54 to 0.80 during the analyzed period.
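A multidimensional (multiple) linear regression of this kind, with the coefficient of determination as the fit measure, can be sketched in a few lines; the predictors match those listed above, but the data and coefficients below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 120  # e.g. daily observations in a heating season

# Synthetic predictors: PM10 concentration, air temperature, wind speed,
# relative humidity (units and true coefficients are illustrative only)
pm10 = rng.uniform(20, 120, n)      # ug/m3
temp = rng.uniform(-15, 10, n)      # deg C
wind = rng.uniform(0.5, 8, n)       # m/s
hum = rng.uniform(40, 95, n)        # %
bap = (0.04 * pm10 - 0.08 * temp - 0.25 * wind + 0.005 * hum
       + rng.normal(0, 0.4, n))     # ng/m3 (illustrative response)

# Multidimensional linear regression by least squares
D = np.column_stack([np.ones(n), pm10, temp, wind, hum])
coef, *_ = np.linalg.lstsq(D, bap, rcond=None)

# Coefficient of determination R^2, the fit measure used in the paper
resid = bap - D @ coef
r2 = 1 - (resid @ resid) / ((bap - bap.mean()) @ (bap - bap.mean()))
print(coef, r2)
```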
Simple measurement of 14C in the environment using gel suspension method
International Nuclear Information System (INIS)
Wakabayashi, Genichiro; Oura, Hirotaka; Nagao, Kenjiro; Okai, Tomio; Matoba, Masaru; Kakiuchi, Hideki; Momoshima, Noriyuki; Kawamura, Hidehisa
1999-01-01
A gel suspension method using N-lauroyl-L-glutamic-α,γ-dibutylamide as the gelling agent and calcium carbonate as the sample was studied, and it proved to be a simpler method for measuring ¹⁴C in the environment than the ordinary method. Sample volumes of 100, 20 and 7 ml could incorporate 3.6, 0.72 and 0.252 g of carbon, respectively. When the 100 ml and 20 ml vials held the maximum amount of carbon, the lower limits of detection were about 0.3 dpm/g-C and 0.5 dpm/g-C, respectively. These values show that the method is able to determine ¹⁴C at environmental levels. Sample values have remained constant for two years or more, indicating that samples prepared by this method are suitable for repeat measurement and long-term storage. Samples prepared from the same calcium carbonate showed almost the same values. The concentrations of ¹⁴C in the growth rings of a tree and in environmental rice were determined, and the results agreed with the values in the references. From these results, this method is a simpler means of measuring ¹⁴C in the environment than the ordinary method and can be applied to determine ¹⁴C in and around nuclear installations.
Boucher, Thomas F.; Ozanne, Marie V.; Carmosino, Marco L.; Dyar, M. Darby; Mahadevan, Sridhar; Breves, Elly A.; Lepore, Kate H.; Clegg, Samuel M.
2015-05-01
The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other type of LIBS data is calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. These results are attributed to the high dimensionality of the data (6144 channels...
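The PRESS statistic used to compare the models has a closed form for linear least squares: the leave-one-out residual equals the ordinary residual divided by (1 - h_ii), where h_ii is the hat-matrix leverage. The sketch below checks that shortcut against an explicit leave-one-out loop on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(0, 0.3, n)

# PRESS via the hat matrix: e_loo_i = e_i / (1 - h_ii)
H = X @ np.linalg.inv(X.T @ X) @ X.T
e = y - H @ y
press = np.sum((e / (1 - np.diag(H))) ** 2)

# Explicit leave-one-out check: refit n times, predict the held-out point
press_loo = 0.0
for i in range(n):
    mask = np.arange(n) != i
    b, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    press_loo += (y[i] - X[i] @ b) ** 2

print(press, press_loo)  # identical up to rounding
```

Lower PRESS means better out-of-sample prediction, which is why paired differences in PRESS are a natural input to the Wilcoxon signed-rank comparisons mentioned above.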
Directory of Open Access Journals (Sweden)
Xiangbing Zhou
2018-04-01
Rapidly growing GPS (Global Positioning System) trajectories hide much valuable information, such as city road planning, urban travel demand, and population migration. In order to mine the hidden information and capture better clustering results, a trajectory regression clustering method (an unsupervised trajectory clustering method) is proposed to reduce local information loss of the trajectory and to avoid getting stuck in a local optimum. Using this method, we first define our new concept of trajectory clustering and construct a novel angle-based partitioning method for line segments; second, the Lagrange-based method and Hausdorff-based K-means++ are integrated into fuzzy C-means (FCM) clustering, which maintains the stability and robustness of the clustering process; finally, a least squares regression model is employed to achieve regression clustering of the trajectory. In our experiment, the performance and effectiveness of our method is validated against real-world taxi GPS data. When comparing our clustering algorithm with partition-based clustering algorithms (K-means, K-median, and FCM), our experimental results demonstrate that the presented method is more effective and generates more reasonable trajectory clusters.
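The Hausdorff distance that drives the K-means++ seeding can be sketched directly. This is a minimal point-set version (the paper works on partitioned line segments, which this sketch does not reproduce):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two trajectories.

    A, B: (n, 2) and (m, 2) arrays of GPS points (projected coordinates).
    Directed distance = max over one set of the distance to the nearest
    point of the other set; the symmetric version takes the larger of the two.
    """
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise dists
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two parallel straight trajectories one unit apart
A = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(hausdorff(A, B))  # 1.0
```

Because the measure is defined on whole point sets, it compares trajectories of different lengths without any point-to-point alignment, which is what makes it convenient for seeding trajectory clusters.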
Exploring simple assessment methods for lighting quality with architecture and design students
DEFF Research Database (Denmark)
Madsen, Merete
2006-01-01
that cannot be assessed by simple equations or rules-of-thumb. Balancing the many and often contradictory aspects of energy efficiency and high-quality lighting design is a complex undertaking, not just for students. The work described in this paper is one result of an academic staff exchange between the Schools of Architecture in Copenhagen and Victoria University of Wellington (New Zealand). The authors explore two approaches to teaching students simple assessment methods that can contribute to making more informed decisions about the luminous environment and its quality. One approach deals with the assessment of luminance ratios in relation to computer work and presents, in that context, some results from an experiment undertaken to introduce the concept of luminance ratios and preferred luminance ranges to architecture students. In the other approach, a Danish method for assessing the luminance...
International Nuclear Information System (INIS)
Kim, Dongwook; Bang, Sungsik; Kim, Minsoo; Lee, Hyungyil; Kim, Naksoo
2013-01-01
In this study we establish a process to predict hardening behavior considering the Bauschinger effect for zircaloy-4 sheets. When a metal is compressed after tension in forming, the yield strength decreases; for this reason, the Bauschinger effect should be considered in FE simulations of spring-back. We suggest a suitable specimen size and a method for determining the optimum tightening torque for simple shear tests. Shear stress-strain curves are obtained for five materials. We developed a method to convert the shear load-displacement curve to the effective stress-strain curve with FEA. We simulated the simple shear forward/reverse test using the combined isotropic/kinematic hardening model. We also investigated the change of the load-displacement curve by varying the hardening coefficients, and determined the hardening coefficients so that they follow the hardening behavior of zircaloy-4 in experiments.
Energy Technology Data Exchange (ETDEWEB)
Kim, Dongwook; Bang, Sungsik; Kim, Minsoo; Lee, Hyungyil; Kim, Naksoo [Sogang Univ., Seoul (Korea, Republic of)
2013-10-15
In this study we establish a process to predict hardening behavior considering the Bauschinger effect for zircaloy-4 sheets. When a metal is compressed after tension in forming, the yield strength decreases; for this reason, the Bauschinger effect should be considered in FE simulations of spring-back. We suggest a suitable specimen size and a method for determining the optimum tightening torque for simple shear tests. Shear stress-strain curves are obtained for five materials. We developed a method to convert the shear load-displacement curve to the effective stress-strain curve with FEA. We simulated the simple shear forward/reverse test using the combined isotropic/kinematic hardening model. We also investigated the change of the load-displacement curve by varying the hardening coefficients, and determined the hardening coefficients so that they follow the hardening behavior of zircaloy-4 in experiments.
A simple method for generation of background-free gamma-ray spectra
International Nuclear Information System (INIS)
Kawarasaki, Y.
1976-01-01
A simple and versatile method of generating background-free γ-ray spectra is presented. The method is equivalent to generating a continuous background baseline over the entire energy range of the spectra corresponding to the original ones obtained with a Ge(Li) detector. These background curves cannot generally be expressed in a single, simple analytic form, nor as a power series. The background-free spectra thus obtained make it feasible to assign many tiny peaks at the stage of visual inspection, which is difficult to do with the original spectra. Automatic peak-finding and peak-area calculation procedures are both applicable to these background-free spectra. Examples of the application are illustrated. The effect of peak-shape distortion is also discussed. (Auth.)
A simple method for deriving functional MSCs and applied for osteogenesis in 3D scaffolds
DEFF Research Database (Denmark)
Zou, Lijin; Luo, Yonglun; Chen, Muwan
2013-01-01
We describe a simple method for bone engineering using biodegradable scaffolds with mesenchymal stem cells derived from human induced pluripotent stem cells (hiPS-MSCs). The hiPS-MSCs expressed mesenchymal markers (CD90, CD73, and CD105), possessed multipotency characterized by tri-lineage differentiation (osteogenic, adipogenic, and chondrogenic), and lost pluripotency, as seen with the loss of the markers OCT3/4 and TRA-1-81, and tumorigenicity. However, these iPS-MSCs are still positive for the marker NANOG. We further explored the osteogenic potential of the hiPS-MSCs in synthetic polymer … our results suggest the iPS-MSCs derived by this simple method retain full osteogenic function and provide a new solution towards personalized orthopedic therapy in the future.
Lowest-order constrained variational method for simple many-fermion systems
International Nuclear Information System (INIS)
Alexandrov, I.; Moszkowski, S.A.; Wong, C.W.
1975-01-01
The authors study the potential energy of many-fermion systems calculated by the lowest-order constrained variational (LOCV) method of Pandharipande. Two simple two-body interactions are used. For a simple hard-core potential in a dilute Fermi gas, they find that the Huang-Yang exclusion correction can be used to determine a healing distance. The result is close to the older Pandharipande prescription for the healing distance. For a hard core plus attractive exponential potential, the LOCV result agrees closely with the lowest-order separation method of Moszkowski and Scott. They find that the LOCV result has a shallow minimum as a function of the healing distance at the Moszkowski-Scott separation distance. The significance of the absence of a Brueckner dispersion correction in the LOCV result is discussed. (Auth.)
Double-lock technique: a simple method to secure abdominal wall closure
International Nuclear Information System (INIS)
Jategaonkar, P.A.; Yadav, S.P.
2013-01-01
Secure closure of a laparotomy incision remains an important aspect of any abdominal operation, with the aim of avoiding postoperative morbidity and hastening the patient's recovery. Depending on the operator's preference and experience, it may be done by continuous or interrupted methods using either a non-absorbable or a delayed-absorbable suture. We describe a simple, secure and quick technique of abdominal wall closure that involves a continuous suture double-locked after every third bite. This simple and easy-to-use mass-closure technique can be easily mastered by any member of the surgical team and does not need an assistant. It combines the advantages of both the continuous and the interrupted methods of closure. To our knowledge, such a technique has not been reported in the literature. (author)
A Simple Method to Measure Nematodes' Propulsive Thrust and the Nematode Ratchet.
Bau, Haim; Yuan, Jinzhou; Raizen, David
2015-11-01
Since the propulsive thrust of microorganisms provides a more sensitive indicator of an animal's health and response to drugs than motility, a simple, high-throughput, direct measurement of thrust is desired. Taking advantage of the fact that the nematode C. elegans is heavier than water, we devised a simple method to determine the propulsive thrust of the animals by monitoring their velocity when swimming along an inclined plane. We find that the swimming velocity is a linear function of the sine of the inclination angle. This method allows us to determine, among other things, the animals' propulsive thrust as a function of genotype, drugs, and age. Furthermore, taking advantage of the animals' inability to swim up a steep incline, we constructed a sawtooth, ratchet-like track that restricts the animals to swimming in a predetermined direction. This research was supported, in part, by NIH NIA Grant 5R03AG042690-02.
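The stated linear relation between swimming velocity and the sine of the inclination angle means the thrust-related parameters can be recovered with an ordinary least-squares fit. A minimal sketch, using invented angles and velocities (the underlying line is an assumption for illustration, not data from the study):

```python
import math

# Hypothetical incline angles (degrees) and the assumed "true" line
# v = v0 + slope * sin(theta) used to generate toy velocities (mm/s).
angles_deg = [0, 5, 10, 15, 20, 25]
v0_true, slope_true = 0.30, -0.25
velocities = [v0_true + slope_true * math.sin(math.radians(a)) for a in angles_deg]

# Ordinary least squares of v against sin(theta)
xs = [math.sin(math.radians(a)) for a in angles_deg]
n = len(xs)
mx, my = sum(xs) / n, sum(velocities) / n
slope = sum((x - mx) * (v - my) for x, v in zip(xs, velocities)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx  # level-swimming velocity at zero incline
```

With real measurements, the fitted slope together with the animal's measured excess weight would yield the propulsive thrust; here the fit simply recovers the line the toy data were generated from.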
DEFF Research Database (Denmark)
Eslamimanesh, Ali; Gharagheizi, Farhad; Mohammadi, Amir H.
2012-01-01
We, herein, present a statistical method for diagnostics of the outliers in phase equilibrium data (dissociation data) of simple clathrate hydrates. The applied algorithm is performed on the basis of the Leverage mathematical approach, in which the statistical Hat matrix, Williams plot, and the r… … in exponential form is used to represent/predict the hydrate dissociation pressures for three-phase equilibrium conditions (liquid water/ice–vapor–hydrate). The investigated hydrate formers are methane, ethane, propane, carbon dioxide, nitrogen, and hydrogen sulfide. It is interpreted from the obtained results…
A simple method for labelling proteins with 211At via diazotized aromatic diamine
International Nuclear Information System (INIS)
Wunderlich, G.; Franke, W.-G.; Fischer, S.; Dreyer, R.
1987-01-01
A simple and rapid method for labelling proteins with 211At by means of a 1,4-diaminobenzene link is described. The link is transformed into the diazonium salt, and the reactions of both 211At and the protein with the diazonium salt then take place simultaneously. A temperature of 273 K was found appropriate for high yields of astatinated protein. The results demonstrate the difference between the reaction mechanisms of iodine and astatine with proteins. (author)
A simple method to take urethral sutures for neobladder reconstruction and radical prostatectomy
Directory of Open Access Journals (Sweden)
B Satheesan
2007-01-01
Full Text Available For the reconstruction of the urethrovesical anastomosis after radical prostatectomy and for neobladder reconstruction, taking adequate sutures that include the urethral mucosa is vital. Owing to retraction of the urethra and an unfriendly pelvis, the process of taking satisfactory urethral sutures may be laborious. Here, we describe a simple method by which we could overcome such technical problems during surgery, using a Foley catheter as the guide for the suture.
A Simple Method to Estimate Large Fixed Effects Models Applied to Wage Determinants and Matching
Mittag, Nikolas
2016-01-01
Models with high-dimensional sets of fixed effects are frequently used to examine, among others, linked employer-employee data, student outcomes and migration. Estimating these models is computationally difficult, so simplifying assumptions that are likely to cause bias are often invoked to make computation feasible, and specification tests are rarely conducted. I present a simple method to estimate large two-way fixed effects (TWFE) and worker-firm match effect models without additional assumptions…
A simple, rapid and inexpensive screening method for the identification of Pythium insidiosum.
Tondolo, Juliana Simoni Moraes; Loreto, Erico Silva; Denardi, Laura Bedin; Mario, Débora Alves Nunes; Alves, Sydney Hartz; Santurio, Janio Morais
2013-04-01
Growth of Pythium insidiosum mycelia around minocycline disks (30 μg) did not occur within 7 days of incubation at 35 °C when the isolates were grown on Sabouraud, corn meal, Mueller-Hinton or RPMI agar. This technique offers a simple and rapid method for the differentiation of P. insidiosum from true filamentous fungi. Copyright © 2013 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Wang, Y.
1996-01-01
We present two simple analytical methods for computing the gravity-wave contribution to the cosmic background radiation (CBR) anisotropy in inflationary models; one method uses a time-dependent transfer function, the other uses an approximate gravity-wave mode function which is a simple combination of the lowest-order spherical Bessel functions. We compare the CBR anisotropy tensor multipole spectrum computed using our methods with the previous result of the highly accurate numerical method, the "Boltzmann" method. Our time-dependent transfer function is more accurate than the time-independent transfer function found by Turner, White, and Lidsey; however, we find that the transfer function method is only good for l ≲ 120. Using our approximate gravity-wave mode function, we obtain much better accuracy; the tensor multipole spectrum we find differs from the "Boltzmann" result by less than 2% for l ≲ 50, less than 10% for l ≲ 120, and less than 20% for l ≤ 300. Our approximate graviton mode function should be quite useful in studying tensor perturbations from inflationary models. © 1996 The American Physical Society
A Study of Simple α Source Preparation Using a Micro-coprecipitation Method
International Nuclear Information System (INIS)
Lee, Myung Ho; Park, Taehong; Song, Byung Chul; Park, Jong Ho; Song, Kyuseok
2012-01-01
This study presents a rapid and simple α-source preparation method for a radioactive waste sample. The recovery of 239Pu, 232U and 243Am using a micro-coprecipitation method was over 95%. The α-peak resolution of the Pu and Am isotopes obtained with the micro-coprecipitation method is sufficient to discriminate them from other Pu and Am isotopes. The method was applied to a radioactive waste sample, and the activity concentrations of the Pu and Am isotopes determined by micro-coprecipitation were similar to those obtained using the electrodeposition method
A method to determine the necessity for global signal regression in resting-state fMRI studies.
Chen, Gang; Chen, Guangyu; Xie, Chunming; Ward, B Douglas; Li, Wenjun; Antuono, Piero; Li, Shi-Jiang
2012-12-01
In resting-state functional MRI studies, the global signal (operationally defined as the global average of resting-state functional MRI time courses) is often considered a nuisance effect and is commonly removed in preprocessing. This global signal regression method can introduce artifacts, such as false anticorrelated resting-state networks in functional connectivity analyses; therefore, the efficacy of this technique as a correction tool remains questionable. In this article, we establish that the accuracy of the estimated global signal is determined by the level of global noise (i.e., non-neural noise that has a global effect on the resting-state functional MRI signal). When the global noise level is low, the global signal resembles the resting-state functional MRI time courses of the largest cluster, but not those of the global noise. Using real data, we demonstrate that the global signal is strongly correlated with the default mode network components and has biological significance. These results call into question whether or not global signal regression should be applied. We introduce a method to quantify global noise levels and show that a criterion for global signal regression can be derived from it. Using this criterion, one can determine whether to include or exclude global signal regression so as to minimize errors in functional connectivity measures. Copyright © 2012 Wiley Periodicals, Inc.
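The global signal regression step under discussion can be sketched in a few lines: average all voxel time courses to form the global signal, then subtract each voxel's best linear fit on it. The toy "voxel" series below are hypothetical stand-ins for real fMRI time courses:

```python
# Hypothetical voxel time courses (rows: voxels, columns: time points)
voxels = [
    [1.0, 2.0, 3.0, 4.0, 5.0],
    [2.0, 1.0, 4.0, 3.0, 5.0],
    [0.5, 1.5, 2.5, 3.5, 4.5],
]
n_t = len(voxels[0])

# Global signal: mean across voxels at each time point, then demeaned
g = [sum(v[t] for v in voxels) / len(voxels) for t in range(n_t)]
g_mean = sum(g) / n_t
gc = [x - g_mean for x in g]
g_var = sum(x * x for x in gc)

def regress_out_global(v):
    """Return the voxel time course with the global signal regressed out."""
    vm = sum(v) / n_t
    beta = sum((v[t] - vm) * gc[t] for t in range(n_t)) / g_var
    return [v[t] - vm - beta * gc[t] for t in range(n_t)]

cleaned = [regress_out_global(v) for v in voxels]
```

After the regression each residual time course is orthogonal to the global signal, which is exactly the property that can induce the spurious anticorrelations the abstract warns about.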
A simple two-step method to fabricate highly transparent ITO/polymer nanocomposite films
International Nuclear Information System (INIS)
Liu, Haitao; Zeng, Xiaofei; Kong, Xiangrong; Bian, Shuguang; Chen, Jianfeng
2012-01-01
Highlights: ► A simple two-step method without a further surface modification step was employed. ► ITO nanoparticles were easily and uniformly dispersed in the polymer matrix. ► The ITO/polymer nanocomposite film had high transparency and UV/IR blocking properties. - Abstract: Transparent functional indium tin oxide (ITO)/polymer nanocomposite films were fabricated via a simple two-step approach. Firstly, functional monodisperse ITO nanoparticles were synthesized via a facile nonaqueous solvothermal method using a bifunctional chemical agent (N-methyl-pyrrolidone, NMP) as both reaction solvent and surface modifier. Secondly, the ITO/acrylic polyurethane (PUA) nanocomposite films were fabricated by a simple sol-solution mixing method without the further surface modification step traditionally employed. Flower-like ITO nanoclusters about 45 nm in diameter were monodispersed in ethyl acetate, each nanocluster assembled from nearly spherical nanoparticles with a primary size of 7–9 nm in diameter. The ITO nanoclusters exhibited excellent dispersibility in the PUA polymer matrix, retaining their original size without further agglomeration. When the loading of ITO nanoclusters reached 5 wt%, the transparent functional nanocomposite film featured a transparency of more than 85% in the visible region (at 550 nm), while cutting off about 50% of near-infrared radiation at 1500 nm and blocking about 45% of UV at 350 nm. It has potential for transparent functional coating applications.
A simple method to design non-collision relative orbits for close spacecraft formation flying
Jiang, Wei; Li, JunFeng; Jiang, FangHua; Bernelli-Zazzera, Franco
2018-05-01
A set of linearized relative motion equations of spacecraft flying on unperturbed elliptical orbits are specialized for particular cases, where the leader orbit is circular or equatorial. Based on these extended equations, we are able to analyze the relative motion regulation between a pair of spacecraft flying on arbitrary unperturbed orbits with the same semi-major axis in close formation. Given the initial orbital elements of the leader, this paper presents a simple way to design initial relative orbital elements of close spacecraft with the same semi-major axis, thus preventing collision under non-perturbed conditions. Considering the mean influence of the J2 perturbation, namely the secular J2 perturbation, we derive the mean derivatives of the orbital element differences and expand them to first order, so that the first-order expansion of the orbital element differences can be added to the relative motion equations for further analysis. For a pair of spacecraft that will never collide under non-perturbed conditions, we present a simple method to determine whether a collision will occur when the J2 perturbation is considered. Examples are given to prove the validity of the extended relative motion equations and to illustrate how the presented methods can be used. The simple method for designing initial relative orbital elements proposed here could be helpful in the preliminary design of relative orbital elements between spacecraft in close formation, when collision avoidance is necessary.
Kolasa-Wiecek, Alicja
2015-04-01
The energy sector in Poland is the source of 81% of greenhouse gas (GHG) emissions. Poland, among other European Union countries, occupies a leading position with regard to coal consumption. The Polish energy sector actively participates in efforts to reduce GHG emissions to the atmosphere through a gradual decrease of the share of coal in the fuel mix and the development of renewable energy sources. Any evidence that extends knowledge of issues related to GHG emissions is a valuable source of information. The article presents the results of modeling the GHG emissions generated by the energy sector in Poland. For a better understanding of the quantitative relationship between total consumption of primary energy and greenhouse gas emissions, a multiple stepwise regression model was applied. The modeling results for CO2 emissions demonstrate a strong relationship (0.97) with the hard coal consumption variable. The adjustment coefficient of the model to the actual data is high, equal to 95%. The backward stepwise regression model for CH4 emissions indicated hard coal (0.66), peat and fuel wood (0.34), and solid waste fuels as well as other sources (-0.64) as the most important variables. The adjusted coefficient is suitable, R² = 0.90. For N2O emission modeling the obtained coefficient of determination is low, equal to 43%. A significant variable influencing the amount of N2O emissions is peat and fuel wood consumption. Copyright © 2015. Published by Elsevier B.V.
Effectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation
Directory of Open Access Journals (Sweden)
Ahmad Bilfarsah
2005-04-01
Full Text Available The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity among predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors via spline functions; the second is to make the ASPLS components mutually uncorrelated, preserving the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fisheries economics application, specifically tuna production.
Spady, Richard; Stouli, Sami
2012-01-01
We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution functions…
A simple method for plasma total vitamin C analysis suitable for routine clinical laboratory use.
Robitaille, Line; Hoffer, L John
2016-04-21
In-hospital hypovitaminosis C is highly prevalent but almost completely unrecognized. Medical awareness of this potentially important disorder is hindered by the inability of most hospital laboratories to determine plasma vitamin C concentrations. The availability of a simple, reliable method for analyzing plasma vitamin C could increase opportunities for routine plasma vitamin C analysis in clinical medicine. Plasma vitamin C can be analyzed by high performance liquid chromatography (HPLC) with electrochemical (EC) or ultraviolet (UV) light detection. We modified existing UV-HPLC methods for plasma total vitamin C analysis (the sum of ascorbic and dehydroascorbic acid) to develop a simple, constant-low-pH sample reduction procedure followed by isocratic reverse-phase HPLC separation using a purely aqueous low-pH non-buffered mobile phase. Although EC-HPLC is widely recommended over UV-HPLC for plasma total vitamin C analysis, the two methods had never been directly compared. We formally compared the simplified UV-HPLC method with EC-HPLC in 80 consecutive clinical samples. The simplified UV-HPLC method was less expensive, easier to set up, required fewer reagents and no pH adjustments, and demonstrated greater sample stability than many existing methods for plasma vitamin C analysis. When compared with the gold-standard EC-HPLC method in 80 consecutive clinical samples exhibiting a wide range of plasma vitamin C concentrations, it performed equivalently. The easy setup, simplicity and sensitivity of the plasma vitamin C analysis method described here could make it practical in a normally equipped hospital laboratory. Unlike any prior UV-HPLC method for plasma total vitamin C analysis, it has been rigorously compared with the gold-standard EC-HPLC method and performed equivalently. Adoption of this method could increase the availability of plasma vitamin C analysis in clinical medicine.
International Nuclear Information System (INIS)
Sun Zhong-Hua; Jiang Fan
2010-01-01
In this paper a new continuous variable called the core-ratio is defined to describe the probability for a residue to be in a binding site, thereby replacing the previous binary (0/1) description of interface residues. We can then use the support vector machine regression method to fit the core-ratio value and predict protein binding sites. We also design a new group of physical and chemical descriptors to characterize the binding sites; with an averaging procedure applied, the new descriptors are more effective. Our test shows that much better prediction results can be obtained by the support vector regression (SVR) method than by the support vector classification method. (rapid communication)
Directory of Open Access Journals (Sweden)
Liyun Su
2012-01-01
Full Text Available We introduce an extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric technique of local polynomial estimation, we do not need to know the heteroscedastic function, and can therefore improve estimation precision when the heteroscedastic function is unknown. Furthermore, we compare the parameter estimates and reach an optimal fit, and we verify the asymptotic normality of the parameters through numerical simulations. Finally, the approach is applied to a case in economics, which indicates that our method is effective in finite-sample situations.
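The second stage of the two-stage procedure amounts to weighted least squares once the variance function is in hand. A minimal sketch with hypothetical data and assumed inverse-variance weights (the abstract's method estimates the variance function nonparametrically; here the weights are simply given):

```python
# Hypothetical observations and assumed estimated inverse-variance weights;
# larger noise in the later observations gets smaller weight.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
w = [1.0, 1.0, 0.25, 0.25]

def wls_fit(xs, ys, w):
    """Weighted least squares for y = a + b*x; returns (a, b)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    b = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys)) \
        / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    return my - b * mx, b

a, b = wls_fit(xs, ys, w)
```

With weights proportional to the reciprocal of the estimated variance function, this reproduces the generalized least squares step the abstract describes, without an explicit heteroscedasticity test.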
A simple bacterial turbidimetric method for detection of some radurized foods
International Nuclear Information System (INIS)
Gautam, S.; Sharma, Arun; Thomas, Paul
1998-01-01
A simple and quick method for the detection of irradiated food is proposed. The method is based on the principle of microbial contribution to the development of turbidity in a clear medium. It employs measurement of the absorbance at 600 nm of the medium after the test commodity has been suspended and shaken in it for a fixed interval. The differences in bacterial turbidity between irradiated and nonirradiated samples are marked enough to allow identification of irradiated foods such as fish, lamb meat, chicken and mushroom. (author)
Accurate and simple method for measuring the activity of radionuclides with complex decay schemes
International Nuclear Information System (INIS)
Legrand, J.; Clement, C.; Bac, C.
1975-01-01
A simple method for the measurement of activity is described. It consists of using a well-type sodium iodide crystal whose efficiency with monoenergetic photon rays has been computed or measured. For each radionuclide with a complex decay scheme a total efficiency is computed; it is shown that this efficiency is very high, near 100%. The associated uncertainty is low, in spite of the important uncertainties on the different parameters used in the computation. The method has been applied to the measurement of the 152Eu primary reference. (in French)
A Simple Method for Measuring the Verticality of Small-Diameter Driven Wells
DEFF Research Database (Denmark)
Kjeldsen, Peter; Skov, Bent
1994-01-01
The presence of stones, solid waste, and other obstructions can deflect small-diameter driven wells during installation, leading to deviations of the well from its intended position. This could lead to erroneous results, especially for measurements of ground water levels by water level meters. … A simple method was developed to measure deviations from the intended positions of well screens and determine the correction factors required for proper measurement of ground water levels in nonvertical wells. The method is based upon measurement of the hydrostatic pressure in the bottom of a water column … ground water flow directions.
A simple and rapid method of purification of impure plutonium oxide
International Nuclear Information System (INIS)
Michael, K.M.; Rakshe, P.R.; Dharmpurikar, G.R.; Thite, B.S.; Lokhande, Manisha; Sinalkar, Nitin; Dakshinamoorthy, A.; Munshi, S.K.; Dey, P.K.
2007-01-01
Impure plutonium oxides are conventionally purified by dissolution in HNO3 in the presence of HF, followed by ion exchange separation and oxalate precipitation. The method is tedious, and the use of HF enhances corrosion of plant equipment. A simple and rapid method has been developed for the purification of the oxide by leaching with various reagents such as DM water, NaOH and oxalic acid. A combination of DM water followed by hot leaching with 0.4 M oxalic acid could bring the impurity levels in the oxide down to the level required for fuel fabrication. (author)
A simple source preparation method for alpha-ray spectrometry of volcanic rock sample
International Nuclear Information System (INIS)
Takahashi, Masaomi; Kurihara, Yuichi; Sato, Jun
2006-01-01
A simple source preparation method was developed for alpha-ray spectrometry to determine U and Th in volcanic rocks. Isolation of U and Th from the volcanic rocks was achieved using UTEVA-Spec. resin, an extraction chromatography material. U and Th were extracted into TTA-benzene solution, and the organic phase was evaporated drop by drop to dryness on a hot stainless steel planchet. This method was found to be effective for the preparation of sources for alpha-ray spectrometry. (author)
A simple method of shower localization and identification in laterally segmented calorimeters
International Nuclear Information System (INIS)
Awes, T.C.; Obenshain, F.E.; Plasil, F.; Saini, S.; Young, G.R.; Sorensen, S.P.
1992-01-01
A method is proposed to calculate the first and second moments of the spatial distribution of the energy of electromagnetic and hadronic showers measured in laterally segmented calorimeters. The technique uses a logarithmic weighting of the energy fraction observed in the individual detector cells. It is fast and simple, requiring no fitting or complicated corrections for the position or angle of incidence. The method is demonstrated with GEANT simulations of a BGO detector array. The position resolution and the e/π separation results are found to be equal or superior to those obtained with more complicated techniques. (orig.)
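The logarithmic weighting just described can be illustrated in a few lines: each cell's weight is a cutoff constant plus the logarithm of its energy fraction, clamped at zero, and the shower position is the weighted mean of cell positions. Cell positions, energies, and the cutoff w0 below are hypothetical, not taken from the study:

```python
import math

# One-dimensional strip of detector cells: positions (cm) and
# deposited energies (arbitrary units) of a toy shower.
positions = [-2.0, -1.0, 0.0, 1.0, 2.0]
energies = [0.05, 1.2, 6.0, 1.0, 0.04]
w0 = 4.0  # cutoff parameter; tuned per detector in practice

e_tot = sum(energies)
# Logarithmic weighting: cells below the fractional-energy cutoff get zero
weights = [max(0.0, w0 + math.log(e / e_tot)) for e in energies]
x_rec = sum(w * x for w, x in zip(weights, positions)) / sum(weights)
```

The log of the energy fraction de-emphasizes the core cell relative to a linear energy weighting, which is what improves the position resolution for showers landing near a cell center.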
Directory of Open Access Journals (Sweden)
Farhang Mahboub
2011-06-01
Full Text Available An abnormally small oral orifice is defined as microstomia. Microstomia may result from epidermolysis bullosa (EB), a group of disorders characterized by mechanical fragility of the skin with recurrent development of blisters and vesicles resulting from minor mechanical friction or trauma. Since such patients have a small oral aperture, it may be impossible to take impressions and fabricate dentures using conventional methods. In this article, we describe a simple method for taking preliminary impressions of the upper and lower edentulous ridges in a patient with limited mouth opening, and then preparing a complete denture with custom denture teeth in a single unit.
Simple method for evaluating Goldstone diagrams in an angular momentum coupled representation
International Nuclear Information System (INIS)
Kuo, T.T.S.; Shurpin, J.; Tam, K.C.; Osnes, E.; Ellis, P.J.
1981-01-01
A simple and convenient method is derived for evaluating linked Goldstone diagrams in an angular momentum coupled representation. Our method is general, and can be used to evaluate any effective interaction and/or effective operator diagrams for both closed-shell nuclei (vacuum to vacuum linked diagrams) and open-shell nuclei (valence linked diagrams). The techniques of decomposing diagrams into ladder diagrams, cutting open internal lines and cutting off one-body insertions are introduced. These enable us to determine angular momentum factors associated with diagrams in the coupled representation directly, without the need for carrying out complicated angular momentum algebra. A summary of diagram rules is given
Directory of Open Access Journals (Sweden)
Ali Akbari
2017-01-01
Full Text Available A simple method for the synthesis of tetrahydrobenzo[a]xanthen-11-one derivatives in the presence of BF3·SiO2 is described, and the antibacterial activity of the products was assessed against Pseudomonas syringae, Xanthomonas citri and Pectobacterium carotovorum. The structures of the isolated compounds were determined by means of 1H/13C NMR and FT-IR spectroscopy. The reactions were carried out in water at room temperature for 5 h. The method has advantages such as good to excellent yields, mild reaction conditions, ease of operation and workup, high product purity and a green process.
A Nonmonotone Trust Region Method for Nonlinear Programming with Simple Bound Constraints
International Nuclear Information System (INIS)
Chen, Z.-W.; Han, J.-Y.; Xu, D.-C.
2001-01-01
In this paper we propose a nonmonotone trust region algorithm for optimization with simple bound constraints. Under mild conditions, we prove the global convergence of the algorithm. For the monotone case it is also proved that the correct active set can be identified in a finite number of iterations if the strict complementarity slackness condition holds, and so the proposed algorithm reduces finally to an unconstrained minimization method in a finite number of iterations, allowing a fast asymptotic rate of convergence. Numerical experiments show that the method is efficient
Simple method for assembly of CRISPR synergistic activation mediator gRNA expression array.
Vad-Nielsen, Johan; Nielsen, Anders Lade; Luo, Yonglun
2018-05-20
When studying complex interconnected regulatory networks, effective methods for simultaneously manipulating the expression of multiple genes are paramount. Previously, we developed a simple method for the generation of an all-in-one CRISPR gRNA expression array. We here present a Golden Gate Assembly-based system for synergistic activation mediator (SAM)-compatible CRISPR/dCas9 gRNA expression arrays for the simultaneous activation of multiple genes. Using this system, we demonstrated the simultaneous activation of the transcription factors TWIST, SNAIL, SLUG, and ZEB1 in a human breast cancer cell line. Copyright © 2018 Elsevier B.V. All rights reserved.
Note on a simple test method for estimating JIc
International Nuclear Information System (INIS)
Whipple, T.A.; McHenry, H.I.
1980-01-01
Fracture toughness testing is generally a time-consuming and expensive procedure; therefore, a significant amount of effort has been directed toward developing an inexpensive and rapid method of estimating the fracture toughness of materials. In this paper, a simple method for estimating JIc through the use of small, notched bend bars is evaluated. The test involves only the measurement of the energy necessary to fracture the sample. Initial tests on Fe-18Cr-3Ni-13Mn and 304L stainless steel at 76 and 4 K have yielded results consistent with other fracture toughness tests for materials in the low- to medium-toughness range
Pfeiffer, Valentin; Barbeau, Benoit
2014-02-01
Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted to five tracer test results derived from four water treatment plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure of the Pseg method was suggested for calculating disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
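The gap between plug-flow-style credit (T10) and stirred-tank behavior can be illustrated with the textbook first-order (Chick-Watson) inactivation formula for a series of N equal CSTRs. This is a generic sketch with hypothetical rate constant, concentration, and residence time, not the paper's fitted Pseg model:

```python
import math

def survival_n_cstr(k, c, tau, n):
    """Survival ratio for first-order (Chick-Watson) inactivation in a
    series of n equal CSTRs with constant disinfectant concentration c,
    rate constant k, and total hydraulic residence time tau."""
    return (1.0 + k * c * tau / n) ** (-n)

def survival_plug_flow(k, c, tau):
    """Ideal plug-flow limit (n -> infinity)."""
    return math.exp(-k * c * tau)

# Hypothetical parameters: k = 0.5 L/(mg*min), c = 1 mg/L, tau = 10 min.
for n in (1, 5, 50):
    print(n, survival_n_cstr(0.5, 1.0, 10.0, n))
print("PFR", survival_plug_flow(0.5, 1.0, 10.0))
```

As N grows, the N-CSTR survival decreases toward the plug-flow limit exp(-kC*tau), which is why a single-CSTR assumption yields conservative inactivation estimates while a pure plug-flow assumption can over-credit it.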
Gilstrap, Donald L.
2013-01-01
In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…
A simple method to approximate liver size on cross-sectional images using living liver models
International Nuclear Information System (INIS)
Muggli, D.; Mueller, M.A.; Karlo, C.; Fornaro, J.; Marincek, B.; Frauenfelder, T.
2009-01-01
Aim: To assess whether a simple, diameter-based formula applicable to cross-sectional images can be used to calculate the total liver volume. Materials and methods: On 119 cross-sectional examinations (62 computed tomography and 57 magnetic resonance imaging), a simple, formula-based method to approximate the liver volume was evaluated. The total liver volume was approximated by two readers measuring the largest craniocaudal (cc), ventrodorsal (vd), and coronal (cor) diameters and implementing the equation: Vol_estimated = cc × vd × cor × 0.31. Inter-rater reliability, agreement, and correlation between liver volume calculation and virtual liver volumetry were analysed. Results: No significant disagreement between the two readers was found. The formula correlated significantly with the volumetric data (r > 0.85, p < 0.0001). In 81% of cases the error of the approximated volume was <10%, and in 92% of cases <15%, compared to the volumetric data. Conclusion: Total liver volume can be accurately estimated on cross-sectional images using a simple, diameter-based equation.
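The stated equation is trivial to apply; the sketch below simply implements it (diameters in cm yield a volume in mL) and compares against a reference volume. The example diameters and the reference volume are hypothetical, chosen only to show the error calculation:

```python
def estimate_liver_volume(cc_cm, vd_cm, cor_cm):
    """Approximate total liver volume (mL) from the three largest
    orthogonal diameters (cm) using the published factor 0.31."""
    return cc_cm * vd_cm * cor_cm * 0.31

# Hypothetical diameters and a hypothetical volumetric reference.
vol = estimate_liver_volume(20.0, 18.0, 16.0)
err = abs(vol - 1650.0) / 1650.0  # relative error vs. the reference
```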
Forecast daily indices of solar activity, F10.7, using support vector regression method
International Nuclear Information System (INIS)
Huang Cong; Liu Dandan; Wang Jingsong
2009-01-01
The 10.7 cm solar radio flux (F10.7), the value of the solar radio emission flux density at a wavelength of 10.7 cm, is a useful index of solar activity as a proxy for solar extreme ultraviolet radiation. It is meaningful and important to predict F10.7 values accurately for both long-term (months to years) and short-term (days) forecasting, as they are often used as inputs in space weather models. This study applies a kernel-based machine learning technique, support vector regression (SVR), to forecasting daily values of F10.7. The aim of this study is to examine the feasibility of SVR in short-term F10.7 forecasting. The approach, based on SVR, reduces the dimension of feature space in the training process by using a kernel-based learning algorithm. Thus, the complexity of the calculation becomes lower and a small amount of training data is sufficient. The time series of F10.7 from 2002 to 2006 is employed as the data set. The performance of the approach is estimated by calculating the normalized mean square error and mean absolute percentage error. It is shown that our approach can perform well using fewer training data points than a traditional neural network. (research paper)
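A minimal version of this kind of short-term forecast can be sketched with scikit-learn's SVR on lagged (autoregressive) features. The series below is a synthetic stand-in mimicking the 27-day solar rotation modulation, and the hyperparameters are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-in for the daily F10.7 series (27-day rotation cycle).
t = np.arange(400)
f107 = 120 + 40 * np.sin(2 * np.pi * t / 27) + rng.normal(0, 3, t.size)

# Lagged features: predict day t from the 5 preceding days.
lags = 5
X = np.column_stack([f107[i:i - lags] for i in range(lags)])
y = f107[lags:]

split = 300
model = SVR(kernel="rbf", C=100.0, epsilon=1.0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = np.mean(np.abs((pred - y[split:]) / y[split:])) * 100
```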
Determination of Urine Albumin by New Simple High-Performance Liquid Chromatography Method.
Klapkova, Eva; Fortova, Magdalena; Prusa, Richard; Moravcova, Libuse; Kotaska, Karel
2016-11-01
A simple high-performance liquid chromatography (HPLC) method was developed for the determination of albumin in patients' urine samples without coeluting proteins and was compared with the immunoturbidimetric determination of albumin. Urine albumin is an important biomarker in diabetic patients, but part of it is immuno-nonreactive. Albumin was determined by HPLC with UV detection at 280 nm on a Zorbax 300SB-C3 column. Immunoturbidimetric analysis was performed using a commercial kit on an automatic biochemistry analyzer (COBAS INTEGRA® 400, Roche Diagnostics GmbH, Mannheim, Germany). The HPLC method was fully validated. No significant interference from other proteins (transferrin, α-1-acid glycoprotein, α-1-antichymotrypsin, antitrypsin, hemopexin) was found. The results from 301 urine samples were compared with the immunochemical determination. We found a statistically significant difference between these methods (P = 0.0001, Mann-Whitney test). A new simple HPLC method was thus developed for the determination of urine albumin without coeluting proteins. Our data indicate that the HPLC method is highly specific and more sensitive than immunoturbidimetry. © 2016 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Ali Abdollahi, Ahad Bavili-Tabrizi
2016-03-01
Background: Cephalosporins are among the safest and most effective broad-spectrum bactericidal antimicrobial agents prescribed by clinicians as antibiotics. Thus, the development of simple, sensitive and rapid analytical methods for their determination is attractive and desirable. Methods: A simple, rapid and sensitive spectrofluorimetric method was developed for the determination of cefixime, cefalexin and ceftriaxone in pharmaceutical formulations. The proposed method is based on the oxidation of these cephalosporins with cerium(IV) to produce cerium(III), whose fluorescence was monitored at 356 ± 3 nm after excitation at 254 ± 3 nm. Results: The variables affecting the oxidation of each cephalosporin with cerium(IV) were studied and optimized. Under the experimental conditions used, the calibration graphs were linear over the range 0.1-4 µg/mL. The limits of detection and quantification were in the ranges 0.031-0.054 and 0.102-0.172 µg/mL, respectively. Intra- and inter-day assay precisions, expressed as the relative standard deviation (RSD), were lower than 5.6 and 6.8%, respectively. Conclusion: The proposed method was applied to the determination of the studied cephalosporins in pharmaceutical formulations with good recoveries in the range 91-110%.
International Nuclear Information System (INIS)
Kann, Frank van; Winterflood, John
2005-01-01
A simple but powerful method is presented for calibrating geophones, seismometers, and other inertial vibration sensors, including passive accelerometers. The method requires no cumbersome or expensive fixtures such as shaker platforms and can be performed using a standard instrument commonly available in the field. An absolute calibration is obtained using the reciprocity property of the device, based on the standard mathematical model for such inertial sensors. It requires only a simple electrical measurement of the impedance of the sensor as a function of frequency to determine the parameters of the model and hence the sensitivity function. The method is particularly convenient if one of these parameters, namely the suspended mass, is known. In this case, no additional mechanical apparatus is required and only a single set of impedance measurements yields the desired calibration function. Moreover, this measurement can be made with the device in situ. However, the novel and most powerful aspect of the method is its ability to accurately determine the effective suspended mass. For this, the impedance measurement is made with the device hanging from a simple spring or flexible cord (depending on the orientation of its sensitive axis). To complete the calibration, the device is weighed to determine its total mass. All the required calibration parameters, including the suspended mass, are then determined from a least-squares fit to the impedance as a function of frequency. A demonstration using both a 4.5 Hz geophone and a 1 Hz seismometer shows that the method can yield accurate absolute calibrations with an error of 0.1% or better, assuming no a priori knowledge of any parameters.
An Operationally Simple Method for Separating the Rare-Earth Elements Neodymium and Dysprosium.
Bogart, Justin A; Lippincott, Connor A; Carroll, Patrick J; Schelter, Eric J
2015-07-06
Rare-earth metals are critical components of electronic materials and permanent magnets. Recycling of consumer materials is a promising new source of rare earths. To incentivize recycling there is a clear need for simple methods for targeted separations of mixtures of rare-earth metal salts. Metal complexes of the tripodal nitroxide ligand [{(2-tBuNO)C6H4CH2}3N]3- (TriNOx3-) feature a size-sensitive aperture formed by its three η2-(N,O) ligand arms. Exposure of metal cations in the aperture induces a self-associative equilibrium comprising [M(TriNOx)thf]/[M(TriNOx)]2 (M = rare-earth metal). Differences in the equilibrium constants (Keq) for early and late metals enable simple Nd/Dy separations through leaching, with a separation ratio S(Nd/Dy) = 359. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Nuclear Information System (INIS)
Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong
2015-01-01
A new LIBS quantitative analysis method based on adaptive selection of analytical lines and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieves better prediction accuracy and modeling robustness than methods based on partial least squares regression, artificial neural networks and standard support vector machines. - Highlights: • Both training and testing samples are considered for analytical line selection. • The analytical lines are auto-selected based on the built-in characteristics of spectral lines. • The new method can achieve better prediction accuracy and modeling robustness. • Model predictions are given with confidence intervals of a probabilistic distribution
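The RVM is not part of scikit-learn; as a stand-in, the related sparse Bayesian linear model `BayesianRidge` also returns a predictive mean and standard deviation, which is the kind of probabilistic output (concentration plus confidence interval) the abstract describes. The "line intensities" and concentrations below are synthetic:

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)
# Synthetic stand-in: 6 line intensities (features) vs. Cr concentration.
X = rng.uniform(0, 1, size=(40, 6))
true_w = np.array([2.0, 0.0, 1.5, 0.0, 0.0, 3.0])
y = X @ true_w + rng.normal(0, 0.05, 40)

model = BayesianRidge()
model.fit(X[:30], y[:30])
mean, std = model.predict(X[30:], return_std=True)
# 95% predictive intervals, analogous to the RVM's probabilistic output.
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```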
Directory of Open Access Journals (Sweden)
Sara Mortaz Hejri
2013-01-01
Background: One of the methods used for standard setting is the borderline regression method (BRM). This study aims to assess the reliability of the BRM when the pass-fail standard in an objective structured clinical examination (OSCE) is calculated by averaging the BRM standards obtained for each station separately. Materials and Methods: In nine directly observed OSCE stations, the examiners gave each student a checklist score and a global score. Using a linear regression model for each station, we calculated the checklist score cut-off on the regression equation for the global scale cut-off set at 2. The OSCE pass-fail standard was defined as the average of all the stations' standards. To determine the reliability, the root mean square error (RMSE) was calculated. The R2 coefficient and the inter-grade discrimination were calculated to assess the quality of the OSCE. Results: The mean total test score was 60.78. The OSCE pass-fail standard and its RMSE were 47.37 and 0.55, respectively. The R2 coefficients ranged from 0.44 to 0.79. The inter-grade discrimination score varied greatly among stations. Conclusion: The RMSE of the standard was very small, indicating that the BRM is a reliable method of setting a standard for an OSCE, which has the advantage of providing data for quality assurance.
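The per-station BRM step described above is a one-line regression: fit checklist score against global rating and read off the checklist cut-off at the borderline global rating (here 2). The sketch below uses hypothetical data for two stations, not the study's scores:

```python
import numpy as np

def station_cutoff(checklist, global_ratings, borderline=2.0):
    """Regress checklist score on global rating and evaluate the fitted
    line at the borderline global rating to get the station standard."""
    slope, intercept = np.polyfit(global_ratings, checklist, 1)
    return slope * borderline + intercept

# Hypothetical data for two stations (global ratings on a 1-4 scale).
g1 = np.array([1, 2, 2, 3, 3, 4]); c1 = np.array([30, 45, 50, 60, 65, 80])
g2 = np.array([1, 1, 2, 3, 4, 4]); c2 = np.array([20, 25, 40, 55, 70, 75])

# The OSCE pass-fail standard is the average of the station standards.
standard = np.mean([station_cutoff(c1, g1), station_cutoff(c2, g2)])
```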
International Nuclear Information System (INIS)
Ballini, J.-P.; Cazes, P.; Turpin, P.-Y.
1976-01-01
Analysing the histogram of anode pulse amplitudes allows a discussion of the hypothesis that has been proposed to account for the statistical processes of secondary multiplication in a photomultiplier. In an earlier work, good agreement was obtained between experimental and reconstructed spectra, assuming a first-dynode distribution comprising two Poisson distributions of distinct mean values. This first approximation led to a search for a method that could give the weights of several Poisson distributions of distinct mean values. Three methods are briefly described: classical linear regression, constrained regression (d'Esopo's method), and regression on variables subject to error. The use of these methods yields an approximation of the frequency function that represents the dispersion of the pointwise mean gain around the overall first-dynode mean gain value. Comparison between this function and the one employed in the Polya distribution shows that the latter is inadequate to describe the statistical process of secondary multiplication. Numerous spectra obtained with two kinds of photomultiplier working under different physical conditions have been analysed. Two points are then discussed: whether the frequency function represents the dynode structure and the interdynode collection process, and whether the model (in which the multiplication process of all dynodes but the first is Poissonian) is valid whatever the photomultiplier and the conditions of use. (Auth.)
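The weight-recovery step can be posed as a constrained linear regression: the observed histogram is modeled as a nonnegative mixture of Poisson probability columns, one per candidate mean. Here SciPy's NNLS stands in for the constrained-regression (d'Esopo-type) step; the means and weights are hypothetical and the "histogram" is noiseless for clarity:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import poisson

# Simulate a pulse-height histogram from a mixture of two Poisson
# components with distinct means (hypothetical first-dynode gains).
means, true_w = [2.0, 6.0], [0.7, 0.3]
k = np.arange(0, 20)
hist = sum(w * poisson.pmf(k, m) for w, m in zip(true_w, means))

# Design matrix: one column of Poisson probabilities per candidate mean.
candidates = [2.0, 4.0, 6.0]
A = np.column_stack([poisson.pmf(k, m) for m in candidates])
weights, _ = nnls(A, hist)  # non-negativity replaces an explicit constraint
```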
Huang, Lei
2015-01-01
To solve the problem in which the conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using a robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to achieve the estimated mean and variance of the observation noise. Using the robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of a rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
Machine learning plus optical flow: a simple and sensitive method to detect cardioactive drugs
Lee, Eugene K.; Kurokawa, Yosuke K.; Tu, Robin; George, Steven C.; Khine, Michelle
2015-07-01
Current preclinical screening methods do not adequately detect cardiotoxicity. Using human induced pluripotent stem cell-derived cardiomyocytes (iPS-CMs), more physiologically relevant preclinical or patient-specific screening to detect potential cardiotoxic effects of drug candidates may be possible. However, one of the persistent challenges for developing a high-throughput drug screening platform using iPS-CMs is the need to develop a simple and reliable method to measure key electrophysiological and contractile parameters. To address this need, we have developed a platform that combines machine learning paired with brightfield optical flow as a simple and robust tool that can automate the detection of cardiomyocyte drug effects. Using three cardioactive drugs of different mechanisms, including those with primarily electrophysiological effects, we demonstrate the general applicability of this screening method to detect subtle changes in cardiomyocyte contraction. Requiring only brightfield images of cardiomyocyte contractions, we detect changes in cardiomyocyte contraction comparable to - and even superior to - fluorescence readouts. This automated method serves as a widely applicable screening tool to characterize the effects of drugs on cardiomyocyte function.
A Comparison of Multidimensional Item Selection Methods in Simple and Complex Test Designs
Directory of Open Access Journals (Sweden)
Eren Halil ÖZBERK
2017-03-01
In contrast with previous studies, this study employed various test designs (simple and complex) that allow the evaluation of overall ability score estimations across multiple real test conditions. In this study, four factors were manipulated, namely the test design, the number of items per dimension, the correlation between dimensions and the item selection method. Using the generated item and ability parameters, dichotomous item responses were generated by using the M3PL compensatory multidimensional IRT model with specified correlations. MCAT composite ability score accuracy was evaluated using the absolute bias (ABSBIAS), the correlation and the root mean square error (RMSE) between true and estimated ability scores. The results suggest that the multidimensional test structure, the number of items per dimension and the correlation between dimensions had a significant effect on the item selection methods for the overall score estimations. For the simple-structure test design it was found that V1 item selection yields the lowest absolute bias estimations for both long and short tests while estimating overall scores. As the model gets more complex, the KL item selection method performed better than the other two item selection methods.
Bolarinwa, O A; Adeola, O
2012-12-01
Digestible and metabolizable energy contents of feed ingredients for pigs can be determined by direct or indirect methods. There are situations when only the indirect approach is suitable and the regression method is a robust indirect approach. This study was conducted to compare the direct and regression methods for determining the energy value of wheat for pigs. Twenty-four barrows with an average initial BW of 31 kg were assigned to 4 diets in a randomized complete block design. The 4 diets consisted of 969 g wheat/kg plus minerals and vitamins (sole wheat) for the direct method, corn (Zea mays)-soybean (Glycine max) meal reference diet (RD), RD + 300 g wheat/kg, and RD + 600 g wheat/kg. The 3 corn-soybean meal diets were used for the regression method and wheat replaced the energy-yielding ingredients, corn and soybean meal, so that the same ratio of corn and soybean meal across the experimental diets was maintained. The wheat used was analyzed to contain 883 g DM, 15.2 g N, and 3.94 Mcal GE/kg. Each diet was fed to 6 barrows in individual metabolism crates for a 5-d acclimation followed by a 5-d total but separate collection of feces and urine. The DE and ME for the sole wheat diet were 3.83 and 3.77 Mcal/kg DM, respectively. Because the sole wheat diet contained 969 g wheat/kg, these translate to 3.95 Mcal DE/kg DM and 3.89 Mcal ME/kg DM. The RD used for the regression approach yielded 4.00 Mcal DE and 3.91 Mcal ME/kg DM diet. Increasing levels of wheat in the RD linearly reduced (P direct method (3.95 and 3.89 Mcal/kg DM) did not differ (0.78 < P < 0.89) from those obtained using the regression method (3.96 and 3.88 Mcal/kg DM).
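The regression method's arithmetic can be sketched in a few lines: because diet DE is a linear blend, DE_diet = (1 - x)·DE_RD + x·DE_wheat at inclusion fraction x, the slope of diet DE against inclusion equals DE_wheat - DE_RD. The diet DE values below are hypothetical round numbers, not the study's measurements:

```python
import numpy as np

# Hypothetical diet DE (Mcal/kg DM) at wheat inclusion levels of
# 0, 300 and 600 g/kg, mimicking the reference-diet substitution design.
inclusion = np.array([0.0, 0.3, 0.6])
diet_de = np.array([4.00, 3.99, 3.97])

# Diet DE is linear in inclusion, so slope = DE_wheat - DE_RD.
slope, de_rd = np.polyfit(inclusion, diet_de, 1)
de_wheat = de_rd + slope  # extrapolate to 100% wheat
```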
Liou, Jyun-you; Smith, Elliot H.; Bateman, Lisa M.; McKhann, Guy M., II; Goodman, Robert R.; Greger, Bradley; Davis, Tyler S.; Kellis, Spencer S.; House, Paul A.; Schevon, Catherine A.
2017-08-01
Objective. Epileptiform discharges, an electrophysiological hallmark of seizures, can propagate across cortical tissue in a manner similar to traveling waves. Recent work has focused attention on the origination and propagation patterns of these discharges, yielding important clues to their source location and mechanism of travel. However, systematic studies of methods for measuring propagation are lacking. Approach. We analyzed epileptiform discharges in microelectrode array recordings of human seizures. The array records multiunit activity and local field potentials at 400 micron spatial resolution, from a small cortical site free of obstructions. We evaluated several computationally efficient statistical methods for calculating traveling wave velocity, benchmarking them to analyses of associated neuronal burst firing. Main results. Over 90% of discharges met statistical criteria for propagation across the sampled cortical territory. Detection rate, direction and speed estimates derived from a multiunit estimator were compared to four field potential-based estimators: negative peak, maximum descent, high gamma power, and cross-correlation. Interestingly, the methods that were computationally simplest and most efficient (negative peak and maximal descent) offer non-inferior results in predicting neuronal traveling wave velocities compared to the other two, more complex methods. Moreover, the negative peak and maximal descent methods proved to be more robust against reduced spatial sampling challenges. Using least absolute deviation in place of least squares error minimized the impact of outliers, and reduced the discrepancies between local field potential-based and multiunit estimators. Significance. Our findings suggest that ictal epileptiform discharges typically take the form of exceptionally strong, rapidly traveling waves, with propagation detectable across millimeter distances. The sequential activation of neurons in space can be inferred from clinically
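The negative-peak estimator the authors favor reduces to fitting a plane to per-electrode arrival times: t = a·x + b·y + c, where (a, b) is the slowness vector, so speed = 1/|(a, b)|. The sketch below simulates a plane wave on a 4 × 4 grid at the 0.4 mm pitch mentioned in the abstract; the wave speed and noise level are hypothetical:

```python
import numpy as np

pitch = 0.4  # mm, microelectrode array spacing
xs, ys = np.meshgrid(np.arange(4) * pitch, np.arange(4) * pitch)
x, y = xs.ravel(), ys.ravel()

# Simulated negative-peak arrival times (ms) for a wave moving along +x
# at 0.2 mm/ms (slowness 5 ms/mm), plus timing jitter.
rng = np.random.default_rng(2)
t = x / 0.2 + rng.normal(0, 0.05, x.size)

# Fit the plane t = a*x + b*y + c; (a, b) is the slowness vector.
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
speed = 1.0 / np.hypot(a, b)            # mm/ms
direction = np.degrees(np.arctan2(b, a))  # degrees from +x axis
```

Swapping the least-squares solve for a least-absolute-deviation fit, as the paper suggests, would damp the influence of outlier electrodes.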
DEFF Research Database (Denmark)
Riccardi, M.; Mele, G.; Pulvento, C.
2014-01-01
Leaf chlorophyll content provides valuable information about physiological status of plants; it is directly linked to photosynthetic potential and primary production. In vitro assessment by wet chemical extraction is the standard method for leaf chlorophyll determination. This measurement is expe...
Energy Technology Data Exchange (ETDEWEB)
Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)
2006-02-15
This paper presents an implementation of the least absolute value (LAV) power system state estimator based on obtaining a sequence of solutions to the L1-regression problem using an iteratively reweighted least squares (IRLS_L1) method. The proposed implementation avoids reformulating the regression problem into standard linear programming (LP) form and consequently does not require the use of common methods of LP, such as those based on the simplex method or interior-point methods. It is shown that the IRLS_L1 method is equivalent to solving a sequence of linear weighted least squares (LS) problems. Thus, its implementation presents little additional effort since the sparse LS solver is common to existing LS state estimators. Studies on the termination criteria of the IRLS_L1 method have been carried out to determine a procedure for which the proposed estimator is more computationally efficient than a previously proposed non-linear iteratively reweighted least squares (IRLS) estimator. Indeed, it is revealed that the proposed method is a generalization of the previously reported IRLS estimator, but is based on more rigorous theory. (author)
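The core IRLS idea for L1 regression, solving a sequence of weighted least-squares problems with weights 1/|residual|, can be sketched in a few lines. This is a generic dense illustration on synthetic data, not the paper's sparse state-estimator implementation:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    """Minimize ||A x - b||_1 via iteratively reweighted least squares:
    each pass solves a weighted LS problem with weights 1/|residual|."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # ordinary LS start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)  # eps guards division
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# L1 regression resists the gross outlier that would drag ordinary LS.
rng = np.random.default_rng(3)
A = np.column_stack([np.ones(30), rng.uniform(0, 10, 30)])
b = A @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, 30)
b[0] += 50.0  # one bad measurement
x_l1 = irls_l1(A, b)
```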
Energy Technology Data Exchange (ETDEWEB)
Boucher, Thomas F., E-mail: boucher@cs.umass.edu [School of Computer Science, University of Massachusetts Amherst, 140 Governor's Drive, Amherst, MA 01003, United States. (United States); Ozanne, Marie V. [Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075 (United States); Carmosino, Marco L. [School of Computer Science, University of Massachusetts Amherst, 140 Governor's Drive, Amherst, MA 01003, United States. (United States); Dyar, M. Darby [Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075 (United States); Mahadevan, Sridhar [School of Computer Science, University of Massachusetts Amherst, 140 Governor's Drive, Amherst, MA 01003, United States. (United States); Breves, Elly A.; Lepore, Kate H. [Department of Astronomy, Mount Holyoke College, South Hadley, MA 01075 (United States); Clegg, Samuel M. [Los Alamos National Laboratory, P.O. Box 1663, MS J565, Los Alamos, NM 87545 (United States)
2015-05-01
The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other types of LIBS data are calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. These results are attributed to the high
International Nuclear Information System (INIS)
Boucher, Thomas F.; Ozanne, Marie V.; Carmosino, Marco L.; Dyar, M. Darby; Mahadevan, Sridhar; Breves, Elly A.; Lepore, Kate H.; Clegg, Samuel M.
2015-01-01
The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other types of LIBS data are calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. These results are attributed to the high dimensionality of the data (6144
Directory of Open Access Journals (Sweden)
Tamer Khatib
2014-01-01
In this research an improved approach for sizing standalone PV systems (SAPV) is presented. This work improves on a method previously developed by the authors, which was based on an analytical method whose model coefficients were difficult to determine. Therefore, the approach proposed in this research is based on a combination of an analytical method and a machine learning approach, a generalized regression neural network (GRNN). The GRNN predicts the optimal size of a PV system from the geographical coordinates of the targeted site instead of using mathematical formulas. Employing the GRNN facilitates the use of the previously developed method while avoiding some of its drawbacks. The approach has been tested using data from five Malaysian sites. According to the results, the proposed method can be used efficiently for SAPV sizing, and the proposed GRNN-based model predicts the sizing curves of the PV system accurately with a prediction error of 0.6%. Moreover, hourly meteorological and load demand data are used in this research in order to account for the uncertainty of the solar energy resource and the load demand.
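A GRNN is essentially a Gaussian-kernel weighted average of training targets (Nadaraya-Watson regression), which makes a minimal sketch short. The coordinates, sizing factors, and bandwidth below are hypothetical placeholders, not the paper's Malaysian data:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network: a Gaussian-kernel weighted
    average of training targets, with smoothing parameter sigma."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Hypothetical sizing data: (latitude, longitude) -> PV sizing factor.
coords = np.array([[3.1, 101.7], [5.4, 100.3], [1.5, 103.8], [4.6, 101.1]])
size = np.array([1.20, 1.35, 1.10, 1.28])
query = np.array([[3.0, 101.5]])
pred = grnn_predict(coords, size, query, sigma=1.0)
```

Because the output is a weighted average, predictions always stay within the range of the training targets, which makes the GRNN a safe interpolator for sizing curves.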
Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection
DEFF Research Database (Denmark)
Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald
2013-01-01
The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PL...
Using a Linear Regression Method to Detect Outliers in IRT Common Item Equating
He, Yong; Cui, Zhongmin; Fang, Yu; Chen, Hanwei
2013-01-01
Common test items play an important role in equating alternate test forms under the common item nonequivalent groups design. When the item response theory (IRT) method is applied in equating, inconsistent item parameter estimates among common items can lead to large bias in equated scores. It is prudent to evaluate inconsistency in parameter…
Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.
2012-01-01
The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an
Asghari, Mehdi Poursheikhali; Hayatshahi, Sayyed Hamed Sadat; Abdolmaleki, Parviz
2012-01-01
From both structural and functional points of view, β-turns play important biological roles in proteins. In the present study, a novel two-stage hybrid procedure was developed to identify β-turns in proteins. Binary logistic regression was first used, for the first time in this context, to select the sequence parameters most significant for identifying β-turns, based on a re-substitution test procedure. The sequence parameters consisted of 80 amino acid positional occurrences and 20 amino acid percentages in the sequence. Among these, the most significant parameters selected by the binary logistic regression model were the percentages of Gly and Ser and the occurrence of Asn in position i+2; these have the strongest effect on whether a sequence forms a β-turn. A neural network was then constructed and fed with the parameters selected by binary logistic regression to build a hybrid predictor. The networks were trained and tested on a non-homologous dataset of 565 protein chains. Under nine-fold cross-validation on this dataset, the network reached an overall accuracy (Qtotal) of 74%, which is comparable with the results of other β-turn prediction methods. In conclusion, this study shows that the parameter-selection ability of binary logistic regression, combined with the predictive capability of neural networks, leads to more precise models for identifying β-turns in proteins.
A simple mass-conserved level set method for simulation of multiphase flows
Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.
2018-04-01
In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratios and high Reynolds numbers. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass gain. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation together with the continuity equation of the flow field. Since only a source term is introduced, the present method is as simple to apply as the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, demonstrating that the modified level set method accurately captures the interface while conserving mass. The proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with a high density ratio, and the Rayleigh-Taylor instability at a high Reynolds number. Numerical results show that mass is well conserved by the present method.
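The mass-correction idea described above can be sketched schematically (our own notation; the paper derives the exact form of the source term from the continuity equation):

```latex
% Standard level set advection (mass not guaranteed):
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0
% Modified equation with a source/sink term S compensating mass loss/gain:
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = S,
\qquad \text{with } S \text{ chosen so that }
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\phi<0} \rho \,\mathrm{d}V = 0 .
```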
A simple method for fabricating multi-layer PDMS structures for 3D microfluidic chips
Zhang, Mengying
2010-01-01
We report a simple methodology to fabricate multi-layer PDMS microfluidic chips. A PDMS slab was surface-treated with trichloro(1H,1H,2H,2H-perfluorooctyl)silane and acts as a reusable transferring layer. Uniform thickness of the patterned PDMS layer and good alignment could be achieved owing to the transparency and suitable flexibility of this transferring layer. Surface treatment results were confirmed by XPS and contact angle measurements, while the bonding forces between different layers were measured for a better understanding of the transferring process. We also designed and fabricated several simple types of 3D PDMS chip, notably one consisting of 6 thin layers (each 50 μm thick), to demonstrate the potential of this technique. 3D fluorescence images were taken with a confocal microscope to illustrate the spatial characteristics of the essential parts. This fabrication method is fast, simple, repeatable, low cost and amenable to mechanized mass production. © The Royal Society of Chemistry 2010.
Ruminal Methane Production on Simple Phenolic Acids Addition in in Vitro Gas Production Method
Directory of Open Access Journals (Sweden)
A. Jayanegara
2009-04-01
Methane production from ruminants contributes to total global methane production, an important contributor to global warming. In this experiment, six simple phenolic acids (benzoic, cinnamic, phenylacetic, caffeic, p-coumaric and ferulic acids) at two levels (2 and 5 mM) added to a hay diet were evaluated for their potential to reduce enteric methane production using the in vitro Hohenheim gas production method. The measured variables were gas production, methane, organic matter digestibility (OMD), and short chain fatty acids (SCFA). The results showed that addition of cinnamic, caffeic, p-coumaric and ferulic acids at 5 mM significantly (P < 0.05) decreased methane production, in the order caffeic > p-coumaric > ferulic > cinnamic. The addition of simple phenols did not significantly decrease OMD, but tended to decrease total SCFA production. It was concluded that the decrease in methane from the addition of phenolic acids was relatively small, and that the effect depended on the source and concentration applied.
Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.
2018-01-01
We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. In particular, SORT redshifts (zsort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurements of the two-point correlation function on scales ≳4 h⁻¹ Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed at exploiting the full potential of large extragalactic photometric surveys.
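The rank-preserving step at the heart of SORT can be illustrated with a minimal sketch (our own simplified implementation, not the authors' code; selection-function corrections and the multiple overlapping sub-volumes are omitted, and the reference sample is assumed at least as large as the uncertain one):

```python
# Core SORT idea for a single sub-volume: reassign redshifts drawn from the
# trusted reference distribution while preserving the rank order of the
# uncertain measurements (stochastic ordering).

def sort_recover(uncertain_z, reference_z):
    """Return recovered redshifts: sorted reference values assigned in the
    rank order of the uncertain measurements."""
    # sorted reference values provide the target redshift distribution
    targets = sorted(reference_z)[: len(uncertain_z)]
    # ranks of the uncertain measurements (0 = smallest)
    order = sorted(range(len(uncertain_z)), key=lambda i: uncertain_z[i])
    recovered = [0.0] * len(uncertain_z)
    for rank, idx in enumerate(order):
        recovered[idx] = targets[rank]
    return recovered

# hypothetical example: noisy redshifts mapped onto a reference sample;
# note the rank order of z_noisy is preserved in the output
z_noisy = [0.31, 0.12, 0.55]
z_ref = [0.10, 0.30, 0.50]
z_recovered = sort_recover(z_noisy, z_ref)
```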
DEFF Research Database (Denmark)
Jakobsen, Bo; Sanz, Alejandro; Niss, Kristine
2016-01-01
We present a simple method for fast and cheap thermal analysis of supercooled glass-forming liquids. This "Thermalization Calorimetry" technique is based on monitoring the temperature and its rate of change during heating or cooling of a sample for which the thermal power input comes from heat… It is useful for studying liquids and their crystallization, e.g., for locating the glass transition and melting point(s), as well as for investigating the stability against crystallization and estimating the relative change in specific heat between the solid and liquid phases at the glass transition…
A simple method for conversion of airborne gamma-ray spectra to ground level doses
DEFF Research Database (Denmark)
Korsbech, Uffe C C; Bargholz, Kim
1996-01-01
A new and simple method for converting airborne NaI(Tl) gamma-ray spectra to dose rates at ground level has been developed. By weighting the channel count rates with the channel numbers, a spectrum dose index (SDI) is calculated for each spectrum. Ground-level dose rates are then determined by multiplying the SDI by an altitude-dependent conversion factor. The conversion factors are determined from spectra based on Monte Carlo calculations. The results are compared with measurements in a laboratory calibration set-up. IT-NT-27. June 1996. 27 p.
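The SDI computation described in this abstract reduces to a one-line weighted sum; here is a sketch (the spectrum and conversion factor below are made up for illustration, not the report's calibration values):

```python
# Spectrum dose index (SDI): weight each channel's count rate by its channel
# number, then convert to a ground-level dose rate with an altitude-dependent
# conversion factor.

def spectrum_dose_index(count_rates):
    """SDI = sum over channels of (channel number * count rate)."""
    return sum(ch * rate for ch, rate in enumerate(count_rates, start=1))

def ground_level_dose_rate(count_rates, altitude_factor):
    """Dose rate = SDI * altitude-dependent conversion factor."""
    return spectrum_dose_index(count_rates) * altitude_factor

counts = [0.0, 5.2, 3.1, 1.4]   # hypothetical count rates per channel
dose = ground_level_dose_rate(counts, altitude_factor=2.0e-4)
```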
A simple and effective method for detecting precipitated proteins in MALDI-TOF MS.
Oshikane, Hiroyuki; Watabe, Masahiko; Nakaki, Toshio
2018-04-01
MALDI-TOF MS has developed rapidly into an essential analytical tool for the life sciences. Cinnamic acid derivatives are generally employed in routine molecular weight determinations of intact proteins using MALDI-TOF MS. However, a protein of interest may precipitate when mixed with matrix solution, perhaps preventing MS detection. We herein provide a simple approach to enable the MS detection of such precipitated protein species by means of a "direct deposition method" -- loading the precipitant directly onto the sample plate. It is thus expected to improve routine MS analysis of intact proteins. Copyright © 2018. Published by Elsevier Inc.
Simple emittance measurement of negative hydrogen ion beam using pepper-pot method
International Nuclear Information System (INIS)
Hamabe, M.; Tsumori, K.; Takeiri, Y.; Kaneko, O.; Asano, E.; Kawamoto, T.; Kuroda, T.; Guharay, S.K.
1997-01-01
A simple apparatus for emittance measurement using the pepper-pot method has been developed. The pepper-pot patterns are directly exposed and recorded on a Kapton foil. Using this apparatus, the emittance of the negative hydrogen (H−) beam from the large negative ion source was measured; this source is the 1/3-scale test device for the negative-ion-based neutral beam injection (N-NBI) on the Large Helical Device (LHD). In this first trial, the 95% normalized emittance was measured as 0.59 mm mrad. (author)
DEFF Research Database (Denmark)
Seidelin, Jakob B; Horn, Thomas; Nielsen, Ole H
2003-01-01
Few comparative and validated reports exist on the isolation and growth of colonoscopically obtained colonic epithelium. The aim of this study was to develop and validate a simple method for the cultivation of colonoscopically obtained colonocytes. Forty patients who underwent routine colonoscopy, and in whom a diagnosis of irritable bowel syndrome was later reached, were included. Seven colon biopsies were taken and incubated for 10-120 min at temperatures of 4-37 degrees C in a chelating buffer. The epithelium was then harvested and cultivated under three different…
Velan, A. Senthilkumara; Joseph, J.; Raman, N.
2008-01-01
A simple, efficient and cost-effective method is described for the synthesis of Biginelli-type heterocyclic compounds, dihydropyrimidinone analogues. They were prepared from a reaction mixture of substituted benzaldehydes, thiourea and ethyl acetoacetate using ammonium dihydrogen phosphate as catalyst. The procedure is environmentally benign and safe, with advantages in terms of simple experimentation, catalyst reusability, product yields, shorter reaction times and avoidance of toxic solvents. The four newly synthesised compounds were tested for their antifungal activity and showed good activity compared to the standard (fluconazole). PMID:23997611
A simple method for the preparation of activated carbon fibers coated with graphite nanofibers.
Kim, Byung-Joo; Park, Soo-Jin
2007-11-15
A simple method is described for the preparation of activated carbon fibers (ACFs) coated with graphite nanofibers (GNFs). Low-pressure plasma treatment of the ACFs with a mixed gas (Ar/O2) led to the growth of GNFs on their surface. The growth was greater at higher power inputs, and TEM observations showed the GNFs to be of the herringbone type. The N2 adsorption capacity of the ACFs did not decrease sharply, and the volume resistivity of the ACFs was enhanced as a result of this treatment.
A simple method to prepare self-assembled organic-organic heterobilayers on metal substrates
Directory of Open Access Journals (Sweden)
L. D. Sun
2011-06-01
We demonstrate a simple self-assembly-based method to prepare organic-organic heterobilayers on a metal substrate. By either sequential or co-deposition of para-sexiphenyl (p-6P) and pentacene molecules onto the Cu(110) surface in ultrahigh vacuum, a p-6P/pentacene/Cu(110) heterobilayer is synthesized at room temperature. The layer sequence of the heterostructure is independent of the growth scenario, indicating that p-6P/pentacene/Cu(110) is a self-assembled structure with the lowest energy. In addition, the bilayer shows very high orientational ordering and is thermally stable up to 430 K.
Simple method for identifying doubly ionized uranium (U III) produced in a hollow-cathode discharge
International Nuclear Information System (INIS)
Piyakis, K.N.; Gagne, J.M.
1988-01-01
We have studied by emission spectroscopy the spectral properties of doubly ionized uranium, produced in a vapor generator of hollow-cathode design, as a function of the nature of a pure fill gas (helium, neon, argon, krypton, xenon) and its pressure. The spectral intensity is found to increase with increasing ionization potential of the discharge buffer gas, except in the case of helium. Based on our preliminary results, a simple and practical method for the positive identification of the complex U III spectrum is suggested
Tracing and quantifying groundwater inflow into lakes using a simple method for radon-222 analysis
Directory of Open Access Journals (Sweden)
T. Kluge
2007-09-01
Due to its high activities in groundwater, the radionuclide 222Rn is a sensitive natural tracer for detecting and quantifying groundwater inflow into lakes, provided the comparatively low activities in the lakes can be measured accurately. Here we present a simple method for radon measurements in the low-level range, down to 3 Bq m−3, appropriate for groundwater-influenced lakes, together with a concept for deriving inflow rates from the radon budget of a lake. The analytical method is based on a commercially available radon detector and combines the advantages of established procedures with regard to efficient sampling and sensitive analysis. Large-volume (12 l) water samples are taken in the field and analyzed in the laboratory by equilibration with a closed air loop and alpha spectrometry of radon in the gas phase. After successful laboratory tests, the method was applied to a small dredging lake without surface inflow or outflow in order to estimate the groundwater contribution to the hydrological budget. The inflow rate calculated from a 222Rn balance for the lake is around 530 m³ per day, which is comparable to the results of previous studies. In addition to the inflow rate, the vertical and horizontal radon distribution in the lake provides information on the spatial distribution of groundwater inflow. The simple measurement and sampling technique encourages further use of radon to examine groundwater-lake water interaction.
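A radon-budget inflow estimate of the kind described above can be sketched under strong simplifying assumptions of our own (steady state, well-mixed lake, radioactive decay as the only radon sink; gas exchange and outflow ignored; all numbers hypothetical, not the paper's):

```python
import math

# Schematic steady-state radon balance: groundwater inflow carrying activity
# concentration c_gw balances radioactive decay of the lake's radon inventory.
RN222_HALF_LIFE_DAYS = 3.82
LAMBDA = math.log(2) / RN222_HALF_LIFE_DAYS   # decay constant, 1/day

def inflow_rate(c_lake, c_gw, volume):
    """Groundwater inflow (m^3/day) from a decay-only radon balance.
    c_lake, c_gw: radon activity concentrations (Bq/m^3); volume: lake volume (m^3)."""
    return LAMBDA * c_lake * volume / (c_gw - c_lake)

# hypothetical numbers for illustration only
q = inflow_rate(c_lake=50.0, c_gw=5000.0, volume=1.5e5)
```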
Control of Solar Power Plants Connected Grid with Simple Calculation Method on Residential Homes
Kananda, Kiki; Nazir, Refdinal
2017-12-01
Solar energy is among the renewable energy sources applicable in virtually all regions. Solar power plants can be built either connected to an existing power grid or stand-alone. For supporting residential electricity where a power grid is present, a small-scale solar power plant is very appropriate. However, a general constraint of solar power plants is their still-low efficiency. This study therefore explains how to control the power of a solar power plant more optimally, driving the reactive power toward zero to raise efficiency. It continues previous research that used the Newton-Raphson control method. Here we introduce a simple method using ordinary mathematical calculations based on the standard solar equations. In this model, 10 PV modules of type ND T060M1 with a capacity of 60 Wp are used. The calculations, performed in MATLAB Simulink, give excellent results: the PCC voltage remains stable at approximately 220 V. At a maximum irradiation of 1000 W/m2, the reactive power Q of the solar generating system reaches at most 20.48 var and the maximum active power is 417.5 W. At lower irradiation, the reactive power Q is almost zero (0.77 var). This simple mathematical method can thus provide power control of excellent quality.
A Simple Method for Assessing Severity of Common Root Rot on Barley
Directory of Open Access Journals (Sweden)
Mohammad Imad Eddin Arabi
2013-12-01
Common root rot, caused by Cochliobolus sativus, is a serious disease of barley. A simple and reliable method for assessing this disease would enhance our capacity to identify resistance sources and develop resistant barley cultivars. In search of such a method, a conidial suspension of C. sativus was dropped onto sterilized elongated subcrown internodes, which were incubated in sandwich filter paper inside transparent polyethylene envelopes. Initial disease symptoms were easily detected 48 h after inoculation. Highly significant correlation coefficients were found in each experiment (A, B and C) between the sandwich filter paper and seedling assays, indicating that this testing procedure is reliable. The method facilitates rapid pre-selection under uniform conditions, which is important from a breeder's point of view.
Simple method for functionalization of silica with alkyl silane and organic ligands
Directory of Open Access Journals (Sweden)
Kasim Mohammed Hello
2018-06-01
3-(Chloropropyl)triethoxysilane (CPTES) with imidazole and sodium silicate from rice husk ash (RHA) reacted successfully within a short time in a one-pot synthesis under purely homogeneous conditions. A similar procedure was used for the immobilization of melamine and saccharine, demonstrating a generally applicable method. No reflux was needed, and a green solvent was used as the reaction medium. The surface areas of the prepared materials were very high compared with similarly structured materials prepared by the traditional method. TGA/DTA confirmed that all the materials were highly stable. FT-IR showed that all the expected functional groups were present. HRTEM showed that the materials had ordered, straight mesoporous channels similar to those of MCM-41. The synthesis procedure is simple, repeatable with different organic ligands, gives high product yields, and does not require toxic solvents or multiple steps. Keywords: Rice husk ash, MCM-41, Imidazole, Melamine, Saccharine
Simple measurement of 14C in the environment using a gel suspension method
International Nuclear Information System (INIS)
Wakabayashi, G.; Ohura, H.; Okai, T.; Matoba, M.
1999-01-01
A simple analytical method for environmental 14C using a low-background liquid scintillation counter was developed. A new gelling agent, N-lauroyl-L-glutamic-α,γ-dibutylamide, was used for the liquid scintillation counting of 14C as CaCO3 (gel suspension method). Our sample preparation procedure is much simpler than that of conventional methods and requires no special equipment. Samples prepared with a CaCO3 standard were measured to evaluate the self-absorption of the sample, the optimum counting conditions and the detection limit. Our results indicate that the newly developed technique can be applied efficiently to the monitoring of environmental 14C. (author)
Directory of Open Access Journals (Sweden)
VLADIMÍR PITSCHMANN
2007-10-01
A simple visual and tristimulus colorimetric method (three-dimensional system CIE-L*a*b*) for the determination of trace amounts of diphosgene in air has been developed. The method is based on drawing diphosgene vapors through a modified cotton fabric filter fixed in a special adapter. Prior to analysis, the filter is saturated with a chromogenic reagent based on 4-(p-nitrobenzyl)pyridine. The optimal composition of the reagent is 2 g of 4-(p-nitrobenzyl)pyridine and 4 g of N-phenylbenzylamine in 100 ml of a 50:50 ethanol-glycerol mixture. The intensity of the red coloration formed on the filter is evaluated visually or with a tristimulus colorimeter (LMG 173, Lange, Germany). The detection limit is 0.01 mg m-3. Acetyl chloride and benzoyl chloride interfere only at 150- and 50-times higher concentrations, respectively. The method is suitable for mobile field analysis.
Directory of Open Access Journals (Sweden)
Dario Modenini
2018-01-01
We propose a simple and relatively inexpensive method for determining the center of gravity (CoG) of a small spacecraft. The method, which belongs to the class of suspension techniques, is based on dual-axis inclinometer readings. By performing two consecutive suspensions from two different points, the CoG is determined, ideally, as the intersection of two lines uniquely defined by the respective rotations. We performed an experimental campaign to verify the method and assess its accuracy. Using a quantitative error budget, we obtained an error distribution from simulations and verified it through experimental tests. The retrieved experimental error distribution agrees well with the simulation predictions, which in turn yield a CoG error norm smaller than 2 mm at the 95% confidence level.
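The geometric core of the suspension technique, finding the point closest to two suspension lines (which rarely intersect exactly in practice), can be sketched as follows; this is our own minimal implementation, not the authors' code:

```python
# Each suspension defines a line p + t*d through the suspension point p along
# the gravity direction d. The CoG estimate is the midpoint of the segment of
# closest approach between the two lines (standard least-squares formulas).

def sub(u, v): return [a - b for a, b in zip(u, v)]
def dot(u, v): return sum(a * b for a, b in zip(u, v))

def estimate_cog(p1, d1, p2, d2):
    """Point minimizing the summed squared distance to both lines."""
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel lines
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * x for p, x in zip(p1, d1)]   # closest point on line 1
    q2 = [p + s * x for p, x in zip(p2, d2)]   # closest point on line 2
    return [(u + v) / 2 for u, v in zip(q1, q2)]
```

For intersecting lines the two closest points coincide, so the midpoint is the exact intersection.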
About the method of approximation of a simple closed plane curve with a sharp edge
Directory of Open Access Journals (Sweden)
Zelenyy A.S.
2017-02-01
As noted in the article, the problem of interpolating a simple plane curve initially arose in the simulation of subsonic flow around a body, with subsequent calculation of the velocity potential using the vortex panel method. As it turned out, however, the practical importance of the method is much wider. The algorithm can be applied successfully in any task that requires a discrete set of points describing an arbitrary curve: the potential function method, flow around a contour with a sharp trailing edge (airfoil, liquid drop, etc.), analytic expressions that are very difficult to obtain, font and logo creation, and some tasks in architecture and the garment industry.
A simple component-connection method for building binary decision diagrams encoding a fault tree
International Nuclear Information System (INIS)
Way, Y.-S.; Hsia, D.-Y.
2000-01-01
A simple new method for building binary decision diagrams (BDDs) encoding a fault tree (FT) is provided in this study. We first decompose the FT into FT-components, each of which is a single-descendant (SD) gate sequence. Following the node-connection rule, the BDD-component encoding an SD FT-component can be found to be an SD node sequence. By successively connecting the BDD-components one by one, the BDD for the entire FT is obtained. During node connection and component connection, reduction rules may need to be applied. An example FT is used throughout the article to explain the procedure step by step. The proposed method is a hybrid approach to FT analysis: some algorithms or techniques from conventional FT analysis or the newer BDD approach may be applied to our case, and the ideas presented here may in turn be useful to those two methods.
Directory of Open Access Journals (Sweden)
Dicky Nofriansyah
2017-10-01
This research focuses on how the Simple Multi-Attribute Rating Technique (SMART) can be used in a desktop-based decision support system to solve a multi-criteria selection problem, in particular scholarship selection. The Merkle-Hellman method is used to secure the results of the choices made by the SMART process. Determining the recipients of PPA and BBP-PPA scholarships at STMIK Triguna Dharma is problematic because the decision takes a long time. By adopting the SMART method, the application can make decisions quickly and precisely. The expected result of this research is an application that overcomes the problems concerning the determination of PPA and BBP-PPA scholarship recipients and assists the Student Affairs office of STMIK Triguna Dharma in making fast and accurate decisions.
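The SMART scoring scheme itself is simple enough to sketch generically (this is not the paper's application code; the candidate data, criteria and weights below are invented for illustration):

```python
# SMART: normalize each criterion to [0, 1] across candidates, then rank by
# weighted sum of the normalized scores.

def smart_scores(candidates, weights):
    """candidates: {name: [criterion values]}; weights should sum to 1."""
    cols = list(zip(*candidates.values()))          # one tuple per criterion
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    def norm(v, j):
        return 0.0 if hi[j] == lo[j] else (v - lo[j]) / (hi[j] - lo[j])
    return {
        name: sum(w * norm(v, j) for j, (v, w) in enumerate(zip(vals, weights)))
        for name, vals in candidates.items()
    }

# hypothetical applicants scored on GPA and financial-need level
applicants = {"A": [3.8, 2], "B": [3.2, 5], "C": [3.5, 3]}
ranking = smart_scores(applicants, weights=[0.6, 0.4])
```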
Note: A simple image processing based fiducial auto-alignment method for sample registration.
Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne
2015-08-01
A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.
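The measurement the note automates (center and diameter of a circular fiducial from an image) can be illustrated with a toy, library-free sketch of our own; the note's actual software uses MATLAB/LabVIEW image processing, and the tiny binary "image" here is hand-made:

```python
import math

# Locate a fiducial in a thresholded (0/1) image via the blob centroid, and
# estimate its diameter as the equivalent-circle diameter from the blob area.

def fiducial_center_and_diameter(image):
    """image: 2D list of 0/1 pixels; returns ((row, col) centroid, diameter)."""
    pts = [(r, c) for r, row in enumerate(image)
           for c, v in enumerate(row) if v]
    area = len(pts)
    cr = sum(r for r, _ in pts) / area
    cc = sum(c for _, c in pts) / area
    diameter = 2.0 * math.sqrt(area / math.pi)   # equivalent-circle diameter
    return (cr, cc), diameter

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
center, d = fiducial_center_and_diameter(img)
```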
Directory of Open Access Journals (Sweden)
S. Alexis Paz
2018-03-01
In this work, we study the influence of hidden barriers on the convergence behavior of three free-energy calculation methods: well-tempered metadynamics (WTMD), adaptive biasing forces (ABF), and on-the-fly parameterization (OTFP). We construct a simple two-dimensional potential-energy surface (PES) that allows an exact analytical result for the free energy along any one-dimensional order parameter. We then chose different CV definitions and PES parameters to create three systems with increasing sampling challenges. We find that none of the three methods is greatly affected by hidden barriers in the simplest case considered. The adaptive sampling methods sample faster, while the auxiliary high-friction requirement of OTFP makes it slower in this case. However, a slight change in the CV definition has a strong impact on ABF and WTMD performance, illustrating the importance of choosing suitable collective variables.
A simple orbit-attitude coupled modelling method for large solar power satellites
Li, Qingjun; Wang, Bo; Deng, Zichen; Ouyang, Huajiang; Wei, Yi
2018-04-01
A simple modelling method is proposed to study the orbit-attitude coupled dynamics of large solar power satellites based on natural coordinate formulation. The generalized coordinates are composed of Cartesian coordinates of two points and Cartesian components of two unitary vectors instead of Euler angles and angular velocities, which is the reason for its simplicity. Firstly, in order to develop natural coordinate formulation to take gravitational force and gravity gradient torque of a rigid body into account, Taylor series expansion is adopted to approximate the gravitational potential energy. The equations of motion are constructed through constrained Hamilton's equations. Then, an energy- and constraint-conserving algorithm is presented to solve the differential-algebraic equations. Finally, the proposed method is applied to simulate the orbit-attitude coupled dynamics and control of a large solar power satellite considering gravity gradient torque and solar radiation pressure. This method is also applicable to dynamic modelling of other rigid multibody aerospace systems.
A novel and simple fabrication method of embedded SU-8 micro channels by direct UV lithography
International Nuclear Information System (INIS)
Fu, C; Hung, C; Huang, H
2006-01-01
In this paper, we present a novel and simple method to fabricate embedded micro channels. The method is based on the different light absorption properties of the SU-8 thick photoresist at different incident UV wavelengths. The channel structures are defined by the ordinary I-line, while the cover layer is patterned by deep UV. Because the deep UV is obtained directly on the same aligner with a set of filter mirrors, the embedded channel can be produced easily without other special facilities. In addition, the relationship between the thickness of the top layer and the deep-UV exposure dose has been measured in an ingeniously designed experiment, so a specific top-layer thickness for the embedded micro channel can be obtained with the corresponding deep-UV exposure dose. Furthermore, many useful mechanical structures have been realized by this method, and the material properties of the top layer have also been measured.
The accuracy of nondestructive optical methods for chlorophyll (Chl) assessment based on leaf spectral characteristics depends on the wavelengths used for Chl assessment. Using spectroscopy, the optimum wavelengths for Chl assessment (OWChl) were determined for almond, poplar, and apple trees grown ...
A simple and efficient total genomic DNA extraction method for individual zooplankton.
Fazhan, Hanafiah; Waiho, Khor; Shahreza, Md Sheriff
2016-01-01
Molecular approaches are widely applied in species identification and taxonomic studies of minute zooplankton. One of the most intensively studied zooplankton groups is the subclass Copepoda. Accurate molecular species identification of all life stages of the generally small-sized copepods is important, especially for taxonomic and systematic assessment of harpacticoid copepod populations and for understanding their dynamics within the marine community. However, total genomic DNA (TGDNA) extraction from individual harpacticoid copepods can be problematic due to their small size and epibenthic behavior. In this research, six TGDNA extraction methods applied to individual harpacticoid copepods were compared. A new simple, feasible, efficient and consistent TGDNA extraction method was designed and compared with a commercial kit and with modified versions of available TGDNA extraction methods. The newly described "incubation in PCR buffer" method yielded good and consistent results, with a higher PCR amplification success rate (82%) than the other methods. Given its consistency and economy, the "incubation in PCR buffer" method is highly recommended for TGDNA extraction from other minute zooplankton species.
A Simple Method of Spectrum Processing for β-ray Measurement without Pretreatment
Energy Technology Data Exchange (ETDEWEB)
Bae, Jun Woo; Kim, Hee Reyoung [UNIST, Ulsan (Korea, Republic of)
2016-10-15
Radioactivity analysis of β-emitting radionuclides is important because of the overexposure risk they pose. γ-rays are routinely measured with conventional detectors such as NaI(Tl) or high-purity germanium (HPGe) detectors, but β-rays are hard to detect with these instruments because of their short range. Liquid scintillation counting (LSC) has therefore been used to measure the radioactivity of pure beta emitters; although LSC has high detection efficiency for low-energy β-rays, it produces large amounts of organic waste. To address this problem, β-ray measurement in a plastic scintillator was characterized in this study. There have been several studies on plastic scintillators for measuring β-rays without the liquid scintillation method; a plastic scintillator is well suited to β-ray detection because of its relatively low effective atomic number. β-ray and γ-ray spectra in a cylindrical plastic scintillator were analyzed, and a simple method for separating the β-ray spectrum was suggested. The method was verified with a chi-square test estimating the difference between calculated and measured spectra, and was successfully applied using a disc source. In future work, a practical radioactive source will be used to acquire the pulse-height spectrum. Once verified for practical purposes, the method can be used for measurement of pure β emitters without pretreatment.
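The chi-square verification step mentioned in this abstract can be sketched as follows; the channel counts below are invented for illustration, not data from the study.

```python
def chi_square(measured, calculated):
    """Pearson chi-square between two binned spectra (expected = calculated)."""
    return sum((m - c) ** 2 / c for m, c in zip(measured, calculated) if c > 0)

# Invented channel counts for a 6-bin pulse-height spectrum
measured = [102, 210, 395, 290, 155, 60]
calculated = [100, 200, 400, 300, 150, 62]

chi2 = chi_square(measured, calculated)
dof = len(measured) - 1
reduced = chi2 / dof   # a reduced chi-square near 1 suggests agreement
                       # within counting statistics
```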
Mikaeili, F; Kia, E B; Sharbatkhori, M; Sharifdini, M; Jalalizand, N; Heidari, Z; Zarei, Z; Stensvold, C R; Mirhendi, H
2013-06-01
Six simple methods for extraction of ribosomal and mitochondrial DNA from Toxocara canis, Toxocara cati and Toxascaris leonina were compared by evaluating the presence, appearance and intensity of PCR products visualized on agarose gels and amplified from DNA extracted by each of the methods. For each species, two isolates were obtained from the intestines of their respective hosts: T. canis and T. leonina from dogs, and T. cati from cats. For all isolates, total DNA was extracted using six different methods, including grinding, boiling, crushing, beating, freeze-thawing and the use of a commercial kit. To evaluate the efficacy of each method, the internal transcribed spacer (ITS) region and the cytochrome c oxidase subunit 1 (cox1) gene were chosen as representative markers for ribosomal and mitochondrial DNA, respectively. Among the six DNA extraction methods, the beating method was the most cost effective for all three species, followed by the commercial kit. Both methods produced high intensity bands on agarose gels and were characterized by no or minimal smear formation, depending on gene target; however, beating was less expensive. We therefore recommend the beating method for studies where costs need to be kept at low levels. Copyright © 2013 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Hukharnsusatrue, A.
2005-11-01
Full Text Available The objective of this research is to compare methods for estimating multiple regression coefficients in the presence of multicollinearity among the independent variables. The estimation methods are the Ordinary Least Squares method (OLS), the Restricted Least Squares method (RLS), the Restricted Ridge Regression method (RRR) and the Restricted Liu method (RL), considered both when the restrictions are true and when they are not. The study used the Monte Carlo simulation method, with the experiment repeated 1,000 times under each situation. The results are as follows. CASE 1: The restrictions are true. In all cases, the RRR and RL methods have a smaller Average Mean Square Error (AMSE) than the OLS and RLS methods, respectively. The RRR method provides the smallest AMSE when the level of correlation is high, and also for all levels of correlation and all sample sizes when the standard deviation equals 5. The RL method provides the smallest AMSE when the level of correlation is low or middle, except that with standard deviation equal to 3 and small sample sizes the RRR method provides the smallest AMSE. The AMSE increases with, in decreasing order of influence, the level of correlation, the standard deviation and the number of independent variables, but decreases with sample size. CASE 2: The restrictions are not true. In all cases the RRR method provides the smallest AMSE, except that with standard deviation equal to 1 and restriction error equal to 5%, the OLS method provides the smallest AMSE when the level of correlation is low or middle and the sample size is large, while for small sample sizes the RL method provides the smallest AMSE. In addition, when the restriction error increases, the OLS method provides the smallest AMSE for all levels of correlation and all sample sizes, except when the level of correlation is high and the sample size is small. Moreover, in the cases where the OLS method provides the smallest AMSE, the RLS method mostly has a smaller AMSE than
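As a rough illustration of why ridge-type estimators help under multicollinearity (a minimal sketch with invented data, not the restricted estimators or simulation design of the study), compare OLS and ridge coefficients on two nearly collinear predictors:

```python
def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ (x, y) = (e, f) by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Nearly collinear design: x2 is almost a copy of x1 (invented data).
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.01, 1.99, 3.02, 3.98, 5.01]
y = [2.0, 4.1, 5.9, 8.1, 10.0]   # roughly y = x1 + x2

def fit(ridge_k=0.0):
    """Solve the (optionally ridge-penalized) normal equations."""
    s11 = sum(a * a for a in x1) + ridge_k
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2) + ridge_k
    sy1 = sum(a * c for a, c in zip(x1, y))
    sy2 = sum(b * c for b, c in zip(x2, y))
    return solve2(s11, s12, s12, s22, sy1, sy2)

b_ols = fit()          # wildly unstable, opposite-signed coefficients
b_ridge = fit(0.5)     # shrunken toward the stable solution near (1, 1)
```

The tiny ridge penalty trades a little bias for a large drop in variance, which is the mechanism behind the smaller AMSE of the ridge-type estimators reported above.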
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
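The DREAM algorithm itself is beyond a short sketch, but the idea of MCMC sampling behind Bayesian credible intervals can be illustrated with a bare-bones random-walk Metropolis sampler (illustrative only; UCODE_2014 uses the multi-chain DREAM algorithm, not this sampler):

```python
import math
import random

def log_post(x):
    """Log-density of the target "posterior" (standard normal), up to a constant."""
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose a jump, accept with the Metropolis ratio."""
    rng = random.Random(seed)
    x = 0.0
    chain = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, exp(log_post(proposal) - log_post(x)))
        if rng.random() < math.exp(min(0.0, log_post(proposal) - log_post(x))):
            x = proposal
        chain.append(x)
    return chain

chain = metropolis(20000)[2000:]   # discard burn-in
srt = sorted(chain)
lo = srt[int(0.025 * len(srt))]
hi = srt[int(0.975 * len(srt))]
# (lo, hi) approximates the 95% credible interval of the target
```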
CSIR Research Space (South Africa)
Gregor, Luke
2017-12-01
Full Text Available understanding with spatially integrated air–sea flux estimates (Fay and McKinley, 2014). Conversely, ocean biogeochemical process models are good tools for mechanistic understanding, but fail to represent the seasonality of CO2 fluxes in the Southern Ocean... of including coordinate variables as proxies of ΔpCO2 in the empirical methods. In the intercomparison study by Rödenbeck et al. (2015) proxies typically include, but are not limited to, sea surface temperature (SST), chlorophyll a (Chl a), mixed layer...
International Nuclear Information System (INIS)
Sambou, Soussou
2004-01-01
In flood forecasting modelling, large basins are often considered as hydrological systems with multiple inputs and one output. Inputs are hydrological variables such as rainfall and runoff, together with physical characteristics of the basin; the output is runoff. Relating inputs to output can be achieved using deterministic, conceptual or stochastic models. Rainfall-runoff models generally lack accuracy, and physically based hydrological models, whether deterministic or conceptual, are highly data-demanding and very complex. Stochastic multiple-input/single-output models, which use only historical records of hydrological variables, particularly runoff, are therefore very popular among hydrologists for flood forecasting on large river basins. An application is made on the Senegal River upstream of Bakel, where the river is formed by the main branch, the Bafing, and two tributaries, the Bakoye and the Faleme, the Bafing being regulated by the Manantali Dam. A three-input, one-output model has been used for flood forecasting at Bakel. The influence of the forecasting lead time, and of the three inputs taken separately, then associated two by two, and finally all together, has been verified using a dimensionless variance as the criterion of quality. Discrepancies generally occur between model output and observations; to bring the model into better compliance with current observations, we have compared several parameter-updating procedures (recursive least squares, Kalman filtering, a stochastic gradient method and an iterative method) and an AR error-forecasting model. A combination of these updating schemes has been used in real-time flood forecasting. (Author)
High cycle fatigue test and regression methods of S-N curve
International Nuclear Information System (INIS)
Kim, D. W.; Park, J. Y.; Kim, W. G.; Yoon, J. H.
2011-11-01
The fatigue design curves in the ASME Boiler and Pressure Vessel Code Section III are based on the assumption that fatigue life is infinite beyond 10^6 cycles. This is because, until recent decades, standard fatigue testing equipment was limited in speed to less than 200 cycles per second. Traditional servo-hydraulic machines work at frequencies of about 50 Hz. Servo-hydraulic machines working at 1000 Hz have been developed since 1997; these machines allow high frequencies, with guaranteed displacements of up to ±0.1 mm and dynamic loads of ±20 kN. The frequency of resonant fatigue test machines is 50-250 Hz. Various forced-vibration-based systems work at 500 Hz or 1.8 kHz. Rotating bending machines allow testing frequencies of 0.1-200 Hz. The main advantage of ultrasonic fatigue testing at 20 kHz is that very long (gigacycle) fatigue lives can be reached in a practical testing time. Although the S-N curve is determined by experiment, the fatigue strength corresponding to a given fatigue life should be determined by statistical methods that account for the scatter of fatigue properties. In this report, statistical methods for the evaluation of fatigue test data are investigated.
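The statistical treatment of S-N data commonly starts from a Basquin-type power law, S = A·N^b, fitted by least squares on log-transformed data; a minimal sketch with invented data points (not values from the report):

```python
import math

# (cycles to failure, stress amplitude in MPa) - invented illustrative points
data = [(1e4, 400.0), (1e5, 320.0), (1e6, 255.0), (1e7, 205.0)]

xs = [math.log10(n) for n, s in data]
ys = [math.log10(s) for n, s in data]
xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)

# Least-squares slope/intercept in log-log space: log S = log A + b log N
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
A = 10 ** (ybar - b * xbar)

def stress_at(cycles):
    """Fatigue strength (MPa) predicted by the fitted curve at a given life."""
    return A * cycles ** b
```

A full statistical evaluation would additionally model the scatter about this mean curve (e.g. design curves at a lower tolerance bound), which is the subject of the report.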
Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz
2015-01-01
In this paper, a simple and cost-effective method was developed for extraction and pre-concentration of carmine in food samples by using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as the extracting solvent. The effects of the main parameters, such as solution pH, surfactant and salt concentrations, incubation time and temperature, were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution, with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. The relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate that the proposed method can be applied satisfactorily to determine carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
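The calibration workflow behind figures like these (a linear fit over the working range, then detection limits from the blank scatter) can be sketched as follows. The responses and blank standard deviation are invented, and the 3.3/10 factors are the common ICH-style convention, assumed rather than taken from the paper:

```python
# Invented calibration data: concentration (ug/mL) vs. instrument response
conc = [0.04, 0.5, 1.0, 2.0, 5.0]
resp = [0.009, 0.101, 0.199, 0.402, 0.999]

n = len(conc)
xbar = sum(conc) / n
ybar = sum(resp) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(conc, resp)) / \
        sum((x - xbar) ** 2 for x in conc)
intercept = ybar - slope * xbar

s_blank = 0.0007               # assumed std. dev. of blank responses
lod = 3.3 * s_blank / slope    # limit of detection
loq = 10.0 * s_blank / slope   # limit of quantification

def concentration(response):
    """Invert the calibration line to estimate an unknown's concentration."""
    return (response - intercept) / slope
```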
A simple method of genomic DNA extraction suitable for analysis of bulk fungal strains.
Zhang, Y J; Zhang, S; Liu, X Z; Wen, H A; Wang, M
2010-07-01
A simple and rapid method (designated thermolysis) for extracting genomic DNA from bulk fungal strains was described. In the thermolysis method, a few mycelia or yeast cells were first rinsed with pure water to remove potential PCR inhibitors and then incubated in a lysis buffer at 85 degrees C to break down cell walls and membranes. This method was used to extract genomic DNA from large numbers of fungal strains (more than 92 species, 35 genera of three phyla) isolated from different sections of natural Ophiocordyceps sinensis specimens. Regions of interest from high as well as single-copy number genes were successfully amplified from the extracted DNA samples. The DNA samples obtained by this method can be stored at -20 degrees C for over 1 year. The method was effective, easy and fast and allowed batch DNA extraction from multiple fungal isolates. Use of the thermolysis method will allow researchers to obtain DNA from fungi quickly for use in molecular assays. This method requires only minute quantities of starting material and is suitable for diverse fungal species.
Karlitasari, L.; Suhartini, D.; Benny
2017-01-01
The process of determining employee remuneration at PT Sepatu Mas Idaman currently still uses a Microsoft Excel-based spreadsheet in which the criteria values must be calculated for every employee. This introduces uncertainty into the assessment process and makes it take much longer. Employee remuneration is determined by an assessment team based on predetermined criteria, namely ability to work, human relations, job responsibility, discipline, creativity, work output, achievement of targets, and absence. To make the determination of employee remuneration more efficient and effective, the Simple Additive Weighting (SAW) method is used. The SAW method supports decision making for such cases: the alternative whose calculation yields the greatest value is chosen as the best. In addition to SAW, the CPI method, a decision-making calculation based on a performance index, was also applied; the SAW method was 89-93% faster than the CPI method. It is therefore expected that this application can serve as evaluation material for the training and development needed to make employee performance more optimal.
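A minimal sketch of the SAW calculation (employee names, scores and weights are invented): normalize each benefit criterion against its column maximum, then rank the alternatives by weighted sum.

```python
# Weights for four benefit criteria; must sum to 1 (invented values)
weights = [0.3, 0.2, 0.2, 0.3]

# Rows: alternatives (employees); columns: raw scores per criterion
scores = {
    "emp_a": [80, 70, 90, 60],
    "emp_b": [75, 85, 70, 80],
    "emp_c": [90, 60, 80, 70],
}

# Benefit criteria are normalized by the column maximum (cost criteria
# would instead use min/value).
col_max = [max(row[j] for row in scores.values()) for j in range(len(weights))]

def saw_score(row):
    """Weighted sum of max-normalized criterion values."""
    return sum(w * (v / m) for w, v, m in zip(weights, row, col_max))

ranked = sorted(scores, key=lambda k: saw_score(scores[k]), reverse=True)
# ranked[0] is the alternative with the greatest SAW value
```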
Variable selection methods in PLS regression - a comparison study on metabolomics data
DEFF Research Database (Denmark)
Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach
Due to the high number of variables in data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different data sets of metabolomics data need to be related in an integrated approach. Variable selection (or removal of irrelevant variables) is therefore a key step. Different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection prior... The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomics approach. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when...
Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat
2015-01-01
Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
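The core mechanism of decision-tree regression can be illustrated with a depth-1 tree (a "stump") that picks the single split minimizing squared error; the descriptor and response values below are invented, not drawn from the paper's data set:

```python
def best_stump(xs, ys):
    """Return (threshold, left_mean, right_mean) minimizing squared error."""
    best = None
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        lm = sum(left) / len(left)
        rm = sum(right) / len(right)
        err = sum((y - lm) ** 2 for y in left) + \
              sum((y - rm) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]

# x: one molecular descriptor; y: a log-scale response (invented values)
x = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
y = [0.2, 0.3, 0.25, 1.1, 1.2, 1.0]
t, lm, rm = best_stump(x, y)
# Prediction rule: lm if descriptor < t, else rm
```

A full regression tree applies this split search recursively to each side, and ensemble methods such as Bagging average many such trees.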
A simple gamma spectrometry method for evaluating the burnup of MTR-type HEU fuel elements
Energy Technology Data Exchange (ETDEWEB)
Makmal, T. [The Unit of Nuclear Engineering, Ben-Gurion University of The Negev, Beer-Sheva 84105 (Israel); Nuclear Physics and Engineering Division, Soreq Nuclear Research Center, Yavne 81800 (Israel); Aviv, O. [Radiation Safety Division, Soreq Nuclear Research Center, Yavne 81800 (Israel); Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of The Negev, Beer-Sheva 84105 (Israel)
2016-10-21
A simple method for the evaluation of the burnup of a materials testing reactor (MTR) fuel element by gamma spectrometry is presented. The method was applied to a highly enriched uranium MTR nuclear fuel element that was irradiated in a 5 MW pool-type research reactor for a total period of 34 years. The experimental approach is based on in-situ measurements of the MTR fuel element in the reactor pool by a portable high-purity germanium detector located in a gamma cell. To corroborate the method, analytical calculations (based on the irradiation history of the fuel element) and computer simulations using the dedicated fuel cycle burnup code ORIGEN2 were performed. The burnup of the MTR fuel element was found to be 52.4±8.8%, which is in good agreement with the analytical calculations and the computer simulations. The method presented here is suitable for research reactors with either a regular or an irregular irradiation regime and for reactors with limited infrastructure and/or resources. In addition, its simplicity and the enhanced safety it confers may render this method suitable for IAEA inspectors in fuel element burnup assessments during on-site inspections. - Highlights: • Simple, inexpensive, safe and flexible experimental setup that can be quickly deployed. • Experimental results are thoroughly corroborated against ORIGEN2 burnup code. • Experimental uncertainty of 9% and 5% deviation between measurements and simulations. • Very high burnup MTR fuel element is examined, with 60% depletion of ²³⁵U. • Impact of highly irregular irradiation regime on burnup evaluation is studied.
Dai, Huanping; Micheyl, Christophe
2012-11-01
Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
Filtration Isolation of Nucleic Acids: A Simple and Rapid DNA Extraction Method.
McFall, Sally M; Neto, Mário F; Reed, Jennifer L; Wagner, Robin L
2016-08-06
FINA, filtration isolation of nucleic acids, is a novel extraction method which utilizes vertical filtration via a separation membrane and absorbent pad to extract cellular DNA from whole blood in less than 2 min. The blood specimen is treated with detergent, mixed briefly and applied by pipet to the separation membrane. The lysate wicks into the blotting pad due to capillary action, capturing the genomic DNA on the surface of the separation membrane. The extracted DNA is retained on the membrane during a simple wash step wherein PCR inhibitors are wicked into the absorbent blotting pad. The membrane containing the entrapped DNA is then added to the PCR reaction without further purification. This simple method does not require laboratory equipment and can be easily implemented with inexpensive laboratory supplies. Here we describe a protocol for highly sensitive detection and quantitation of HIV-1 proviral DNA from 100 µl whole blood as a model for early infant diagnosis of HIV that could readily be adapted to other genetic targets.
Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus
2016-02-01
We present a simple, fast method for thickness characterization of suspended graphene/graphite flakes that is based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam I0(t) as a function of the specimen thickness t and the mean free path λ of graphite. We show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a 001 zone-axis orientation, in a two-beam case and in a low-symmetry orientation. Subsequently, HR images (used to determine t) and bright-field images (to measure I0(0) and I0(t)) were acquired to experimentally determine λ. The experimental value measured in the low-symmetry orientation matches the calculated value (i.e., λ=225±9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of the orientation, and up to several tens of nanometers for a low-symmetry orientation. When compared with standard techniques for thickness determination of graphene/graphite, the method we propose has the advantage of being simple and fast, requiring only the acquisition of bright-field images.
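Assuming the usual exponential attenuation form I0(t) = I0(0)·exp(−t/λ) behind the linear approximation discussed in the abstract (the exact expression derived in the paper may differ), the thickness follows directly from the measured intensity ratio; a sketch using the reported λ = 225 nm:

```python
import math

LAM = 225.0  # nm, the value reported for the low-symmetry orientation

def thickness(i_ratio, lam=LAM):
    """Thickness t (nm) from the measured ratio I0(t)/I0(0), exponential law."""
    return -lam * math.log(i_ratio)

def thickness_linear(i_ratio, lam=LAM):
    """First-order (thin sample) approximation: I0(t)/I0(0) ~ 1 - t/lam."""
    return lam * (1.0 - i_ratio)

t_exact = thickness(0.99)        # exponential inversion
t_lin = thickness_linear(0.99)   # linear approximation, close when t << lam
```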
Slope stability and bearing capacity of landfills and simple on-site test methods.
Yamawaki, Atsushi; Doi, Yoichi; Omine, Kiyoshi
2017-07-01
This study discusses strength characteristics (slope stability, bearing capacity, etc.) of waste landfills through on-site tests carried out at 29 locations in 19 sites in Japan and three other countries, and proposes simple methods to test and assess the mechanical strength of landfills on site. The possibility of using a landfill site was also investigated by a full-scale eccentric loading test. Landfills containing plastics or other fibrous materials longer than about 10 cm were found to be resilient and hard to yield, and the on-site full-scale test showed that no differential settlement occurs. The repose angle test proposed as a simple on-site test method was confirmed to be a good indicator for slope stability assessment; it suggested that landfills with high, near-saturation water content have considerably poorer slope stability. The results of the repose angle test and the impact acceleration test were related to the internal friction angle and the cohesion, respectively. In addition, the air pore volume ratio measured by an on-site air pore volume ratio test is likely to be related to various strength parameters.
Bubble nucleation in simple and molecular liquids via the largest spherical cavity method
International Nuclear Information System (INIS)
Gonzalez, Miguel A.; Abascal, José L. F.; Valeriani, Chantal; Bresme, Fernando
2015-01-01
In this work, we propose a methodology to compute bubble nucleation free energy barriers using trajectories generated via molecular dynamics simulations. We follow the bubble nucleation process by means of a local order parameter, defined by the volume of the largest spherical cavity (LSC) formed in the nucleating trajectories. This order parameter simplifies considerably the monitoring of the nucleation events, as compared with previous approaches, which require ad hoc criteria to classify the atoms and molecules as liquid or vapor. The combination of the LSC and the mean first passage time technique can then be used to obtain the free energy curves. Upon computation of the cavity distribution function, the nucleation rate and free-energy barrier can be computed. We test our method against recent computations of bubble nucleation in simple liquids and water at negative pressures, and obtain free-energy barriers in good agreement with the previous works. The LSC method provides a versatile and computationally efficient route to estimate the volume of critical bubbles and the nucleation rate, and to compute bubble nucleation free energies in both simple and molecular liquids.
Use of eddy-covariance methods to "calibrate" simple estimators of evapotranspiration
Sumner, David M.; Geurink, Jeffrey S.; Swancar, Amy
2017-01-01
Direct measurement of actual evapotranspiration (ET) provides quantification of this large component of the hydrologic budget, but typically requires long periods of record and large instrumentation and labor costs. Simple surrogate methods of estimating ET, if "calibrated" to direct measurements of ET, provide a reliable means to quantify ET. Eddy-covariance measurements of ET were made for 12 years (2004-2015) at an unimproved bahiagrass (Paspalum notatum) pasture in Florida. These measurements were compared to annual rainfall derived from rain gage data and monthly potential ET (PET) obtained from a long-term (since 1995) U.S. Geological Survey (USGS) statewide, 2-kilometer, daily PET product. The annual proportion of ET to rainfall indicates a strong correlation (r2=0.86) to annual rainfall; the ratio increases linearly with decreasing rainfall. Monthly ET rates correlated closely (r2=0.84) to the USGS PET product. The results indicate that simple surrogate methods of estimating actual ET show positive potential in the humid Florida climate given the ready availability of historical rainfall and PET.
Vindras, Philippe; Desmurget, Michel; Baraduc, Pierre
2012-01-01
In science, it is a common experience to discover that although the investigated effect is very clear in some individuals, statistical tests are not significant because the effect is null or even opposite in other individuals. Indeed, t-tests, Anovas and linear regressions compare the average effect with respect to its inter-individual variability, so that they can fail to evidence a factor that has a high effect in many individuals (with respect to the intra-individual variability). In such paradoxical situations, statistical tools are at odds with the researcher's aim to uncover any factor that affects individual behavior, and not only those with stereotypical effects. In order to go beyond the reductive and sometimes illusory description of the average behavior, we propose a simple statistical method: applying a Kolmogorov-Smirnov test to assess whether the distribution of p-values provided by individual tests is significantly biased towards zero. Using Monte-Carlo studies, we assess the power of this two-step procedure with respect to RM Anova and multilevel mixed-effect analyses, and probe its robustness when individual data violate the assumption of normality and homoscedasticity. We find that the method is powerful and robust even with small sample sizes for which multilevel methods reach their limits. In contrast to existing methods for combining p-values, the Kolmogorov-Smirnov test has unique resistance to outlier individuals: it cannot yield significance based on a high effect in one or two exceptional individuals, which allows drawing valid population inferences. The simplicity and ease of use of our method facilitates the identification of factors that would otherwise be overlooked because they affect individual behavior in significant but variable ways, and its power and reliability with small sample sizes (<30-50 individuals) suggest it as a tool of choice in exploratory studies.
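The proposed two-step procedure can be sketched as follows (an illustrative implementation with invented p-values, not the authors' code; a real analysis would convert the one-sided Kolmogorov-Smirnov statistic into a significance level):

```python
def ks_stat_uniform(pvalues):
    """One-sided KS statistic D+ = max_i (i/n - p_(i)): how far the empirical
    CDF of the individual p-values sits above the Uniform(0, 1) CDF."""
    ps = sorted(pvalues)
    n = len(ps)
    return max((i + 1) / n - p for i, p in enumerate(ps))

# Step 1 (assumed already done): one test per individual yields a p-value each.
# Invented p-values biased toward zero, e.g. many small but real effects:
biased = [0.01, 0.03, 0.04, 0.08, 0.12, 0.20, 0.35, 0.60]
# Invented p-values compatible with no effect (roughly uniform):
flat = [0.11, 0.23, 0.35, 0.47, 0.58, 0.71, 0.84, 0.96]

# Step 2: test whether the p-value distribution is shifted toward zero.
d_biased = ks_stat_uniform(biased)   # large: evidence of a population effect
d_flat = ks_stat_uniform(flat)       # small: consistent with uniformity
```

Because the statistic depends on the whole distribution of p-values, one or two extreme individuals cannot drive it to significance, which is the outlier resistance highlighted in the abstract.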
International Nuclear Information System (INIS)
Shimazu, Yoichiro; Tashiro, Shoichi; Tojo, Masayuki
2017-01-01
The performance of two digital reactivity meters, one based on the conventional inverse kinetic method and the other one based on simple feedback theory, are compared analytically using their respective transfer functions. The latter one is proposed by one of the authors. It has been shown that the performance of the two reactivity meters become almost identical when proper system parameters are selected for each reactivity meter. A new correlation between the system parameters of the two reactivity meters is found. With this correlation, filter designers can easily determine the system parameters for the respective reactivity meters to obtain identical performance. (author)
Hoashi, Yohei; Tozuka, Yuichi; Takeuchi, Hirofumi
2013-01-01
The purpose of this study was to develop and test a novel and simple method for evaluating the disintegration time of rapidly disintegrating tablets (RDTs) in vitro, since the conventional disintegration test described in the pharmacopoeia produces poor results because its environmental conditions differ from those of an actual oral cavity. Six RDTs prepared in our laboratory and 5 types of commercial RDTs were used as model formulations. Using our original apparatus, a good correlation was observed between in vivo and in vitro disintegration times by adjusting the height from which the solution was dropped to 8 cm and the weight of the load to 10 or 20 g. Properties of RDTs, such as the pattern of their disintegration process, can be assessed by varying the load. These findings confirmed that our proposed in vitro disintegration test apparatus is an excellent one for estimating the disintegration time and disintegration profile of RDTs.
Directory of Open Access Journals (Sweden)
Young Shin Ryu
Multiplex genome engineering is a standalone recombineering tool for large-scale programming and accelerated evolution of cells. However, this advanced genome engineering technique has been limited to use in selected bacterial strains. We developed a simple and effective strain-independent method for effective genome engineering in Escherichia coli. The method involves introducing a suicide plasmid carrying the λ Red recombination system into the mutS gene. The suicide plasmid can be excised from the chromosome via selection in the absence of antibiotics, thus allowing transient inactivation of the mismatch repair system during genome engineering. In addition, we developed another suicide plasmid that enables integration of large DNA fragments into the lacZ genomic locus. These features enable this system to be applied in the exploitation of the benefits of genome engineering in synthetic biology, as well as the metabolic engineering of different strains of E. coli.
Simple estimating method of damages of concrete gravity dam based on linear dynamic analysis
Energy Technology Data Exchange (ETDEWEB)
Sasaki, T.; Kanenawa, K.; Yamaguchi, Y. [Public Works Research Institute, Tsukuba, Ibaraki (Japan). Hydraulic Engineering Research Group
2004-07-01
Due to the occurrence of large earthquakes like the Kobe Earthquake in 1995, there is a strong need to verify the seismic resistance of dams against much larger earthquake motions than those considered in the present design standard in Japan. Problems exist in using nonlinear analysis to evaluate the safety of dams, including that the assumed material properties have a large influence on the results, and that the results differ greatly according to the damage estimation models or analysis programs. This paper reports evaluation indices based on a linear dynamic analysis method and the characteristics of crack progression in concrete gravity dams of different shapes using a nonlinear dynamic analysis method. The study concludes that if a simple linear dynamic analysis is appropriately conducted to estimate tensile stress at potential crack initiation locations, the damage due to cracking can be roughly predicted. 4 refs., 1 tab., 13 figs.
A simple identification method for spore-forming bacteria showing high resistance against γ-rays
International Nuclear Information System (INIS)
Koshikawa, Tomihiko; Sone, Koji; Kobayashi, Toshikazu
1993-01-01
A simple identification method was developed for spore-forming bacteria that are highly resistant to γ-rays. Among the 23 species of Bacillus studied, the spores of Bacillus megaterium, B. cereus, B. thuringiensis, B. pumilus and B. aneurinolyticus showed high resistance to γ-rays compared with the spores of other Bacillus species. A combination of seven biochemical tests, namely the citrate utilization test, nitrate reduction test, starch hydrolysis test, Voges-Proskauer reaction test, gelatine hydrolysis test, mannitol utilization test and xylose utilization test, showed a characteristic pattern for each species of Bacillus. The combination pattern of the above tests, with a few supplementary tests if necessary, was useful for identifying Bacillus species showing high radiation resistance to γ-rays. The method is specific for B. megaterium, B. thuringiensis and B. pumilus, and highly selective for B. aneurinolyticus and B. cereus. (author)
A Simple and Clean Method for O-Isopropylidenation of Carbohydrates
Energy Technology Data Exchange (ETDEWEB)
Rong, Yuan Wei; Zhang, Qi Hua; Wang, Wei; Li, Bao Lin [Shaanxi Normal Univ., Xi' an (China)
2014-07-15
An efficient catalysis system for the synthesis of O-isopropylidene derivatives of sugars and polyhydroxy alcohols has been developed with the sulfonated polystyrene cation exchange resin CAT600 as a catalyst. The key advantages of this protocol are its simple workup, good yields, and the recoverability, innocuousness and low cost of the catalyst. As a green, general and efficient reaction system, this method is expected to attract much attention for the preparation of various O-isopropylidene sugar derivatives on a large scale. Protection of hydroxyl functions by O-isopropylidenation is an important method in the field of carbohydrate chemistry. Owing to their convenient application in synthetic, configurational and conformational studies, the O-isopropylidene derivatives of sugars play an important role in research on building blocks, such as glycosyl acceptors and glycosyl donors. Additionally, these derivatives are important in the synthesis of various natural products.
Simple formulae for interpretation of the dead time α (first moment) method of reactor noise
International Nuclear Information System (INIS)
Degweker, S.B.
1999-01-01
The Markov chain approach for solving problems related to the presence of a non-extending dead time in a particle counting circuit with time-correlated pulses was developed in an earlier paper. The formalism was applied to, among others, the dead time α (first moment) method of reactor noise. For this problem, however, the solution obtained was largely numerical in character and had a tendency to break down for systems close to criticality. In the present paper, simple analytical expressions are derived for the count rate and L_ex, the quantities of interest in this method. Comparisons with Monte Carlo simulations show that these formulae are accurate in the range of system parameters of practical interest.
Simple saponification method for the quantitative determination of carotenoids in green vegetables.
Larsen, Erik; Christensen, Lars P
2005-08-24
A simple, reliable, and gentle saponification method for the quantitative determination of carotenoids in green vegetables was developed. The method involves an extraction procedure with acetone and the selective removal of the chlorophylls and esterified fatty acids from the organic phase using a strongly basic resin (Ambersep 900 OH). Extracts from common green vegetables (beans, broccoli, green bell pepper, chive, lettuce, parsley, peas, and spinach) were analyzed by high-performance liquid chromatography (HPLC) for their content of major carotenoids before and after the action of Ambersep 900 OH. The mean recovery percentages for most carotenoids [(all-E)-violaxanthin, (all-E)-lutein epoxide, (all-E)-lutein, neolutein A, and (all-E)-beta-carotene] after saponification of the vegetable extracts with Ambersep 900 OH were close to 100% (99-104%), while the mean recovery percentage of (9'Z)-neoxanthin increased to 119%, and those of (all-E)-neoxanthin and neolutein B decreased to 90% and 72%, respectively.
A Simple Halide-to-Anion Exchange Method for Heteroaromatic Salts and Ionic Liquids
Directory of Open Access Journals (Sweden)
Neus Mesquida
2012-04-01
A broad and simple method permitted halide ions in quaternary heteroaromatic and ammonium salts to be exchanged for a variety of anions using an anion exchange resin (A− form) in non-aqueous media. The anion loading of the AER (OH− form) was examined using two different anion sources, acids or ammonium salts, and changing the polarity of the solvents. The AER (A− form) method in organic solvents was then applied to several quaternary heteroaromatic salts and ILs, and the anion exchange proceeded in excellent to quantitative yields, concomitantly removing halide impurities. Relying on the hydrophobicity of the targeted ion pair for the counteranion swap, organic solvents with variable polarity were used, such as CH3OH, CH3CN and the dipolar nonhydroxylic solvent mixture CH3CN:CH2Cl2 (3:7), and the anion exchange was equally successful with both lipophilic cations and anions.
Perrin, Stephane; Baranski, Maciej; Froehly, Luc; Albero, Jorge; Passilly, Nicolas; Gorecki, Christophe
2015-11-01
We report a simple method, based on intensity measurements, for the characterization of the wavefront and aberrations produced by micro-optical focusing elements. This method employs the setup presented earlier in [Opt. Express 22, 13202 (2014)] for measurements of the 3D point spread function, on which a basic phase-retrieval algorithm is applied. This combination allows for retrieval of the wavefront generated by the micro-optical element and, in addition, quantification of the optical aberrations through the wavefront decomposition with Zernike polynomials. The optical setup requires only an in-motion imaging system. The technique, adapted for the optimization of micro-optical component fabrication, is demonstrated by characterizing a planoconvex microlens.
A simple method employed for the treatment of filters used in atmospheric pollution studies
International Nuclear Information System (INIS)
Prendez B, M.M.; Ortiz C, J.L.; Garrido, J.I.; Huerta P, R.; Alvarez B, C.; Zolezzi C, S.R.
1983-01-01
A simple and rapid method for the multielement routine analysis of atmospheric particulate matter is described. The samples, collected on four different types of filters, were treated with HNO3 and HCl at 110-120 deg C in Pyrex glassware. The time required for the different stages of the treatment was determined by using 60Co, 65Zn and 137Cs as radioactive tracers. Atomic absorption spectrophotometry was used to determine the concentrations of the elements. The efficiency for 11 elements (Mg, Cr, Mn, Fe, Co, Ni, Cu, Zn, Cd, Hg and Pb) was determined. The method was successfully employed for the treatment of filters used in atmospheric pollution studies in both urban and rural areas. (author)
International Nuclear Information System (INIS)
Hensel, S.J.; Hayes, D.W.
1993-01-01
A simple parameter estimation method has been developed to determine the dispersion and velocity parameters associated with stream/river transport. The unsteady one-dimensional Burgers' equation was chosen as the model equation, and the method has been applied to recent Savannah River dye tracer studies. The computed Savannah River transport coefficients compare favorably with documented values, and the time/concentration curves calculated from these coefficients compare well with the actual tracer data. The coefficients were then used as a predictive capability and applied to Savannah River tritium concentration data obtained during the December 1991 accidental tritium discharge from the Savannah River Site. The peak tritium concentration at the intersection of Highway 301 and the Savannah River was underpredicted by only 5% using the coefficients computed from the dye data.
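The abstract's workflow (fit velocity and dispersion to a tracer time-concentration curve, then reuse the coefficients predictively) can be illustrated with a sketch. This is not the authors' Burgers'-equation formulation: as a simplifying assumption, the sketch fits the classic linear one-dimensional advection-dispersion solution for an instantaneous release, and all station distances, parameter values, and noise levels are hypothetical:

```python
# Sketch: estimate velocity U and dispersion coefficient D by fitting an
# analytic advection-dispersion solution to synthetic tracer data.
import numpy as np
from scipy.optimize import curve_fit

def adv_disp(t, M, U, D, x=1000.0):
    """Concentration at a fixed station x (m) for an instantaneous slug
    of mass-per-area M, velocity U (m/s), dispersion D (m^2/s)."""
    return (M / np.sqrt(4 * np.pi * D * t)) * np.exp(-(x - U * t) ** 2 / (4 * D * t))

# Synthetic "dye study" observations with known parameters plus 2% noise.
rng = np.random.default_rng(1)
t = np.linspace(500, 5000, 60)                    # seconds after release
c_obs = adv_disp(t, 2000.0, 0.5, 30.0) * (1 + 0.02 * rng.standard_normal(t.size))

# Least-squares fit of (M, U, D) from a rough initial guess.
popt, _ = curve_fit(adv_disp, t, c_obs, p0=(1000.0, 0.3, 10.0))
M_hat, U_hat, D_hat = popt
print(f"U ~ {U_hat:.3f} m/s, D ~ {D_hat:.1f} m^2/s")
```

Once `U_hat` and `D_hat` are recovered from the dye data, the same function evaluated at another station or release mass plays the predictive role the abstract describes for the tritium discharge.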
A simple method for principal strata effects when the outcome has been truncated due to death.
Chiba, Yasutaka; VanderWeele, Tyler J
2011-04-01
In randomized trials with follow-up, outcomes such as quality of life may be undefined for individuals who die before the follow-up is complete. In such settings, restricting analysis to those who survive can give rise to biased outcome comparisons. An alternative approach is to consider the "principal strata effect" or "survivor average causal effect" (SACE), defined as the effect of treatment on the outcome among the subpopulation that would have survived under either treatment arm. The authors describe a very simple technique that can be used to assess the SACE. They give both a sensitivity analysis technique and conditions under which a crude comparison provides a conservative estimate of the SACE. The method is illustrated using data from the ARDSnet (Acute Respiratory Distress Syndrome Network) clinical trial comparing low-volume ventilation and traditional ventilation methods for individuals with acute respiratory distress syndrome.
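A small simulation makes the "crude comparison is conservative" idea concrete. The principal-stratum proportions, outcome means, and survival structure below are hypothetical illustrations, not the conditions stated in the paper; the sketch only shows why mixing in a lower-outcome "protected" stratum biases the survivor comparison toward the null:

```python
# Sketch: crude survivor comparison vs. the true SACE in a simulated trial
# with three principal strata (always-survivors, protected, never-survivors).
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
# Hypothetical strata proportions: 70% survive either way, 20% survive
# only if treated, 10% die either way.
strata = rng.choice(["always", "protected", "never"], size=n, p=[0.7, 0.2, 0.1])
treat = rng.integers(0, 2, size=n).astype(bool)

# Quality-of-life outcome, defined only for survivors (NaN otherwise).
y = np.full(n, np.nan)
always, protected = strata == "always", strata == "protected"
y[always & ~treat] = rng.normal(50, 5, (always & ~treat).sum())
y[always & treat] = rng.normal(55, 5, (always & treat).sum())
y[protected & treat] = rng.normal(45, 5, (protected & treat).sum())  # lower mean

survive = always | (protected & treat)
crude = y[survive & treat].mean() - y[survive & ~treat].mean()
sace_true = 55 - 50   # treatment effect among always-survivors, by construction
print(f"crude = {crude:.2f}, true SACE = {sace_true}")
```

The crude survivor contrast lands below the true SACE because the treated survivor group is diluted by the protected stratum, which is the direction of bias that makes the crude comparison a conservative estimate under the paper's conditions.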
Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P
2010-10-22
A multi-residue methodology based on a solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample to compensate matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticides analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for analytical data, weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
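The weighted least squares step mentioned in the abstract can be sketched directly. The calibration points, the true line, and the common empirical choice of 1/x² weights are all assumptions for illustration, not the paper's data:

```python
# Sketch: weighted least squares (WLS) calibration for heteroscedastic data.
# Weighting each standard by ~1/variance keeps the high-concentration points
# from dominating the fit and improves accuracy at the low end of the curve.
import numpy as np

# Hypothetical calibration standards: concentration vs. detector response,
# with noise that grows proportionally with concentration.
x = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
rng = np.random.default_rng(2)
y = 3.0 * x + 1.0 + 0.05 * x * rng.standard_normal(x.size)

w = 1.0 / x**2                                # empirical 1/x^2 weights
W = np.diag(w)
X = np.column_stack([x, np.ones_like(x)])     # design matrix [x, 1]

# Solve the weighted normal equations (X^T W X) beta = X^T W y.
slope, intercept = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(f"slope ~ {slope:.3f}, intercept ~ {intercept:.3f}")
```

With ordinary (unweighted) least squares on the same data, residuals from the largest standards would pull the intercept around; the 1/x² weighting is the simple countermeasure the abstract refers to.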
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is time-dependent. Hydrological time series that include dependence components do not meet the consistency assumption of hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for the significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, the method divides the significance of dependence into five degrees: no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was then used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in the hydrological process.
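For the first-order case, the deduced relationship is easy to check numerically: in an AR(1) series the dependence component is phi * x[t-1], and the correlation between the series and that component should approach the lag-1 auto-correlation coefficient phi. The series length, seed, and phi value below are illustrative assumptions:

```python
# Sketch: Monte-Carlo check that corr(series, dependence component) ~ phi
# for an AR(1) process x[t] = phi * x[t-1] + noise.
import numpy as np

rng = np.random.default_rng(3)
phi, n = 0.6, 20_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

dependence = phi * x[:-1]                 # AR(1) dependence component of x[1:]
r = np.corrcoef(x[1:], dependence)[0, 1]
print(f"corr(series, dependence component) ~ {r:.3f} (phi = {phi})")
```

Thresholds on `r` (e.g. bands between 0 and 1) would then assign the series to one of the five significance degrees the abstract lists, with higher `r` meaning stronger dependence variability.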
A simple method to measure cell viability in proliferation and cytotoxicity assays
Directory of Open Access Journals (Sweden)
Ricardo Carneiro Borra
2009-09-01
Resazurin dye has been broadly used as an indicator of cell viability in several types of assays for evaluating the biocompatibility of medical and dental materials. Mitochondrial enzymes, as carriers of diaphorase activities, are probably responsible for the transfer of electrons from NADPH + H+ to resazurin, which is reduced to resorufin. The level of reduction can be quantified by spectrophotometers, since resazurin exhibits an absorption peak at 600 nm and resorufin at 570 nm. However, the requirement of a spectrophotometer and specific filters for the quantification could be a barrier for many laboratories. Digital cameras containing red, green and blue filters, which allow the capture of red (600 to 700 nm) and green (500 to 600 nm) light wavelengths in ranges bordering on the resazurin and resorufin absorption bands, could be used as an alternative method for assessing resazurin and resorufin concentrations. Thus, our aim was to develop a simple, cheap and precise method based on a digital CCD camera to measure the reduction of resazurin. We compared the capability of the CCD-based method to distinguish different concentrations of L929 and normal human buccal fibroblast cell lines with that of a conventional microplate reader. The correlation was analyzed through the Pearson coefficient. The results showed a strong association between the measurements of the method developed here and those made with the microplate reader (r² = 0.996; p < 0.01) and with the cellular concentrations (r² = 0.965; p < 0.01). We concluded that the developed Colorimetric Quantification System based on CCD images allows rapid assessment of cultured cell concentrations with simple equipment at a reduced cost.
The simple method to co-register planar image with photograph
International Nuclear Information System (INIS)
Jang, Sung June; Kim, Seok Ki; Kang, Keon Wook
2005-01-01
Generally, scintigraphic images present highly specific functional information. Sometimes, however, a planar nuclear medicine image contains only limited information about the anatomical landmarks required to identify a lesion. In this study, we applied a simple fusion method for planar scintigraphy and plain photography and validated the technique with our own software. We used three fiducial marks containing Tc-99m. We obtained the planar image with a single-head gamma camera (ARGUS, ADAC Laboratories, USA) and the photograph with a general digital camera (Canon, Japan). The coordinates of the three marks were obtained in the photograph and in the planar scintigraphic image. Based on these points, we applied an affine transformation and then fused the two images. To evaluate the precision, we made comparisons at different depths. To find the depth of a lesion, images were acquired at different angles and we compared the real depth with the geometrically calculated depth. At the same depth as the marks, each discordance was less than 1 mm. When the photographs were taken at distances of 1 m and 2 m, points 30 cm off the center were discordant by 5 mm and 2 mm, respectively. We used this method for the localization of remnant thyroid tissue on an I-131 whole body scan with a photo image. The simple method to co-register a planar image with photography was reliable and easy to use. By this method, we could localize a lesion on planar scintigraphy more accurately with other planar images (i.e., photographs) and predict the depth of the lesion without tomographic imaging.
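The core of the co-registration step is that three non-collinear fiducial pairs exactly determine a 2-D affine transform (six unknowns). A minimal sketch, with entirely hypothetical fiducial coordinates standing in for the Tc-99m marks:

```python
# Sketch: solve for the 2x3 affine matrix A such that A @ [x, y, 1]^T
# maps scintigram coordinates to photograph coordinates, using the
# three fiducial marks seen in both images.
import numpy as np

# Hypothetical fiducial coordinates (pixels) in each image.
scinti = np.array([[10.0, 12.0], [80.0, 15.0], [45.0, 70.0]])
photo = np.array([[105.0, 210.0], [340.0, 190.0], [230.0, 400.0]])

src = np.hstack([scinti, np.ones((3, 1))])    # 3x3 homogeneous source points
A = np.linalg.solve(src, photo).T             # 2x3 affine matrix

def to_photo(pt):
    """Map a scintigram point into photograph coordinates."""
    return A @ np.append(pt, 1.0)

# The transform must reproduce the fiducials themselves.
print(to_photo(scinti[0]))
```

Once `A` is known, every pixel of the planar scintigram can be resampled into the photograph's frame, which is the fused display the study validates.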
Shirazi, Mehdi; Ariafar, Ali; Babaei, Amir Hossein; Ashrafzadeh, Abdosamad; Adib, Ali
2016-11-01
Urethrocutaneous fistula (UCF) is the most prevalent complication after hypospadias repair surgery. Many methods have been developed for UCF correction, and the best technique for UCF repair is determined based on the size, location, and number of fistulas, as well as the status of the surrounding skin. In this study, we introduced and evaluated a simple method for UCF correction after tubularized incised plate (TIP) repair. This clinical study was conducted on children with UCFs ≤ 4 mm that developed after TIP surgery for hypospadias repair. The skin was incised around the fistula and the tract was released from the surrounding tissues and the dartos fascia, then ligated with 5-0 polydioxanone (PDS) sutures. The dartos fascia, as a second layer, was sutured over the fistula tract with 5-0 PDS thread using the continuous suture method. The skin was closed with 6-0 Vicryl sutures. After six months of follow-up, surgical outcomes were evaluated based on fistula relapse and other complications. After six months, relapse had occurred in only one patient, a six-year-old boy with a single 4-mm distal opening who had undergone no previous fistula repairs. Therefore, in 97.5% of the cases there was no relapse. Other complications, such as urethral stenosis, intraurethral obstruction, and epidermal inclusion cysts, were not seen in the other patients during the six-month follow-up period. This repair method, which is simple, rapid, and easily learned, is highly applicable, with a high success rate for closure of UCFs measuring up to 4 mm in any location.