#### Sample records for stepwise regression principal

1. Driven Factors Analysis of China’s Irrigation Water Use Efficiency by Stepwise Regression and Principal Component Analysis

Renfu Jia

2016-01-01

This paper introduces an integrated approach for identifying the major factors influencing the efficiency of irrigation water use in China. It combines multiple stepwise regression (MSR) and principal component analysis (PCA) to obtain more realistic results. In real-world case studies, the classical linear regression model often involves too many explanatory variables, and linear correlation among those variables cannot be eliminated. Linearly correlated variables invalidate the results of factor analysis. To overcome this issue and reduce the number of variables, the PCA technique was combined with MSR. On this basis, the irrigation water use status in China was analyzed to identify the five major factors that significantly affect irrigation water use efficiency. To illustrate the performance of the proposed approach, calculations based on real data were conducted and the results are reported in this paper.
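The PCA-then-regression combination this record describes can be sketched as follows. This is an illustrative outline on synthetic data, not the paper's actual procedure; the variable names, the 99% variance threshold, and the data are all assumptions.

```python
import numpy as np

# Sketch: decorrelate collinear explanatory variables with PCA,
# then run ordinary least squares on the retained components.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)      # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 0.5 * x3 + 0.1 * rng.normal(size=n)

# PCA via SVD of the centered design matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Keep components explaining ~99% of variance; the collinear pair
# x1, x2 collapses into a single component, so 2 components suffice.
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
Z = Xc @ Vt[:k].T                         # uncorrelated component scores

# OLS on the principal components (intercept plus k scores)
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Z]), y, rcond=None)
print(k, beta.shape)
```

Because the component scores are mutually uncorrelated, the coefficient estimates no longer suffer from the variance inflation that the collinear raw variables would cause.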

2. A Matlab program for stepwise regression

Yanhong Qi

2016-03-01

Stepwise linear regression is a multivariable regression technique for identifying the statistically significant variables in a linear regression equation. In the present study, we present a Matlab program for stepwise regression.
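A minimal forward-stepwise routine in the spirit of the program described above might look like the following. This is an illustrative Python sketch, not the authors' MATLAB code; the F-to-enter threshold of 4.0 and the test data are assumptions.

```python
import numpy as np

# Forward stepwise selection: a candidate variable enters the model
# while its partial F statistic exceeds the F-to-enter threshold.
def forward_stepwise(X, y, F_in=4.0):
    n, p = X.shape
    selected, remaining = [], list(range(p))
    ones = np.ones((n, 1))

    def rss(cols):
        A = np.hstack([ones, X[:, cols]]) if cols else ones
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return float(r @ r)

    while remaining:
        rss0 = rss(selected)
        scores = []
        for j in remaining:
            rss1 = rss(selected + [j])
            df = n - len(selected) - 2   # params: intercept + selected + j
            scores.append(((rss0 - rss1) / (rss1 / df), j))
        best_F, best_j = max(scores)
        if best_F < F_in:
            break
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(size=100)
sel = forward_stepwise(X, y)
print(sorted(sel))
```

The two strongly predictive columns (0 and 3) are selected; whether a pure-noise column also sneaks in depends on the threshold, which is exactly the well-known weakness of stepwise procedures.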

3. ANYOLS, Least Square Fit by Stepwise Regression

Atwoods, C.L.; Mathews, S.

1986-01-01

Description of program or function: ANYOLS is a stepwise program that fits data using ordinary or weighted least squares. Variables are selected for the model in a stepwise fashion based on a user-specified input criterion or a user-written subroutine. The order in which variables are entered can be influenced by user-defined forcing priorities. Instead of stepwise selection, ANYOLS can try all possible combinations of any desired subset of the variables. Automatic output for the final model in a stepwise search includes plots of the residuals, studentized residuals, and leverages; if the model is not too large, the output also includes partial regression and partial leverage plots. A data set may be reused so that several selection criteria can be tried. Flexibility is increased by allowing the substitution of user-written subroutines for several default subroutines.

4. Stepwise versus Hierarchical Regression: Pros and Cons

Lewis, Mitzi

2007-01-01

Multiple regression is commonly used in social and behavioral data analysis. In multiple regression contexts, researchers are very often interested in determining the "best" predictors in the analysis. This focus may stem from a need to identify those predictors that are supportive of theory. Alternatively, the researcher may simply be interested…

5. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy

Merlo, Juan; Wagner, Philippe; Ghith, Nermin

2016-01-01

BACKGROUND AND AIM: Many multilevel logistic regression analyses of "neighbourhood and health" focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that disting...

6. MULGRES: a computer program for stepwise multiple regression analysis

A. Jeff Martin

1971-01-01

MULGRES is a computer program source deck that is designed for multiple regression analysis employing the technique of stepwise deletion in the search for most significant variables. The features of the program, along with inputs and outputs, are briefly described, with a note on machine compatibility.

7. Principal component regression analysis with SPSS.

Liu, R X; Kuang, J; Gong, Q; Hou, X L

2003-06-01

The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and a method for determining the 'best' equation. A worked example describes how to perform principal component regression analysis with SPSS 10.0, covering all calculation steps of principal component regression and the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance caused by multicollinearity, and performing it with SPSS makes the statistical analysis simpler, faster, and more accurate.
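One of the standard multicollinearity indices this record alludes to is the variance inflation factor, VIF_j = 1/(1 - R_j²), where R_j² comes from regressing predictor j on all the others. A small illustrative sketch (synthetic data; the VIF > 10 rule of thumb is a common convention, not taken from the paper):

```python
import numpy as np

# Variance inflation factor for each column of a design matrix.
def vif(X):
    n, p = X.shape
    ones = np.ones((n, 1))
    out = []
    for j in range(p):
        # Regress predictor j on an intercept plus the other predictors.
        others = np.hstack([ones, np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - (resid @ resid) / np.sum((X[:, j] - X[:, j].mean())**2)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(3)
a = rng.normal(size=200)
b = a + 0.1 * rng.normal(size=200)     # strongly collinear with a
c = rng.normal(size=200)
v = vif(np.column_stack([a, b, c]))
print(np.round(v, 1))
```

The collinear pair produces VIFs far above 10 while the independent column stays near 1, which is the signal that motivates switching to principal component regression.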

8. REGSTEP - stepwise multivariate polynomial regression with singular extensions

Davierwalla, D.M.

1977-09-01

The program REGSTEP determines a polynomial approximation, in the least squares sense, to tabulated data. The polynomial may be univariate or multivariate. The computational method is that of stepwise regression. A variable is inserted into the regression basis if it is significant with respect to an appropriate F-test at a preselected risk level. In addition, should a variable already in the basis become nonsignificant (again with respect to an appropriate F-test) after the entry of a new variable, it is expelled from the model. Thus only significant variables are retained in the model. Although written expressly to be incorporated into CORCOD, a code for predicting nuclear cross sections for given values of power, temperature, void fractions, boron content etc., there is nothing to limit the use of REGSTEP to nuclear applications, as the examples demonstrate. A separate version has been incorporated into RSYST for the general user. (Auth.)

9. Principal component regression for crop yield estimation

Suryanarayana, T M V

2016-01-01

This book highlights the estimation of crop yield in Central Gujarat, especially with regard to the development of Multiple Regression Models and Principal Component Regression (PCR) models using climatological parameters as independent variables and crop yield as the dependent variable. It subsequently compares the multiple linear regression (MLR) and PCR results, and discusses the significance of PCR for crop yield estimation. In this context, the book also covers Principal Component Analysis (PCA), a statistical procedure used to reduce a number of correlated variables to a smaller number of uncorrelated variables called principal components (PC). This book will be helpful to students and researchers beginning work on climate and agriculture, with a focus on estimation models. The chapters guide the reader smoothly from an understanding of climate, weather, and the impact of climate change, through downscaling techniques, and finally towards the development of ...

10. Application of stepwise multiple regression techniques to inversion of Nimbus 'IRIS' observations.

Ohring, G.

1972-01-01

Exploratory studies with Nimbus-3 infrared interferometer-spectrometer (IRIS) data indicate that, in addition to temperature, such meteorological parameters as geopotential heights of pressure surfaces, tropopause pressure, and tropopause temperature can be inferred from the observed spectra with the use of simple regression equations. The technique of screening the IRIS spectral data by means of stepwise regression to obtain the best radiation predictors of meteorological parameters is validated. The simplicity of application of the technique and the simplicity of the derived linear regression equations - which contain only a few terms - suggest usefulness for this approach. Based upon the results obtained, suggestions are made for further development and exploitation of the stepwise regression analysis technique.

11. A Simulation Investigation of Principal Component Regression.

Allen, David E.

Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

12. A stepwise regression tree for nonlinear approximation: applications to estimating subpixel land cover

Huang, C.; Townshend, J.R.G.

2003-01-01

A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm improves on SLR in that it can approximate nonlinear relationships, and on BRT in that it gives more realistic predictions. Its applicability to estimating subpixel forest cover was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed at least as well as BRT over all 10 equal forest-proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.

13. Improved model of the retardance in citric acid coated ferrofluids using stepwise regression

Lin, J. F.; Qiu, X. R.

2017-06-01

Citric acid (CA) coated Fe3O4 ferrofluids (FFs) have been developed for biomedical applications. The magneto-optical retardance of CA-coated FFs was measured by a Stokes polarimeter. Optimization and multiple regression of retardance in FFs had previously been carried out using the Taguchi method and Microsoft Excel, and the F value of the regression model was large enough; however, the model built in Excel was not systematic. We instead adopted stepwise regression to model the retardance of CA-coated FFs. The model developed through stepwise regression in MATLAB had high predictive ability, with an F value of 2.55897e+7 and a correlation coefficient of one. The average absolute error of the predicted retardances relative to the measured retardances was just 0.0044%. Using the genetic algorithm (GA) in MATLAB, the optimized parameter combination was determined as [4.709 0.12 39.998 70.006], corresponding to the pH of the suspension, the molar ratio of CA to Fe3O4, the CA volume, and the coating temperature. The maximum retardance was found to be 31.712°, close to that obtained by the evolutionary solver in Excel, with a relative error of -0.013%. In summary, the stepwise regression method was successfully used to model the retardance of CA-coated FFs, and the maximum global retardance was determined by use of the GA.

14. A Hybrid Approach of Stepwise Regression, Logistic Regression, Support Vector Machine, and Decision Tree for Forecasting Fraudulent Financial Statements

Suduan Chen

2014-01-01

As fraudulent financial statements become an increasingly serious problem, establishing a valid model for forecasting fraudulent financial statements has become an important question for academic research and financial practice. After screening the important variables using stepwise regression, the study applies logistic regression, support vector machine, and decision tree methods to construct classification models for comparison. The study adopts financial and nonfinancial variables to assist in establishing the forecasting model. The research objects are companies that issued fraudulent and nonfraudulent financial statements between 1998 and 2012. The findings are that financial and nonfinancial information can be used effectively to distinguish fraudulent financial statements, and that the C5.0 decision tree has the best classification accuracy, 85.71%.

15. A hybrid approach of stepwise regression, logistic regression, support vector machine, and decision tree for forecasting fraudulent financial statements.

Chen, Suduan; Goo, Yeong-Jia James; Shen, Zone-De

2014-01-01

As fraudulent financial statements become an increasingly serious problem, establishing a valid model for forecasting fraudulent financial statements has become an important question for academic research and financial practice. After screening the important variables using stepwise regression, the study applies logistic regression, support vector machine, and decision tree methods to construct classification models for comparison. The study adopts financial and nonfinancial variables to assist in establishing the forecasting model. The research objects are companies that issued fraudulent and nonfraudulent financial statements between 1998 and 2012. The findings are that financial and nonfinancial information can be used effectively to distinguish fraudulent financial statements, and that the C5.0 decision tree has the best classification accuracy, 85.71%.

16. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

2017-12-01

An order selection method based on multiple stepwise regression is proposed for the General Expression of Nonlinear Autoregressive (GNAR) model; it converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics measuring the improvement contributed by both the newly introduced and the previously included variables are used to decide which model variables to retain or eliminate. The optimal model is then obtained through measurement of goodness of fit or significance testing. Simulation results and experiments on classic time-series data show that the proposed method is simple, reliable, and applicable to practical engineering.
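The partial autocorrelation function that this record uses to seed the linear part of the model can be estimated by fitting successively longer autoregressions and keeping the last coefficient of each fit. An illustrative sketch on synthetic AR(2) data (not the authors' implementation; the coefficients 0.6 and -0.3 are assumptions):

```python
import numpy as np

# Sample PACF via successive AR fits: PACF(k) is the coefficient of
# lag k in a least-squares regression of x_t on x_{t-1}, ..., x_{t-k}.
def pacf(x, max_lag):
    x = np.asarray(x, float)
    x = x - x.mean()
    n = len(x)
    out = []
    for k in range(1, max_lag + 1):
        Y = x[k:]
        X = np.column_stack([x[k - j:n - j] for j in range(1, k + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        out.append(beta[-1])
    return np.array(out)

# Simulate an AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
rng = np.random.default_rng(4)
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]

p = pacf(x, 5)
print(np.round(p, 2))
```

The PACF cuts off after lag 2 (the estimate at lag 2 is near the true -0.3, and lags 3 to 5 are near zero), which is the cue for including two linear lag terms in the initial model.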

17. Characterization of vegetative and grain filling periods of winter wheat by stepwise regression procedure. II. Grain filling period

Pržulj Novo

2011-01-01

In wheat, the rate and duration of dry matter accumulation and remobilization depend on genotype and growing conditions. The objective of this study was to determine the polynomial regression, selected by a stepwise regression procedure, most appropriate for describing the grain filling period in three winter wheat cultivars. The stepwise regression procedure showed that grain filling is a complex biological process and that it is difficult to offer a simple polynomial equation that fits the pattern of changes in dry matter accumulation during the grain filling period, i.e., from anthesis to maximum grain weight, in winter wheat. If grain filling is to be represented by a high-order polynomial, quartic and quintic equations proved most appropriate. In spite of certain disadvantages, a cubic equation from the stepwise regression procedure could be used to describe the pattern of winter wheat grain filling.

18. Estimating leaf photosynthetic pigments information by stepwise multiple linear regression analysis and a leaf optical model

Liu, Pudong; Shi, Runhe; Wang, Hong; Bai, Kaixu; Gao, Wei

2014-10-01

Leaf pigments are key elements of plant photosynthesis and growth. Traditional manual sampling of these pigments is labor-intensive and costly, and has difficulty capturing their temporal and spatial characteristics. The aim of this work is to estimate photosynthetic pigments at large scale by remote sensing. For this purpose, an inverse model was proposed with the aid of stepwise multiple linear regression (SMLR) analysis. A leaf radiative transfer model (the PROSPECT model) was employed to simulate leaf reflectance at wavelengths from 400 to 780 nm at 1 nm intervals, and these values were treated as data from remote sensing observations. Simulated chlorophyll concentration (Cab), carotenoid concentration (Car), and their ratio (Cab/Car) were each taken as the target for building a regression model. In this study, a total of 4000 samples with different Cab, Car, and leaf mesophyll structures were simulated via PROSPECT; 70% of the samples were used for training and the remaining 30% for model validation. Reflectance (r) and its mathematical transformations (1/r and log(1/r)) were each used to build regression models. Results showed fair agreement between pigments and simulated reflectance, with all adjusted coefficients of determination (R2) larger than 0.8 when 6 wavebands were selected to build the SMLR model. The largest R2 values for Cab, Car, and Cab/Car were 0.8845, 0.876, and 0.8765, respectively. Mathematical transformations of reflectance showed little influence on regression accuracy. We conclude that it is feasible to estimate chlorophyll, carotenoids, and their ratio with a statistical model based on leaf reflectance data.

19. Stepwise multiple regression method of greenhouse gas emission modeling in the energy sector in Poland.

Kolasa-Wiecek, Alicja

2015-04-01

The energy sector in Poland is the source of 81% of greenhouse gas (GHG) emissions. Poland, among other European Union countries, occupies a leading position with regard to coal consumption. The Polish energy sector actively participates in efforts to reduce GHG emissions to the atmosphere through a gradual decrease in the share of coal in the fuel mix and the development of renewable energy sources. Any evidence that adds to knowledge of issues related to GHG emissions is a valuable source of information. The article presents the results of modeling the GHG emissions generated by the energy sector in Poland. For a better understanding of the quantitative relationship between total primary energy consumption and greenhouse gas emission, a multiple stepwise regression model was applied. The modeling of CO2 emissions demonstrates a strong relationship (0.97) with the hard coal consumption variable, and the adjustment of the model to the actual data is high, at 95%. The backward stepwise regression model for CH4 emission indicated hard coal (0.66), peat and fuel wood (0.34), and solid waste fuels as well as other sources (-0.64) as the most important variables, with a suitable adjusted coefficient of R2=0.90. For N2O emission modeling the obtained coefficient of determination is low, 43%; a significant variable influencing the amount of N2O emission is peat and wood fuel consumption. Copyright © 2015. Published by Elsevier B.V.

20. Discrimination of Geographical Origin of Asian Garlic Using Isotopic and Chemical Datasets under Stepwise Principal Component Analysis.

Liu, Tsang-Sen; Lin, Jhen-Nan; Peng, Tsung-Ren

2018-01-16

Isotopic compositions (δ²H, δ¹⁸O, δ¹³C, and δ¹⁵N) and the concentrations of 22 trace elements in garlic samples were analyzed and processed with stepwise principal component analysis (PCA) to discriminate garlic's country of origin among Asian regions including South Korea, Vietnam, Taiwan, and China. Results indicate that no single trace-element concentration or isotopic composition can accomplish the study's purpose, whereas the proposed stepwise PCA approach does allow discrimination between countries on a regional basis. Sequentially, Step-1 PCA distinguishes among the Taiwanese, South Korean, and Vietnamese samples; Step-2 PCA discriminates Chinese garlic from South Korean garlic; and Step-3 and Step-4 PCA discriminate Chinese garlic from Vietnamese garlic. In model tests, the countries of origin of all audit samples were correctly discriminated by stepwise PCA. Consequently, this study demonstrates that stepwise PCA as applied is a simple and effective approach to discriminating the country of origin of Asian garlic. © 2018 American Academy of Forensic Sciences.

1. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

Li, Jiangtong; Luo, Yongdao; Dai, Honglin

2018-01-01

Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures for protecting water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) is becoming the predominant technique; in some special cases, however, PLSR produces considerable errors. To solve this problem, the traditional principal component regression (PCR) analysis method was improved in this paper using the principle of PLSR. The experimental results show that for some special experimental data sets, the improved PCR method performs better than PLSR. PCR and PLSR are the focus of this paper. Firstly, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, carrying most of the original data information, are extracted using the principle of PLSR. Secondly, linear regression analysis on the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components are obtained. Finally, the same water spectral data set is processed by PLSR and by the improved PCR, and the two results are compared: the improved PCR and PLSR are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can be used in UV spectral analysis of water, but for data near the detection limit the improved PCR gives better results than PLSR.
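The core distinction this record exploits can be shown in a few lines: PCR picks directions of maximum X-variance regardless of y, while PLS-style weights are driven by covariance with y. An illustrative sketch on synthetic data (one-component weight vectors only; not the paper's MATLAB/SPSS pipeline):

```python
import numpy as np

# Contrast the first PCR direction with a one-component PLS direction.
rng = np.random.default_rng(2)
n = 150
X = rng.normal(size=(n, 4))
# Only the last column actually predicts y.
y = X @ np.array([0.0, 0.0, 0.0, 1.5]) + 0.05 * rng.normal(size=n)

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# PCR: first principal component = top right-singular vector of Xc.
# It depends only on X, so it need not align with the predictive column.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcr_dir = Vt[0]

# PLS (first component): weight vector proportional to X'y, so it is
# pulled toward whatever direction covaries with the response.
w = Xc.T @ yc
pls_dir = w / np.linalg.norm(w)

print(np.round(np.abs(pcr_dir), 2))
print(np.round(np.abs(pls_dir), 2))
```

With isotropic X, the leading PCA direction is essentially arbitrary, while the PLS weight vector concentrates on the informative column; this is why borrowing the PLSR selection principle inside PCR, as the paper proposes, can help near the detection limit.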

2. Relationships between the structure of wheat gluten and ACE inhibitory activity of hydrolysate: stepwise multiple linear regression analysis.

Zhang, Yanyan; Ma, Haile; Wang, Bei; Qu, Wenjuan; Wali, Asif; Zhou, Cunshan

2016-08-01

Ultrasound pretreatment of wheat gluten (WG) before enzymolysis can improve the angiotensin-converting enzyme (ACE) inhibitory activity of the hydrolysates by altering the structure of the substrate proteins. Establishing a relationship between the structure of WG and the ACE inhibitory activity of the hydrolysates, so as to judge the end point of the ultrasonic pretreatment, is vital. The results of stepwise multiple linear regression (MLR) showed that the contents of free sulfhydryl, α-helix, disulfide bond, surface hydrophobicity, and random coil were significantly correlated with the ACE inhibitory activity of the hydrolysate, with standard partial regression coefficients of 3.729, -0.676, -0.252, 0.022, and 0.156, respectively. The R(2) of this model was 0.970. External validation showed that the stepwise MLR model could predict the ACE inhibitory activity of the hydrolysate well from the free sulfhydryl, α-helix, disulfide bond, surface hydrophobicity, and random coil content of WG before hydrolysis. A stepwise multiple linear regression model describing the quantitative relationship between the structure of WG and the ACE inhibitory activity of the hydrolysates was thus established; this model can be used to predict the end point of the ultrasonic pretreatment. © 2015 Society of Chemical Industry.

3. Regressão múltipla stepwise e hierárquica em Psicologia Organizacional: aplicações, problemas e soluções [Stepwise and hierarchical multiple regression in organizational psychology: Applications, problems, and solutions]

2002-01-01

4. Application of the step-wise regression procedure to the semi-empirical formulae of the nuclear binding energy

Eissa, E.A.; Ayad, M.; Gashier, F.A.B.

1984-01-01

Most of the binding energy semi-empirical terms used by P.A. Seeger, without the deformation corrections, are arranged in multiple linear regression form. The stepwise regression procedure, with 95% confidence levels for acceptance and rejection of variables, is applied to seek a model for calculating binding energies of even-even (E-E) nuclei through significance testing of each basic term. Partial F-values are taken as estimates of the significance of each term. The residual standard deviation and the overall F-value are used for selecting the best linear regression model. (E-E) nuclei are grouped into sets lying between two successive proton and neutron magic numbers. The present work favours the magic number 126 followed by 164 for neutrons, and is indecisive in supporting the recently predicted proton magic number 114 rather than the previous one, 126. (author)

5. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium); Raisee, Mehrdad [School of Mechanical Engineering, College of Engineering, University of Tehran, P.O. Box: 11155-4563, Tehran (Iran, Islamic Republic of); Ghorbaniasl, Ghader; Contino, Francesco; Lacor, Chris [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium)

2017-03-01

Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying the uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable, as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure, with a variable selection criterion based on efficient probabilistic tools. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method, established by repeating the analyses with random experimental designs.

6. Selective principal component regression analysis of fluorescence hyperspectral image to assess aflatoxin contamination in corn

Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...

7. Influence of plant root morphology and tissue composition on phenanthrene uptake: Stepwise multiple linear regression analysis

Zhan, Xinhua; Liang, Xiao; Xu, Guohua; Zhou, Lixiang

2013-01-01

Polycyclic aromatic hydrocarbons (PAHs) are contaminants that reside mainly in surface soils. Dietary intake of plant-based foods can make a major contribution to total PAH exposure. Little information is available on the relationship between root morphology and plant uptake of PAHs. An understanding of the root morphological and compositional factors that affect root uptake of contaminants is important and can inform both agricultural (chemical contamination of crops) and engineering (phytoremediation) applications. Five crop plant species were grown hydroponically in solutions containing the PAH phenanthrene. Measurements were taken of (1) phenanthrene uptake; (2) root morphology: specific surface area, volume, surface area, tip number, and total root length; and (3) root tissue composition: water, lipid, protein, and carbohydrate content. These factors were compared through Pearson's correlation and multiple linear regression analysis. The major factors promoting phenanthrene uptake are specific surface area and lipid content. Highlights: There is no correlation between phenanthrene uptake and total root length or water content. Specific surface area and lipid content are the most crucial factors for phenanthrene uptake, and the contribution of specific surface area is greater than that of lipid.

8. Multiple linear stepwise regression of liver lipid levels: proton MR spectroscopy study in vivo at 3.0 T

Xu Li; Liang Changhong; Xiao Yuanqiu; Zhang Zhonglin

2010-01-01

Objective: To analyze the correlations between liver lipid level, determined in vivo by 3.0 T 1H-MRS of the liver, and influencing factors, using multiple linear stepwise regression. Methods: The prospective 1H-MRS study of the liver was performed on a 3.0 T system with eight-channel torso phased-array coils using the PRESS sequence. Forty-four volunteers were enrolled. Liver spectra were collected with a TR of 1500 ms, a TE of 30 ms, a volume of interest of 2 cm×2 cm×2 cm, and 64 signal averages. The acquired raw proton MRS data were processed using the SAGE software package. For each MRS measurement, with water as the internal reference, the amplitude of the lipid signal was normalized to the sum of the lipid and water signals to obtain the percentage lipid within the liver. Height, weight, age, BMI, linewidth, and water suppression were recorded, and Pearson analysis was applied to test their relationships with liver lipid content. Multiple linear stepwise regression was used to build a statistical model for predicting liver lipid content. Results: Age (39.1±12.6) years, body weight (64.4±10.4) kg, BMI (23.3±3.1) kg/m², linewidth (18.9±4.4), and water suppression (90.7±6.5)% were significantly correlated with liver lipid content (0.00% to 0.96%, median 0.02%); the r values were 0.11, 0.44, 0.40, 0.52, and -0.73, respectively (P<0.05). However, only age, BMI, linewidth, and water suppression entered the multiple linear regression equation. The prediction equation for liver lipid content was: Y = 1.395 - (0.021×water suppression) + (0.022×BMI) + (0.014×linewidth) - (0.004×age), with a coefficient of determination of 0.613 and a corrected coefficient of determination of 0.59. Conclusion: The regression model fits well, since the variables age, BMI, linewidth, and water suppression can explain about 60% of the variation in liver lipid content. (authors)

9. Relationships between each part of the spinal curves and upright posture using Multiple stepwise linear regression analysis.

Boulet, Sebastien; Boudot, Elsa; Houel, Nicolas

2016-05-03

Back pain is a common reason for consultation in primary healthcare clinical practice and affects daily activities and posture. The relationships between the whole spine and upright posture, however, remain unknown. The aim of this study was to identify the relationship between each spinal curve and centre of pressure position and velocity in healthy subjects. Twenty-one male subjects performed quiet stance in a natural position. Each upright posture was recorded using an optoelectronic system (Vicon Nexus) synchronized with two force plates. At each instant, polynomial interpolations of markers attached to the spine segment were used to compute the cervical lordosis, thoracic kyphosis, and lumbar lordosis angle curves. The mean centre of pressure position and velocity were then computed. Multiple stepwise linear regression analysis showed that the centre of pressure position and velocity associated with each part of the spinal curves were the best predictors of the lumbar lordosis angle (R²=0.45; p=1.65×10⁻¹⁰) and the thoracic kyphosis angle (R²=0.54; p=4.89×10⁻¹³) of healthy subjects in quiet stance. This study revealed the relationships between the cervical, thoracic, and lumbar curvatures and centre of pressure fluctuations during free quiet standing, using non-invasive exploration of the full spinal curve. Copyright © 2016 Elsevier Ltd. All rights reserved.

10. Regularized principal covariates regression and its application to finding coupled patterns in climate fields

Fischer, M. J.

2014-02-01

There are many different methods for investigating the coupling between two climate fields, which are all based on the multivariate regression model. Each different method of solving the multivariate model has its own attractive characteristics, but often the suitability of a particular method for a particular problem is not clear. Continuum regression methods search the solution space between the conventional methods and thus can find regression model subspaces that mix the attractive characteristics of the end-member subspaces. Principal covariates regression is a continuum regression method that is easily applied to climate fields and makes use of two end-members: principal components regression and redundancy analysis. In this study, principal covariates regression is extended to additionally span a third end-member (partial least squares or maximum covariance analysis). The new method, regularized principal covariates regression, has several attractive features including the following: it easily applies to problems in which the response field has missing values or is temporally sparse, it explores a wide range of model spaces, and it seeks a model subspace that will, for a set number of components, have a predictive skill that is the same or better than conventional regression methods. The new method is illustrated by applying it to the problem of predicting the southern Australian winter rainfall anomaly field using the regional atmospheric pressure anomaly field. Regularized principal covariates regression identifies four major coupled patterns in these two fields. The two leading patterns, which explain over half the variance in the rainfall field, are related to the subtropical ridge and features of the zonally asymmetric circulation.

11. A method for the selection of a functional form for a thermodynamic equation of state using weighted linear least squares stepwise regression

Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.

1976-01-01

A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.
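
The stepwise idea described here can be sketched as a greedy forward search over candidate terms under weighted least squares. This is a minimal illustration only; the function name and the plain weighted-SSE criterion are our simplifications (real implementations, including the one described above, use F-tests or user-specified entry criteria):

```python
import numpy as np

def forward_stepwise_wls(X, y, w, k):
    """Greedily pick k columns of X that best fit y under weights w.

    Weights are absorbed into the least-squares problem by scaling rows
    with sqrt(w); at each step the column giving the largest drop in
    weighted SSE is added. No intercept is included for brevity.
    """
    sw = np.sqrt(w)
    Xw, yw = X * sw[:, None], y * sw
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_sse = None, np.inf
        for j in remaining:
            cols = selected + [j]
            beta, *_ = np.linalg.lstsq(Xw[:, cols], yw, rcond=None)
            sse = np.sum((yw - Xw[:, cols] @ beta) ** 2)
            if sse < best_sse:
                best_j, best_sse = j, sse
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

Applied to a thermodynamic surface fit, the columns of X would be the candidate polynomial terms in density and temperature.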

12. QUANTITATIVE ELECTRONIC STRUCTURE - ACTIVITY RELATIONSHIP OF ANTIMALARIAL COMPOUND OF ARTEMISININ DERIVATIVES USING PRINCIPAL COMPONENT REGRESSION APPROACH

Paul Robert Martin Werfette

2010-06-01

Full Text Available Analysis of the quantitative structure-activity relationship (QSAR) for a series of antimalarial artemisinin derivatives has been done using principal component regression. The descriptors for the QSAR study were representations of the electronic structure, i.e., atomic net charges of the artemisinin skeleton calculated by the AM1 semi-empirical method. The antimalarial activity of the compounds was expressed as log 1/IC50, an experimental quantity. The main purpose of the principal component analysis approach is to transform a large data set of atomic net charges into a simpler data set known as latent variables. The best QSAR equation for log 1/IC50 can be obtained from the regression method as a linear function of several latent variables, i.e., x1, x2, x3, x4 and x5. Keywords: QSAR, antimalarial, artemisinin, principal component regression

13. Predicting Insolvency : A comparison between discriminant analysis and logistic regression using principal components

Geroukis, Asterios; Brorson, Erik

2014-01-01

In this study, we compare the two statistical techniques logistic regression and discriminant analysis to see how well they classify companies based on clusters – made from the solvency ratio – using principal components as independent variables. The principal components are made from different financial ratios. We use cluster analysis to find groups with low, medium and high solvency ratios among 1200 different companies found on the NASDAQ stock market and use this as an a priori definition of ...

14. Resource Loss and Depressive Symptoms Following Hurricane Katrina: A Principal Component Regression Study

Liang L; Hayashi K; Bennett P; Johnson T. J; Aten J. D

2015-01-01

To understand the relationship between the structure of resource loss and depression after disaster exposure, the components of resource loss and the impact of these components on depression were examined among college students (N=654) at two universities affected by Hurricane Katrina. The components of resource loss were first analyzed by principal component analysis. Gender, social relationship loss, and financial loss were then examined with the regression model on depr...

15. Principal Covariates Clusterwise Regression (PCCR): Accounting for Multicollinearity and Population Heterogeneity in Hierarchically Organized Data.

Wilderjans, Tom Frans; Vande Gaer, Eva; Kiers, Henk A L; Van Mechelen, Iven; Ceulemans, Eva

2017-03-01

In the behavioral sciences, many research questions pertain to a regression problem in that one wants to predict a criterion on the basis of a number of predictors. Although in many cases ordinary least squares regression will suffice, sometimes the prediction problem is more challenging, for three reasons. First, multiple highly collinear predictors can be available, making it difficult to grasp their mutual relations as well as their relations to the criterion. In that case, it may be very useful to reduce the predictors to a few summary variables, on which one regresses the criterion and which at the same time yield insight into the predictor structure. Second, the population under study may consist of a few unknown subgroups that are characterized by different regression models. Third, the obtained data are often hierarchically structured, with, for instance, observations being nested into persons or participants within groups or countries. Although some methods have been developed that partially meet these challenges (i.e., principal covariates regression (PCovR), clusterwise regression (CR), and structural equation models), none of these methods adequately deals with all of them simultaneously. To fill this gap, we propose the principal covariates clusterwise regression (PCCR) method, which combines the key ideas behind PCovR (de Jong & Kiers in Chemom Intell Lab Syst 14(1-3):155-164, 1992) and CR (Späth in Computing 22(4):367-373, 1979). The PCCR method is validated by means of a simulation study and by applying it to cross-cultural data regarding satisfaction with life.

16. The role of multicollinearity in landslide susceptibility assessment by means of Binary Logistic Regression: comparison between VIF and AIC stepwise selection

Cama, Mariaelena; Cristi Nicu, Ionut; Conoscenti, Christian; Quénéhervé, Geraldine; Maerker, Michael

2016-04-01

Landslide susceptibility can be defined as the likelihood of a landslide occurring in a given area on the basis of local terrain conditions. In recent decades, much research has focused on its evaluation by means of stochastic approaches under the assumption that 'the past is the key to the future': if a model is able to reproduce a known landslide spatial distribution, it will be able to predict the future locations of new (i.e. unknown) slope failures. Among the various stochastic approaches, Binary Logistic Regression (BLR) is one of the most widely used because it expresses susceptibility in probabilistic terms and its results are easily interpretable from a geomorphological point of view. However, multicollinearity assessment is often neglected; its effect is coefficient estimates that are unstable, possibly with reversed signs, and therefore difficult to interpret. It should therefore be evaluated routinely in order to obtain a model whose results are geomorphologically sound. In this study the effects of multicollinearity on the predictive performance and robustness of landslide susceptibility models are analyzed. In particular, multicollinearity is estimated by means of the Variance Inflation Factor (VIF), which is also used as a selection criterion for the independent variables (VIF Stepwise Selection) and compared to the more commonly used AIC Stepwise Selection. The robustness of the results is evaluated through 100 replicates of the dataset. The study area selected for this analysis is the Moldavian Plateau, where landslides are among the most frequent geomorphological processes. This area has an increasing trend of urbanization and very high cultural heritage potential, being the place of discovery of the largest settlement of the Cucuteni Culture in Eastern Europe (which led to the development of the great Cucuteni-Trypillia complex). Therefore, identifying the areas susceptible to
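
The Variance Inflation Factor used as the selection criterion above has a simple definition that is easy to compute directly: VIF_j = 1/(1 − R²_j), where R²_j comes from regressing predictor j on all the others. A minimal sketch (our own helper, not the authors' code):

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of the predictor matrix X.

    Each column is regressed (with intercept) on the remaining columns;
    values above roughly 5-10 are the usual warning signs of
    multicollinearity.
    """
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        yj = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, yj, rcond=None)
        resid = yj - others @ beta
        r2 = 1.0 - resid @ resid / np.sum((yj - yj.mean()) ** 2)
        out[j] = 1.0 / (1.0 - r2)
    return out
```

In a VIF stepwise scheme, candidate variables whose VIF exceeds a chosen threshold would be excluded from the model.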

17. Non-Invasive Methodology to Estimate Polyphenol Content in Extra Virgin Olive Oil Based on Stepwise Multilinear Regression.

Martínez Gila, Diego Manuel; Cano Marchal, Pablo; Gómez Ortega, Juan; Gámez García, Javier

2018-03-25

Olive oil quality is normally assessed by chemical analysis according to international standards. These norms define chemical and organoleptic markers, and depending on the markers, the olive oil can be labelled as lampante, virgin, or extra virgin olive oil (EVOO), the last being an indicator of top quality. The polyphenol content is related to EVOO organoleptic features, and different scientific works have studied the positive influence that these compounds have on human health. This paper focuses on studying the relations between the polyphenol content in olive oil samples and their spectral response in the near-infrared spectrum. In this context, several acquisition parameters were assessed to optimize the measurement process within the virgin olive oil production process. The best regression model reached a mean error of 156.14 mg/kg in leave-one-out cross-validation, and the highest regression coefficient was 0.81 through holdout validation.

18. Non-Invasive Methodology to Estimate Polyphenol Content in Extra Virgin Olive Oil Based on Stepwise Multilinear Regression

Diego Manuel Martínez Gila

2018-03-01

Full Text Available Olive oil quality is normally assessed by chemical analysis according to international standards. These norms define chemical and organoleptic markers, and depending on the markers, the olive oil can be labelled as lampante, virgin, or extra virgin olive oil (EVOO), the last being an indicator of top quality. The polyphenol content is related to EVOO organoleptic features, and different scientific works have studied the positive influence that these compounds have on human health. This paper focuses on studying the relations between the polyphenol content in olive oil samples and their spectral response in the near-infrared spectrum. In this context, several acquisition parameters were assessed to optimize the measurement process within the virgin olive oil production process. The best regression model reached a mean error of 156.14 mg/kg in leave-one-out cross-validation, and the highest regression coefficient was 0.81 through holdout validation.

19. Retrieving relevant factors with exploratory SEM and principal-covariate regression: A comparison.

Vervloet, Marlies; Van den Noortgate, Wim; Ceulemans, Eva

2018-02-12

Behavioral researchers often linearly regress a criterion on multiple predictors, aiming to gain insight into the relations between the criterion and predictors. Obtaining this insight from the ordinary least squares (OLS) regression solution may be troublesome, because OLS regression weights show only the effect of a predictor on top of the effects of other predictors. Moreover, when the number of predictors grows larger, it becomes likely that the predictors will be highly collinear, which makes the regression weights' estimates unstable (i.e., the "bouncing beta" problem). Among other procedures, dimension-reduction-based methods have been proposed for dealing with these problems. These methods yield insight into the data by reducing the predictors to a smaller number of summarizing variables and regressing the criterion on these summarizing variables. Two promising methods are principal-covariate regression (PCovR) and exploratory structural equation modeling (ESEM). Both simultaneously optimize reduction and prediction, but they are based on different frameworks. The resulting solutions have not yet been compared; it is thus unclear what the strengths and weaknesses of each method are. In this article, we focus on the extent to which PCovR and ESEM are able to extract the factors that truly underlie the predictor scores and can predict a single criterion. The results of two simulation studies showed that for a typical behavioral dataset, ESEM (using the BIC for model selection) in this regard is successful more often than PCovR. Yet, in 93% of the datasets PCovR performed equally well, and in the case of 48 predictors, 100 observations, and large differences in the strengths of the factors, PCovR even outperformed ESEM.
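
The closed-form solution of PCovR (de Jong & Kiers, 1992) is compact enough to sketch: the components are the leading eigenvectors of a weighted sum of the predictor cross-product matrix and the outer product of the OLS-fitted criterion. The weighting parameter alpha and the variable names below follow common presentations and are assumptions of this sketch, not code from the article:

```python
import numpy as np

def pcovr(X, y, n_comp, alpha=0.5):
    """Principal covariates regression for centered X (n x p) and y (n,).

    alpha weighs reconstruction of X against prediction of y:
    alpha near 1 approaches the PCA criterion, alpha near 0 emphasizes
    fitting the criterion within the column space of X.
    """
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS fit of y on X
    G = (alpha * (X @ X.T) / np.sum(X ** 2)
         + (1 - alpha) * np.outer(yhat, yhat) / np.sum(y ** 2))
    vals, vecs = np.linalg.eigh(G)
    T = vecs[:, np.argsort(vals)[::-1][:n_comp]]      # orthonormal component scores
    Px = T.T @ X                                      # predictor loadings
    py = T.T @ y                                      # criterion weights
    return T, Px, py
```

With alpha = 0 and one component, the component simply reproduces the OLS fit; intermediate alpha values trade that off against summarizing the predictors.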

20. Principal components and iterative regression analysis of geophysical series: Application to Sunspot number (1750-2004)

Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.

2008-11-01

We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for quantitative periodicity analysis of non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the analyzed series, in decreasing order of significance: from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
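
The core step of such an iterative regression is a least-squares fit of a sine of known frequency, which is linear in a sin/cos basis. A self-contained sketch of that single step (our own illustration, not the authors' Scilab code):

```python
import math

def fit_sine(t, x, freq):
    """Least-squares amplitude and phase of a sine of known frequency.

    Model: x(t) ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t); solving the 2x2
    normal equations gives a and b, from which amplitude and phase follow.
    """
    s = [math.sin(2 * math.pi * freq * ti) for ti in t]
    c = [math.cos(2 * math.pi * freq * ti) for ti in t]
    Sss = sum(si * si for si in s)
    Scc = sum(ci * ci for ci in c)
    Ssc = sum(si * ci for si, ci in zip(s, c))
    Sxs = sum(xi * si for xi, si in zip(x, s))
    Sxc = sum(xi * ci for xi, ci in zip(x, c))
    det = Sss * Scc - Ssc * Ssc
    a = (Sxs * Scc - Sxc * Ssc) / det
    b = (Sxc * Sss - Sxs * Ssc) / det
    return math.hypot(a, b), math.atan2(b, a)   # amplitude, phase
```

The iterative method would apply this fit repeatedly, subtracting each fitted sine before estimating the next one.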

1. Combining multiple regression and principal component analysis for accurate predictions for column ozone in Peninsular Malaysia

Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.

2013-06-01

This study encompasses columnar ozone modelling in Peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the entire period 2003-2008, was employed to develop models to predict the value of columnar ozone (O3) in the study area. The combined method, based on multiple regression together with principal component analysis (PCA), was used to predict columnar ozone and to improve the prediction accuracy. Separate analyses were carried out for the north east monsoon (NEM) and south west monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, whereas it was positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM seasons. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings on varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model. It was found that an increase in columnar O3 is associated with an increase in AST, SSKT, AT, and CO and with a drop in CH4, H2Ovapour, RH, and MSP. Fitting the best models for columnar O3 using eight of the independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables appearing in both regression equations were SSKT, CH4 and RH, and the principal precursor of columnar O3 in both the NEM and SWM seasons was SSKT.
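
The combined PCA-plus-regression workflow described above can be sketched in a few lines: standardize the predictors, project onto the leading principal components, then fit OLS on the component scores. This is a generic illustration of the technique, not the authors' model:

```python
import numpy as np

def pca_regression(X, y, n_comp):
    """Regress y on the leading principal components of standardized X.

    Decorrelating the predictors first sidesteps the multicollinearity
    that plagues a direct multiple regression on raw variables.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize predictors
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:n_comp].T                    # uncorrelated PC scores
    A = np.column_stack([np.ones(len(y)), scores])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    fitted = A @ coef
    r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    return coef, r2
```

In the ozone application, X would hold the eight AIRS parameters and y the columnar O3 values; the varimax-rotation step used by the authors for variable selection is omitted here.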

2. Delineating chalk sand distribution of Ekofisk formation using probabilistic neural network (PNN) and stepwise regression (SWR): Case study Danish North Sea field

Haris, A.; Nafian, M.; Riyanto, A.

2017-07-01

The Danish North Sea fields consist of several formations (Ekofisk, Tor, and Cromer Knoll) spanning the Paleocene to the Miocene. In this study, the integration of seismic and well log data sets is carried out to determine the chalk sand distribution in the Danish North Sea field. The integration is performed using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm used to derive acoustic impedance (AI) is a model-based technique. The derived AI is then used as an external attribute for the input of the multi-attribute analysis. Moreover, the multi-attribute analysis is used to generate linear and non-linear transformations among well log properties. In the linear case, the transformation is selected by weighted step-wise linear regression (SWR), while the non-linear model is built using probabilistic neural networks (PNN). The porosity estimated by PNN fits the well log data better than the SWR result. This can be understood since PNN performs non-linear regression, so the relationship between the attribute data and the predicted log data can be better optimized. The distribution of chalk sand has been successfully identified and characterized by porosity values ranging from 23% up to 30%.

3. Local Prediction Models on Mid-Atlantic Ridge MORB by Principal Component Regression

Ling, X.; Snow, J. E.; Chin, W.

2017-12-01

The isotopic compositions of the daughter isotopes of long-lived radioactive systems (Sr, Nd, Hf, and Pb) can be used to map the scale and history of mantle heterogeneities beneath mid-ocean ridges. Our goal is to relate the multidimensional structure in the existing isotopic dataset to an underlying physical reality of mantle sources. The numerical technique of Principal Component Analysis is useful for reducing the linear dependence of the data to a minimum set of orthogonal eigenvectors encapsulating the information contained (cf. Agranier et al. 2005). The dataset used for this study covers almost all the MORBs along the Mid-Atlantic Ridge (MAR), from 54°S to 77°N and 8.8°W to 46.7°W, replicating the published dataset of Agranier et al. (2005) plus 53 basalt samples dredged and analyzed since then (data from PetDB). The principal components PC1 and PC2 account for 61.56% and 29.21%, respectively, of the total isotope ratio variability. Samples with compositions similar to HIMU, EM, and DM were identified to better understand the PCs. PC1 and PC2 account for HIMU and EM, whereas PC2 has limited control over the DM source. PC3 is more strongly controlled by the depleted mantle source than PC2. This means that all three principal components have a high degree of significance relevant to the established mantle sources. We also tested the relationship between mantle heterogeneity and sample locality. The k-means clustering algorithm is a type of unsupervised learning that finds groups in the data based on feature similarity. The PC factor scores of each sample were clustered into three groups. Clusters one and three alternate along the northern and southern MAR. Cluster two appears from 45.18°N to 0.79°N and 27.9°W to 30.40°W, alternating with cluster one. The ridge has been preliminarily divided into 16 sections considering both the clusters and the ridge segments. The principal component regression models each section based on 6 isotope ratios and the PCs. The
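
The k-means step applied to the PC factor scores is standard Lloyd iteration; a compact sketch with a deterministic initialization (our simplification — production code would typically use k-means++ seeding):

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points.
    """
    # deterministic init: spread starting centers across the dataset rows
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[idx].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

Here the input rows would be the per-sample PC factor scores, with k = 3 as in the study.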

4. Information fusion via constrained principal component regression for robust quantification with incomplete calibrations

Vogt, Frank

2013-01-01

Graphical abstract: Analysis task: determine the albumin (= protein) concentration in microalgae cells as a function of the cells' nutrient availability. Left panel: the albumin concentrations predicted by conventional principal component regression feature low reproducibility and are partially higher than the concentrations of the algae in which the albumin is contained. Right panel: augmenting an incomplete PCR calibration with additional expert information derives reasonable albumin concentrations, which now reveal a significant dependency on the algae's nutrient situation. -- Highlights: •Make quantitative analyses of compounds embedded in largely unknown chemical matrices robust. •Improve concentration prediction with originally insufficient calibration models. •A chemometric approach for incorporating expertise from other fields and/or researchers. •Ensure chemical, biological, or medicinal meaningfulness of quantitative analyses. -- Abstract: Incomplete calibrations are encountered in many applications and hamper chemometric data analyses. Such situations arise when target analytes are embedded in a chemically complex matrix from which calibration concentrations cannot be determined with reasonable effort. In other cases, the samples' chemical composition may fluctuate in an unpredictable way and thus cannot be comprehensively covered by calibration samples. The reason for a calibration model to fail is the regression principle itself, which seeks to explain measured data optimally in terms of the (potentially incomplete) calibration model but does not consider chemical meaningfulness. This study presents a novel chemometric approach based on experimentally feasible calibrations, i.e. concentration series of the target analytes outside the chemical matrix ('ex situ calibration'). The inherent lack of information is then compensated by incorporating additional knowledge in the form of regression constraints. Any outside knowledge can be

5. Principal components based support vector regression model for on-line instrument calibration monitoring in NPPs

Seo, In Yong; Ha, Bok Nam; Lee, Sung Woo; Shin, Chang Hoon; Kim, Seong Jun

2010-01-01

In nuclear power plants (NPPs), periodic sensor calibrations are required to assure that sensors are operating correctly. Because the sensors' operating status is checked only at each refueling outage, faulty sensors may remain undetected for periods of up to 24 months. Moreover, typically only a few of the calibrated sensors are actually found to be faulty. For the safe operation of NPPs and the reduction of unnecessary calibration, on-line instrument calibration monitoring is needed. In this study, principal component based auto-associative support vector regression (PCSVR) using response surface methodology (RSM) is proposed for sensor signal validation in NPPs. This paper describes the design of a PCSVR-based sensor validation system for a power generation system. RSM is employed to determine the optimal values of the SVR hyperparameters and is compared to the genetic algorithm (GA). The proposed PCSVR model is confirmed with actual plant data from Kori Nuclear Power Plant Unit 3 and is compared with auto-associative support vector regression (AASVR) and auto-associative neural network (AANN) models. The auto-sensitivity of AASVR is improved approximately sixfold by using PCA, resulting in good detection of sensor drift. Compared to AANN, accuracy and cross-sensitivity are better while the auto-sensitivity is almost the same. Meanwhile, the proposed RSM for the optimization of the PCSVR algorithm performs even better in terms of accuracy, auto-sensitivity, and averaged maximum error, except in averaged RMS error, and is much more time efficient than the conventional GA method

6. Real time damage detection using recursive principal components and time varying auto-regressive modeling

Krishnan, M.; Bhowmik, B.; Hazra, B.; Pakrashi, V.

2018-02-01

In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using Recursive Principal Component Analysis (RPCA) in conjunction with Time Varying Auto-Regressive (TVAR) modeling is proposed. In this method, the acceleration data are used to obtain recursive proper orthogonal components online using the rank-one perturbation method, followed by TVAR modeling of the first transformed response, to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/non-linear states that indicate damage. Most of the works available in the literature deal with algorithms that require windowing of the gathered data owing to their data-driven nature, which renders them ineffective for online implementation. Algorithms focused on mathematically consistent recursive techniques in a rigorous theoretical framework of structural damage detection are missing, which motivates the development of the present framework, amenable to online implementation and usable alongside a suite of experimental and numerical investigations. The RPCA algorithm iterates the eigenvector and eigenvalue estimates of the sample covariance matrix with each new data point at successive time instants, using the rank-one perturbation method. TVAR modeling on the principal component explaining the maximum variance is utilized, and damage is identified by tracking the TVAR coefficients. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data, without requiring any baseline data. Numerical simulations performed on a 5-dof nonlinear system under white noise excitation and El Centro (the 1940 Imperial Valley earthquake) excitation, for different damage scenarios, demonstrate the robustness of the proposed algorithm. The method is further validated on results obtained from case studies involving
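
A didactic stand-in for the recursive step is easy to sketch. Note the simplifications: the paper updates the eigenpairs directly via rank-one perturbation to avoid the full eigendecomposition done here, and the exponential forgetting factor is our assumption:

```python
import numpy as np

def rpca_update(cov, x, forget=0.99):
    """One recursive step: rank-one update of the running covariance with
    the new sample x, then refresh the proper orthogonal components.

    Returns the updated covariance plus eigenvalues/eigenvectors sorted
    in decreasing order of explained variance.
    """
    cov = forget * cov + (1 - forget) * np.outer(x, x)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]
    return cov, vals[order], vecs[:, order]
```

Tracking TVAR coefficients of the projection onto the first returned eigenvector would then flag changes in the system's dynamic behavior.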

7. A water quality index model using stepwise regression and neural networks models for the Piabanha River basin in Rio de Janeiro, Brazil

Villas Boas, M. D.; Olivera, F.; Azevedo, J. S.

2013-12-01

The evaluation of water quality through 'indexes' is widely used in environmental sciences. There are a number of methods available for calculating water quality indexes (WQI), usually based on site-specific parameters. In Brazil, WQI were initially used in the 1970s and were adapted from the methodology developed in association with the National Science Foundation (Brown et al., 1970). Specifically, the WQI 'IQA/SCQA', developed by the Institute of Water Management of Minas Gerais (IGAM), is estimated based on nine parameters: Temperature Range, Biochemical Oxygen Demand, Fecal Coliforms, Nitrate, Phosphate, Turbidity, Dissolved Oxygen, pH and Electrical Conductivity. The goal of this study was to develop a model for calculating the IQA/SCQA for the Piabanha River basin in the State of Rio de Janeiro (Brazil), using only the parameters measurable by a Multiparameter Water Quality Sonde (MWQS) available in the study area: Dissolved Oxygen, pH and Electrical Conductivity. The use of this model will allow the water quality monitoring network in the basin to be extended without requiring significant increases in resources, since water quality measurement with an MWQS is less expensive than the laboratory analysis required for the other parameters. The water quality data used in the study were obtained by the Geological Survey of Brazil in partnership with other public institutions (i.e. universities and environmental institutes) as part of the project "Integrated Studies in Experimental and Representative Watersheds". Two models were developed to correlate the values of the three measured parameters and the IQA/SCQA values calculated based on all nine parameters. The results were evaluated according to the following validation statistics: coefficient of determination (R2), Root Mean Square Error (RMSE), Akaike information criterion (AIC), and Final Prediction Error (FPE). The first model was a linear stepwise regression between three independent variables

8. Principal Covariates Clusterwise Regression (PCCR) : Accounting for multicollinearity and population heterogeneity in hierarchically organized data.

Wilderjans, Tom F.; Van de Gaer, E.; Kiers, H.A.L.; Van Mechelen, Iven; Ceulemans, Eva

In the behavioral sciences, many research questions pertain to a regression problem in that one wants to predict a criterion on the basis of a number of predictors. Although in many cases, ordinary least squares regression will suffice, sometimes the prediction problem is more challenging, for three

9. INDIA’S ELECTRICITY DEMAND FORECAST USING REGRESSION ANALYSIS AND ARTIFICIAL NEURAL NETWORKS BASED ON PRINCIPAL COMPONENTS

S. Saravanan

2012-07-01

Full Text Available Power system planning starts with electric load (demand) forecasting. Accurate electricity load forecasting is one of the most important challenges in managing the supply and demand of electricity, since electricity demand is volatile in nature and electricity cannot be stored, so it has to be consumed instantly. This study deals with electricity consumption in India, forecasting the future projection of demand for a period of 19 years, from 2012 to 2030. The eleven input variables used are amount of CO2 emission, population, per capita GDP, per capita gross national income, gross domestic savings, industry, consumer price index, wholesale price index, imports, exports, and per capita power consumption. A new methodology based on Artificial Neural Networks (ANNs) using principal components is also used. Twenty-nine years of data were used for training and ten years of data for testing the ANNs. Comparisons were made with multiple linear regression (based on the original data and on the principal components) and with ANNs using the original data as input variables. The results show that the use of ANNs with principal components (PC) is more effective.

10. Using synthetic data to evaluate multiple regression and principal component analyses for statistical modeling of daily building energy consumption

Reddy, T.A. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States)); Claridge, D.E. (Energy Systems Lab., Texas A and M Univ., College Station, TX (United States))

1994-01-01

Multiple regression modeling of monitored building energy use data is often faulted as a means of predicting energy use, on the grounds that multicollinearity between the regressor variables can lead both to improper interpretation of the relative importance of the various physical regressor parameters and to a model with unstable regressor coefficients. Principal component analysis (PCA) has the potential to overcome such drawbacks. While a few case studies have already attempted to apply this technique to building energy data, the objectives of this study were to make a broader evaluation of PCA and multiple regression analysis (MRA) and to establish guidelines under which one approach is preferable to the other. Four geographic locations in the US with different climatic conditions were selected, and synthetic data sequences representative of daily energy use in large institutional buildings were generated in each location using a linear model with outdoor temperature, outdoor specific humidity, and solar radiation as the three regressor variables. The MRA and PCA approaches were then applied to these data sets and their relative performances were compared. Conditions under which PCA seems to perform better than MRA were identified, and preliminary recommendations on the use of either modeling approach were formulated.

11. Principal Component Regression Analysis of the Relation Between CIELAB Color and Monomeric Anthocyanins in Young Cabernet Sauvignon Wines

Chang-Qing Duan

2008-11-01

Full Text Available Color is one of the key characteristics used to evaluate the sensory quality of red wine, and anthocyanins are the main contributors to color. Monomeric anthocyanins and CIELAB color values were investigated by HPLC-MS and spectrophotometry during fermentation of Cabernet Sauvignon red wine, and principal component regression (PCR), a statistical tool, was used to establish a linkage between the detected anthocyanins and wine coloring. The results showed that 14 monomeric anthocyanins could be identified in wine samples, and all of these anthocyanins were negatively correlated with the L*, b* and H*ab values, but positively correlated with the a* and C*ab values. On an equal concentration basis for each detected anthocyanin, cyanidin-3-O-glucoside (Cy3-glu) had the most influence on CIELAB color value, while malvidin 3-O-glucoside (Mv3-glu) had the least. The color values of various monomeric anthocyanins were influenced by their structures, substituents on the B-ring, acyl groups on the glucoside and the molecular steric structure. This work develops a statistical method for evaluating the correlation between wine color and monomeric anthocyanins, and also provides a basis for elucidating the effect of intramolecular copigmentation on wine coloring.

12. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

Shabri, Ani; Samsudin, Ruhaidah

2014-01-01

Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries at different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to obtain the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction capability of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
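The decomposition step can be illustrated with a single-level Haar split, the simplest wavelet filter bank; the Mallat algorithm applies such averaging/differencing filters recursively to obtain subseries at several scales. The price series below is made up for illustration.

```python
# One level of a Haar wavelet decomposition: split a series into a smooth
# approximation (pairwise averages) and a detail subseries (pairwise
# half-differences). The Mallat algorithm repeats this on the approximation
# to reach coarser scales.
def haar_level(series):
    approx = [(a + b) / 2 for a, b in zip(series[0::2], series[1::2])]
    detail = [(a - b) / 2 for a, b in zip(series[0::2], series[1::2])]
    return approx, detail

def haar_reconstruct(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

prices = [96.0, 98.0, 97.0, 101.0, 103.0, 102.0, 105.0, 104.0]  # illustrative
approx, detail = haar_level(prices)
assert haar_reconstruct(approx, detail) == prices  # the split is lossless
print(approx)   # → [97.0, 99.0, 102.5, 104.5]
print(detail)   # → [-1.0, -2.0, 0.5, 0.5]
```

Each subseries (here `approx` and `detail`) can then be modeled separately, as the hybrid WMLR scheme does before recombining forecasts.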

13. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

Ani Shabri

2014-01-01

Full Text Available Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries at different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to obtain the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction capability of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.

14. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

Shabri, Ani; Samsudin, Ruhaidah

2014-01-01

Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first selected to decompose an original time series into several subseries at different scales. Then, principal component analysis (PCA) is used in processing the subseries data in MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to obtain the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction capability of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.

15. Using both principal component analysis and reduced rank regression to study dietary patterns and diabetes in Chinese adults.

Batis, Carolina; Mendez, Michelle A; Gordon-Larsen, Penny; Sotres-Alvarez, Daniela; Adair, Linda; Popkin, Barry

2016-02-01

We examined the association between dietary patterns and diabetes using the strengths of two methods: principal component analysis (PCA) to identify the eating patterns of the population and reduced rank regression (RRR) to derive a pattern that explains the variation in glycated Hb (HbA1c), homeostasis model assessment of insulin resistance (HOMA-IR) and fasting glucose. We measured diet over a 3 d period with 24 h recalls and a household food inventory in 2006 and used it to derive PCA and RRR dietary patterns. The outcomes were measured in 2009. Adults (n 4316) from the China Health and Nutrition Survey. The adjusted odds ratio for diabetes prevalence (HbA1c≥6·5 %), comparing the highest dietary pattern score quartile with the lowest, was 1·26 (95 % CI 0·76, 2·08) for a modern high-wheat pattern (PCA; wheat products, fruits, eggs, milk, instant noodles and frozen dumplings), 0·76 (95 % CI 0·49, 1·17) for a traditional southern pattern (PCA; rice, meat, poultry and fish) and 2·37 (95 % CI 1·56, 3·60) for the pattern derived with RRR. By comparing the dietary pattern structures of RRR and PCA, we found that the RRR pattern was also behaviourally meaningful. It combined the deleterious effects of the modern high-wheat pattern (high intakes of wheat buns and breads, deep-fried wheat and soya milk) with the deleterious effects of consuming the opposite of the traditional southern pattern (low intakes of rice, poultry and game, fish and seafood). Our findings suggest that using both PCA and RRR provided useful insights when studying the association of dietary patterns with diabetes.

16. Predicting Success in Product Development: The Application of Principal Component Analysis to Categorical Data and Binomial Logistic Regression

Glauco H.S. Mendes

2013-09-01

Full Text Available Critical success factors in new product development (NPD) in Brazilian small and medium enterprises (SMEs) are identified and analyzed. Critical success factors are best practices that can be used to improve NPD management and performance in a company. Traditionally, these factors are identified through surveys, and the collected data are then reduced through multivariate analysis. The objective of this work is to develop a logistic regression model for predicting the success or failure of new product development. This model allows for an evaluation and prioritization of resource commitments. The results will be helpful for guiding management actions, as one way to improve NPD performance in those industries.

17. Integrating principal component analysis and vector quantization with support vector regression for sulfur content prediction in HDS process

Shokri Saeid

2015-01-01

Full Text Available An accurate prediction of sulfur content is very important for proper operation and product quality control in the hydrodesulfurization (HDS) process. For this purpose, a reliable data-driven soft sensor utilizing support vector regression (SVR) was developed, and the effects of integrating vector quantization (VQ) with principal component analysis (PCA) on the performance of this soft sensor were studied. First, in a pre-processing step, the PCA and VQ techniques were used to reduce the dimensions of the original input datasets. Then, the compressed datasets were used as input variables for the SVR model. Experimental data from the HDS setup were employed to validate the proposed integrated model. The integration of the VQ/PCA techniques with the SVR model was able to increase the prediction accuracy of SVR. The obtained results show that the integrated technique (VQ-SVR) was better than PCA-SVR in prediction accuracy. Also, VQ decreased the sum of the training and test times of the SVR model in comparison with PCA. For further evaluation, the performance of the VQ-SVR model was also compared to that of SVR alone. The obtained results indicated that the VQ-SVR model delivered the most satisfactory predicting performance (AARE = 0.0668 and R2 = 0.995) in comparison with the investigated models.

18. Source apportionment of soil heavy metals using robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR) receptor model.

Qu, Mingkai; Wang, Yan; Huang, Biao; Zhao, Yongcun

2018-06-01

The traditional source apportionment models, such as absolute principal component scores-multiple linear regression (APCS-MLR), are usually susceptible to outliers, which may be widely present in the regional geochemical dataset. Furthermore, the models are merely built on variable space instead of geographical space and thus cannot effectively capture the local spatial characteristics of each source's contributions. To overcome these limitations, a new receptor model, robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR), was proposed based on the traditional APCS-MLR model. Then, the new method was applied to the source apportionment of soil metal elements in a region of Wuhan City, China as a case study. Evaluations revealed that: (i) the RAPCS-RGWR model had better performance than the APCS-MLR model in the identification of the major sources of soil metal elements, and (ii) source contributions estimated by the RAPCS-RGWR model were closer to the true soil metal concentrations than those estimated by the APCS-MLR model. It is shown that the proposed RAPCS-RGWR model is a more effective source apportionment method than APCS-MLR (i.e., a non-robust and global model) in dealing with the regional geochemical dataset. Copyright © 2018 Elsevier B.V. All rights reserved.

19. The Effectiveness of the New Stepwise Method for Variable Selection in Multiple Regression Models [Efektivitas Metode New Stepwise dalam Pemilihan Variabel pada Model Regresi Ganda]

Thamrin Tayeb

2017-12-01

Full Text Available The new stepwise method is a method of selecting predictor variables in a linear regression model. This method is an extension of principal component regression and consists of selecting the original predictor variables iteratively while, at the same time, a subset of principal components is selected repeatedly. The method also has the basic properties of the stepwise method, thus giving the best combination of stepwise selection and principal component selection. A model obtained using this method is characterized by a low PRESS value. The application of this method is not limited to linear models but can also be extended to generalized linear models. The comparison of both methods, based on the R2 criterion in variable selection, yielded R2 values that are almost the same for both models in the case of the solid waste data; so, having paid full attention to the number of predictor variables entered into the models, it can be said that the new stepwise method tends to be better than principal component regression.

20. The comparison of partial least squares and principal component regression in simultaneous spectrophotometric determination of ascorbic acid, dopamine and uric acid in real samples

Habiboallah Khajehsharifi

2017-05-01

Full Text Available Partial least squares (PLS1) and principal component regression (PCR) are two multivariate calibration methods that allow simultaneous determination of several analytes in spite of their overlapping spectra. In this research, a spectrophotometric method using PLS1 is proposed for the simultaneous determination of ascorbic acid (AA), dopamine (DA) and uric acid (UA). The linear concentration ranges for AA, DA and UA were 1.76–47.55, 0.57–22.76 and 1.68–28.58 μg mL−1, respectively. PLS1 and PCR were applied to a calibration set based on absorption spectra in the 250–320 nm range for 36 different mixtures of AA, DA and UA; in all cases, the PLS1 calibration method showed better quantitative prediction ability than the PCR method. Cross-validation was used to select the optimum number of principal components (NPC). The NPC for AA, DA and UA was found to be 4 by PLS1 and 5, 12, 8 by PCR. The prediction error sum of squares (PRESS) values for AA, DA and UA were 1.2461, 1.1144, 2.3104 for PLS1 and 11.0563, 1.3819, 4.0956 for PCR, respectively. Satisfactory results were achieved for the simultaneous determination of AA, DA and UA in some real samples such as human urine, serum and pharmaceutical formulations.

1. Stepwise management of asthma.

Khalid, Ayesha N

2015-09-01

Stepwise management of asthma remains an area of evolving research. Asthma is one of the most expensive chronic diseases in the United States; stepwise management is an important area of focus, with several recent guidelines recommending it. This is a review of published English-language literature, focusing on management guidelines for asthma in adult and pediatric patients. Asthma is a chronic disease whose assessment of severity allows therapeutic goals to match the impairment noted. Good evidence exists to aid risk reduction, leading to decreased emergency room visits, preventing loss of lung function in adults and of lung growth in children, and optimizing pharmacotherapy with a reduced side-effect profile. Recent asthma management guidelines incorporate 4 components of asthma care: monitoring of severity, patient education, controlling external triggers, and medications, including recent attention to medication adherence. Asthma is an expensive chronic disease, with preventive measures leading to reduced healthcare costs. Future targeted cytokine therapy to decrease serum and blood eosinophils may become an integral part of asthma management. © 2015 ARS-AAOA, LLC.

2. A stepwise model to predict monthly streamflow

Mahmood Al-Juboori, Anas; Guven, Aytac

2016-12-01

In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of a month t is considered a function of the antecedent month's flow (t - 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison results, based on five different statistical measures, show that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.

3. Modeling Governance KB with CATPCA to Overcome Multicollinearity in the Logistic Regression

Khikmah, L.; Wijayanto, H.; Syafitri, U. D.

2017-04-01

A problem often encountered in logistic regression modeling is multicollinearity. Multicollinearity between explanatory variables results in biased parameter estimates and in classification errors. In general, stepwise regression is used to overcome multicollinearity in regression. There is also another method, which involves all variables in the prediction: principal component analysis (PCA). However, classical PCA is only for numeric data; if the data are categorical, one method to solve the problem is categorical principal component analysis (CATPCA). The data used in this research were part of the Indonesia Demographic and Health Survey (IDHS) 2012. This research focuses on the characteristics of women using contraceptive methods. Classification results were evaluated using area under the curve (AUC) values; the higher the AUC value, the better. Based on AUC values, the classification of the contraceptive method using the stepwise method (58.66%) is better than the logistic regression model (57.39%) and CATPCA (57.39%). Evaluation of the results of logistic regression using sensitivity shows the opposite, where the CATPCA method (99.79%) is better than the logistic regression method (92.43%) and stepwise (92.05%). Since this study focuses on classification of the major class (using a contraceptive method), the selected model is CATPCA, because it raises the accuracy for the major class.

4. Multiple linear regression analysis

Edwards, T. R.

1980-01-01

Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.

5. Step-wise stimulated martensitic transformations

Airoldi, G.; Riva, G.

1991-01-01

NiTi alloys, widely known both for their shape memory properties and for their unusual pseudoelastic behaviour, are now at the forefront of attention for step-wise induced memory processes, thermally or stress stimulated. Literature results related to step-wise stimulated martensite (the direct transformation) are examined and contrasted with the step-wise thermally stimulated parent phase (the reverse transformation). Hypotheses are given to explain the key characteristics of both transformations, a thermodynamic model from first principles being so far lacking

6. Step-wise refolding of recombinant proteins.

Tsumoto, Kouhei; Arakawa, Tsutomu; Chen, Linda

2010-04-01

Protein refolding is still done on a trial-and-error basis. Here we describe step-wise dialysis refolding, in which the denaturant concentration is altered in a step-wise fashion. This technology controls the folding pathway by adjusting the concentrations of the denaturant and other solvent additives to induce sequential folding or disulfide formation.

7. Use of the stepwise progression return-to-play protocol following concussion among practicing athletic trainers

Jessica Wallace

2018-04-01

Full Text Available Purpose: The purpose of this study was to determine whether practicing athletic trainers (ATs) were using the stepwise progression to make return-to-play (RTP) decisions after concussion and to determine what factors influenced their decision to use the stepwise progression. Methods: A total of 166 ATs (response rate = 16.6%) completed a 21-item questionnaire that evaluated participant demographics, methods of concussion management, and RTP decision-making using the stepwise progression. Descriptive statistics and a logistic regression were completed to analyze data. Results: Factors such as education level (p = 0.05) and number of concussions treated (p = 0.05) predicted use of the stepwise progression, whereas sex (p = 0.17), employment setting (p = 0.17), state law (p = 0.86), and years practicing (p = 0.17) did not predict whether ATs were following the stepwise progression. Conclusion: The majority of the ATs from this study are employing the stepwise progression to safely return athletes to play after sustaining a concussion. This demonstrates that ATs are providing a standard of care for concussed athletes across various athletic training settings; however, having a graduate degree and treating more concussions per year are predictors of whether an AT follows all steps of the stepwise progression. Keywords: Athletic trainers, Concussion, Concussion management, Graduate degree, Return to play, Sports medicine, Stepwise progression

8. Modelling Monthly Mental Sickness Cases Using Principal ...

The methodology was principal component analysis (PCA), using data obtained from the hospital to estimate regression coefficients and parameters. It was found that the principal component regression model that was derived was a good predictive tool. The principal component regression model obtained was okay and this ...

9. Principal Ports

National Oceanic and Atmospheric Administration, Department of Commerce — Principal Ports are defined by port limits or US Army Corps of Engineers (USACE) projects, these exclude non-USACE projects not authorized for publication. The...

10. Impact of multicollinearity on small sample hydrologic regression models

Kroll, Charles N.; Song, Peter

2013-06-01

Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how best to address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
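The VIF screen compared above flags multicollinearity by regressing each explanatory variable on the others, VIF_j = 1/(1 − R²_j); for two regressors this reduces to 1/(1 − r²), with r their correlation. A minimal sketch with invented basin characteristics (a VIF above roughly 10 is commonly treated as problematic):

```python
import random

random.seed(1)

# With exactly two regressors, VIF_1 = VIF_2 = 1 / (1 - r^2), where r is
# their Pearson correlation; a common rule of thumb flags VIF > 10.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Toy basin characteristics: mean elevation and a strongly correlated
# second variable (fabricated for illustration).
elev = [random.gauss(500, 100) for _ in range(300)]
precip = [0.9 * e + random.gauss(0, 20) for e in elev]

r = pearson(elev, precip)
vif = 1 / (1 - r ** 2)
print(vif > 10)   # a strongly collinear pair fails the screen
```

The study's point is that such a screen can reject the most predictive models precisely because their variables are cross-correlated, which is worth keeping in mind before filtering on VIF alone.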

11. Significance of stepwise excretion pattern in renogram

Tamaki, Nagara; Ishihara, Takashi; Mori, Toru; Bito, Sanae; Ito, Hidetomi

1981-01-01

In 204 routine renogram examinations using ¹³¹I-iodohippurate, stepwise excretion curves were observed in 22 cases (14 with chronic thyroiditis, 4 with idiopathic edema, 3 with lower urinary tract disorders, and 1 with Bartter's syndrome). This phenomenon was observed in 74% of euthyroid edematous patients with chronic thyroiditis and 57% of patients with idiopathic edema. The stepwise pattern was considered to have certain correlations with spasm or increased peristalsis of the urinary tract, through studies of excretory urograms, butylscopolamine-treated renograms, and regional renograms. In one of these edematous patients with chronic thyroiditis, this renogram pattern could not be reproduced after bed rest, corresponding with the clinical evidence that physical rest reduces the edema. Thus, the stepwise excretory pattern in the renogram seemed to be a useful indicator of fluctuating edema in patients with chronic thyroiditis and idiopathic edema. (author)

12. Cushing's syndrome: Stepwise approach to diagnosis

Lila, Anurag R.; Sarathi, Vijaya; Jagtap, Varsha S.; Bandgar, Tushar; Menon, Padmavathy; Shah, Nalini S.

2011-01-01

The projected prevalence of Cushing's syndrome (CS), inclusive of subclinical cases, in the adult population ranges from 0.2–2%, and it may no longer be considered an orphan disease (2–3 cases/million/year). The recognition of CS by physicians is important for early diagnosis and treatment. Late-night salivary cortisol, the dexamethasone suppression test, or 24-h urine free cortisol are good screening tests. Positively screened cases need stepwise evaluation by an endocrinologist. This paper discusses the importance of screening for CS and suggests a stepwise diagnostic approach to a case of suspected hypercortisolism. PMID:22145134

13. Dual Regression

2012-01-01

We propose dual regression as an alternative to the quantile regression process for the global estimation of conditional distribution functions under minimal assumptions. Dual regression provides all the interpretational power of the quantile regression process while avoiding the need for repairing the intersecting conditional quantile surfaces that quantile regression often produces in practice. Our approach introduces a mathematical programming characterization of conditional distribution f...

14. Principal components

Hallin, M.; Hörmann, S.; Piegorsch, W.; El Shaarawi, A.

2012-01-01

Principal Components are probably the best known and most widely used of all multivariate analysis techniques. The essential idea consists in performing a linear transformation of the observed k-dimensional variables in such a way that the new variables are vectors of k mutually orthogonal

15. Retro-regression--another important multivariate regression improvement.

Randić, M

2001-01-01

We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

16. Cushing's syndrome: Stepwise approach to diagnosis

Lila, Anurag R.; Sarathi, Vijaya; Jagtap, Varsha S.; Bandgar, Tushar; Menon, Padmavathy; Shah, Nalini S.

2011-01-01

The projected prevalence of Cushing's syndrome (CS), inclusive of subclinical cases, in the adult population ranges from 0.2–2%, and it may no longer be considered an orphan disease (2–3 cases/million/year). The recognition of CS by physicians is important for early diagnosis and treatment. Late-night salivary cortisol, the dexamethasone suppression test, or 24-h urine free cortisol are good screening tests. Positively screened cases need stepwise evaluation by an endocrinologist. This paper disc...

17. Multiple-Objective Stepwise Calibration Using Luca

Hay, Lauren E.; Umemoto, Makiko

2007-01-01

This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.

18. Regression Phalanxes

Zhang, Hongyang; Welch, William J.; Zamar, Ruben H.

2017-01-01

Tomal et al. (2015) introduced the notion of "phalanxes" in the context of rare-class detection in two-class classification problems. A phalanx is a subset of features that work well for classification tasks. In this paper, we propose a different class of phalanxes for application in regression settings. We define a "Regression Phalanx" - a subset of features that work well together for prediction. We propose a novel algorithm which automatically chooses Regression Phalanxes from high-dimensi...

19. Regression filter for signal resolution

Matthes, W.

1975-01-01

The problem considered is that of resolving a measured pulse height spectrum of a material mixture, e.g. a gamma-ray spectrum or Raman spectrum, into a weighted sum of the spectra of the individual constituents. The model on which the analytical formulation is based is described. The problem reduces to that of a multiple linear regression. A stepwise linear regression procedure was constructed. The efficiency of this method was then tested by turning the procedure into a computer programme, which was used to unfold test spectra obtained by mixing some spectra from a library of arbitrarily chosen spectra and adding a noise component. (U.K.)
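With the library spectra known, the unfolding described above is a multiple linear regression of the measured spectrum on the constituent spectra; for two constituents the normal equations solve in closed form. The toy spectra below are fabricated, and the sketch omits the stepwise selection and noise handling of the actual program.

```python
# Resolve a measured spectrum y as y ≈ w1*s1 + w2*s2 by solving the 2x2
# normal equations (s_i . s_j) w = (s_i . y) in closed form — the
# multiple-linear-regression core of the spectrum-unfolding problem.
def unmix(s1, s2, y):
    a11 = sum(a * a for a in s1)
    a12 = sum(a * b for a, b in zip(s1, s2))
    a22 = sum(b * b for b in s2)
    b1 = sum(a * c for a, c in zip(s1, y))
    b2 = sum(b * c for b, c in zip(s2, y))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

s1 = [1.0, 3.0, 2.0, 0.5]      # library spectrum of constituent 1 (toy)
s2 = [0.5, 1.0, 4.0, 2.0]      # library spectrum of constituent 2 (toy)
true_w = (2.0, 3.0)
y = [true_w[0] * a + true_w[1] * b for a, b in zip(s1, s2)]  # noise-free mix

w1, w2 = unmix(s1, s2, y)
print(round(w1, 6), round(w2, 6))   # → 2.0 3.0
```

On noise-free data the fit recovers the mixing weights exactly; the stepwise variant additionally decides which library spectra to include at all.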

20. Variable Selection for Regression Models of Percentile Flows

2017-12-01

Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high
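The automatic stepwise procedure compared above can be sketched as greedy forward selection: repeatedly add the variable that most increases R², stopping when the gain falls below a threshold. The data, the 0.01 gain threshold, and the helper names below are illustrative assumptions, not the study's setup.

```python
import random

random.seed(2)

def ols_r2(X_cols, y):
    """R^2 of an OLS fit of y on the given columns plus an intercept,
    via Gaussian elimination on the normal equations."""
    n = len(y)
    cols = [[1.0] * n] + X_cols                 # prepend intercept column
    k = len(cols)
    A = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    b = [sum(a * c for a, c in zip(cols[i], y)) for i in range(k)]
    for i in range(k):                           # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):               # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    pred = [sum(beta[j] * cols[j][t] for j in range(k)) for t in range(n)]
    ym = sum(y) / n
    return 1 - sum((a - p) ** 2 for a, p in zip(y, pred)) / sum((a - ym) ** 2 for a in y)

def forward_stepwise(X, y, min_gain=0.01):
    """Greedily add the column index giving the largest R^2 gain."""
    chosen, r2 = [], 0.0
    while True:
        best, best_r2 = None, r2
        for j in range(len(X)):
            if j in chosen:
                continue
            cand = ols_r2([X[i] for i in chosen + [j]], y)
            if cand > best_r2:
                best, best_r2 = j, cand
        if best is None or best_r2 - r2 < min_gain:
            return chosen, r2
        chosen.append(best)
        r2 = best_r2

# Toy basin data: y depends on variables 0 and 2; variable 1 is pure noise.
n = 150
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]
y = [3 * X[0][t] - 2 * X[2][t] + random.gauss(0, 0.5) for t in range(n)]
chosen, r2 = forward_stepwise(X, y)
print(sorted(chosen), round(r2, 2))
```

Note that this greedy search evaluates only a tiny fraction of the possible subsets, which is exactly why the study benchmarks it against dedicated feature-selection methods.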

1. Stepwise approach to myopathy in systemic disease.

Chawla, Jasvinder

2011-01-01

Muscle diseases can constitute a large variety of both acquired and hereditary disorders. Myopathies in systemic disease result from several different disease processes including endocrine, inflammatory, paraneoplastic, infectious, drug- and toxin-induced, critical illness myopathy, metabolic, and myopathies with other systemic disorders. Patients with systemic myopathies often present acutely or subacutely. On the other hand, familial myopathies or dystrophies generally present in a chronic fashion, with the exception of metabolic myopathies, where symptoms on occasion can be precipitated acutely. Most of the inflammatory myopathies can have a chance association with malignant lesions; the incidence appears to be specifically increased only in patients with dermatomyositis. In dealing with myopathies associated with systemic illnesses, the focus will be on the acquired causes. Management is beyond the scope of this chapter. Prognosis depends upon the underlying cause and is, most of the time, good. In order to approach a patient with suspected myopathy from systemic disease, a stepwise approach is utilized.

2. Combining Alphas via Bounded Regression

2015-11-01

Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or a skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
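A generic way to impose bounds on regression weights is projected gradient descent; the sketch below is an illustration under that assumption, not the algorithm or source code from the paper:

```python
import numpy as np

def bounded_regression(X, y, lo, hi, steps=5000):
    """Least-squares fit of y ~ X @ w with box constraints lo <= w <= hi,
    via projected gradient descent (an illustrative stand-in method)."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2   # safe step size: 1 / sigma_max^2
    w = np.clip(np.zeros(X.shape[1]), lo, hi)
    for _ in range(steps):
        w = np.clip(w - lr * (X.T @ (X @ w - y)), lo, hi)  # step, then project
    return w

# toy data: one weight is negative without bounds, so it hits the lower bound
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = X @ np.array([0.9, -0.5, 0.3, 0.0])
w = bounded_regression(X, y, lo=0.0, hi=1.0)  # weights forced into [0, 1]
```

With the bounds active, the weight whose unconstrained value is -0.5 is pinned near the lower bound while the others stay close to their unconstrained values.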

3. Autistic Regression

Matson, Johnny L.; Kozlowski, Alison M.

2010-01-01

Autistic regression is one of the many mysteries in the developmental course of autism and pervasive developmental disorders not otherwise specified (PDD-NOS). Various definitions of this phenomenon have been used, further clouding the study of the topic. Despite this problem, some efforts at establishing prevalence have been made. The purpose of…

4. Linear regression

Olive, David J

2017-01-01

This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

5. FPGA Implementation of the stepwise shutdown system

Lotjonen, L.

2012-07-01

This report elaborates the design process of applications for field-programmable gate array (FPGA) devices. Brief introductions to FPGA technology and the design process are first given and then the design phases are walked through with the aid of a case study. An FPGA is a programmable logic device that is programmed by the customer rather than the manufacturer. They are also usually re-programmable, which enables updating their programming and otherwise modifying the design. There are also one-time programmable FPGAs that can be used when security issues require it. FPGA is said to be 'hardware designed like software', which means that the design process resembles software development but the end product is considered a hardware application because the execution of the functions is entirely different from a microprocessor. This duality can give both the flexibility of software and the reliability of hardware. The FPGA design and verification and validation (V and V) methods for NPP safety systems have not yet matured because the technology is rather new in the field. Software development methods and standards can be used to some extent but the hardware aspects bring new challenges that cannot be tackled using purely software methods. International efforts are being made to develop formal and consistent design and V and V methodology regulations for FPGA devices. A preventive safety function called Stepwise Shutdown System (SWS) was implemented on an Actel M1 IGLOO field-programmable gate array (FPGA) device. SWS is used to drive a process into a normal state if the process measurements deviate from the desired operating values. This can happen in case of process disturbances. The SWS implementation process from the requirements to the functional device is elaborated. The design is tested via simulation and hardware testing. The case study is to be further expanded as a part of a master's thesis. (orig.)

7. Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW

DeLoach, Richard; Philipsen, Iwan

2007-01-01

This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the literature of experiment design as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German Dutch Wind Tunnels (DNW) have applied MDOE methods to evaluate them in the calibration of a balance using an automated calibration machine. The data have been sent to Langley Research Center for analysis and comparison. This paper reports key findings from this analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.

8. The Performance of Step-Wise Group Screening Designs

M.M. Manene

2005-06-01

Full Text Available In this paper we evaluate the performance of step-wise group screening designs in which group-factors contain an equal number of factors in the initial step. A usual assumption in group screening designs is that the directions of possible effects are known a priori. In practice, however, this assumption is unreasonable. We examine step-wise group screening designs without errors in observations when this assumption is relaxed, considering cancellations of effects within group-factors. The performance of step-wise group screening designs is then compared with the performance of multistage group screening designs.

9. Principal components regression of body measurements in five ...

Body weight and seven biometric traits, namely body length (BL), breast girth (BG), wing length ... Pearson correlations between body weight and biometric traits were positive and highly ...

10. The relative importance of imaging markers for the prediction of Alzheimer's disease dementia in mild cognitive impairment — Beyond classical regression

Stefan J. Teipel

2015-01-01

Penalized regression yielded more parsimonious models than unpenalized stepwise regression for the integration of multiregional and multimodal imaging information. The advantage of penalized regression was particularly strong with a high number of collinear predictors.
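Ridge regression is the simplest penalized method of the kind the abstract contrasts with stepwise selection; the minimal sketch below (not the study's model) shows how the penalty stabilizes coefficients of collinear predictors:

```python
import numpy as np

def ridge(X, y, lam=1.0):
    """Ridge (L2-penalized) regression: beta = (X'X + lam*I)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# two nearly identical predictors: OLS coefficients would be unstable,
# but the penalty splits the weight roughly evenly between them
rng = np.random.default_rng(4)
x = rng.normal(size=100)
X = np.column_stack([x, x + 1e-6 * rng.normal(size=100)])
beta = ridge(X, x, lam=1.0)  # each coefficient ends up near 0.5
```

A stepwise procedure on the same data would arbitrarily keep one of the two columns and drop the other, which is the behavior the abstract argues against for collinear imaging predictors.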

11. Stepwise introduction of laparoscopic liver surgery: validation of guideline recommendations.

van der Poel, Marcel J; Huisman, Floor; Busch, Olivier R; Abu Hilal, Mohammad; van Gulik, Thomas M; Tanis, Pieter J; Besselink, Marc G

2017-10-01

Uncontrolled introduction of laparoscopic liver surgery (LLS) could compromise postoperative outcomes. A stepwise introduction of LLS combined with structured training is advised. This study aimed to evaluate the impact of such a stepwise introduction. A retrospective, single-center case series assessed short-term outcomes of all consecutive LLS in the period November 2006-January 2017. The technique was implemented in a stepwise fashion. To evaluate the impact of this stepwise approach combined with structured training, outcomes of LLS before and after a laparoscopic HPB fellowship were compared. A total of 135 laparoscopic resections were performed. Overall conversion rate was 4% (n = 5), clinically relevant complication rate 13% (n = 18) and mortality 0.7% (n = 1). A significant increase in patients with major LLS, multiple liver resections, previous abdominal surgery, malignancies and lesions located in posterior segments was observed after the fellowship, as well as a decrease in the use of hand-assistance. Increasing complexity in the post-fellowship period was reflected by an increase in operating times, but without compromising other surgical outcomes. A stepwise introduction of LLS combined with structured training reduced the clinical impact of the learning curve, thereby confirming guideline recommendations. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.

12. Melanin fluorescence spectra by step-wise three photon excitation

Lai, Zhenhua; Kerimo, Josef; DiMarzio, Charles A.

2012-03-01

Melanin is the characteristic chromophore of human skin with various potential biological functions. Kerimo discovered enhanced melanin fluorescence by stepwise three-photon excitation in 2011. In this article, the step-wise three-photon excited fluorescence (STPEF) spectrum of melanin between 450 and 700 nm is reported. The melanin STPEF spectrum exhibited an exponential increase with wavelength. However, there was a probability of about 33% that another kind of step-wise multi-photon excited fluorescence (SMPEF), peaking at 525 nm as shown by previous research, could also be generated using the same process. Using an excitation source at 920 nm as opposed to 830 nm increased the potential for generating SMPEF peaking at 525 nm. The SMPEF spectrum peaking at 525 nm photo-bleached faster than the STPEF spectrum.

13. Cushing's syndrome: Stepwise approach to diagnosis

Anurag R Lila

2011-01-01

Full Text Available The projected prevalence of Cushing's syndrome (CS), inclusive of subclinical cases, in the adult population ranges from 0.2-2%, and it may no longer be considered an orphan disease (2-3 cases/million/year). The recognition of CS by physicians is important for early diagnosis and treatment. Late-night salivary cortisol, the dexamethasone suppression test, or 24-h urine free cortisol are good screening tests. Positively screened cases need stepwise evaluation by an endocrinologist. This paper discusses the importance of screening for CS and suggests a stepwise diagnostic approach to a case of suspected hypercortisolism.

14. Redesigning Principal Internships: Practicing Principals' Perspectives

Anast-May, Linda; Buckner, Barbara; Geer, Gregory

2011-01-01

Internship programs too often do not provide the types of experiences that effectively bridge the gap between theory and practice and prepare school leaders who are capable of leading and transforming schools. To help address this problem, the current study is directed at providing insight into practicing principals' views of the types of…

15. COPD: A stepwise or a hit hard approach?

A.J. Ferreira

2016-07-01

Full Text Available Current guidelines differ slightly on the recommendations for treatment of Chronic Obstructive Pulmonary Disease (COPD) patients, and although there are some undisputed recommendations, there is still debate regarding the management of COPD. One of the hindrances to deciding which therapeutic approach to choose is late diagnosis or misdiagnosis of COPD. After a proper diagnosis is achieved and severity assessed, the choice between a stepwise or 'hit hard' approach has to be made. For GOLD A patients the stepwise approach is recommended, whilst for B, C and D patients this remains debatable. Moreover, in patients for whom inhaled corticosteroids (ICS) are recommended, a step-up or 'hit hard' approach with triple therapy will depend on the patient's characteristics, and, for patients who are being over-treated with ICS, ICS withdrawal should be performed in order to optimize therapy and reduce excessive medications. This paper discusses and proposes stepwise, 'hit hard', step-up and ICS withdrawal therapeutic approaches for COPD patients based on their GOLD group. We conclude that all approaches have benefits, and only a careful patient selection will determine which approach is better, and which patients will benefit the most from each approach. Keywords: COPD, Stepwise, Hit hard, Step-up, ICS withdrawal, Bronchodilators, ICS

16. Stepwise radical cation Diels-Alder reaction via multiple pathways.

Shimizu, Ryo; Okada, Yohei; Chiba, Kazuhiro

2018-01-01

Herein we disclose the radical cation Diels-Alder reaction of aryl vinyl ethers by electrocatalysis, which is triggered by an oxidative SET process. The reaction clearly proceeds in a stepwise fashion, which is a rare mechanism in this class. We also found that two distinctive pathways, including "direct" and "indirect", are possible to construct the Diels-Alder adduct.

17. Stepwise innovation adoption : a neglected concept in innovation research

Huizingh, K.R.E.; Brand, M.J.

2009-01-01

Most innovation researchers tend to consider innovation adoption as a binary process, implying that companies have either adopted an innovation or not. In this paper we focus on e-commerce as an innovation that can be adopted stepwise. We distinguish between two levels of e-commerce, basic and

18. What Motivates Principals?

Iannone, Ron

1973-01-01

Achievement and recognition were mentioned as factors appearing with greater frequency among principals' job satisfactions; school district policy and interpersonal relations were mentioned as job dissatisfactions. (Editor)

19. Principal Ports and Facilities

California Natural Resource Agency — The Principal Port file contains USACE port codes, geographic locations (longitude, latitude), names, and commodity tonnage summaries (total tons, domestic, foreign,...

1. Stepwise Decision Making for Long-Term Radioactive Waste Management

Pescatore, Claudio; Vari, Anna

2003-01-01

Consideration is increasingly being given, in radioactive waste management (RWM), to concepts such as 'stepwise decision making' and 'adaptive staging', in which the public, and especially the local communities, are also meaningfully involved in the review and planning of developments. The key feature of these concepts is development by steps or stages that are reversible, within the limits of practicability and provided they meet the requirements of an acceptable safety case. Stepwise decisions are designed to provide reassurance that actions can be reversed if experience shows them to have adverse or unwanted effects. Stepwise decision making has thus come to the fore as being especially important for making progress in radioactive waste management in a manner which is acceptable to large sectors of society. Despite its early identification within the RWM community as an important means for reaching decisions in which there is broad-based confidence, stepwise decision making has not been widely debated. Accepted guiding principles of any such process have not yet been formulated, its roots in empirical social science research have not been fully reviewed, nor have the difficulties of its implementation been analysed. This paper reviews current developments regarding stepwise decision making in RWM with the aim of pinpointing where it stands, highlighting its societal dimension, analysing its roots in the social sciences, and identifying guiding principles and issues in implementation. It is observed that there is convergence between the approach taken by the practitioners of RWM and the indications received from field studies in social research, and that general guiding principles can be proposed, at least as a basis for further discussion. A strong basis for dialogue across disciplines thus exists. General methodological issues are also identified. This paper was developed in the framework of the activities of the NEA Forum on Stakeholder Confidence, which is presented in a

2. An Efficient Stepwise Statistical Test to Identify Multiple Linked Human Genetic Variants Associated with Specific Phenotypic Traits.

Iksoo Huh

Full Text Available Recent advances in genotyping methodologies have allowed genome-wide association studies (GWAS) to accurately identify genetic variants that associate with common or pathological complex traits. Although most GWAS have focused on associations with single genetic variants, joint identification of multiple genetic variants, and how they interact, is essential for understanding the genetic architecture of complex phenotypic traits. Here, we propose an efficient stepwise method based on the Cochran-Mantel-Haenszel (CMH) test (for stratified categorical data) to identify causal joint multiple genetic variants in GWAS. This method combines the CMH statistic with a stepwise procedure to detect multiple genetic variants associated with specific categorical traits, using a series of associated I × J contingency tables and a null hypothesis of no phenotype association. Through a new stratification scheme based on the sum of minor allele count criteria, we make the method more feasible for GWAS data having sample sizes of several thousand. We also examine the properties of the proposed stepwise method via simulation studies, and show that the stepwise CMH test performs better than other existing methods (e.g., logistic regression and detection of associations by Markov blanket) for identifying multiple genetic variants. Finally, we apply the proposed approach to two genomic sequencing datasets to detect linked genetic variants associated with bipolar disorder and obesity, respectively.
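The CMH statistic at the core of such a method can be written down directly for a series of 2 × 2 tables, one per stratum; this minimal sketch (assuming no continuity correction) omits the paper's stepwise procedure and stratification scheme:

```python
import numpy as np

def cmh_statistic(tables):
    """Cochran-Mantel-Haenszel chi-square statistic (no continuity
    correction) for a list of 2x2 tables, one table per stratum."""
    num, var = 0.0, 0.0
    for t in tables:
        t = np.asarray(t, dtype=float)
        n = t.sum()
        row1, row2 = t[0].sum(), t[1].sum()
        col1, col2 = t[:, 0].sum(), t[:, 1].sum()
        num += t[0, 0] - row1 * col1 / n                       # observed - expected
        var += row1 * row2 * col1 * col2 / (n ** 2 * (n - 1))  # hypergeometric variance
    return num ** 2 / var

stat = cmh_statistic([[[10, 5], [5, 10]]])  # single stratum, balanced margins
```

Pooling the observed-minus-expected counts across strata before squaring is what lets the test combine evidence over strata without assuming homogeneity of the cell counts themselves.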

3. Differentiating regressed melanoma from regressed lichenoid keratosis.

Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

2017-04-01

Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

4. Constrained principal component analysis and related techniques

Takane, Yoshio

2013-01-01

In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

5. Principals' Perceptions of Politics

Tooms, Autumn K.; Kretovics, Mark A.; Smialek, Charles A.

2007-01-01

This study is an effort to examine principals' perceptions of workplace politics and its influence on their productivity and efficacy. A survey was used to explore the perceptions of current school administrators with regard to workplace politics. The instrument was disseminated to principals serving public schools in one Midwestern state in the…

6. Renewing the Principal Pipeline

Turnbull, Brenda J.

2015-01-01

The work principals do has always mattered, but as the demands of the job increase, it matters even more. Perhaps once they could maintain safety and order and call it a day, but no longer. Successful principals today must also lead instruction and nurture a productive learning community for students, teachers, and staff. They set the tone for the…

7. Predicting Dropouts of University Freshmen: A Logit Regression Analysis.

Lam, Y. L. Jack

1984-01-01

Stepwise discriminant analysis coupled with logit regression analysis of freshmen data from Brandon University (Manitoba) indicated that six tested variables drawn from research on university dropouts were useful in predicting attrition: student status, residence, financial sources, distance from home town, goal fulfillment, and satisfaction with…

8. Regression: A Bibliography.

Pedrini, D. T.; Pedrini, Bonnie C.

Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

9. A New Model for Birth Weight Prediction Using 2- and 3-Dimensional Ultrasonography by Principal Component Analysis: A Chinese Population Study.

Liao, Shuxin; Wang, Yunfang; Xiao, Shufang; Deng, Xujie; Fang, Bimei; Yang, Fang

2018-03-30

To establish a new model for birth weight prediction using 2- and 3-dimensional ultrasonography (US) by principal component analysis (PCA). Two- and 3-dimensional US was prospectively performed in women with normal singleton pregnancies within 7 days before delivery (37-41 weeks' gestation). The participants were divided into a development group (n = 600) and a validation group (n = 597). Principal component analysis and stepwise linear regression analysis were used to develop a new prediction model. The new model's accuracy in predicting fetal birth weight was confirmed by the validation group through comparisons with previously published formulas. A total of 1197 cases were recruited in this study. All interclass and intraclass correlation coefficients of US measurements were greater than 0.75. Two principal components (PCs) were considered primary in determining estimated fetal birth weight, which were derived from 9 US measurements. Stepwise linear regression analysis showed a positive association between birth weight and PC1 and PC2. In the development group, our model had a small mean percentage error (mean ± SD, 3.661% ± 3.161%). At least a 47.558% decrease in the mean percentage error and a 57.421% decrease in the standard deviation of the new model compared with previously published formulas were noted. The results were similar to those in the validation group, and the new model covered 100% of birth weights within 10% of actual birth weights. The birth weight prediction model based on 2- and 3-dimensional US by PCA could help improve the precision of estimated fetal birth weight. © 2018 by the American Institute of Ultrasound in Medicine.
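The PCA-then-regression pipeline described here is essentially principal components regression, which can be sketched as follows; the latent-factor toy data, component count, and standardization choices are illustrative assumptions, not the authors' model:

```python
import numpy as np

def pcr_fit(X, y, n_components=2):
    """Principal components regression: standardize predictors, project
    onto the leading PCs (via SVD), then fit OLS on the component scores."""
    Xs = (X - X.mean(0)) / X.std(0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    W = Vt[:n_components].T                        # loadings, one column per PC
    A = np.column_stack([np.ones(len(y)), Xs @ W])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return W, beta

def pcr_predict(Xnew, Xref, W, beta):
    Z = ((Xnew - Xref.mean(0)) / Xref.std(0)) @ W  # standardize with training stats
    return beta[0] + Z @ beta[1:]

# toy data: nine collinear "measurements" driven by two latent factors
rng = np.random.default_rng(2)
base = rng.normal(size=(150, 2))
X = base @ rng.normal(size=(2, 9)) + 0.01 * rng.normal(size=(150, 9))
y = X[:, 0] - 2.0 * X[:, 3]
W, beta = pcr_fit(X, y, n_components=2)
pred = pcr_predict(X, X, W, beta)
```

Because the nine columns are driven by two latent factors, two component scores recover the response almost exactly, which mirrors how two PCs could summarize nine correlated ultrasound measurements.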

10. Stepwise Diagnosis for Rotating Machinery Using Force Identification Approach

Shozo Kawamura

2012-01-01

Full Text Available Machine condition monitoring and diagnosis have become increasingly important, and the application of these processes has been widely investigated. The authors previously proposed a stepwise diagnosis method for a beam structure. In that method, the location of the abnormality is first estimated using the force identification approach, and then the cause of the abnormality is identified. In this study, the stepwise diagnosis method was improved specifically for rotating machinery. The applicability of the proposed method was checked by using the experimental data. In the case of a rotor system with unbalance, it was shown that the location of the abnormality and its severity could be identified, and, in the case of a rotor system with stationary rubbing, the location of the abnormality could be accurately identified. Therefore, it was confirmed that the proposed diagnostic method is feasible for actual application.

11. Stepwise commissioning of a steam boiler with stability guarantees

Johansen, Simon Vestergaard; Kallesøe, Carsten Skovmose; Bendtsen, Jan Dimon

2016-01-01

This paper aims to make the commissioning of an industrial MIMO controller more straightforward by gradually commissioning it from a set of SISO controllers after the system has been started. For this purpose a stepwise commissioning strategy based on the Youla-Kucera parametrization has been developed. A MIMO controller has been commissioned from a SISO controller using the developed method on a real steam boiler, and measurements show a clear performance improvement after the transition.

12. Multiscale principal component analysis

Akinduko, A A; Gorban, A N

2014-01-01

Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on point (l,u) on the plane and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represent the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale especially for data with outliers. This method was tested on both artificial distribution of data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis
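The pairwise-distance definition of PCA quoted above can be checked numerically: for centered data, the sum of squared pairwise distances between 1-D projections is proportional to the projection variance, so the leading covariance eigenvector maximizes it. A small sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3)) @ np.diag([3.0, 1.0, 0.3])  # anisotropic cloud
Xc = X - X.mean(0)                                        # center the data

# first principal component: leading eigenvector of the covariance matrix
w = np.linalg.eigh(np.cov(Xc.T))[1][:, -1]

def sum_sq_pairwise(v):
    """Sum of squared pairwise distances between 1-D projections onto v."""
    z = Xc @ (v / np.linalg.norm(v))
    return float(np.sum((z[:, None] - z[None, :]) ** 2))

# the eigenvector is at least as good as any random direction
best_random = max(sum_sq_pairwise(rng.normal(size=3)) for _ in range(200))
print(sum_sq_pairwise(w) >= best_random)  # → True
```

Multiscale PCA, as described in the abstract, restricts the sum to pairs whose distances fall inside a chosen interval [l, u], which this unrestricted sketch does not implement.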

13. Stepwise classification of cancer samples using clinical and molecular data

2011-10-01

Full Text Available Abstract Background Combining clinical and molecular data types may potentially improve the prediction accuracy of a classifier. However, currently there is a shortage of effective and efficient statistical and bioinformatic tools for true integrative data analysis. Existing integrative classifiers have two main disadvantages: First, coarse combination may lead to subtle contributions of one data type being overshadowed by more obvious contributions of the other. Second, the need to measure both data types for all patients may be both impractical and cost-inefficient. Results We introduce a novel classification method, a stepwise classifier, which takes advantage of the distinct classification power of clinical data and high-dimensional molecular data. We apply classification algorithms to the two data types independently, starting with the traditional clinical risk factors. We only turn to relatively expensive molecular data when the uncertainty of the prediction result from clinical data exceeds a predefined limit. Experimental results show that our approach is adaptive: the proportion of samples that needs to be re-classified using molecular data depends on how much we expect the predictive accuracy to increase when re-classifying those samples. Conclusions Our method renders a more cost-efficient classifier that is at least as good, and sometimes better, than one based on clinical or molecular data alone. Hence our approach is not just a classifier that minimizes a particular loss function. Instead, it aims to be cost-efficient by avoiding molecular tests for a potentially large subgroup of individuals; moreover, for these individuals a test result would be quickly available, which may lead to reduced waiting times (for diagnosis) and hence lower the patients' distress. Stepwise classification is implemented in the R package stepwiseCM and available at the Bioconductor website.
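The two-stage idea (classify with cheap clinical data first, escalate to molecular data only when the clinical prediction is uncertain) can be sketched independently of the stepwiseCM package; the uncertainty threshold and stand-in models below are assumptions for illustration:

```python
import numpy as np

def stepwise_classify(p_clinical, classify_molecular, uncertainty=0.2):
    """Stage 1: accept the clinical prediction when its probability is
    decisive. Stage 2: re-classify only the uncertain samples with the
    (more expensive) molecular model. Returns labels and the indices of
    the samples that needed molecular data."""
    labels = (p_clinical >= 0.5).astype(int)
    unsure = np.where(np.abs(p_clinical - 0.5) < uncertainty)[0]
    if unsure.size:
        labels[unsure] = classify_molecular(unsure)
    return labels, unsure

# toy run: the clinical model is confident for 3 of 5 samples
p = np.array([0.95, 0.10, 0.55, 0.40, 0.85])
labels, unsure = stepwise_classify(p, lambda idx: np.ones(idx.size, dtype=int))
# molecular model consulted only for samples 2 and 3
```

Widening the uncertainty band sends more samples to the molecular stage, which is the cost-versus-accuracy trade-off the abstract describes.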

14. Stepwise Procedure for Development and Validation of a Multipesticide Method

Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

2009-07-15

The stepwise procedure for the development and validation of so-called multi-pesticide methods is described. Principles, preliminary actions, criteria for the selection of chromatographic separation, detection, and performance verification of multi-pesticide methods are outlined. Long-term repeatability and reproducibility, as well as the necessity of documenting laboratory work, are also highlighted. Appendix I hereof describes in detail the calculation of calibration parameters, whereas Appendix II focuses on calculating the significance of differences between concentrations obtained on two different separation columns. (author)

15. Effects of stepwise gas combustion on NOx generation

Woperane Seredi, A.; Szepesi, E.

1999-01-01

To decrease NOx emission from gas boilers, the combustion process of gas has been modified from continuous combustion to step-wise combustion. In this process the combustion temperature, the temperature peaks in the flame, the residence time of combustion products in the high-temperature zone and the oxygen partial pressure are changed advantageously. Experiments were performed using multistage burners, and the NOx emission was recorded. It was found that the air factor of the primary combustion space has a determining effect on the NOx reduction. (R.P.)

16. Principal noncommutative torus bundles

Echterhoff, Siegfried; Nest, Ryszard; Oyono-Oyono, Herve

2008-01-01

of bivariant K-theory (denoted RKK-theory) due to Kasparov. Using earlier results of Echterhoff and Williams, we shall give a complete classification of principal non-commutative torus bundles up to equivariant Morita equivalence. We then study these bundles as topological fibrations (forgetting the group...

17. The Principal as CEO

Hollar, Charlie

2004-01-01

They may never grace the pages of The Wall Street Journal or Fortune magazine, but they might possibly be the most important CEOs in our country. They are elementary school principals. Each of them typically serves the learning needs of 350-400 clients (students) while overseeing a multimillion-dollar facility staffed by 20-25 teachers and 10-15…

18. Euler principal component analysis

Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,

19. Reduced Rank Regression

Johansen, Søren

2008-01-01

The reduced rank regression model is a multivariate regression model with a coefficient matrix with reduced rank. The reduced rank regression algorithm is an estimation procedure, which estimates the reduced rank regression model. It is related to canonical correlations and involves calculating...
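
The estimation idea can be illustrated numerically (a generic reduced-rank-regression toy using an SVD projection of the OLS fit, not Johansen's canonical-correlation algorithm; the data and rank are invented):

```python
# Minimal reduced rank regression sketch: fit unrestricted OLS, then
# constrain the coefficient matrix to rank r by projecting onto the
# leading right singular vectors of the fitted values.
import numpy as np

rng = np.random.default_rng(0)
n, p, q, r = 200, 5, 4, 2
X = rng.normal(size=(n, p))
# True coefficients have rank 2 by construction (product of factors).
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]         # unrestricted fit
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
V_r = Vt[:r].T                                       # top-r response directions
B_rrr = B_ols @ V_r @ V_r.T                          # rank-r coefficient matrix

print(np.linalg.matrix_rank(B_rrr))  # 2
```

The projection step is what distinguishes the reduced rank estimator from ordinary multivariate least squares: the coefficient matrix is forced onto an r-dimensional subspace of the response space.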

20. Enhanced eumelanin emission by stepwise three-photon excitation

Kerimo, Josef; Rajadhyaksha, Milind; DiMarzio, Charles A.

2011-03-01

Eumelanin fluorescence from Sepia officinalis and black human hair was activated with near-infrared radiation and multiphoton excitation. A third order multiphoton absorption by a step-wise process appears to be the underlying mechanism. The activation was caused by a photochemical process since it could not be reproduced by simple heating. Both fluorescence and brightfield imaging indicate the near-infrared irradiation caused photodamage to the eumelanin and the activated emission originated from the photodamaged region. At least two different components with about a thousand-fold enhanced fluorescence were activated and could be distinguished by their excitation properties. One component was excited with wavelengths in the visible region and exhibited linear absorption dependence. The second component could be excited with near-infrared wavelengths and had a third order dependence on the laser power. The third order dependence is explained by a step-wise excited state absorption (ESA) process since it could be observed equally with the CW and femtosecond lasers. The new method for photoactivating the eumelanin fluorescence was used to map the melanin content in human hair.

1. Stepwise Nanopore Evolution in One-Dimensional Nanostructures

Choi, Jang Wook

2010-04-14

We report that established simple lithium (Li) ion battery cycles can be used to produce nanopores inside various useful one-dimensional (1D) nanostructures such as zinc oxide, silicon, and silver nanowires. Moreover, porosities of these 1D nanomaterials can be controlled in a stepwise manner by the number of Li-battery cycles. Subsequent pore characterization at the end of each cycle allows us to obtain detailed snapshots of the distinct pore evolution properties in each material due to their different atomic diffusion rates and types of chemical bonds. Also, this stepwise characterization led us to the first observation of pore size increases during cycling, which can be interpreted as a similar phenomenon to Ostwald ripening in analogous nanoparticle cases. Finally, we take advantage of the unique combination of nanoporosity and 1D materials and demonstrate nanoporous silicon nanowires (poSiNWs) as excellent supercapacitor (SC) electrodes in high power operations compared to existing devices with activated carbon. © 2010 American Chemical Society.

2. The Swedish approach to spent fuel disposal - stepwise implementation

Gustaffson, B.

1997-01-01

This presentation describes the stepwise implementation of direct disposal of spent fuel in Sweden. The present status of the technical development of the Swedish concept will be discussed, as well as the local siting work carried out in co-operation with the affected and concerned municipalities. In this respect it should be noted that the siting work has in some cases provoked heavy opposition and negative opinions. A brief review will also be given of the Aspo Hard Rock Laboratory; the objectives of this laboratory as well as the ongoing demo-project will be discussed. In order to give the symposium organizers a broader view of the Swedish programme, a number of recent papers have been compiled. These papers will be summarized in the presentation. (author). 4 tabs., 22 figs

3. Better Care Teams: A Stepwise Skill Reinforcement Model.

Christopher, Beth-Anne; Grantner, Mary; Coke, Lola A; Wideman, Marilyn; Kwakwa, Francis

2016-06-01

The Building Healthy Urban Communities initiative presents a path for organizations partnering to improve patient outcomes with continuing education (CE) as a key component. Components of the CE initiative included traditional CE delivery formats with an essential element of adaptability and new methods, with rigorous evaluation over time that included evaluation prior to the course, immediately following the CE session, 6 to 8 weeks after the CE session, and then subsequent monthly "testlets." Outcome measures were designed to allow for ongoing adaptation of content, reinforcement of key learning objectives, and use of innovative concordant testing and retrieval practice techniques. The results after 1 year of programming suggest the stepwise skill reinforcement model is effective for learning and is an efficient use of financial and human resources. More important, its design is one that could be adopted at low cost by organizations willing to work in close partnership. J Contin Educ Nurs. 2016;47(6):283-288. Copyright 2016, SLACK Incorporated.

4. Group-wise partial least square regression

Camacho, José; Saccenti, Edoardo

2018-01-01

This paper introduces the group-wise partial least squares (GPLS) regression. GPLS is a new sparse PLS technique where the sparsity structure is defined in terms of groups of correlated variables, similarly to what is done in the related group-wise principal component analysis. These groups are

5. Regression analysis by example

Chatterjee, Samprit

2012-01-01

Praise for the Fourth Edition: "This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable." - Journal of the American Statistical Association. Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded

6. PRINCIPAL COMPONENT ANALYSIS OF FACTORS DETERMINING PHOSPHATE ROCK DISSOLUTION ON ACID SOILS

Yusdar Hilman

2016-10-01

Full Text Available Many of the agricultural soils in Indonesia are acidic and low in both total and available phosphorus, which severely limits their potential for crop production. These problems can be corrected by application of chemical fertilizers. However, these fertilizers are expensive, and cheaper alternatives such as phosphate rock (PR) have been considered. Several soil factors may influence the dissolution of PR in soils, including both chemical and physical properties. The study aimed to identify PR dissolution factors and evaluate their relative magnitude. The experiment was conducted in the Soil Chemical Laboratory, Universiti Putra Malaysia, and the Indonesian Center for Agricultural Land Resources Research and Development from January to April 2002. Principal component analysis (PCA) was used to characterize acid soils in an incubation system into a number of factors that may affect PR dissolution. Three major factors selected were soil texture, soil acidity, and fertilization. Using the scores of individual factors as independent variables, stepwise regression analysis was performed to derive a PR dissolution function. The factors influencing PR dissolution, in order of importance, were soil texture, soil acidity, then fertilization. Soil texture factors including clay content and organic C, and soil acidity factors such as P retention capacity, interacted positively with P dissolution and promoted PR dissolution effectively. Soil texture factors such as sand and silt content, and soil acidity factors such as pH and exchangeable Ca, decreased PR dissolution.
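
The PCA-then-stepwise workflow used in this study can be sketched on synthetic data (our toy illustration, not the study's soil variables; the collinear predictors, the gain-based stopping rule, and its threshold are all invented):

```python
# PCA scores replace correlated raw predictors, then forward stepwise
# selection is run on the (mutually uncorrelated) scores.
import numpy as np

rng = np.random.default_rng(1)
n = 100
z = rng.normal(size=(n, 2))                 # two latent drivers
X = np.column_stack([
    z[:, 0],
    z[:, 0] + 0.01 * rng.normal(size=n),    # nearly collinear with column 0
    z[:, 1],
    rng.normal(size=n),                     # irrelevant noise predictor
])
y = 2.0 * z[:, 0] - 1.0 * z[:, 1] + 0.1 * rng.normal(size=n)

# PCA scores are uncorrelated, so collinearity among the raw predictors
# no longer invalidates the regression.
Xc = (X - X.mean(0)) / X.std(0)
scores = Xc @ np.linalg.svd(Xc, full_matrices=False)[2].T

# Forward stepwise selection on the scores (crude fixed threshold).
selected, residual = [], y - y.mean()
for _ in range(scores.shape[1]):
    gains = [
        -np.inf if j in selected
        else (scores[:, j] @ residual) ** 2 / (scores[:, j] @ scores[:, j])
        for j in range(scores.shape[1])
    ]
    j_best = int(np.argmax(gains))
    if gains[j_best] < 1.0:                 # stop when explained SS is negligible
        break
    b = scores[:, j_best] @ residual / (scores[:, j_best] @ scores[:, j_best])
    residual = residual - b * scores[:, j_best]
    selected.append(j_best)

print(len(selected))
```

Only the components carrying the latent signal survive the selection; the noise component and the redundant collinear direction are dropped, which is the point of combining PCA with stepwise regression.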

7. Quantile Regression Methods

Fitzenberger, Bernd; Wilke, Ralf Andreas

2015-01-01

Quantile regression is emerging as a popular statistical approach, which complements the estimation of conditional mean models. While the latter only focuses on one aspect of the conditional distribution of the dependent variable, the mean, quantile regression provides more detailed insights by modeling conditional quantiles. Quantile regression can therefore detect whether the partial effect of a regressor on the conditional quantiles is the same for all quantiles or differs across quantiles. Quantile regression can provide evidence for a statistical relationship between two variables even if the mean regression model does not. We provide a short informal introduction into the principle of quantile regression, which includes an illustrative application from empirical labor market research. This is followed by briefly sketching the underlying statistical model for linear quantile regression based…
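
The defining idea, that the tau-th quantile minimizes an asymmetric ("pinball") loss, can be checked numerically (a generic illustration, not an example from the chapter; for a constant model the minimizer reduces to the ordinary sample quantile):

```python
# The pinball loss penalizes under- and over-prediction asymmetrically;
# its minimizer over a constant is the tau-th sample quantile.
import numpy as np

def pinball(u, tau):
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(2)
y = rng.exponential(size=2001)          # skewed data: mean != median
tau = 0.5

grid = np.linspace(y.min(), y.max(), 4001)
losses = [pinball(y - c, tau).sum() for c in grid]
c_star = grid[int(np.argmin(losses))]

# The pinball minimizer coincides with the sample median (up to grid spacing).
print(abs(c_star - np.quantile(y, tau)) < 0.01)  # True
```

Replacing the constant with a linear function of covariates, and minimizing the same loss, gives linear quantile regression; choosing tau = 0.1 or tau = 0.9 targets the tails instead of the center.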

Hindman, Jennifer; Rozzelle, Jan; Ball, Rachel; Fahey, John

2015-01-01

The School-University Research Network (SURN) Principal Academy at the College of William & Mary in Williamsburg, Virginia, has a mission to build a leadership development program that increases principals' instructional knowledge and develops mentor principals to sustain the program. The academy is designed to connect and empower principals…

9. Stepwise transformation behavior of the strain-induced martensitic transformation in a metastable stainless steel

Hedstroem, Peter; Lienert, Ulrich; Almer, Jon; Oden, Magnus

2007-01-01

In situ high-energy X-ray diffraction during tensile loading has been used to investigate the evolution of lattice strains and the accompanying strain-induced martensitic transformation in cold-rolled sheets of a metastable stainless steel. At high applied strains the transformation to α-martensite occurs in stepwise bursts. These stepwise transformation events are correlated with stepwise increased lattice strains and peak broadening in the austenite phase. The stepwise transformation arises from growth of α-martensite embryos by autocatalytic transformation

10. Stillbirth evaluation: a stepwise assessment of placental pathology and autopsy.

Miller, Emily S; Minturn, Lucy; Linn, Rebecca; Weese-Mayer, Debra E; Ernst, Linda M

2016-01-01

The American Congress of Obstetricians and Gynecologists places special emphasis on autopsy as one of the most important tests for evaluation of stillbirth. Despite a recommendation of an autopsy, many families will decline the autopsy based on religious/cultural beliefs, fear of additional suffering for the child, or belief that no additional information will be obtained or of value. Further, many obstetric providers express a myriad of barriers limiting their recommendation for a perinatal autopsy despite their understanding of its value. Consequently, perinatal autopsy rates have been declining. Without the information provided by an autopsy, many women are left with unanswered questions regarding cause of death for their fetus and without clear management strategies to reduce the risk of stillbirth in future pregnancies. To avoid this scenario, it is imperative that clinicians are knowledgeable about the benefit of autopsy so they can provide clear information on its diagnostic utility and decrease potential barriers; in so doing the obstetrician can ensure that each family has the necessary information to make an informed decision. We sought to quantify the contribution of placental pathologic examination and autopsy in identifying a cause of stillbirth and to identify how often clinical management is modified due to each result. This is a cohort study of all cases of stillbirth from 2009 through 2013 at a single tertiary care center. Records were reviewed in a stepwise manner: first the clinical history and laboratory results, then the placental pathologic evaluation, and finally the autopsy. At each step, a cause of death and the certainty of that etiology were coded. Clinical changes that would be recommended by information available at each step were also recorded. Among the 144 cases of stillbirth examined, 104 (72%) underwent autopsy and these cases constitute the cohort of study. The clinical and laboratory information alone identified a cause of death

11. Understanding logistic regression analysis

Sperandei, Sandro

2014-01-01

Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is that confounding effects are avoided by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using ex...
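
A minimal from-scratch version of the procedure on simulated data (a generic sketch, not the article's example; the data, the true coefficient, and the learning rate are invented):

```python
# Logistic regression by gradient ascent on the log-likelihood; the fitted
# slope is reported as an odds ratio, exp(beta).
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
true_beta = 0.7                                   # true log odds ratio
p = 1.0 / (1.0 + np.exp(-(0.2 + true_beta * x)))
y = rng.binomial(1, p)                            # binomial response

X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(2000):                             # plain gradient ascent
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.05 * X.T @ (y - mu) / n

odds_ratio = np.exp(beta[1])                      # ~ exp(0.7) ~ 2
print(odds_ratio)
```

A unit increase in x multiplies the odds of the event by roughly 2 here; with several covariates the same update recovers an adjusted odds ratio per variable, which is the confounding-control property mentioned in the abstract.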

12. Introduction to regression graphics

Cook, R Dennis

2009-01-01

Covers the use of dynamic and interactive computer graphics in linear regression analysis, focusing on analytical graphics. Features new techniques like plot rotation. The authors have composed their own regression code, using Xlisp-Stat language called R-code, which is a nearly complete system for linear regression analysis and can be utilized as the main computer program in a linear regression course. The accompanying disks, for both Macintosh and Windows computers, contain the R-code and Xlisp-Stat. An Instructor's Manual presenting detailed solutions to all the problems in the book is ava

13. Alternative Methods of Regression

Birkes, David

2011-01-01

Of related interest. Nonlinear Regression Analysis and its Applications Douglas M. Bates and Donald G. Watts "...an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models...highly recommend[ed]...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." --Technometrics This book provides a balance between theory and practice supported by extensive displays of instructive geometrical constructs. Numerous in-depth case studies illustrate the use of nonlinear regression analysis--with all data s

14. Stepwise development of hematopoietic stem cells from embryonic stem cells.

Kenji Matsumoto

Full Text Available The cellular ontogeny of hematopoietic stem cells (HSCs) remains poorly understood because their isolation from and their identification in early developing small embryos are difficult. We attempted to dissect early developmental stages of HSCs using an in vitro mouse embryonic stem cell (ESC) differentiation system combined with inducible HOXB4 expression. Here we report the identification of pre-HSCs and an embryonic type of HSCs (embryonic HSCs) as intermediate cells between ESCs and HSCs. Both pre-HSCs and embryonic HSCs were isolated by their c-Kit(+)CD41(+)CD45(-) phenotype. Pre-HSCs did not engraft in irradiated adult mice. After co-culture with OP9 stromal cells and conditional expression of HOXB4, pre-HSCs gave rise to embryonic HSCs capable of engraftment and long-term reconstitution in irradiated adult mice. Blast colony assays revealed that most hemangioblast activity was detected apart from the pre-HSC population, implying the early divergence of pre-HSCs from hemangioblasts. Gene expression profiling suggests that a particular set of transcripts closely associated with adult HSCs is involved in the transition of pre-HSCs to embryonic HSCs. We propose an HSC developmental model in which pre-HSCs and embryonic HSCs sequentially give rise to adult types of HSCs in a stepwise manner.

15. Super-resolution fluorescence microscopy by stepwise optical saturation

Zhang, Yide; Nallathamby, Prakash D.; Vigil, Genevieve D.; Khan, Aamir A.; Mason, Devon E.; Boerckel, Joel D.; Roeder, Ryan K.; Howard, Scott S.

2018-01-01

Super-resolution fluorescence microscopy is an important tool in biomedical research for its ability to discern features smaller than the diffraction limit. However, due to its difficult implementation and high cost, super-resolution microscopy is not feasible in many applications. In this paper, we propose and demonstrate a saturation-based super-resolution fluorescence microscopy technique that can be easily implemented and requires neither additional hardware nor complex post-processing. The method is based on the principle of stepwise optical saturation (SOS), where M steps of raw fluorescence images are linearly combined to generate an image with a √M-fold increase in resolution compared with conventional diffraction-limited images. For example, linearly combining (scaling and subtracting) two images obtained at regular powers extends the resolution by a factor of 1.4 beyond the diffraction limit. The resolution improvement in SOS microscopy is theoretically infinite but practically is limited by the signal-to-noise ratio. We perform simulations and experimentally demonstrate super-resolution microscopy with both one-photon (confocal) and multiphoton excitation fluorescence. We show that with the multiphoton modality, the SOS microscopy can provide super-resolution imaging deep in scattering samples. PMID:29675306
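
The two-step case can be imitated with a one-dimensional toy model (our simplification of the SOS principle; the saturation coefficient, the powers, and the Gaussian PSF are invented, and real SOS operates on measured images rather than an analytic response):

```python
# With a weakly saturating response F = h*P - 0.2*(h*P)**2, a scaled
# subtraction of two images cancels the term linear in the PSF h, leaving
# an effective PSF proportional to h**2 -- sqrt(2) narrower for a Gaussian.
import numpy as np

x = np.linspace(-3, 3, 6001)
h = np.exp(-x**2 / 2)                      # diffraction-limited Gaussian PSF

def image(P):
    # Toy saturating fluorescence response (coefficient 0.2 is invented).
    return h * P - 0.2 * (h * P) ** 2

P1, P2 = 1.0, 2.0
I2 = image(P2) - (P2 / P1) * image(P1)     # linear term cancels; I2 ~ h**2

def fwhm(profile):
    half = profile.max() / 2
    return np.ptp(x[profile >= half])

print(round(float(fwhm(h) / fwhm(np.abs(I2))), 2))  # 1.41, i.e. sqrt(2)
```

The 1.41 ratio matches the abstract's factor-of-1.4 claim for M = 2; in practice the improvement is bounded by the signal-to-noise ratio, since the subtraction amplifies noise.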

16. Pathways of DNA unlinking: A story of stepwise simplification.

Stolz, Robert; Yoshida, Masaaki; Brasher, Reuben; Flanner, Michelle; Ishihara, Kai; Sherratt, David J; Shimokawa, Koya; Vazquez, Mariel

2017-09-29

17. Principals' Salaries, 2007-2008

Cooke, Willa D.; Licciardi, Chris

2008-01-01

How do salaries of elementary and middle school principals compare with those of other administrators and classroom teachers? Are increases in salaries of principals keeping pace with increases in salaries of classroom teachers? And how have principals' salaries fared over the years when the cost of living is taken into account? There are reliable…

18. Principals Who Think Like Teachers

Fahey, Kevin

2013-01-01

Being a principal is a complex job, requiring quick, on-the-job learning. But many principals already have deep experience in a role at the very essence of the principalship. They know how to teach. In interviews with principals, Fahey and his colleagues learned that thinking like a teacher was key to their work. Part of thinking the way a teacher…

19. School Principals' Emotional Coping Process

Poirel, Emmanuel; Yvon, Frédéric

2014-01-01

The present study examines the emotional coping of school principals in Quebec. Emotional coping was measured by stimulated recall; six principals were filmed during a working day and presented a week later with their video showing stressful encounters. The results show that school principals experience anger because of reproaches from staff…

20. Legal Problems of the Principal.

Stern, Ralph D.; And Others

The three talks included here treat aspects of the law--tort liability, student records, and the age of majority--as they relate to the principal. Specifically, the talk on torts deals with the consequences of principal negligence in the event of injuries to students. Assurance is given that a reasonable and prudent principal will have a minimum…

1. RE Rooted in Principal's Biography

ter Avest, Ina; Bakker, C.

2017-01-01

Critical incidents in the biographies of principals appear to steer their innovative ways of constructing InterReligious Education in their schools. In this contribution, the authors present the biographical narratives of 4 principals: 1 principal introducing interreligious education in a

2. The Future of Principal Evaluation

Clifford, Matthew; Ross, Steven

2012-01-01

The need to improve the quality of principal evaluation systems is long overdue. Although states and districts generally require principal evaluations, research and experience tell that many state and district evaluations do not reflect current standards and practices for principals, and that evaluation is not systematically administered. When…

3. Does Stepwise Voltage Ramping Protect the Kidney from Injury During Extracorporeal Shockwave Lithotripsy? Results of a Prospective Randomized Trial.

Skuginna, Veronika; Nguyen, Daniel P; Seiler, Roland; Kiss, Bernhard; Thalmann, George N; Roth, Beat

2016-02-01

Renal damage is more frequent with new-generation lithotripters. However, animal studies suggest that voltage ramping minimizes the risk of complications following extracorporeal shock wave lithotripsy (SWL). In the clinical setting, the optimal voltage strategy remains unclear. To evaluate whether stepwise voltage ramping can protect the kidney from damage during SWL. A total of 418 patients with solitary or multiple unilateral kidney stones were randomized to receive SWL using a Modulith SLX-F2 lithotripter with either stepwise voltage ramping (n=213) or a fixed maximal voltage (n=205). SWL. The primary outcome was sonographic evidence of renal hematomas. Secondary outcomes included levels of urinary markers of renal damage, stone disintegration, stone-free rate, and rates of secondary interventions within 3 mo of SWL. Descriptive statistics were used to compare clinical outcomes between the two groups. A logistic regression model was generated to assess predictors of hematomas. Significantly fewer hematomas occurred in the ramping group (12/213, 5.6%) than in the fixed group (27/205, 13%; p=0.008). There was some evidence that the fixed group had higher urinary β2-microglobulin levels after SWL compared to the ramping group (p=0.06). Urinary microalbumin levels, stone disintegration, stone-free rate, and rates of secondary interventions did not significantly differ between the groups. The logistic regression model showed a significantly higher risk of renal hematomas in older patients (odds ratio [OR] 1.03, 95% confidence interval [CI] 1.00-1.05; p=0.04). Stepwise voltage ramping was associated with a lower risk of hematomas (OR 0.39, 95% CI 0.19-0.80; p=0.01). The study was limited by the use of ultrasound to detect hematomas. In this prospective randomized study, stepwise voltage ramping during SWL was associated with a lower risk of renal damage compared to a fixed maximal voltage without compromising treatment effectiveness. Lithotripsy is a noninvasive

4. Principal stratification in causal inference.

Frangakis, Constantine E; Rubin, Donald B

2002-03-01

Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.
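
The cross-classification by joint potential values can be made concrete for a binary posttreatment variable such as compliance (a standard textbook illustration of the framework, not code from the paper):

```python
# Principal strata for a binary posttreatment variable S are defined by the
# pair of potential values (S under control, S under treatment). The pair
# does not depend on which treatment is actually assigned, so stratum
# membership behaves like a pretreatment covariate.
STRATA = {
    (0, 0): "never-taker",
    (0, 1): "complier",
    (1, 0): "defier",
    (1, 1): "always-taker",
}

def stratum(s_if_control, s_if_treated):
    return STRATA[(s_if_control, s_if_treated)]

# A unit that would take the treatment only when assigned to it:
print(stratum(0, 1))   # complier
```

A principal effect is then a treatment-control contrast computed within one of these four strata, e.g. among compliers only, which is what makes it a genuine causal effect.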

5. Le principe roman

Ferrari, Jérôme

2015-01-01

Fascinated by the figure of the German physicist Werner Heisenberg (1901-1976), founder of quantum mechanics, inventor of the famous "uncertainty principle," and winner of the 1932 Nobel Prize in Physics, a disenchanted young would-be philosopher strives, at the dawn of the 21st century, to weigh the incompleteness of his own existence against the work and destiny of this exceptional man of science, who for him embodies the meeting of scientific language and poetry; each, in its own way, by opening the door to the scandal of the unprecedented, opens our eyes to the world and reveals a mysterious beauty that the materialism at work in human history keeps confiscating.

6. Principal oscillation patterns

Storch, H. von; Buerger, G.; Storch, J.S. von

1993-01-01

The Principal Oscillation Pattern (POP) analysis is a technique which is used to simultaneously infer the characteristic patterns and time scales of a vector time series. The POPs may be seen as the normal modes of a linearized system whose system matrix is estimated from data. The concept of POP analysis is reviewed. Examples are used to illustrate the potential of the POP technique. The best defined POPs of tropospheric day-to-day variability coincide with the most unstable modes derived from linearized theory. POPs can be derived even from a space-time subset of data. POPs are successful in identifying two independent modes with similar time scales in the same data set. The POP method can also produce forecasts which may potentially be used as a reference for other forecast models. The conventional POP analysis technique has been generalized in various ways. In the cyclostationary POP analysis, the estimated system matrix is allowed to vary deterministically with an externally forced cycle. In the complex POP analysis not only the state of the system but also its ''momentum'' is modeled. Associated correlation patterns are a useful tool to describe the appearance of a signal previously identified by a POP analysis in other parameters. (orig.)
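
The core estimation step, fitting the system matrix of a linearized model from lagged covariances and taking its eigenvectors as the POPs, can be sketched as follows (a minimal toy version; the simulated two-dimensional process and its parameters are invented):

```python
# Schematic POP analysis: estimate the system matrix A of x_{t+1} = A x_t + noise
# from lag-0 and lag-1 covariances; the eigenvectors of A are the POPs, and
# its (complex) eigenvalues give the damping rate and oscillation period.
import numpy as np

rng = np.random.default_rng(4)
theta = 0.3
A_true = 0.9 * np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])  # damped rotation
T = 20000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.normal(size=2)

X0, X1 = x[:-1], x[1:]
C0 = X0.T @ X0 / (T - 1)                 # lag-0 covariance
C1 = X1.T @ X0 / (T - 1)                 # lag-1 covariance
A_hat = C1 @ np.linalg.inv(C0)           # estimated system matrix

eigvals, pops = np.linalg.eig(A_hat)     # columns of `pops` are the POPs
damping = np.abs(eigvals[0])                      # ~0.9 per time step
period = 2 * np.pi / abs(np.angle(eigvals[0]))    # ~2*pi/0.3 time steps

print(round(float(damping), 2), round(float(period), 1))
```

The complex-conjugate eigenvalue pair recovers both characteristic time scales named in the abstract: the modulus gives the e-folding damping and the phase gives the rotation period of the oscillating pattern.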

7. Stepwise Analysis of Differential Item Functioning Based on Multiple-Group Partial Credit Model.

Muraki, Eiji

1999-01-01

Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…

8. Gonadoblastoma: evidence for a stepwise progression to dysgerminoma in a dysgenetic ovary.

Pauls, Katharina; Franke, Folker E; Büttner, Reinhard; Zhou, Hui

2005-09-01

9. A stepwise procedure for science communication in the field

Nisancioglu, Kerim; Paasche, Øyvind

2017-04-01

Communicating and disseminating earth science to laypersons, high-school students, and their teachers is becoming increasingly important considering the overwhelming impact human civilization has on the planet. One of the main challenges with this type of dissemination arises from the cross-disciplinary nature of the Earth system, as it encompasses anything from cloud physics to the geological evidence of ice ages played out on millennial time scales. During the last four years we have tested and developed an approach referred to as "Turspor", which can be translated as 'Trail Tracks'. The ambition with "Turspor" is to inspire participants to seek in-depth knowledge relating to observations of features made in the field (glacial moraines, active permafrost, clouds, winds, and so forth), as we have come to learn that observations made in the field enhance students' capability to grasp the bare essentials of the phenomena in question. By engaging master and PhD students in the process we create a platform where students can improve their teaching and communicative skills through a stepwise procedure. The initial concept was tested on 35 high school students during the summer of 2012 in the mountainous area of Snøheim on Dovre, Southern Norway. Before the arrival of the high school students, the university students prepared one-page written summaries describing relevant geological or meteorological features and trained on how best to disseminate a basic scientific understanding of these. Specific examples were patterned ground caused by permafrost, glacier flour, katabatic winds, and the equilibrium line altitude of glaciers. Based on the success of the program over the past 4 years with field trips together with local schools, we are in the process of developing the concept to be offered as a course at the master and PhD level, including a week of training in didactics applied to topics in the geosciences as well as practical training in the field. The

10. Boosted beta regression.

Matthias Schmid

Full Text Available Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fit a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
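
For orientation, the classical maximum-likelihood baseline that the paper improves on (not the boosting algorithm itself) can be sketched with a crude grid search; the simulated data, logit link, fixed precision, and grid are all invented:

```python
# Minimal beta regression by maximum likelihood: the response follows
# Beta(mu*phi, (1-mu)*phi) with the mean mu linked to x through a logit.
import math
import numpy as np

rng = np.random.default_rng(5)
n, phi = 2000, 20.0                       # phi: precision, assumed known here
x = rng.uniform(-1, 1, n)
b0_true, b1_true = 0.3, 1.2
mu = 1 / (1 + np.exp(-(b0_true + b1_true * x)))
y = rng.beta(mu * phi, (1 - mu) * phi)    # bounded (0,1) response

lgamma = np.vectorize(math.lgamma)

def loglik(b0, b1):
    m = 1 / (1 + np.exp(-(b0 + b1 * x)))  # logit link for the mean
    a, b = m * phi, (1 - m) * phi
    return np.sum(math.lgamma(phi) - lgamma(a) - lgamma(b)
                  + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

b0_grid = np.arange(0.0, 0.61, 0.06)
b1_grid = np.arange(0.8, 1.61, 0.08)
ll = np.array([[loglik(b0, b1) for b1 in b1_grid] for b0 in b0_grid])
i, j = np.unravel_index(int(np.argmax(ll)), ll.shape)
print(round(float(b0_grid[i]), 2), round(float(b1_grid[j]), 2))
```

The grid maximizer lands near the true (0.3, 1.2); the boosting method of the paper replaces this joint maximization with componentwise updates, which is what enables simultaneous variable selection.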

11. Stepwise multi-criteria optimization for robotic radiosurgery

Schlaefer, A.; Schweikard, A.

2008-01-01

Achieving good conformality and a steep dose gradient around the target volume remains a key aspect of radiosurgery. Clearly, this involves a trade-off between target coverage, conformality of the dose distribution, and sparing of critical structures. Yet, image guidance and robotic beam placement have extended highly conformal dose delivery to extracranial and moving targets. Therefore, the multi-criteria nature of the optimization problem becomes even more apparent, as multiple conflicting clinical goals need to be considered in concert to obtain an optimal treatment plan. Typically, planning for robotic radiosurgery is based on constrained optimization, namely linear programming. An extension of that approach is presented, such that each of the clinical goals can be addressed separately and in any sequential order. For a set of common clinical goals, the mapping to a mathematical objective and a corresponding constraint is defined. The trade-off among the clinical goals is explored by modifying the constraints and optimizing a simple objective, while retaining feasibility of the solution. Moreover, it becomes immediately obvious whether a desired goal can be achieved and where a trade-off is possible. No importance factors or predefined prioritizations of clinical goals are necessary. The presented framework forms the basis for interactive and automated planning procedures. It is demonstrated for a sample case that the linear programming formulation is suitable for searching for a clinically optimal treatment, and that the optimization steps can be performed quickly to establish that a Pareto-efficient solution has been found. Furthermore, it is demonstrated that the stepwise approach is preferable to modifying importance factors
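The stepwise idea, optimize one clinical goal, then freeze the achieved value as a constraint before optimizing the next, can be sketched with a toy linear program. The two beams, dose coefficients, and goal values below are invented for illustration and have no clinical meaning:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-beam plan: target dose = 1.0*x1 + 0.8*x2,
# organ-at-risk (OAR) dose = 0.2*x1 + 0.5*x2, beam weights x >= 0.
target = np.array([1.0, 0.8])
oar = np.array([0.2, 0.5])
bounds = [(0, None), (0, None)]

# Step 1: minimise OAR dose subject to the coverage goal target >= 30
# (linprog uses A_ub @ x <= b_ub, hence the sign flip on the coverage row).
step1 = linprog(c=oar, A_ub=[-target], b_ub=[-30.0], bounds=bounds)
oar_best = step1.fun

# Step 2: lock in the achieved OAR dose as a constraint, then minimise
# total beam-on time without giving up coverage. Feasibility is retained
# because step 1's solution still satisfies all constraints.
step2 = linprog(c=[1.0, 1.0],
                A_ub=[-target, oar], b_ub=[-30.0, oar_best],
                bounds=bounds)
```

Each subsequent goal is handled the same way: its objective is optimized, the achieved value becomes a constraint, and the next goal is addressed, with no importance factors anywhere.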

12. Traumatic eye ball luxation: A stepwise approach to globe salvage

Himika Gupta

2017-10-01

Full Text Available Craniofacial trauma is often associated with orbital and ocular injuries. We report a case of a 21-year-old male with a motor vehicle accident, orbital roof blow-in fracture, cerebrospinal fluid (CSF) leak, and left-sided globe luxation with corneal abrasion and complete conjunctival denuding. The patient was managed by a multispeciality team, and the eyeball was protected by an amniotic membrane graft (AMG) biological dressing, with novel use of an inverted sterile metallic bowl as mechanical protection until the patient stabilized. During surgery, the eyeball was reposited and the ocular surface was reconstructed using amniotic membrane and a symblepharon ring. Surgical correction and plating of the facial fractures and dural repair with autologous tensor fascia lata were done. Post surgery, the ocular surface was intact, ocular motility was well preserved, and the globe was prephthisical. Traumatic eyeball luxation is a rare but dramatic presentation which may occur in a blow-in fracture when the intraorbital volume reduces and expels the eyeball out of the socket. This may be associated with extraocular muscle rupture or optic nerve avulsion. The visual prognosis is nil in the majority of cases. However, management is targeted towards globe preservation in view of the psychological benefit and ease of cosmetic or prosthetic rehabilitation. Knowing the mechanism of luxation helps to plan the management. A stepwise approach for globe salvage is recommended. Team efforts to take care of various morbidities, with special steps to safeguard the eye, help to optimize outcomes. Keywords: Traumatic eyeball luxation, Blow-in orbital fractures, Amniotic membrane graft for ocular surface, Globe reposition

13. Step-wise pulling protocols for non-equilibrium dynamics

Ngo, Van Anh

The fundamental laws of thermodynamics and statistical mechanics, and the deeper understanding of quantum mechanics, have been rebuilt in recent years. This is partly because of the increasing power of computing resources, which allows shedding direct insight into the connections among the laws of thermodynamics, the statistical nature of our world, and the concepts of quantum mechanics, connections which have not yet been understood. But the most important reason, and the ultimate goal, is to understand the mechanisms, statistics and dynamics of biological systems, whose prevailing non-equilibrium processes violate the fundamental laws of thermodynamics, deviate from statistical mechanics, and finally complicate quantum effects. I believe that investigation of the fundamental laws of non-equilibrium dynamics will be a frontier of research for at least several more decades. One of these fundamental laws was first discovered in 1997 by Jarzynski, the so-called Jarzynski's Equality. Since then, different proofs, alternative descriptions of Jarzynski's Equality, and its further developments and applications have quickly accumulated. My understanding, development and application of an alternative theory of Jarzynski's Equality form the bulk of this dissertation. The core of my theory is based on stepwise pulling protocols, which provide deeper insight into how fluctuations of reaction coordinates contribute to free-energy changes along a reaction pathway. We find that the most optimal pathways, having the largest contribution to free-energy changes, follow the principle of detailed balance. This is a glimpse of why the principle of detailed balance appears so powerful for sampling the most probable statistics of events. In a further development of Jarzynski's Equality, I have been trying to use it in the formalism of diagonal entropy to propose a way to extract useful thermodynamic quantities such as temperature, work and free-energy profiles from far
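Jarzynski's Equality states exp(-ΔF/kT) = ⟨exp(-W/kT)⟩, relating an equilibrium free-energy change to an average over non-equilibrium work values. A quick numerical check under an assumed Gaussian work distribution, a case where the equality can be verified analytically (all numbers below are illustrative):

```python
import numpy as np

# For Gaussian work with kT = 1 and W ~ N(dF + s^2/(2 kT), s^2),
# the identity <exp(-W/kT)> = exp(-dF/kT) holds exactly.
rng = np.random.default_rng(1)
kT, dF, s = 1.0, 1.0, 0.5
W = rng.normal(dF + s**2 / (2 * kT), s, size=200_000)

# Jarzynski estimator of the free-energy change from non-equilibrium work
dF_hat = -kT * np.log(np.mean(np.exp(-W / kT)))
```

The exponential average is dominated by rare low-work trajectories, which is why broad work distributions need many samples; that sampling problem is part of what stepwise pulling protocols aim to tame.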

14. Understanding logistic regression analysis.

Sperandei, Sandro

2014-01-01

Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is that it avoids confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After a definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.
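For a single binary predictor the connection between the fitted slope and the odds ratio is exact: exp(slope) equals the classic 2x2-table odds ratio. A minimal Newton-Raphson fit illustrating this (the counts are made up):

```python
import numpy as np

# 2x2 example: exposure x (0/1) vs. binary outcome y.
# Unexposed: 40/100 events; exposed: 60/100 events.
x = np.repeat([0, 1], 100)
y = np.concatenate([np.repeat([1, 0], [40, 60]),
                    np.repeat([1, 0], [60, 40])])
X = np.column_stack([np.ones_like(x), x])

# Newton-Raphson for the logistic log-likelihood
b = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))          # fitted probabilities
    W = p * (1 - p)                        # IRLS weights
    b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

odds_ratio = np.exp(b[1])   # table OR = (60/40)/(40/60) = 2.25
```

With several explanatory variables the same fit yields one exp(coefficient) per variable, each an odds ratio adjusted for the others, which is the confounding control the abstract refers to.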

15. Applied linear regression

Weisberg, Sanford

2013-01-01

Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." -International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

16. Applied logistic regression

Hosmer, David W; Sturdivant, Rodney X

2013-01-01

A new edition of the definitive guide to logistic regression modeling for health science and other applications This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-

17. Understanding poisson regression.

Hayat, Matthew J; Higgins, Melinda

2014-04-01

Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
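A minimal Poisson regression sketch with a log link; with one binary covariate the fitted slope is exactly the log of the ratio of the group means, and a Pearson chi-square statistic gives a crude check for the overdispersion discussed above (the counts are invented):

```python
import numpy as np

# One binary covariate; group means are 2 and 6 by construction,
# so the MLE slope of the log-linear model is exactly log(6/2) = log 3.
counts0 = np.array([1, 2, 3, 2, 2, 1, 3, 2, 2, 2])   # mean 2.0
counts1 = np.array([5, 6, 7, 6, 6, 5, 7, 6, 6, 6])   # mean 6.0
y = np.concatenate([counts0, counts1])
x = np.repeat([0.0, 1.0], 10)
X = np.column_stack([np.ones_like(x), x])

# Newton-Raphson for Poisson regression with a log link
b = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ b)
    b += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))

slope = b[1]   # log rate ratio
# Pearson chi-square / degrees of freedom; values well above 1 suggest
# overdispersion, pointing towards negative binomial regression instead.
mu_hat = np.exp(X @ b)
dispersion = np.sum((y - mu_hat) ** 2 / mu_hat) / (len(y) - 2)
```

Exponentiating the slope gives a rate ratio, the count-data analogue of the odds ratio in logistic regression.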

18. Principal components analysis in clinical studies.

2017-09-01

In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity in regression models. One approach to solve this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment. The example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
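The tutorial works in R; the same orthogonal-transformation idea can be sketched in a few lines of Python. The simulated data below mirror the article's setup, two latent factors plus small noise, so the first two components should capture most of the variance:

```python
import numpy as np

# Simulated data driven by two latent factors plus small noise
rng = np.random.default_rng(42)
n = 500
f = rng.normal(size=(n, 2))                    # two latent factors
load = np.array([[1.0, 0.5, 0.8, 0.2, 0.6],    # factor loadings (made up)
                 [0.2, 0.9, 0.1, 1.0, 0.4]])
X = f @ load + 0.1 * rng.normal(size=(n, 5))

# PCA via SVD of the centred data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)                # variance explained per PC
scores = Xc @ Vt.T                             # PC scores (projections)
```

Keeping only the first columns of `scores` gives uncorrelated regressors, which is exactly how PCA removes multicollinearity before a subsequent regression.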

19. Stepwise development of MAIT cells in mouse and human.

Emmanuel Martin

2009-03-01

Full Text Available Mucosal-associated invariant T (MAIT) cells display two evolutionarily conserved features: an invariant T cell receptor alpha (iTCRalpha) chain and restriction by the nonpolymorphic class Ib major histocompatibility complex (MHC) molecule, MHC-related molecule 1 (MR1). MR1 expression on thymus epithelial cells is not necessary for MAIT cell development, but their accumulation in the gut requires MR1-expressing B cells and commensal flora. MAIT cell development is poorly understood, as these cells have not been found in the thymus so far. Herein, complementary human and mouse experiments using an anti-human Valpha7.2 antibody and MAIT cell-specific iTCRalpha and TCRbeta transgenic mice in different genetic backgrounds show that MAIT cell development is a stepwise process, with an intra-thymic selection followed by peripheral expansion. Mouse MAIT cells are selected in an MR1-dependent manner both in fetal thymic organ culture and in double iTCRalpha and TCRbeta transgenic RAG knockout mice. In the latter mice, MAIT cells do not expand in the periphery unless B cells are added back by adoptive transfer, showing that B cells are not required for the initial thymic selection step but for the peripheral accumulation. In humans, contrary to natural killer T (NKT) cells, MAIT cells display a naïve phenotype in the thymus as well as in cord blood, where they are in low numbers. After birth, MAIT cells acquire a memory phenotype and expand dramatically, up to 1%-4% of blood T cells. Finally, in contrast with NKT cells, human MAIT cell development is independent of the molecular adaptor SAP. Interestingly, mouse MAIT cells display a naïve phenotype and do not express the ZBTB16 transcription factor, which, in contrast, is expressed by NKT cells and the memory human MAIT cells found in the periphery after birth. In conclusion, MAIT cells are selected by MR1 in the thymus on a non-B non-T hematopoietic cell, and acquire a memory phenotype and expand in the

20. Portraits of Principal Practice: Time Allocation and School Principal Work

Sebastian, James; Camburn, Eric M.; Spillane, James P.

2018-01-01

Purpose: The purpose of this study was to examine how school principals in urban settings distributed their time working on critical school functions. We also examined who principals worked with and how their time allocation patterns varied by school contextual characteristics. Research Method/Approach: The study was conducted in an urban school…

1. Vector regression introduced

Mok Tik

2014-06-01

Full Text Available This study formulates regression of vector data that will enable statistical analysis of various geodetic phenomena such as polar motion, ocean currents, typhoon/hurricane tracking, crustal deformations, and precursory earthquake signals. The observed vector variable of an event (dependent vector variable) is expressed as a function of a number of hypothesized phenomena realized also as vector variables (independent vector variables) and/or scalar variables that are likely to impact the dependent vector variable. The proposed representation has the unique property of solving the coefficients of independent vector variables (explanatory variables) also as vectors, hence it supersedes multivariate multiple regression models, in which the unknown coefficients are scalar quantities. For the solution, complex numbers are used to represent vector information, and the method of least squares is deployed to estimate the vector model parameters after transforming the complex vector regression model into a real vector regression model through isomorphism. Various operational statistics for testing the predictive significance of the estimated vector parameter coefficients are also derived. A simple numerical example demonstrates the use of the proposed vector regression analysis in modeling typhoon paths.
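The core trick, representing 2-D vectors as complex numbers so that the regression coefficient itself rotates and scales the explanatory vectors, can be sketched as follows. The data are synthetic, and the paper's full framework additionally covers multiple vector regressors and the associated test statistics:

```python
import numpy as np

# Plane vectors encoded as complex numbers; the true coefficient
# scales by 2 and rotates by 30 degrees (values are illustrative).
rng = np.random.default_rng(7)
n = 200
z = rng.normal(size=n) + 1j * rng.normal(size=n)   # independent vector variable
beta_true = 2.0 * np.exp(1j * np.pi / 6)
noise = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
w = beta_true * z + noise                          # dependent vector variable

# Complex least squares: lstsq handles complex data, using the
# conjugate transpose in the normal equations.
beta_hat = np.linalg.lstsq(z[:, None], w, rcond=None)[0][0]
```

The estimated coefficient is itself a vector: its modulus is the scale factor and its argument the rotation angle relating the two vector fields.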

2. Multicollinearity and Regression Analysis

Daoud, Jamal I.

2017-12-01

In regression analysis it is expected to have correlation between the response and the predictor(s), but correlation among the predictors is undesirable. The number of predictors included in the regression model depends on many factors, among them historical data, experience, etc. In the end, the selection of the most important predictors is somewhat subjective, left to the researcher. Multicollinearity is a phenomenon in which two or more predictors are correlated; if this happens, the standard errors of the coefficients will increase [8]. Increased standard errors mean that the coefficients for some or all independent variables may not be found to be significantly different from zero. In other words, by overinflating the standard errors, multicollinearity makes some variables statistically insignificant when they should be significant. In this paper we focus on multicollinearity, its reasons, and its consequences for the reliability of the regression model.
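A standard diagnostic for the problem described here is the variance inflation factor (VIF); values above roughly 5-10 are commonly read as a warning sign. A self-contained sketch with deliberately collinear synthetic data:

```python
import numpy as np

def vif(X):
    """VIF of each column: 1 / (1 - R^2), where R^2 comes from
    regressing that column on all remaining columns (with intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        yj = X[:, j]
        Xj = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        resid = yj - Xj @ np.linalg.lstsq(Xj, yj, rcond=None)[0]
        r2 = 1 - resid.var() / yj.var()
        out[j] = 1 / (1 - r2)
    return out

rng = np.random.default_rng(3)
x1 = rng.normal(size=300)
x2 = rng.normal(size=300)
x3 = x1 + 0.1 * rng.normal(size=300)   # nearly a copy of x1
vifs = vif(np.column_stack([x1, x2, x3]))
```

Here `x1` and `x3` inflate each other's VIFs far above 10, while the independent `x2` stays near 1, mirroring the standard-error inflation the paper describes.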

3. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

Qiutong Jin

2016-06-01

Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK and geographically weighted regression Kriging (GWRK methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM, normalized difference vegetation index (NDVI, solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.

4. Minimax Regression Quantiles

Bache, Stefan Holst

A new and alternative quantile regression estimator is developed, and it is shown that the estimator is root-n-consistent and asymptotically normal. The estimator is based on a minimax 'deviance function' and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work quite well in practice, but whether it has theoretical justification is still an open question.

5. riskRegression

Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

2017-01-01

In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface for predicting the covariate specific absolute risks, their confidence intervals, and their confidence bands based on right censored time to event data. We provide explicit formulas for our implementation of the estimator of the (stratified) baseline hazard function in the presence of tied event times. As a by-product ... functionals. The software presented here is implemented in the riskRegression package.

6. School Principals' Sources of Knowledge

Perkins, Arland Early

2014-01-01

The purpose of this study was to determine what sources of professional knowledge are available to principals in 1 rural East Tennessee school district. Qualitative research methods were applied to gain an understanding of what sources of knowledge are used by school principals in 1 rural East Tennessee school district and the barriers they face…

7. Innovation Management Perceptions of Principals

Bakir, Asli Agiroglu

2016-01-01

This study aims to determine the perceptions of principals about innovation management and to investigate whether there is a significant difference in this perception according to various parameters. In the study, a descriptive research model is used, and the universe consists of principals who participated in the "Acquiring Formation Course…

8. What Do Effective Principals Do?

Protheroe, Nancy

2011-01-01

Much has been written during the past decade about the changing role of the principal and the shift in emphasis from manager to instructional leader. Anyone in education, and especially principals themselves, could develop a mental list of responsibilities that fit within each of these realms. But research makes it clear that both those aspects of…

9. Time Management for New Principals

Ruder, Robert

2008-01-01

Becoming a principal is a milestone in an educator's professional life. The principalship is an opportunity to provide leadership that will afford students opportunities to thrive in a nurturing and supportive environment. Despite the continuously expanding demands of being a new principal, effective time management will enable an individual to be…

10. Bureaucratic Control and Principal Role.

Bezdek, Robert; And Others

The purposes of this study were to determine the manner in which the imposition of increased bureaucratic control over principals influenced their allocation of time to tasks and to investigate principals' perceptions of the changes in their roles brought about by this increased control. The specific bureaucratic control system whose effects were…

11. Stepwise mitigation of the Macesnik landslide, N Slovenia

M. Mikoš

2005-01-01

the landslide movement starting from the slide plane towards its surface. Due to the length of the landslide and its longitudinal geometry, it will be divided into several sections, and the mitigation works will be executed consecutively in phases. Such an approach proved effective in the 800 m long uppermost section of the landslide, where 3 parallel deep drain trenches (250 m long, 8 to 12 m deep) were executed in the autumn of 2003. The reduction of the movements in 2004 enabled the construction of two 5 m wide and 22 m deep reinforced concrete shafts, finished in early 2005. In Slovenia, this sort of support construction, known from road construction, was used for the first time for landslide mitigation. The monitoring results show that the landslide displacements have been drastically reduced to less than 1 cm/day. As a part of the stepwise mitigation of the Macesnik landslide, further reinforced concrete shafts are to be constructed in the middle section of the landslide to support the road crossing the landslide. At the landslide toe, a support construction is planned to prevent further landslide advancement; its type is still to be defined during the procedure of adopting a detailed plan of national importance for the Macesnik landslide.

12. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction

Cobbs Gary

2012-08-01

Full Text Available Abstract Background: Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Results: Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the
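The flavor of a stepwise, non-constant-efficiency description can be conveyed with a toy simulation in which the per-cycle efficiency depends on the remaining primer concentration. This is an illustrative caricature, not the equilibrium models fitted in the paper, and all constants are invented:

```python
import numpy as np

# Per-cycle efficiency falls as primers are consumed, producing the
# familiar sigmoid amplification curve instead of unbounded growth.
K = 5e11            # half-saturation constant for primer-limited annealing
primers = 1e13      # primer molecules available
target = 1e4        # initial target copies
curve, eff = [], []
for cycle in range(40):
    e = primers / (primers + K)            # efficiency in [0, 1]
    new_copies = min(target * e, primers)  # cannot exceed remaining primers
    target += new_copies
    primers -= new_copies
    curve.append(target)
    eff.append(e)
curve = np.array(curve)
```

Early cycles are near-exponential (efficiency close to 1), then the curve plateaus as primers run out, which is why fitting a constant-efficiency exponential to the whole curve misestimates initial concentrations.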

13. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

Cobbs, Gary

2012-08-16

Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of

14. Bayesian logistic regression analysis

Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

2012-01-01

In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuissance parameters, the Jacobian transformation is an

15. Linear Regression Analysis

Seber, George A F

2012-01-01

Concise, mathematically clear, and comprehensive treatment of the subject. Expanded coverage of diagnostics and methods of model fitting. Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models. More than 200 problems throughout the book plus outline solutions for the exercises. This revision has been extensively class-tested.

16. Nonlinear Regression with R

Ritz, Christian; Parmigiani, Giovanni

2009-01-01

R is a rapidly evolving lingua franca of graphical display and statistical analysis of experiments from the applied sciences. This book provides a coherent treatment of nonlinear regression with R by means of examples from a diversity of applied sciences such as biology, chemistry, engineering, medicine and toxicology.
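The book's examples are in R; the same workflow (model function, starting values, least-squares fit) looks like this in Python, with an invented exponential-decay dataset standing in for an applied-science example:

```python
import numpy as np
from scipy.optimize import curve_fit

# Nonlinear model: exponential decay towards a baseline
def decay(t, a, k, c):
    return a * np.exp(-k * t) + c

# Synthetic measurements with small noise (true values 3.0, 0.7, 1.0)
t = np.linspace(0, 10, 50)
rng = np.random.default_rng(5)
y = decay(t, 3.0, 0.7, 1.0) + 0.02 * rng.normal(size=t.size)

# Least-squares fit; p0 supplies the starting values that nonlinear
# regression, unlike linear regression, always requires.
popt, pcov = curve_fit(decay, t, y, p0=(1.0, 1.0, 0.0))
```

The diagonal of `pcov` gives approximate parameter variances, the analogue of the standard errors reported by R's `nls`.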

17. Bayesian ARTMAP for regression.

Sasu, L M; Andonie, R

2013-10-01

Bayesian ARTMAP (BA) is a recently introduced neural architecture which uses a combination of Fuzzy ARTMAP competitive learning and Bayesian learning. Training is generally performed online, in a single-epoch. During training, BA creates input data clusters as Gaussian categories, and also infers the conditional probabilities between input patterns and categories, and between categories and classes. During prediction, BA uses Bayesian posterior probability estimation. So far, BA was used only for classification. The goal of this paper is to analyze the efficiency of BA for regression problems. Our contributions are: (i) we generalize the BA algorithm using the clustering functionality of both ART modules, and name it BA for Regression (BAR); (ii) we prove that BAR is a universal approximator with the best approximation property. In other words, BAR approximates arbitrarily well any continuous function (universal approximation) and, for every given continuous function, there is one in the set of BAR approximators situated at minimum distance (best approximation); (iii) we experimentally compare the online trained BAR with several neural models, on the following standard regression benchmarks: CPU Computer Hardware, Boston Housing, Wisconsin Breast Cancer, and Communities and Crime. Our results show that BAR is an appropriate tool for regression tasks, both for theoretical and practical reasons. Copyright © 2013 Elsevier Ltd. All rights reserved.

18. Bounded Gaussian process regression

Jensen, Bjørn Sand; Nielsen, Jens Brehm; Larsen, Jan

2013-01-01

We extend the Gaussian process (GP) framework for bounded regression by introducing two bounded likelihood functions that model the noise on the dependent variable explicitly. This is fundamentally different from the implicit noise assumption in the previously suggested warped GP framework. We ... with the proposed explicit noise-model extension.

19. and Multinomial Logistic Regression

This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for classifying students based on their academic performance. The predictive accuracy for each model was measured by their average Classification Correct Rate (CCR).

20. Mechanisms of neuroblastoma regression

Brodeur, Garrett M.; Bagatell, Rochelle

2014-01-01

Recent genomic and biological studies of neuroblastoma have shed light on the dramatic heterogeneity in the clinical behaviour of this disease, which spans from spontaneous regression or differentiation in some patients, to relentless disease progression in others, despite intensive multimodality therapy. This evidence also suggests several possible mechanisms to explain the phenomena of spontaneous regression in neuroblastomas, including neurotrophin deprivation, humoral or cellular immunity, loss of telomerase activity and alterations in epigenetic regulation. A better understanding of the mechanisms of spontaneous regression might help to identify optimal therapeutic approaches for patients with these tumours. Currently, the most druggable mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A pathway. Indeed, targeted therapy aimed at inhibiting neurotrophin receptors might be used in lieu of conventional chemotherapy or radiation in infants with biologically favourable tumours that require treatment. Alternative approaches consist of breaking immune tolerance to tumour antigens or activating neurotrophin receptor pathways to induce neuronal differentiation. These approaches are likely to be most effective against biologically favourable tumours, but they might also provide insights into treatment of biologically unfavourable tumours. We describe the different mechanisms of spontaneous neuroblastoma regression and the consequent therapeutic approaches. PMID:25331179

1. Functional data analysis of generalized regression quantiles

Guo, Mengmeng

2013-11-05

Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.

2. Functional data analysis of generalized regression quantiles

Guo, Mengmeng; Zhou, Lan; Huang, Jianhua Z.; Härdle, Wolfgang Karl

2013-01-01

Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations. © 2013 Springer Science+Business Media New York.

3. Evaluation for Long Term PM10 Concentration Forecasting using Multi Linear Regression (MLR) and Principal Component Regression (PCR) Models

Samsuri Abdullah

2016-07-01

Full Text Available Air pollution in Peninsular Malaysia is dominated by particulate matter, as demonstrated by the highest Air Pollution Index (API) values compared to the other pollutants in most parts of the country. Particulate Matter (PM10) forecasting model development is crucial because it allows the authority and citizens of a community to take necessary actions to limit their exposure to harmful levels of particulate pollution and to implement protection measures that significantly improve air quality at designated locations. This study aims at improving the ability of MLR using PCs as inputs for PM10 concentration forecasting. Daily observations of PM10 in Kuala Terengganu, Malaysia from January 2003 till December 2011 were utilized to forecast PM10 concentration levels. MLR and PCR (using PC inputs) models were developed and their performance was evaluated using RMSE, NAE and IA. Results revealed that PCR performed better than MLR due to the implementation of PCA, which reduces complexity and eliminates multicollinearity in the data.
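The PCR idea in this record, regressing on principal component scores rather than on the raw, collinear predictors, can be sketched with numpy alone. The data below are synthetic and the function name is illustrative, not the authors' implementation.

```python
import numpy as np

def pcr_fit_predict(X, y, n_components):
    """Principal component regression sketch: standardize the predictors,
    project onto the leading principal directions (from the SVD of the
    standardized matrix), then regress y on the component scores."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    scores = Xs @ Vt[:n_components].T           # uncorrelated PC scores
    Z = np.column_stack([np.ones(len(y)), scores])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Z @ coef

rng = np.random.default_rng(1)
n = 200
t = rng.normal(size=n)
# two nearly collinear predictors plus one pure-noise predictor
X = np.column_stack([t, t + 0.01 * rng.normal(size=n), rng.normal(size=n)])
y = 2.0 * t + rng.normal(0.0, 0.5, n)
pred = pcr_fit_predict(X, y, n_components=2)
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```

Because the PC scores are uncorrelated by construction, the collinearity between the first two predictors never enters the regression step, which is the mechanism the abstract credits for PCR's advantage over plain MLR.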

4. Stepwise radiofrequency ablation of Barrett's esophagus preserves esophageal inner diameter, compliance, and motility

Beaumont, H.; Gondrie, J. J.; McMahon, B. P.; Pouw, R. E.; Gregersen, H.; Bergman, J. J.; Boeckxstaens, G. E.

2009-01-01

Background and aim: Stepwise endoscopic circumferential and focal radiofrequency ablation is safe and effective for the eradication of Barrett's esophagus. In contrast to other techniques, radiofrequency ablation appears to avoid significant esophageal scarring or stenosis. Our aim was to evaluate

5. Concerted and stepwise mechanisms in cycloaddition reactions: potential surfaces and isotope effects

Houk, K.N.; Yi Li; Storer, Joey; Raimondi, Laura; Beno, Brett

1994-01-01

CASSCF/6-31G* calculations have been performed on concerted and stepwise Diels-Alder reactions of butadiene with ethene, the dimerization of butadiene, and the dimerization of cyclobutadiene. The relative energies of concerted and stepwise mechanisms are compared, and the factors influencing these "energies of concert" are discussed. The comparison of calculated isotope effects to experimental data provides support for theoretical results. (Author)

6. Productivity Enhancement of Solar Still with PV Powered Heating Coil and Chamber Step-Wise Basin

Salah Abdallah

2018-03-01

Full Text Available There is a strong need to improve the productivity of the single slope solar still. A PV generator powered electrical heater and a chamber step-wise design were introduced to the conventional solar still. An experimental study was performed to investigate the effect of adding the above-mentioned modifications on the output parameters of the modified solar still. The inclusion of the PV-powered heating coil and chamber step-wise design enhanced the productivity of the distiller by up to 1098%.

7. Ridge Regression Signal Processing

Kuhl, Mark R.

1990-01-01

The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
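The basic ridge principle invoked here, stabilizing a least-squares solve when the measurement geometry is poor, reduces to adding a penalty to the normal equations. The following numpy sketch (synthetic data, illustrative names, not the paper's recursive estimator) contrasts ordinary least squares with the ridge estimate on nearly collinear columns:

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate (X'X + lam*I)^{-1} X'y; lam = 0 recovers ordinary
    least squares, lam > 0 stabilizes an ill-conditioned X'X."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)   # near-singular "poor geometry"
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(0.0, 0.1, n)

b_ols = ridge(X, y, 0.0)    # coefficients blow up along the weak direction
b_ridge = ridge(X, y, 1.0)  # shrunken, stable, close to the truth (1, 1)
```

The same contrast drives the recursive ridge estimator in the record: under poor geometry the unpenalized solve amplifies noise enormously, while a small penalty keeps the estimate bounded.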

8. Subset selection in regression

Miller, Alan

2002-01-01

Originally published in 1990, the first edition of Subset Selection in Regression filled a significant gap in the literature, and its critical and popular success has continued for more than a decade. Thoroughly revised to reflect progress in theory, methods, and computing power, the second edition promises to continue that tradition. The author has thoroughly updated each chapter, incorporated new material on recent developments, and included more examples and references. New in the Second Edition: a separate chapter on Bayesian methods; complete revision of the chapter on estimation; a major example from the field of near infrared spectroscopy; more emphasis on cross-validation; greater focus on bootstrapping; stochastic algorithms for finding good subsets from large numbers of predictors when an exhaustive search is not feasible; software available on the Internet for implementing many of the algorithms presented; more examples. Subset Selection in Regression, Second Edition remains dedicated to the techniques for fitting...

9. Better Autologistic Regression

Mark A. Wolters

2017-11-01

Full Text Available Autologistic regression is an important probability model for dichotomous random variables observed along with covariate information. It has been used in various fields for analyzing binary data possessing spatial or network structure. The model can be viewed as an extension of the autologistic model (also known as the Ising model, quadratic exponential binary distribution, or Boltzmann machine) to include covariates. It can also be viewed as an extension of logistic regression to handle responses that are not independent. Not all authors use exactly the same form of the autologistic regression model. Variations of the model differ in two respects. First, the variable coding—the two numbers used to represent the two possible states of the variables—might differ. Common coding choices are (zero, one) and (minus one, plus one). Second, the model might appear in either of two algebraic forms: a standard form, or a recently proposed centered form. Little attention has been paid to the effect of these differences, and the literature shows ambiguity about their importance. It is shown here that changes to either coding or centering in fact produce distinct, non-nested probability models. Theoretical results, numerical studies, and analysis of an ecological data set all show that the differences among the models can be large and practically significant. Understanding the nature of the differences and making appropriate modeling choices can lead to significantly improved autologistic regression analyses. The results strongly suggest that the standard model with plus/minus coding, which we call the symmetric autologistic model, is the most natural choice among the autologistic variants.
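The claim that the two codings produce genuinely different probability models can be checked by brute force on a tiny graph. The sketch below enumerates a two-vertex standard autologistic model exactly; the parameter values and covariates are toy numbers chosen only to make the coding effect visible, not anything from the paper.

```python
import numpy as np
from itertools import product

def autologistic_pmf(coding, beta=0.5, lam=1.0, x=(1.0, -1.0)):
    """Exact distribution of a two-vertex standard autologistic model,
    P(z1, z2) proportional to exp(beta*(x1*z1 + x2*z2) + lam*z1*z2),
    under a chosen variable coding such as (0, 1) or (-1, 1)."""
    states = list(product(coding, repeat=2))
    w = np.array([np.exp(beta * (x[0] * z1 + x[1] * z2) + lam * z1 * z2)
                  for z1, z2 in states])
    probs = w / w.sum()                 # normalize over all four states
    return dict(zip(states, probs))

p_01 = autologistic_pmf((0, 1))         # (zero, one) coding
p_pm = autologistic_pmf((-1, 1))        # (minus one, plus one) coding
# probability that both vertices sit in the "high" state differs between
# the codings: no reparameterization maps one model onto the other here
diff = abs(float(p_01[(1, 1)]) - float(p_pm[(1, 1)]))
```

Even on two vertices with identical (beta, lam), the event probabilities disagree, which is the small-scale version of the non-nestedness result the abstract reports.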

Kernberg, O F

1979-02-01

The choice of good leaders is a major task for all organizations. Information regarding the prospective administrator's personality should complement questions regarding his previous experience, his general conceptual skills, his technical knowledge, and the specific skills in the area for which he is being selected. The growing psychoanalytic knowledge about the crucial importance of internal, in contrast to external, object relations, and about the mutual relationships of regression in individuals and in groups, constitutes an important practical tool for the selection of leaders.

11. Classification and regression trees

Breiman, Leo; Olshen, Richard A; Stone, Charles J

1984-01-01

The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

12. Logistic regression models

Hilbe, Joseph M

2009-01-01

This book really does cover everything you ever wanted to know about logistic regression … with updates available on the author's website. Hilbe, a former national athletics champion, philosopher, and expert in astronomy, is a master at explaining statistical concepts and methods. Readers familiar with his other expository work will know what to expect: great clarity. The book provides considerable detail about all facets of logistic regression. No step of an argument is omitted so that the book will meet the needs of the reader who likes to see everything spelt out, while a person familiar with some of the topics has the option to skip "obvious" sections. The material has been thoroughly road-tested through classroom and web-based teaching. … The focus is on helping the reader to learn and understand logistic regression. The audience is not just students meeting the topic for the first time, but also experienced users. I believe the book really does meet the author's goal … .-Annette J. Dobson, Biometric...

13. relationship between principals' management approaches

Data were collected using a self-administered questionnaire from a sample of. 211 teachers, 28 principals and 22 chairpersons of parent- teachers association. Data were ..... their role expectation in discipline management. Data from the 20 ...

14. Principals, agents and research programmes

Elizabeth Shove

2003-01-01

Research programmes appear to represent one of the more powerful instruments through which research funders (principals) steer and shape what researchers (agents) do. The fact that agents navigate between different sources and styles of programme funding and that they use programmes to their own ends is readily accommodated within principal-agent theory with the help of concepts such as shirking and defection. Taking a different route, I use three examples of research programming (by the UK, ...

15. Steganalysis using logistic regression

Lubenko, Ivans; Ker, Andrew D.

2011-02-01

We advocate Logistic Regression (LR) as an alternative to the Support Vector Machine (SVM) classifiers commonly used in steganalysis. LR offers more information than traditional SVM methods - it estimates class probabilities as well as providing a simple classification - and can be adapted more easily and efficiently for multiclass problems. Like SVM, LR can be kernelised for nonlinear classification, and it shows comparable classification accuracy to SVM methods. This work is a case study, comparing accuracy and speed of SVM and LR classifiers in detection of LSB Matching and other related spatial-domain image steganography, through the state-of-art 686-dimensional SPAM feature set, in three image sets.
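The key advantage the record cites, that LR returns class probabilities rather than just a decision, is visible even in a bare-bones implementation. This numpy sketch trains a logistic model by gradient ascent on synthetic two-class data standing in for cover/stego features; the data and names are illustrative, not the 686-dimensional SPAM setup.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=3000):
    """Batch gradient ascent on the logistic log-likelihood. The fitted
    model yields calibrated class probabilities sigmoid(Z @ w), unlike a
    margin-based SVM score."""
    Z = np.column_stack([np.ones(len(y)), X])   # prepend an intercept
    w = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w += lr * Z.T @ (y - p) / len(y)        # gradient of the log-likelihood
    return w

rng = np.random.default_rng(3)
cover = rng.normal(-1.0, 1.0, (100, 2))  # "cover" feature cloud
stego = rng.normal(+1.0, 1.0, (100, 2))  # "stego" feature cloud
X = np.vstack([cover, stego])
y = np.r_[np.zeros(100), np.ones(100)]

w = fit_logistic(X, y)
proba = 1.0 / (1.0 + np.exp(-np.column_stack([np.ones(200), X]) @ w))
acc = float(np.mean((proba > 0.5) == y))
```

Thresholding `proba` at 0.5 gives the hard classification, but the probabilities themselves carry the extra information (confidence, multiclass extensions) that the abstract highlights over SVM.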

16. SEPARATION PHENOMENA LOGISTIC REGRESSION

Ikaro Daniel de Carvalho Barreto

2014-03-01

Full Text Available This paper proposes an application of concepts about the maximum likelihood estimation of the binomial logistic regression model to the separation phenomena. Separation generates bias in the estimation, provides different interpretations of the estimates across the different statistical tests (Wald, Likelihood Ratio and Score), and provides different estimates across the different iterative methods (Newton-Raphson and Fisher Score). It also presents an example that demonstrates the direct implications for the validation of the model and validation of variables, the implications for estimates of odds ratios and confidence intervals, generated from the Wald statistics. Furthermore, we present, briefly, the Firth correction to circumvent the phenomena of separation.
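The separation phenomenon itself is easy to demonstrate numerically: when the response is perfectly separated by a covariate, the logistic log-likelihood keeps increasing as the slope grows, so no finite maximum likelihood estimate exists. A minimal sketch with invented data:

```python
import numpy as np

def loglik(beta, x, y):
    """Log-likelihood of a no-intercept binomial logistic model with
    linear predictor eta = beta * x."""
    eta = beta * x
    return float(np.sum(y * eta - np.log1p(np.exp(eta))))

# complete separation: the response is 1 exactly when x > 0
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = (x > 0).astype(float)

# the likelihood increases monotonically as the slope grows toward
# infinity, approaching its supremum of 0 (perfect fit) without
# attaining it: the MLE diverges
lls = [loglik(b, x, y) for b in (1.0, 5.0, 10.0, 50.0)]
```

This divergence is what makes Newton-Raphson and Fisher scoring report different (and equally meaningless) "estimates" under separation, and it is the situation the Firth correction is designed to repair.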

17. riskRegression

Ozenne, Brice; Sørensen, Anne Lyngholm; Scheike, Thomas

2017-01-01

In the presence of competing risks a prediction of the time-dynamic absolute risk of an event can be based on cause-specific Cox regression models for the event and the competing risks (Benichou and Gail, 1990). We present computationally fast and memory optimized C++ functions with an R interface......-product we obtain fast access to the baseline hazards (compared to survival::basehaz()) and predictions of survival probabilities, their confidence intervals and confidence bands. Confidence intervals and confidence bands are based on point-wise asymptotic expansions of the corresponding statistical...

Goutte, Cyril; Larsen, Jan

2000-01-01

Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

Goutte, Cyril; Larsen, Jan

1998-01-01

Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

20. Principal Curves on Riemannian Manifolds.

Hauberg, Soren

2016-09-01

Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

1. Aid and growth regressions

Hansen, Henrik; Tarp, Finn

2001-01-01

This paper examines the relationship between foreign aid and growth in real GDP per capita as it emerges from simple augmentations of popular cross country growth specifications. It is shown that aid in all likelihood increases the growth rate, and this result is not conditional on ‘good’ policy....... investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regressions are used for policy purposes.

2. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.

Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y

2008-02-18

The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structural and pharmacological diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches.
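The stepwise-MLR stage of the two-step models in this record can be sketched as a greedy forward search. The following is a simplified, hypothetical illustration on synthetic data (plain forward selection by residual sum of squares, without the entry/exit significance tests a full stepwise procedure would add):

```python
import numpy as np

def forward_stepwise(X, y, max_vars=3):
    """Greedy forward selection for multiple linear regression: at each
    step add the predictor whose inclusion most reduces the residual
    sum of squares of the least-squares fit."""
    n, p = X.shape
    selected, remaining = [], list(range(p))

    def rss(cols):
        Z = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return float(np.sum((y - Z @ coef) ** 2))

    while remaining and len(selected) < max_vars:
        best = min(remaining, key=lambda c: rss(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(6)
X = rng.normal(size=(150, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + rng.normal(0.0, 0.5, 150)
order = forward_stepwise(X, y, max_vars=2)   # recovers the active predictors
```

In the two-step schemes of the abstract, a selection pass like this fixes the linear part of the model before a non-linear learner (MARS or BRT) is fitted, or vice versa.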

3. The Principal and the Law. Elementary Principal Series No. 7.

Doverspike, David E.; Cone, W. Henry

Developments over the past 25 years in school-related legal issues in elementary schools have significantly changed the principal's role. In 1975, a decision of the U.S. Supreme Court established three due-process guidelines for short-term suspension. The decision requires student notification of charges, explanation of evidence, and an informal…

4. Surface analysis the principal techniques

Vickerman, John C

2009-01-01

This completely updated and revised second edition of Surface Analysis: The Principal Techniques deals with the characterisation and understanding of the outer layers of substrates, how they react, look and function, which are all of interest to surface scientists. Within this comprehensive text, experts in each analysis area introduce the theory and practice of the principal techniques that have shown themselves to be effective in both basic research and in applied surface analysis. Examples of analysis are provided to facilitate the understanding of this topic and to show readers how they c

5. Principal bundles the classical case

Sontz, Stephen Bruce

2015-01-01

This introductory graduate level text provides a relatively quick path to a special topic in classical differential geometry: principal bundles.  While the topic of principal bundles in differential geometry has become classic, even standard, material in the modern graduate mathematics curriculum, the unique approach taken in this text presents the material in a way that is intuitive for both students of mathematics and of physics. The goal of this book is to present important, modern geometric ideas in a form readily accessible to students and researchers in both the physics and mathematics communities, providing each with an understanding and appreciation of the language and ideas of the other.

6. Modified Regression Correlation Coefficient for Poisson Regression Model

Kaengthong, Nattacha; Domthong, Uthumporn

2017-09-01

This study gives attention to indicators in predictive power of the Generalized Linear Model (GLM) which are widely used; however, often having some restrictions. We are interested in regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, and defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was modifying regression correlation coefficient for Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and having multicollinearity in independent variables. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient based on Bias and the Root Mean Square Error (RMSE).
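The quantity being modified here, the correlation between Y and the fitted E(Y|X), can be computed directly once a Poisson regression has been fitted. This numpy sketch fits the GLM by Newton-Raphson on synthetic data and evaluates the traditional regression correlation coefficient; it is an illustration of the definition, not the authors' proposed modification.

```python
import numpy as np

def fit_poisson(X, y, n_iter=25):
    """Poisson regression with log link fitted by Newton-Raphson
    (equivalently, iteratively reweighted least squares)."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        mu = np.exp(Z @ beta)
        H = Z.T @ (Z * mu[:, None])          # Fisher information
        beta = beta + np.linalg.solve(H, Z.T @ (y - mu))
    return beta

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, 300)
y = rng.poisson(np.exp(0.5 + 1.0 * x))       # true model: log mu = 0.5 + x

beta = fit_poisson(x[:, None], y)
mu_hat = np.exp(beta[0] + beta[1] * x)       # fitted E(Y|X)
rho = float(np.corrcoef(y, mu_hat)[0, 1])    # regression correlation coefficient
```

Because Y is Poisson, rho stays well below 1 even under the true model (the conditional variance equals the conditional mean), which is part of why the plain coefficient can mislead under multicollinearity and motivates the modification studied in the record.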

7. Technique of ICP monitored stepwise intracranial decompression effectively reduces postoperative complications of severe bifrontal contusion

Guan eSun

2016-04-01

Full Text Available Background Bifrontal contusion is a common clinical brain injury. In the early stage, it is often mild, but it progresses rapidly and frequently worsens suddenly. This condition can become life threatening and therefore requires surgery. Conventional decompression craniectomy is the commonly used treatment method. In this study, the effect of ICP monitored stepwise intracranial decompression surgery on the prognosis of patients with acute severe bifrontal contusion was investigated. Method A total of 136 patients with severe bifrontal contusion combined with deteriorated intracranial hypertension admitted from March 2001 to March 2014 in our hospital were selected and randomly divided into two groups, i.e., a conventional decompression group and an intracranial pressure (ICP) monitored stepwise intracranial decompression group (68 patients each), to conduct a retrospective study. The incidence rates of acute intraoperative encephalocele, delayed hematomas, and postoperative cerebral infarctions and the Glasgow outcome scores (GOSs) 6 months after the surgery were compared between the two groups. Results (1) The incidence rates of acute encephalocele and contralateral delayed epidural hematoma in the stepwise decompression surgery group were significantly lower than those in the conventional decompression group; the differences were statistically significant (P < 0.05). (2) 6 months after the surgery, the incidence of vegetative state and mortality in the stepwise decompression group were significantly lower than those in the conventional decompression group (P < 0.05); the rate of favorable prognosis in the stepwise decompression group was also significantly higher than that in the conventional decompression group (P < 0.05). Conclusions The ICP monitored stepwise intracranial decompression technique reduced the perioperative complications of traumatic brain injury through the gradual release of intracranial pressure and was beneficial to the prognosis of

8. Canonical variate regression.

Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

2016-07-01

In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
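The canonical-correlation building block underlying canonical variate regression can be sketched with a whitening-plus-SVD computation. This is standard CCA on synthetic two-view data, labeled clearly as an illustration: the paper's criterion additionally couples the variates to an outcome and adds sparsity, which is not reproduced here.

```python
import numpy as np

def cca_first_pair(X, Y):
    """First pair of canonical variates: whiten each block with the
    inverse Cholesky factor of its covariance, then take the leading
    singular vectors of the whitened cross-covariance."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    n = len(X)
    Sx, Sy = Xc.T @ Xc / n, Yc.T @ Yc / n
    Sxy = Xc.T @ Yc / n
    Wx = np.linalg.inv(np.linalg.cholesky(Sx)).T   # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Sy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Sxy @ Wy)
    return Wx @ U[:, 0], Wy @ Vt[0], float(s[0])   # directions + 1st canonical corr

rng = np.random.default_rng(7)
t = rng.normal(size=500)                            # shared latent signal
X = np.column_stack([t + 0.3 * rng.normal(size=500), rng.normal(size=500)])
Y = np.column_stack([t + 0.3 * rng.normal(size=500), rng.normal(size=500)])
a, b, rho1 = cca_first_pair(X, Y)
```

The canonical variates `Xc @ a` and `Yc @ b` recover the shared latent direction; canonical variate regression then asks these same variates to also predict the outcome well, trading off association strength against predictive power.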

9. School Uniforms: Guidelines for Principals.

Essex, Nathan L.

2001-01-01

Principals desiring to develop a school-uniform policy should involve parents, teachers, community leaders, and student representatives; beware restrictions on religious and political expression; provide flexibility and assistance for low-income families; implement a pilot program; align the policy with school-safety issues; and consider legal…

10. The Principal and Tort Liability.

Stern, Ralph D.

The emphasis of this chapter is on the tort liability of principals, especially their commission of unintentional torts or torts resulting from negligent conduct. A tort is defined as a wrongful act, not including a breach of contract or trust, which results in injury to another's person, property, or reputation and for which the injured party is…

11. Teachers' Perspectives on Principal Mistreatment

Blase, Joseph; Blase, Jo

2006-01-01

Although there is some important scholarly work on the problem of workplace mistreatment/abuse, theoretical or empirical work on abusive school principals is nonexistent. Symbolic interactionism was the theoretical structure for the present study. This perspective on social research is founded on three primary assumptions: (1) individuals act…

12. Principal minors and rhombus tilings

Kenyon, Richard; Pemantle, Robin

2014-01-01

The algebraic relations between the principal minors of a generic n × n matrix are somewhat mysterious, see e.g. Lin and Sturmfels (2009 J. Algebra 322 4121–31). We show, however, that by adding in certain almost principal minors, the ideal of relations is generated by translations of a single relation, the so-called hexahedron relation, which is a composition of six cluster mutations. We give in particular a Laurent-polynomial parameterization of the space of n × n matrices, whose parameters consist of certain principal and almost principal minors. The parameters naturally live on vertices and faces of the tiles in a rhombus tiling of a convex 2n-gon. A matrix is associated to an equivalence class of tilings, all related to each other by Yang–Baxter-like transformations. By specializing the initial data we can similarly parameterize the space of Hermitian symmetric matrices over R,C or H the quaternions. Moreover by further specialization we can parametrize the space of positive definite matrices over these rings. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Cluster algebras mathematical physics’. (paper)

13. Improvement of the Performance of Scheduled Stepwise Power Programme Changes within the European Power System

Welfonder, E.; Weissbach, T.; Schulz, U.

2008-01-01

Since the deregulation of the electrical energy market, the technical realisation of power transactions based on energy market contracts often effects large stepwise power programme changes – especially at the change of the hour. Due to mainly economic reasons these stepwise power programme changes...... extended discussions with power plant and power system operators as well as with power plant dispatchers the described issues will be adopted into a VGB-recommendation which shall be published by VGB Powertech for Germany and Europe. Subsequently, it is intended to include the main elements of the VGB...

14. Obtaining insights from high-dimensional data : Sparse principal covariates regression

Van Deun, K.; Crompvoets, E.A.V.; Ceulemans, Eva

2018-01-01

Background Data analysis methods are usually subdivided in two distinct classes: There are methods for prediction and there are methods for exploration. In practice, however, there often is a need to learn from the data in both ways. For example, when predicting the antibody titers a few weeks after

15. Reconstruction of spatio-temporal temperature from sparse historical records using robust probabilistic principal component regression

Tipton, John; Hooten, Mevin; Goring, Simon

2017-01-01

Scientific records of temperature and precipitation have been kept for several hundred years, but for many areas, only a shorter record exists. To understand climate change, there is a need for rigorous statistical reconstructions of the paleoclimate using proxy data. Paleoclimate proxy data are often sparse, noisy, indirect measurements of the climate process of interest, making each proxy uniquely challenging to model statistically. We reconstruct spatially explicit temper...

16. Polynomial regression analysis and significance test of the regression function

Gao Zhengming; Zhao Juan; He Shengping

2012-01-01

In order to analyze the decay heating power of a certain radioactive isotope per kilogram with polynomial regression method, the paper firstly demonstrated the broad usage of polynomial function and deduced its parameters with ordinary least squares estimate. Then significance test method of polynomial regression function is derived considering the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and significance test of the polynomial function are done to the decay heating power of the isotope per kilogram in accord with the authors' real work. (authors)
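The two steps the abstract describes, an ordinary least squares polynomial fit followed by an overall significance test, can be sketched with numpy. The decay-heat-like data below are synthetic and the function name is illustrative; in practice the computed F statistic would be compared against an F(p, n-p-1) critical value.

```python
import numpy as np

def poly_f_test(x, y, degree):
    """Least-squares polynomial fit plus the overall F statistic for
    H0: all non-constant polynomial coefficients are zero."""
    X = np.vander(x, degree + 1)               # columns x^degree, ..., x, 1
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    yhat = X @ coef
    n, p = len(y), degree                      # p regression dof (no intercept)
    ss_reg = np.sum((yhat - y.mean()) ** 2)    # explained sum of squares
    ss_res = np.sum((y - yhat) ** 2)           # residual sum of squares
    F = (ss_reg / p) / (ss_res / (n - p - 1))
    return coef, float(F)

rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 60)
power = 50.0 * np.exp(-0.2 * t) + rng.normal(0.0, 1.0, t.size)  # decay-heat-like
coef, F = poly_f_test(t, power, degree=2)
```

A large F relative to the F(2, 57) critical value rejects the hypothesis that the polynomial terms explain nothing, which is exactly the significance test the paper derives by analogy with multivariable linear regression.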

17. Multi-type Step-wise group screening designs with unequal A-priori ...

... design with unequal group sizes and obtain values of the group sizes that minimize the expected number of runs.. Keywords: Group Screening, Group factors, multi-type step-wise group screening, expected number of runs, Optimum group screening designs > East African Journal of Statistics Vol. 1 (1) 2005: pp. 49-67 ...

18. Stepwise Inquiry into Hard Water in a High School Chemistry Laboratory

Kakisako, Mami; Nishikawa, Kazuyuki; Nakano, Masayoshi; Harada, Kana S.; Tatsuoka, Tomoyuki; Koga, Nobuyoshi

2016-01-01

This study focuses on the design of a learning program to introduce complexometric titration as a method for determining water hardness in a high school chemistry laboratory. Students are introduced to the different properties and reactions of hard water in a stepwise manner so that they gain the necessary chemical knowledge and conceptual…

19. Stepwise extraction of Lepidium sativum seed gum: Physicochemical characterization and functional properties

2016-01-01

Cress seed gum (CSG) was fractionated using stepwise extraction with water, yielding three fractions (F1, F2, F3) whose average molecular weights ranged from 863 to 1080 kDa. The chemical composition (monosaccharide, ash, moisture, CHN and uronic acid contents) and molecular weight of the fractio...

20. Stepwise excavation may enhance pulp preservation in permanent teeth affected by dental caries

Bjørndal, Lars

2011-01-01

ARTICLE TITLE AND BIBLIOGRAPHIC INFORMATION: Ways of enhancing pulp preservation by stepwise excavation-a systematic review. Hayashi M, Fujitani M, Yamaki C, Momoi Y. J Dent 2011;39(2):95-107. Epub 2010 Dec 3. REVIEWER: Lars Bjørndal, DDS, PhD, Dr Odont PURPOSE/QUESTION: To determine the clinical...

1. Stepwise withdrawal of inhaled corticosteroids in COPD patients receiving dual bronchodilation

Magnussen, Helgo; Watz, Henrik; Kirsten, Anne

2014-01-01

-controlled fashion, one group of patients continues to receive tiotropium, salmeterol and fluticasone, while the second group initiates stepwise withdrawal of fluticasone. The primary end point is time to first moderate or severe exacerbation following randomized treatment over 52 weeks. Lung function, symptoms...

2. Efficient and Mild Microwave-Assisted Stepwise Functionalization of Naphthalenediimide with α-Amino Acids

Pengo, Paolo; Pantoş, G. Dan; Otto, Sijbren; Sanders, Jeremy K.M.

2006-01-01

Microwave dielectric heating proved to be an efficient method for the one-pot and stepwise syntheses of symmetrical and unsymmetrical naphthalenediimide derivatives of α-amino acids. Acid-labile side chain protecting groups are stable under the reaction conditions; protection of the α-carboxylic

3. Stepwise or concerted? DFT study on the mechanism of ionic Diels-Alder reaction of chromanes

2016-01-01

Full Text Available The stepwise and concerted ionic Diels-Alder reactions between phenyl (pyridin-2-ylmethylene) oxonium and styrene derivatives are explored using theoretical methods. The results support a stepwise mechanism proceeding via persistent intermediates. The DFT method was essential to reproduce a reasonable potential energy surface for these challenging systems.

4. Stepwise radical endoscopic resection for Barrett's esophagus with early neoplasia: report on a Brussels' cohort

Pouw, R. E.; Peters, F. P.; Sempoux, C.; Piessevaux, H.; Deprez, P. H.

2008-01-01

Background and study aims: The aim of this retrospective study was to assess safety and efficacy of stepwise radical endoscopic resection (SRER) in patients with Barrett's esophagus with high-grade intraepithelial neoplasia (HGIN) or early cancer. Patients and methods: Patients undergoing SRER

5. In silico modelling of directed evolution: Implications for experimental design and stepwise evolution

Wedge, David C.; Rowe, William; Kell, Douglas B.; Knowles, Joshua

2009-01-01

In silico modelling of directed evolution: Implications for experimental design and stepwise evolution. Correspondence: Wedge, David C. (, Manchester Interdisciplinary Biocentre, University of Manchester, 131 Princess Street, Manchester, M1 7ND, United Kingdom. Tel.: +441613065145.

6. A stepwise approach for defining the applicability domain of SAR and QSAR models

Dimitrov, Sabcho; Dimitrova, Gergana; Pavlov, Todor

2005-01-01

A stepwise approach for determining the model applicability domain is proposed. Four stages are applied to account for the diversity and complexity of the current SAR/QSAR models, reflecting their mechanistic rationality (including metabolic activation of chemicals) and transparency. General para...

7. An antarctic stratigraphic record of stepwise ice growth through the eocene-oligocene transition

Passchier, Sandra; Ciarletta, Daniel J.; Miriagos, Triantafilo E.; Bijl, Peter K.; Bohaty, Steven M.

2017-01-01

Earth's current icehouse phase began ~34 m.y. ago with the onset of major Antarctic glaciation at the Eocene-Oligocene transition. Changes in ocean circulation and a decline in atmospheric greenhouse gas levels were associated with stepwise cooling and ice growth at southern high latitudes. The

8. School Principals' Job Satisfaction: The Effects of Work Intensification

Wang, Fei; Pollock, Katina; Hauseman, Cameron

2018-01-01

This study examines principals' job satisfaction in relation to their work intensification. Frederick Herzberg's two-factor theory was used to shed light on how motivating and maintenance factors affect principals' job satisfaction. Logistic multiple regressions were used in the analysis of survey data that were collected from 2,701 elementary and…

9. Recursive Algorithm For Linear Regression

Varanasi, S. V.

1988-01-01

The order of the model is determined easily. The linear-regression algorithm includes recursive equations for the coefficients of the model of increased order. The algorithm eliminates duplicative calculations and facilitates the search for the minimum order of a linear-regression model that fits a set of data satisfactorily.
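
A minimal sketch of the kind of minimum-order search the abstract describes (refitting at each order for clarity, whereas the published algorithm updates the coefficients recursively; the data and tolerance here are illustrative):

```python
import numpy as np

def min_order_fit(x, y, tol=1e-2, max_order=10):
    """Increase the polynomial order until the residual standard error
    stops improving by more than `tol`, then return the previous order.
    (Illustrative order search; the cited algorithm instead updates the
    coefficients recursively when the order is increased.)"""
    best = None
    for order in range(1, max_order + 1):
        X = np.vander(x, order + 1, increasing=True)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        rse = np.sqrt(r @ r / (len(y) - order - 1))
        if best is not None and best[1] - rse < tol:
            return best                      # previous order was already adequate
        best = (order, rse, beta)
    return best

# Cubic ground truth with mild noise: the search should settle on order 3
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x + 1.5 * x**2 - 3.0 * x**3 + rng.normal(0, 0.01, x.size)
order, rse, beta = min_order_fit(x, y)
```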

10. Developing Principal Instructional Leadership through Collaborative Networking

Cone, Mariah Bahar

2010-01-01

This study examines what occurs when principals of urban schools meet together to learn and improve their instructional leadership in collaborative principal networks designed to support, sustain, and provide ongoing principal capacity building. Principal leadership is considered second only to teaching in its ability to improve schools, yet few…

11. 31 CFR 19.995 - Principal.

2010-07-01

... SUSPENSION (NONPROCUREMENT) Definitions § 19.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Principal. 19.995 Section 19.995...

12. 22 CFR 208.995 - Principal.

2010-04-01

... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Principal. 208.995 Section 208.995 Foreign...) Definitions § 208.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

13. 29 CFR 1471.995 - Principal.

2010-07-01

... SUSPENSION (NONPROCUREMENT) Definitions § 1471.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or... 29 Labor 4 2010-07-01 2010-07-01 false Principal. 1471.995 Section 1471.995 Labor Regulations...

14. 21 CFR 1404.995 - Principal.

2010-04-01

... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Principal. 1404.995 Section 1404.995 Food and...) Definitions § 1404.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

15. 22 CFR 1006.995 - Principal.

2010-04-01

... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Principal. 1006.995 Section 1006.995 Foreign... § 1006.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

16. 2 CFR 180.995 - Principal.

2010-01-01

... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false Principal. 180.995 Section 180.995 Grants and Agreements OFFICE OF MANAGEMENT AND BUDGET GOVERNMENTWIDE GUIDANCE FOR GRANTS AND AGREEMENTS... § 180.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator...

17. 34 CFR 85.995 - Principal.

2010-07-01

... 34 Education 1 2010-07-01 2010-07-01 false Principal. 85.995 Section 85.995 Education Office of...) Definitions § 85.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

18. 22 CFR 1508.995 - Principal.

2010-04-01

... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Principal. 1508.995 Section 1508.995 Foreign...) Definitions § 1508.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

19. A novel simple QSAR model for the prediction of anti-HIV activity using multiple linear regression analysis.

Afantitis, Antreas; Melagraki, Georgia; Sarimveis, Haralambos; Koutentis, Panayiotis A; Markopoulos, John; Igglessi-Markopoulou, Olga

2006-08-01

A quantitative structure-activity relationship was obtained by applying Multiple Linear Regression Analysis to a series of 80 1-[2-hydroxyethoxy-methyl]-6-(phenylthio)thymine (HEPT) derivatives with significant anti-HIV activity. For the selection of the best among 37 different descriptors, the Elimination Selection Stepwise Regression Method (ES-SWR) was utilized. The resulting QSAR model (R²(CV) = 0.8160; S(PRESS) = 0.5680) proved to be very accurate in both the training and prediction stages.

20. Regression in autistic spectrum disorders.

Stefanatos, Gerry A

2008-12-01

A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsivity, but often entails both. This paper critically reviews the phenomena of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

1. Linear regression in astronomy. I

Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

1990-01-01

Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
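
Three of the five estimators, OLS(Y|X), OLS(X|Y), and their bisector, can be computed directly from sample moments; the bisector slope below follows the standard closed form given by Isobe et al. (the data in the usage lines are synthetic and illustrative):

```python
import numpy as np

def ols_slopes(x, y):
    """Return three slope estimates for bivariate data: OLS(Y|X),
    OLS(X|Y) expressed as a Y-versus-X slope, and the OLS bisector."""
    xd, yd = x - x.mean(), y - y.mean()
    sxx, syy, sxy = xd @ xd, yd @ yd, xd @ yd
    b1 = sxy / sxx                 # OLS regression of Y on X
    b2 = syy / sxy                 # OLS regression of X on Y, inverted
    # Slope of the line bisecting the two OLS lines (Isobe et al. 1990)
    b3 = (b1 * b2 - 1.0 + np.sqrt((1 + b1**2) * (1 + b2**2))) / (b1 + b2)
    return b1, b2, b3

rng = np.random.default_rng(4)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)     # true slope 2, scatter in y
b1, b2, b3 = ols_slopes(x, y)
```

For positively correlated data, b1 underestimates and b2 overestimates the symmetric slope, with the bisector lying between them, which is why the paper recommends it when the variables deserve symmetric treatment.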

2. Advanced statistics: linear regression, part I: simple linear regression.

Marill, Keith A

2004-01-01

Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.

3. Causal correlation of foliar biochemical concentrations with AVIRIS spectra using forced entry linear regression

Dawson, Terence P.; Curran, Paul J.; Kupiec, John A.

1995-01-01

A major goal of airborne imaging spectrometry is to estimate the biochemical composition of vegetation canopies from reflectance spectra. Remotely-sensed estimates of foliar biochemical concentrations of forests would provide valuable indicators of ecosystem function at regional and eventually global scales. Empirical research has shown a relationship exists between the amount of radiation reflected from absorption features and the concentration of given biochemicals in leaves and canopies (Matson et al., 1994, Johnson et al., 1994). A technique commonly used to determine which wavelengths have the strongest correlation with the biochemical of interest is unguided (stepwise) multiple regression. Wavelengths are entered into a multivariate regression equation, in their order of importance, each contributing to the reduction of the variance in the measured biochemical concentration. A significant problem with the use of stepwise regression for determining the correlation between biochemical concentration and spectra is that of 'overfitting' as there are significantly more wavebands than biochemical measurements. This could result in the selection of wavebands which may be more accurately attributable to noise or canopy effects. In addition, there is a real problem of collinearity in that the individual biochemical concentrations may covary. A strong correlation between the reflectance at a given wavelength and the concentration of a biochemical of interest, therefore, may be due to the effect of another biochemical which is closely related. Furthermore, it is not always possible to account for potentially suitable waveband omissions in the stepwise selection procedure. This concern about the suitability of stepwise regression has been identified and acknowledged in a number of recent studies (Wessman et al., 1988, Curran, 1989, Curran et al., 1992, Peterson and Hubbard, 1992, Martine and Aber, 1994, Kupiec, 1994). These studies have pointed to the lack of a physical

4. Impact of stepwise ablation on the biatrial substrate in patients with persistent atrial fibrillation and heart failure.

Jones, David G; Haldar, Shouvik K; Jarman, Julian W E; Johar, Sofian; Hussain, Wajid; Markides, Vias; Wong, Tom

2013-08-01

Ablation of persistent atrial fibrillation can be challenging, often involving not only pulmonary vein isolation (PVI) but also additional linear lesions and ablation of complex fractionated electrograms (CFE). We examined the impact of stepwise ablation on a human model of advanced atrial substrate of persistent atrial fibrillation in heart failure. In 30 patients with persistent atrial fibrillation and left ventricular ejection fraction ≤35%, high-density CFE maps were recorded biatrially at baseline, in the left atrium (LA) after PVI and linear lesions (roof and mitral isthmus), and biatrially after LA CFE ablation. Surface area of CFE (mean cycle length ≤120 ms) remote to PVI and linear lesions, defined as CFE area, was reduced after PVI (18.3±12.03 to 10.2±7.1 cm²; P<…). Biatrial CFE area was reduced by LA ablation, from 25.9±14.1 to 12.9±11.8 cm² (P<…), as was …atrial CFE area. Reduction of CFE area at sites remote from ablation would suggest either regression of the advanced atrial substrate or that these CFE were functional phenomena. Nevertheless, in an advanced atrial fibrillation substrate, linear lesions after PVI diminished the target area for CFE ablation, and complete lesions resulted in a favorable clinical outcome.

5. Principal chiral model on superspheres

Mitev, V.; Schomerus, V.; Quella, T.

2008-09-01

We investigate the spectrum of the principal chiral model (PCM) on odd-dimensional superspheres as a function of the curvature radius R. For volume-filling branes on S^{3|2}, we compute the exact boundary spectrum as a function of R. The extension to higher-dimensional superspheres is discussed, but not carried out in detail. Our results provide very convincing evidence in favor of the strong-weak coupling duality between supersphere PCMs and OSP(2S+2|2S) Gross-Neveu models that was recently conjectured by Candu and Saleur. (orig.)

6. Interpretable functional principal component analysis.

Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

2016-09-01

Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.

7. Linear regression in astronomy. II

Feigelson, Eric D.; Babu, Gutti J.

1992-01-01

A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

8. Time-adaptive quantile regression

Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg; Madsen, Henrik

2008-01-01

An algorithm for time-adaptive quantile regression is presented. The algorithm is based on the simplex algorithm, and the linear optimization formulation of the quantile regression problem is given. The observations have been split to allow a direct use of the simplex algorithm. The simplex method and an updating procedure are combined into a new algorithm for time-adaptive quantile regression, which generates new solutions on the basis of the old solution, leading to savings in computation time. The suggested algorithm is tested against a static quantile regression model on a data set with wind power production, where the models combine splines and quantile regression. The comparison indicates superior performance for the time-adaptive quantile regression in all the performance parameters considered.
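
The linear-optimization formulation of quantile regression mentioned in the abstract can be sketched with an off-the-shelf LP solver (this sketch uses SciPy's HiGHS solver rather than the authors' adaptive simplex updates, and synthetic draws in place of the wind-power data):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Solve the LP formulation of quantile regression:
    minimize tau*sum(u) + (1-tau)*sum(v)
    subject to X @ beta + u - v = y, with u, v >= 0."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), np.full(n, tau), np.full(n, 1 - tau)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)   # beta free, u/v nonneg
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

# Intercept-only design: the fitted coefficient is a sample tau-quantile
rng = np.random.default_rng(2)
y = rng.exponential(scale=1.0, size=200)
X = np.ones((200, 1))
beta = quantile_regression(X, y, tau=0.9)
```

The intercept-only case makes the formulation easy to check: the LP optimum minimizes the pinball (check) loss, so it coincides with a sample 0.9-quantile of y.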

9. [Recommendations for the Stepwise Occupational Reintegration: Can the Characteristic of the Patients Explain the Differences Between the Rehabilitation Centers?].

Schmid, L; Jankowiak, S; Kaluscha, R; Krischak, G

2016-06-01

The first step in initiating a stepwise occupational reintegration (SOR) is a recommendation from the rehabilitation center, so rehabilitation centers have a significant impact on the use of SOR. There is evidence that recommendation rates differ markedly between rehabilitation centers. The present survey therefore analyses these differences in detail and examines which patient-related factors could explain them. The study is based on an analysis of routine data provided by the German pension insurance in Baden-Württemberg (Rehabilitationsstatistikdatenbasis 2013; RSD). Rehabilitation measures were included if they were conducted for employed patients (18-64 years) with a musculoskeletal disease or a disorder of the connective tissue. Logistic regression models were used to explain the differences in the recommendation rates of the rehabilitation centers. Data from 134 853 rehabilitation measures in 32 rehabilitation centers were available. The recommendation rate varied between rehabilitation centers from 1.36 to 18.53%. The logistic regression analysis showed that the period of working incapacity in the 12 months before rehabilitation and the working capacity in the current job were the most important predictors of a SOR recommendation; the rehabilitation centers themselves also have an important influence. The results of this survey indicate that the characteristics of the patients are an important factor in the recommendation of SOR, and that the rehabilitation centers themselves additionally influence it. The results point to the fact that the rehabilitation centers use different criteria when making a recommendation. © Georg Thieme Verlag KG Stuttgart · New York.

10. Analysis of γ spectra in airborne radioactivity measurements using multiple linear regressions

Bao Min; Shi Quanlin; Zhang Jiamei

2004-01-01

This paper describes the calculation of the net peak counts of the nuclide ¹³⁷Cs at 662 keV in γ spectra from airborne radioactivity measurements using multiple linear regression. A mathematical model is established by analyzing every factor that contributes to the Cs peak counts in the spectra, and a multiple linear regression function is constructed. The calculation adopts stepwise regression, and insignificant factors are eliminated by an F-test. The regression coefficients and their uncertainty are calculated using least-squares estimation, from which the Cs net peak counts and their uncertainty are obtained. Analysis results for an experimental spectrum are presented, and the influence of energy shift and energy resolution on the results is discussed. In comparison with the spectrum-stripping method, the multiple linear regression method requires no stripping ratios, the result depends only on the counts in the Cs peak, and the calculation uncertainty is reduced. (authors)
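
The stepwise-regression-with-F-check procedure described here (and in several other records above) can be sketched as a forward selection loop; the spectrum-specific factors are replaced by generic synthetic regressors, and the entry threshold is illustrative:

```python
import numpy as np

def forward_stepwise(X, y, f_in=4.0):
    """Forward stepwise selection with an F-test entry criterion:
    repeatedly add the candidate column giving the largest drop in the
    residual sum of squares, stopping when its partial F statistic
    falls below the threshold `f_in`."""
    n = len(y)
    selected, remaining = [], list(range(X.shape[1]))

    def sse(cols):
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return r @ r

    current = sse([])
    while remaining:
        best_sse, best = min((sse(selected + [c]), c) for c in remaining)
        dof = n - len(selected) - 2        # intercept + selected + candidate
        F = (current - best_sse) / (best_sse / dof)
        if F < f_in:                       # candidate is statistically indistinct
            break
        selected.append(best)
        remaining.remove(best)
        current = best_sse
    return selected

# Three candidate regressors; only the first truly drives y
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] + rng.normal(0, 0.5, 100)
cols = forward_stepwise(X, y)
```

Backward elimination (as in the ES-SWR method of record 19 above) works analogously, dropping the variable with the smallest partial F at each step.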

11. Female Traditional Principals and Co-Principals: Experiences of Role Conflict and Job Satisfaction

Eckman, Ellen Wexler; Kelber, Sheryl Talcott

2010-01-01

This paper presents a secondary analysis of survey data focusing on role conflict and job satisfaction of 102 female principals. Data were collected from 51 female traditional principals and 51 female co-principals. By examining the traditional and co-principal leadership models as experienced by female principals, this paper addresses the impact…

12. Quantile regression theory and applications

Davino, Cristina; Vistocco, Domenico

2013-01-01

A guide to the implementation and interpretation of quantile regression models. This book explores the theory and numerous applications of quantile regression, offering empirical data analysis as well as the software tools to implement the methods. The main focus of the book is to provide the reader with a comprehensive description of the main issues concerning quantile regression; these include basic modeling, geometrical interpretation, estimation and inference for quantile regression, as well as issues on the validity of the model and diagnostic tools. Each methodological aspect is explored and

13. An Inter-Networking Mechanism with Stepwise Synchronization for Wireless Sensor Networks

Masayuki Murata

2011-08-01

Full Text Available To realize the ambient information society, multiple wireless networks deployed in the region and devices carried by users are required to cooperate with each other. Since duty cycles and operational frequencies are different among networks, we need a mechanism to allow networks to efficiently exchange messages. For this purpose, we propose a novel inter-networking mechanism where two networks are synchronized with each other in a moderate manner, which we call stepwise synchronization. With our proposal, to bridge the gap between intrinsic operational frequencies, nodes near the border of networks adjust their operational frequencies in a stepwise fashion based on the pulse-coupled oscillator model as a fundamental theory of synchronization. Through simulation experiments, we show that the communication delay and the energy consumption of border nodes are reduced, which enables wireless sensor networks to communicate longer with each other.

14. Impact of Antibiotic Shortage on H. Pylori Treatment: A Step-Wise Approach for Pharmacist Management

Michelle M. Lamb

2013-01-01

Full Text Available The current drug shortage crisis involving multiple oral antibiotics has significantly impacted preferred therapeutic options for treatment of H.pylori infection. Pharmacists may help alleviate the impact of this shortage through a proposed step-wise approach which includes proper inventory management, verification of indication, evaluation of regimen, therapeutic monitoring, and communication with patients and providers regarding alternative therapy or symptomatic relief.   Type: Original Research

15. Porous media fracturing dynamics: stepwise crack advancement and fluid pressure oscillations

Cao, Toan D.; Hussain, Fazle; Schrefler, Bernhard A.

2018-02-01

16. Stepwise cyanation of naphthalene diimide for n-channel field-effect transistors

Chang, Jingjing

2012-06-15

Stepwise cyanation of tetrabromonaphthalenediimide (NDI) 1 gave a series of cyanated NDIs 2-5, with the monocyanated NDI 2 and dicyanated NDI 3 isolated. The tri- and tetracyano-NDIs 4 and 5 show intrinsic instability toward moisture because of their extremely low-lying LUMO energy levels. The partially cyanated intermediates can be utilized as air-stable n-type semiconductors with OFET electron mobility up to 0.05 cm² V⁻¹ s⁻¹. © 2012 American Chemical Society.

17. Laboratory quality stepwise implementation tool: National reference TB laboratory of Iran

Ali Naghi Kebriaee; Donya Malekshahian; Mojtaba Ahmadi; Parissa Farnia

2015-01-01

Background and objective: During recent years, the World Health Organization (WHO) proposed new software for improving the tuberculosis (TB) laboratory services. The protocol is known as “quality stepwise implementation tool” and is based on enforcement of quality assurance services through accreditation by the International Organization for Standardization (ISO) 15189. As a national reference TB laboratory (NRL) of Iran, the benefit and challenges of implementing this standard were analyzed....

18. Stepwise synthesis and characterization of germa[4], [5], [8], and [10]pericyclynes.

Tanimoto, Hiroki; Nagao, Tomohiko; Fujiwara, Taro; Nishiyama, Yasuhiro; Morimoto, Tsumoru; Suzuka, Toshimasa; Tsutsumi, Ken; Kakiuchi, Kiyomi

2015-07-14

The stepwise syntheses of germa[N]pericyclynes, including [5]pericyclynes, and their characterization are described. The yields of germa[4] and [8]pericyclynes were improved significantly compared to those obtained in previous studies. The routes reported herein afforded the novel germa[5] and [10]pericyclynes, which were characterized by X-ray crystallography, UV-Vis spectroscopy, and fluorescence emission spectroscopy. A unique fluorescence emission was observed for the large germa[10]pericyclyne ring.

19. Impact of Antibiotic Shortage on H. Pylori Treatment: A Step-Wise Approach for Pharmacist Management

Ann Lloyd, Pharm.D., BCPS

2013-01-01

Full Text Available The current drug shortage crisis involving multiple oral antibiotics has significantly impacted preferred therapeutic options for treatment of H.pylori infection. Pharmacists may help alleviate the impact of this shortage through a proposed step-wise approach which includes proper inventory management, verification of indication, evaluation of regimen, therapeutic monitoring, and communication with patients and providers regarding alternative therapy or symptomatic relief.

20. The Intramolecular Diels–Alder Reaction of Tryptamine-Derived Zincke Aldehydes Is a Stepwise Process

Pham, Hung V.; Martin, David B. C.; Vanderwal, Christopher D.; Houk, K. N.

2012-01-01

Computational studies show that the base-mediated intramolecular Diels–Alder of tryptamine-derived Zincke aldehydes, used as a key step in the synthesis of the Strychnos alkaloids norfluorocurarine and strychnine, proceeds via a stepwise pathway. The experimentally determined importance of a potassium counterion in the base is explained by its ability to preorganize the Zincke aldehyde diene in an s-cis conformation suitable to bicyclization. Computation also supports the thermodynamic import...

1. JT-60 configuration parameters for feedback control determined by regression analysis

Matsukawa, Makoto; Hosogane, Nobuyuki; Ninomiya, Hiromasa (Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment)

1991-12-01

The stepwise regression procedure was applied to obtain measurement formulas for equilibrium parameters used in the feedback control of JT-60. This procedure automatically selects variables necessary for the measurements, and selects a set of variables which are not likely to be picked up by physical considerations. Regression equations with stable and small multicollinearity were obtained and it was experimentally confirmed that the measurement formulas obtained through this procedure were accurate enough to be applicable to the feedback control of plasma configurations in JT-60. (author).

2. JT-60 configuration parameters for feedback control determined by regression analysis

Matsukawa, Makoto; Hosogane, Nobuyuki; Ninomiya, Hiromasa

1991-12-01

The stepwise regression procedure was applied to obtain measurement formulas for equilibrium parameters used in the feedback control of JT-60. This procedure automatically selects variables necessary for the measurements, and selects a set of variables which are not likely to be picked up by physical considerations. Regression equations with stable and small multicollinearity were obtained and it was experimentally confirmed that the measurement formulas obtained through this procedure were accurate enough to be applicable to the feedback control of plasma configurations in JT-60. (author)

3. Fourth-order Perturbed Eigenvalue Equation for Stepwise Damage Detection of Aeroplane Wing

Wong Chun Nam

2016-01-01

Full Text Available Perturbed eigenvalue equations up to fourth order are established to detect structural damage in an aeroplane wing. A complete set of perturbation terms, including orthogonal and non-orthogonal coefficients, is computed using the perturbed eigenvalue and orthonormal equations. The perturbed eigenparameters are then optimized using the BFGS approach. A finite element model with small to large stepwise damage is used to represent an actual aeroplane wing. At the small damage level, the termination number is the same for both approaches, while the rms errors and termination d-norms are very close. At the medium damage level, the termination number is larger for third-order perturbation, with a lower d-norm and smaller rms error. At the large damage level, the termination number is much larger for third-order perturbation, with the same d-norm and a larger rms error. These trends become more significant as the damage level increases. As the stepwise damage effect grows with damage level, the increase in stepwise effect leads to an increase in the required model order. Hence, fourth-order perturbation estimates the model solution more accurately.

4. Stepwise dehydrogenation of ammonia on Fcc-Co surfaces: A DFT study

Ma, F.F.; Ma, S.H., E-mail: mash.phy@htu.edu.cn; Jiao, Z.Y.; Dai, X.Q.

2017-05-31

Highlights: • On Co surfaces, the oxygen atom not only strengthens the ammonia-substrate interaction but also facilitates ammonia dissociation. • A pre-adsorbed O atom significantly promotes the stepwise dehydrogenation of ammonia on Co(110), giving rise to an N atom strongly bound to the surface. • The dissociation of NH appears to be the rate-determining step on O-covered Co(111) and Co(100) surfaces. • The N and NH species produced in ammonia dehydrogenation are likely responsible for cobalt catalyst deactivation in the presence of excess oxygen. - Abstract: The stepwise dehydrogenation of ammonia on clean and O-covered Co surfaces has been studied by performing density functional theory (DFT) calculations. It is found that the interaction of the NHx (x = 0–3) species with the Co surfaces becomes stronger with further dehydrogenation, and that the oxygen atom not only strengthens the ammonia-substrate interaction but also facilitates ammonia dissociation. Specifically, a pre-adsorbed O atom significantly promotes the stepwise dehydrogenation of ammonia on Co(110), giving rise to an N atom strongly bound to the surface. In contrast, the dissociation of NH appears to be the rate-determining step on O-covered Co(111) and Co(100) surfaces, due to the high energy barriers. The present results demonstrate that the N and NH species produced in ammonia dehydrogenation are likely responsible for cobalt catalyst deactivation in the presence of excess oxygen.

5. In vitro production of buffalo embryos from stepwise vitrified immature oocytes.

Abd-Allah, Saber Mohammed

2009-01-01

This study was conducted to produce buffalo embryos in vitro from stepwise vitrified immature oocytes. Cumulus oocyte complexes (COCs) were obtained from the ovaries of buffalo slaughtered at the local abattoir. Selected COCs were exposed to a vitrification solution consisting of 40% ethylene glycol (EG) plus 0.3 M trehalose and 20% polyvinyl pyrrolidone (PVP) for 1 min and loaded into 0.25 ml plastic mini-straws containing 100 microl of 10% sucrose. The loaded cryostraws were cryopreserved by stepwise vitrification and stored in liquid nitrogen for 4 to 6 months. Data analysis revealed a high percentage of post-thawing morphologically normal immature oocytes (80.7%) with a low percentage of damaged oocytes. There were no significant differences in the maturation (82.1%), cleavage (47.6%) and buffalo embryo development (15.4%) rates obtained from the stepwise vitrified immature oocytes in comparison to the corresponding rates for fresh oocytes (88.3%, 50.4% and 19.4%, respectively, p<0.05).

7. A stepwise composite echocardiographic score predicts severe pulmonary hypertension in patients with interstitial lung disease.

Bax, Simon; Bredy, Charlene; Kempny, Aleksander; Dimopoulos, Konstantinos; Devaraj, Anand; Walsh, Simon; Jacob, Joseph; Nair, Arjun; Kokosi, Maria; Keir, Gregory; Kouranos, Vasileios; George, Peter M; McCabe, Colm; Wilde, Michael; Wells, Athol; Li, Wei; Wort, Stephen John; Price, Laura C

2018-04-01

European Respiratory Society (ERS) guidelines recommend the assessment of patients with interstitial lung disease (ILD) and severe pulmonary hypertension (PH), as defined by a mean pulmonary artery pressure (mPAP) ≥35 mmHg at right heart catheterisation (RHC). We developed and validated a stepwise echocardiographic score to detect severe PH using the tricuspid regurgitant velocity and right atrial pressure (right ventricular systolic pressure (RVSP)) and additional echocardiographic signs. Consecutive ILD patients with suspected PH underwent RHC between 2005 and 2015. Receiver operating characteristic curve analysis tested the ability of components of the score to predict mPAP ≥35 mmHg, and a score was devised using a stepwise approach. The score was tested in a contemporaneous validation cohort. Where RVSP was unavailable, the score used "additional PH signs", developed using a bootstrapping technique. Within the derivation cohort (n=210), a score ≥7 predicted severe PH with 89% sensitivity, 71% specificity, a positive predictive value of 68% and a negative predictive value of 90%, with similar performance in the validation cohort (n=61) (area under the curve (AUC) 84.8% versus 83.1%, p=0.8). Although RVSP could be estimated in 92% of studies, reducing this to 60% maintained fair accuracy (AUC 74.4%). This simple stepwise echocardiographic PH score can predict severe PH in patients with ILD.
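
For readers unfamiliar with the four test metrics quoted above, the following sketch shows how they fall out of a 2x2 table for a dichotomised score; the counts are invented for illustration, not the study's data:

```python
# Test metrics for a dichotomised score (score >= 7 vs. < 7) against a
# reference standard (here, RHC). Counts below are illustrative only.

def metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # P(score positive | severe PH)
        "specificity": tn / (tn + fp),  # P(score negative | no severe PH)
        "ppv": tp / (tp + fp),          # P(severe PH | score positive)
        "npv": tn / (tn + fn),          # P(no severe PH | score negative)
    }

# e.g. 89 true positives, 11 false negatives, 71 true negatives, 29 false positives
m = metrics(tp=89, fp=29, fn=11, tn=71)
```

Note that sensitivity and specificity are properties of the test alone, while the predictive values also depend on how common severe PH is in the cohort.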

9. A Data Forward Stepwise Fitting Algorithm Based on Orthogonal Function System

Li Han-Ju

2017-01-01

Full Text Available Data fitting is the main method of functional data analysis, and it is widely used in economics, social science, engineering technology and other fields. The least squares method is the main method of data fitting, but it is not convergent, has no memory property, can produce a large fitting error, and is prone to overfitting. Based on the orthogonal trigonometric function system, this paper presents a forward stepwise data fitting algorithm. The algorithm takes a forward stepwise fitting strategy: each step uses the nearest basis function to fit the residual error generated by the previous basis function fit, which makes the residual mean square error minimum. We theoretically prove the convergence, the memory property and the diminishing fitting error of the algorithm. Experimental results show that the proposed algorithm is effective, and that its fitting performance is better than that of the least squares method and of the forward stepwise fitting algorithm based on a non-orthogonal function system.
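
The greedy residual-fitting strategy the abstract describes can be sketched on an orthogonal trigonometric basis as follows; this is a matching-pursuit-style illustration on a synthetic signal, and the paper's exact algorithm may differ in its details:

```python
import math

# Forward stepwise fit on the orthogonal trigonometric basis
# 1, cos x, sin x, cos 2x, sin 2x, ...  Each step projects the current
# residual onto the basis function most correlated with it, then
# subtracts that projection, shrinking the residual mean square error.

def basis(n, x):
    if n == 0:
        return 1.0
    k = (n + 1) // 2
    return math.cos(k * x) if n % 2 else math.sin(k * x)

def stepwise_fit(xs, ys, n_basis=9, n_steps=4):
    resid = list(ys)
    coefs = {}
    for _ in range(n_steps):
        # choose the basis function most correlated with the residual
        best = max(range(n_basis),
                   key=lambda m: abs(sum(basis(m, x) * r
                                         for x, r in zip(xs, resid))))
        num = sum(basis(best, x) * r for x, r in zip(xs, resid))
        den = sum(basis(best, x) ** 2 for x in xs)
        c = num / den
        coefs[best] = coefs.get(best, 0.0) + c
        resid = [r - c * basis(best, x) for x, r in zip(xs, resid)]
    return coefs, resid

# target signal: 2*cos(x) + 0.5*sin(2x), sampled on a uniform grid
xs = [2 * math.pi * i / 64 for i in range(64)]
ys = [2 * math.cos(x) + 0.5 * math.sin(2 * x) for x in xs]
coefs, resid = stepwise_fit(xs, ys)
mse = sum(r * r for r in resid) / len(resid)
```

On a uniform grid the discrete trigonometric basis is exactly orthogonal, so each projection removes one component of the signal and the residual mean square error decreases monotonically.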

10. Stepwise mechanism of oxidative ammonolysis of propane to acrylonitrile over gallium-antimony oxide catalysts

Osipova, Z.G.; Sokolovskii, V.D.

1979-03-01

The stepwise mechanism of oxidative ammonolysis of propane to acrylonitrile over the gallium-antimony oxide catalysts GaSb19Ox, GaSb3Ni1.5Ox, and GaSb2.5Ni1.5PW0.25Ox was studied at 450° and 550°C by introducing alternating pulses of 0.5% propane/0.6% ammonia/helium (to reduce the steady-state catalytic surface) and 0.5% propane/0.6% ammonia/1.86% oxygen/helium mixtures into a fluidized-bed catalytic reactor. Over all the catalysts studied, the rates of acrylonitrile formation during the two types of pulses were very similar, but carbon dioxide was formed much faster during the reducing pulses, particularly at 450°C. These findings suggested that acrylonitrile is formed by a stepwise redox mechanism involving consecutive interaction of propane and ammonia with the surface oxygen of the catalysts and oxidation of the reduced catalyst surface by gas-phase oxygen. The formation of carbon dioxide proceeds by both stepwise and associative mechanisms, the latter being more important at higher temperatures. The results are similar to published results for the ammoxidation of propylene and olefins.

11. A novel peak-hopping stepwise feature selection method with application to Raman spectroscopy

McShane, M.J.; Cameron, B.D.; Cote, G.L.; Motamedi, M.; Spiegelman, C.H.

1999-01-01

A new stepwise approach to variable selection for spectroscopy that includes chemical information and attempts to test several spectral regions producing high ranking coefficients has been developed to improve on currently available methods. Existing selection techniques can, in general, be placed into two groups: the first, time-consuming optimization approaches that ignore available information about sample chemistry and require considerable expertise to arrive at appropriate solutions (e.g. genetic algorithms), and the second, stepwise procedures that tend to select many variables in the same area containing redundant information. The algorithm described here is a fast stepwise procedure that uses multiple ranking chains to identify several spectral regions correlated with known sample properties. The multiple-chain approach allows the generation of a final ranking vector that moves quickly away from the initial selection point, testing several areas exhibiting correlation between spectra and composition early in the stepping procedure. Quantitative evidence of the success of this approach as applied to Raman spectroscopy is given in terms of processing speed, number of selected variables, and prediction error in comparison with other selection methods. In this respect, the procedure described here may be considered as a significant evolutionary step in variable selection algorithms. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

12. On Bayesian Principal Component Analysis

Šmídl, Václav; Quinn, A.

2007-01-01

Vol. 51, No. 9 (2007), pp. 4101–4123. ISSN 0167-9473. R&D Projects: GA MŠk(CZ) 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: Principal component analysis (PCA); Variational Bayes (VB); von Mises–Fisher distribution. Subject RIV: BC - Control Systems Theory. Impact factor: 1.029, year: 2007. http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a

13. Principals: Learn P.R. Survival Skills.

Reep, Beverly B.

1988-01-01

School building level public relations depends on the principal or vice principal. Strategies designed to enhance school public relations programs include linking school and community, working with the press, and keeping morale high inside the school. (MLF)

14. Panel Smooth Transition Regression Models

González, Andrés; Terasvirta, Timo; Dijk, Dick van

We introduce the panel smooth transition regression model. This new model is intended for characterizing heterogeneous panels, allowing the regression coefficients to vary both across individuals and over time. Specifically, heterogeneity is allowed for by assuming that these coefficients are bou...

15. Testing discontinuities in nonparametric regression

Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

2017-01-19

In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method in [13 H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100
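
The flavour of a difference-based test can be conveyed with a toy statistic: the gap between one-sided local means spikes at a jump in the mean function. The data, window width and statistic below are simplifications for illustration, not the methods revisited in the paper:

```python
# Toy difference-based jump detection: compare the mean of a window to
# the right of each point with the mean of a window to its left; the
# difference peaks at a discontinuity of the mean function.

def jump_statistic(y, h):
    # right-minus-left window means at each interior point
    stats = []
    for i in range(h, len(y) - h):
        left = sum(y[i - h:i]) / h
        right = sum(y[i:i + h]) / h
        stats.append((i, right - left))
    return max(stats, key=lambda t: abs(t[1]))

# a smooth linear trend with a jump of size 2 at i = 50 (no noise here)
y = [0.01 * i + (2.0 if i >= 50 else 0.0) for i in range(100)]
i_hat, size = jump_statistic(y, h=5)
```

With noise present, the maximal statistic is compared against a threshold derived from its null distribution, which is where the formal theory in such papers comes in.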

17. Logistic Regression: Concept and Application

Cokluk, Omay

2010-01-01

The main focus of logistic regression analysis is classification of individuals in different groups. The aim of the present study is to explain basic concepts and processes of binary logistic regression analysis intended to determine the combination of independent variables which best explain the membership in certain groups called dichotomous…
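
As a concrete illustration of the classification idea, here is a minimal binary logistic regression fitted by batch gradient descent on toy one-predictor data (not the article's example):

```python
import math

# A minimal binary logistic regression: model P(group = 1 | x) with a
# sigmoid of a linear predictor, fit by batch gradient descent on the
# log-loss, and classify at the 0.5 probability threshold.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, steps=2000):
    w = [0.0] * (len(X[0]) + 1)              # intercept + coefficients
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(a * b for a, b in zip(w[1:], xi)))
            err = p - yi                     # gradient of the log-loss
            grad[0] += err
            for j, a in enumerate(xi):
                grad[j + 1] += err * a
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    p = sigmoid(w[0] + sum(a * b for a, b in zip(w[1:], xi)))
    return 1 if p >= 0.5 else 0

# group membership flips as the single predictor crosses roughly 2
X = [[0.0], [0.5], [1.0], [1.5], [2.5], [3.0], [3.5], [4.0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w = fit_logistic(X, y)
preds = [predict(w, xi) for xi in X]
```

The fitted model classifies every training point correctly, with a decision boundary near the midpoint between the two groups.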

18. Fungible weights in logistic regression.

Jones, Jeff A; Waller, Niels G

2016-06-01

In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

19. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

2007-01-01

Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
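
The three estimators compared in the abstract have simple closed forms for a bivariate sample. The sketch below, on illustrative data, computes all three slopes, with orthogonal regression taken as Deming regression with equal error variances:

```python
import math

# Three slope estimators for a bivariate sample:
#   OLS assumes error only in y;
#   orthogonal regression (Deming, equal error variances) minimises
#     perpendicular distances;
#   geometric mean regression is the symmetric compromise
#     sign(Sxy) * sqrt(Syy / Sxx).

def slopes(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ols = sxy / sxx
    orth = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    gmr = math.copysign(math.sqrt(syy / sxx), sxy)
    return ols, orth, gmr

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.2, 3.9, 6.1, 7.9]  # roughly y = 2x with small errors
ols, orth, gmr = slopes(x, y)
```

With error in both variables, OLS attenuates the slope toward zero, while the symmetric estimators do not single out either variable as error-free.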

20. Tumor regression patterns in retinoblastoma

Zafar, S.N.; Siddique, S.N.; Zaheer, N.

2016-01-01

To observe the types of tumor regression after treatment, and to identify the common pattern of regression in our patients. Study Design: Descriptive study. Place and Duration of Study: Department of Pediatric Ophthalmology and Strabismus, Al-Shifa Trust Eye Hospital, Rawalpindi, Pakistan, from October 2011 to October 2014. Methodology: Children with unilateral and bilateral retinoblastoma were included in the study. Patients were referred to Pakistan Institute of Medical Sciences, Islamabad, for chemotherapy. After every cycle of chemotherapy, dilated fundus examination under anesthesia was performed to record the response to treatment. Regression patterns were recorded on RetCam II. Results: Seventy-four tumors were included in the study. Of the 74 tumors, 3 were ICRB group A tumors, 43 were ICRB group B tumors, 14 belonged to ICRB group C, and the remaining 14 were ICRB group D tumors. Type IV regression was seen in 39.1% (n=29) of tumors, type II in 29.7% (n=22), type III in 25.6% (n=19), and type I in 5.4% (n=4). All group A tumors (100%) showed type IV regression. Seventeen (39.5%) group B tumors showed type IV regression. In group C, 5 tumors (35.7%) showed type II regression and 5 tumors (35.7%) showed type IV regression. In group D, 6 tumors (42.9%) regressed to type II non-calcified remnants. Conclusion: The response and success of the focal and systemic treatment, as judged by the appearance of different patterns of tumor regression, varies with the ICRB grouping of the tumor. (author)

1. A stepwise protocol for the treatment of refractory gastroesophageal reflux-induced chronic cough

Xu, Xianghuai; Lv, Hanjing; Yu, Li; Chen, Qiang; Liang, Siwei

2016-01-01

Background: Refractory gastroesophageal reflux-induced chronic cough (GERC) is difficult to manage. The purpose of this study was to evaluate the efficacy of a novel stepwise protocol for treating this condition. Methods: A total of 103 consecutive patients with suspected refractory reflux-induced chronic cough that had failed to respond to standard anti-reflux therapy were treated with a stepwise therapy. Treatment commenced with high-dose omeprazole and, if necessary, was escalated to sequential treatment with ranitidine and finally baclofen. The primary end-point was overall cough resolution, and the secondary end-point was cough resolution after each treatment step. Results: High-dose omeprazole eliminated or improved cough in 28.1% of patients (n=29). Stepwise escalation with the addition of ranitidine yielded a favorable response in an additional 12.6% (n=13) of patients, and subsequent escalation to baclofen provoked a response in another 36.9% (n=38) of patients. Overall, the stepwise protocol was successful in 77.6% (n=80) of patients. The diurnal cough symptom score fell from 3 [1] to 1 [0] (Z=6.316, P=0.000), and the nocturnal cough symptom score decreased from 1 [1] to 0 [1] (Z=–4.511, P=0.000), with a corresponding reduction in the Gastroesophageal Reflux Diagnostic Questionnaire score from 8.6±1.7 to 6.8±0.7 (t=3.612, P=0.000). Conversely, the cough threshold C2 to capsaicin was increased from 0.49 (0.49) µmol/L to 1.95 (2.92) µmol/L (Z=–5.892, P=0.000), and the cough threshold C5 was increased from 1.95 (2.92) µmol/L to 7.8 (5.85) µmol/L (Z=–5.171, P=0.000). Conclusions: Sequential stepwise anti-reflux therapy is a useful therapeutic strategy for refractory reflux-induced chronic cough. PMID:26904227

2. Step-wise and punctuated genome evolution drive phenotype changes of tumor cells

Stepanenko, Aleksei; Andreieva, Svitlana; Korets, Kateryna; Mykytenko, Dmytro; Huleyuk, Nataliya; Vassetzky, Yegor; Kavsan, Vadym

2015-01-01

Highlights: • There are step-wise continuous and punctuated phases of cancer genome evolution. • System stresses during the different phases may lead to very different responses. • Stable transfection of an empty vector can result in genome and phenotype changes. • Functions of a (trans)gene can be opposite/versatile in cells with different genomes. • Contextually, temozolomide can both promote and suppress tumor cell aggressiveness. - Abstract: The pattern of genome evolution can be divided into two phases: the step-wise continuous phase (step-wise clonal evolution, stable dominant clonal chromosome aberrations (CCAs), and a low frequency of non-CCAs, NCCAs) and the punctuated phase (marked by elevated NCCAs and transitional CCAs). Depending on the phase, system stresses (the diverse CIN-promoting factors) may lead to very different phenotype responses. To address the contribution of chromosome instability (CIN) to phenotype changes of tumor cells, we characterized CCAs/NCCAs of HeLa and HEK293 cells, and their derivatives after genotoxic stresses (a stable plasmid transfection, ectopic expression of the cancer-associated CHI3L1 gene, or treatment with temozolomide) by conventional cytogenetics, copy number alterations (CNAs) by array comparative genome hybridization, and phenotype changes by cell viability and soft agar assays. Transfection of either the empty vector pcDNA3.1 or pcDNA3.1-CHI3L1 into 293 cells initiated punctuated genome changes. In contrast, HeLa-CHI3L1 cells demonstrated step-wise genome changes. Increased CIN correlated with lower viability of 293-pcDNA3.1 cells but higher colony formation efficiency (CFE). Artificial CHI3L1 production in 293-CHI3L1 cells increased viability and further contributed to CFE. The opposite growth characteristics of 293-CHI3L1 and HeLa-CHI3L1 cells were revealed. The effect and function of a (trans)gene can be opposite and versatile in cells with different genetic network, which is defined by

4. Principals as Assessment Leaders in Rural Schools

Renihan, Patrick; Noonan, Brian

2012-01-01

This article reports a study of rural school principals' assessment leadership roles and the impact of rural context on their work. The study involved three focus groups of principals serving small rural schools of varied size and grade configuration in three systems. Principals viewed assessment as a matter of teacher accountability and as a…

5. Principal Stability and the Rural Divide

Pendola, Andrew; Fuller, Edward J.

2018-01-01

This article examines the unique features of the rural school context and how these features are associated with the stability of principals in these schools. Given the small but growing literature on the characteristics of rural principals, this study presents an exploratory analysis of principal stability across schools located in different…

6. New Principal Coaching as a Safety Net

Celoria, Davide; Roberson, Ingrid

2015-01-01

This study examines new principal coaching as an induction process and explores the emotional dimensions of educational leadership. Twelve principal coaches and new principals--six of each--participated in this qualitative study that employed emergent coding (Creswell, 2008; Denzin, 2005; Glaser & Strauss, 1998; Spradley, 1979). The major…

7. 12 CFR 561.39 - Principal office.

2010-01-01

... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Principal office. 561.39 Section 561.39 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY DEFINITIONS FOR REGULATIONS AFFECTING ALL SAVINGS ASSOCIATIONS § 561.39 Principal office. The term principal office means the home...

8. The Principal as Academician: The Renewed Voice.

McAvoy, Brenda, Ed.

This collection of essays was written by principals who participated in the 1986-87 Humanities Seminar sponsored by the Principals' Institute of Georgia State University. The focus was "The Evolution of Intellectual Leadership." The roles of the principal as philosopher, historian, ethnician, writer and team member are examined through…

9. Principal-Counselor Collaboration and School Climate

Rock, Wendy D.; Remley, Theodore P.; Range, Lillian M.

2017-01-01

Examining whether principal-counselor collaboration and school climate were related, researchers sent 4,193 surveys to high school counselors in the United States and received 419 responses. As principal-counselor collaboration increased, there were increases in counselors viewing the principal as supportive, the teachers as regarding one another…

10. Principals' Collaborative Roles as Leaders for Learning

Kitchen, Margaret; Gray, Susan; Jeurissen, Maree

2016-01-01

This article draws on data from three multicultural New Zealand primary schools to reconceptualize principals' roles as leaders for learning. In doing so, the writers build on Sinnema and Robinson's (2012) article on goal setting in principal evaluation. Sinnema and Robinson found that even principals hand-picked for their experience fell short on…

11. Perceptions of Beginning Public School Principals.

Lyons, James E.

1993-01-01

Summarizes a study to determine principal's perceptions of their competency in primary responsibility areas and their greatest challenges and frustrations. Beginning principals are challenged by delegating responsibilities and becoming familiar with the principal's role, the local school, and school operations. Their major frustrations are role…

12. Teacher Supervision Practices and Principals' Characteristics

April, Daniel; Bouchamma, Yamina

2015-01-01

A questionnaire was used to determine the individual and collective teacher supervision practices of school principals and vice-principals in Québec (n = 39) who participated in a research-action study on pedagogical supervision. These practices were then analyzed in terms of the principals' sociodemographic and socioprofessional characteristics…

13. Leadership Coaching for Principals: A National Study

Wise, Donald; Cavazos, Blanca

2017-01-01

Surveys were sent to a large representative sample of public school principals in the United States asking if they had received leadership coaching. Comparison of responses to actual numbers of principals indicates that the sample represents the first national study of principal leadership coaching. Results indicate that approximately 50% of all…

14. 41 CFR 105-68.995 - Principal.

2010-07-01

... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Principal. 105-68.995 Section 105-68.995 Public Contracts and Property Management Federal Property Management Regulations System...-GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 105-68.995 Principal. Principal means— (a...

15. A principal-agent Model of corruption

Groenendijk, Nico

1997-01-01

One of the new avenues in the study of political corruption is that of neo-institutional economics, of which the principal-agent theory is a part. In this article a principal-agent model of corruption is presented, in which there are two principals (one of which is corrupting), and one agent (who is

16. Regression to Causality : Regression-style presentation influences causal attribution

Bordacconi, Mats Joe; Larsen, Martin Vinæs

2014-01-01

of equivalent results presented as either regression models or as a test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Our experiment implies that scholars using regression...... models – one of the primary vehicles for analyzing statistical results in political science – encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results...... more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity...

17. Regression analysis with categorized regression calibrated exposure: some interesting findings

Hjartåker Anette

2006-07-01

Full Text Available Abstract Background Regression calibration as a method for handling measurement error is becoming increasingly well-known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. Methods We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). Results In cases where extra information is available through replicated measurements and not validation data, regression calibration does not maintain important qualities of the true exposure distribution, thus estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is however vastly superior to the naive method when applying the medians of each category in the analysis. Conclusion Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a
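
The standard continuous-scale regression calibration step that the abstract builds on can be sketched with replicate measurements as follows; the simulation and the reliability formula assume the classical additive error model, and the data are simulated, not the NOWAC data:

```python
import random

# Classical regression calibration with two replicates per subject:
# replace the error-prone exposure mean Wbar by its best linear
# predictor of the true exposure X, shrinking toward the overall mean
# by the estimated reliability Var(X) / Var(Wbar).

random.seed(1)
n = 5000
x = [random.gauss(0.0, 1.0) for _ in range(n)]       # true exposure
w1 = [xi + random.gauss(0.0, 1.0) for xi in x]       # replicate 1
w2 = [xi + random.gauss(0.0, 1.0) for xi in x]       # replicate 2

wbar = [(a + b) / 2 for a, b in zip(w1, w2)]
mw = sum(wbar) / n
var_wbar = sum((v - mw) ** 2 for v in wbar) / (n - 1)
# within-person error variance estimated from the replicate differences
var_err = sum((a - b) ** 2 for a, b in zip(w1, w2)) / (2 * (n - 1))
reliability = 1.0 - (var_err / 2) / var_wbar         # Var(X) / Var(Wbar)
calibrated = [mw + reliability * (v - mw) for v in wbar]
```

With unit true-exposure and unit error variance, Var(Wbar) = 1.5 and the reliability is about 2/3; the shrinkage this induces is exactly why the calibrated values do not reproduce the tails, and hence the quintile cut-points, of the true exposure distribution.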

18. Advanced statistics: linear regression, part II: multiple linear regression.

Marill, Keith A

2004-01-01

The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
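As a minimal illustration of the technique this article discusses, the following sketch fits a two-predictor linear model by ordinary least squares on synthetic data (the data and coefficients are invented for illustration, not drawn from the article):

```python
import numpy as np

# Multiple linear regression: one outcome, two predictors.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.1, size=n)

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n), x1, x2])

# Ordinary least squares fit.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# R-squared: proportion of outcome variance explained by the model.
resid = y - X @ beta
r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
```

With independent predictors the coefficients are recovered almost exactly; multicollinearity, discussed in the article, is precisely the situation where this stops being true.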

19. Is there a step-wise migration in Nigeria? A case study of the migrational histories of migrants in Lagos.

Afolayan, A A

1985-09-01

"The paper sets out to test whether or not the movement pattern of people in Nigeria is step-wise. It examines the spatial order in the country and the movement pattern of people. It then analyzes the survey data and tests for the validity of step-wise migration in the country. The findings show that step-wise migration cannot adequately describe all the patterns observed." The presence of large-scale circulatory migration between rural and urban areas is noted. Ways to decrease the pressure on Lagos by developing intermediate urban areas are considered. excerpt

20. Logic regression and its extensions.

Schwender, Holger; Ruczinski, Ingo

2010-01-01

Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
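The core idea, searching for the Boolean combination of binary predictors that best explains a binary outcome, can be illustrated with a deliberately tiny exhaustive search. Real logic regression uses simulated annealing over full logic trees; this sketch with synthetic data only scores AND/OR pairs:

```python
import itertools
import numpy as np

# Toy data: outcome driven exactly by the Boolean rule (x0 AND x2).
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(300, 5))
y = (X[:, 0] & X[:, 2]).astype(int)

# Exhaustive search over AND/OR pairs of predictors: a miniature
# version of the Boolean-expression search logic regression performs.
best = None
for i, j in itertools.combinations(range(X.shape[1]), 2):
    for name, expr in (("AND", X[:, i] & X[:, j]), ("OR", X[:, i] | X[:, j])):
        acc = (expr == y).mean()            # fraction explained
        if best is None or acc > best[0]:
            best = (acc, name, i, j)

accuracy, op, a, b = best
```

The search recovers the generating rule (x0 AND x2) exactly; in a generalized linear framework the accuracy score would be replaced by a deviance or likelihood criterion.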

1. BRGLM, Interactive Linear Regression Analysis by Least Square Fit

Ringland, J.T.; Bohrer, R.E.; Sherman, M.E.

1985-01-01

1 - Description of program or function: BRGLM is an interactive program written to fit general linear regression models by least squares and to provide a variety of statistical diagnostic information about the fit. Stepwise and all-subsets regression can be carried out also. There are facilities for interactive data management (e.g. setting missing value flags, data transformations) and tools for constructing design matrices for the more commonly-used models such as factorials, cubic splines, and auto-regressions. 2 - Method of solution: The least squares computations are based on the orthogonal (QR) decomposition of the design matrix obtained using the modified Gram-Schmidt algorithm. 3 - Restrictions on the complexity of the problem: The current release of BRGLM allows maxima of 1000 observations, 99 variables, and 3000 words of main memory workspace. For a problem with N observations and P variables, the number of words of main memory storage required is MAX(N*(P+6), N*P+P*P+3*N, 3*P*P+6*N). Any linear model may be fit although the in-memory workspace will have to be increased for larger problems
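The solution method named in point 2, least squares via a QR decomposition computed by modified Gram-Schmidt, can be sketched as follows (synthetic data; this illustrates the algorithm, not BRGLM's own code):

```python
import numpy as np

def mgs_qr(A):
    """QR factorization by modified Gram-Schmidt."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        # Orthogonalize the remaining columns against q_k immediately
        # (the "modified" step, numerically stabler than classical GS).
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

# Least squares via QR: solve R beta = Q^T y.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.01, size=50)
Q, R = mgs_qr(X)
beta = np.linalg.solve(R, Q.T @ y)
```

Working from the QR factors rather than the normal equations avoids squaring the condition number of the design matrix, which is why least-squares codes of this vintage favored it.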

2. Teaching Principal Components Using Correlations.

Westfall, Peter H; Arias, Andrea L; Fulton, Lawrence V

2017-01-01

Introducing principal components (PCs) to students is difficult. First, the matrix algebra and mathematical maximization lemmas are daunting, especially for students in the social and behavioral sciences. Second, the standard motivation involving variance maximization subject to unit length constraint does not directly connect to the "variance explained" interpretation. Third, the unit length and uncorrelatedness constraints of the standard motivation do not allow re-scaling or oblique rotations, which are common in practice. Instead, we propose to motivate the subject in terms of optimizing (weighted) average proportions of variance explained in the original variables; this approach may be more intuitive, and hence easier to understand because it links directly to the familiar "R-squared" statistic. It also removes the need for unit length and uncorrelatedness constraints, provides a direct interpretation of "variance explained," and provides a direct answer to the question of whether to use covariance-based or correlation-based PCs. Furthermore, the presentation can be made without matrix algebra or optimization proofs. Modern tools from data science, including heat maps and text mining, provide further help in the interpretation and application of PCs; examples are given. Together, these techniques may be used to revise currently used methods for teaching and learning PCs in the behavioral sciences.
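The link between correlation-based PCs and "variance explained" can be shown numerically. In this sketch (synthetic data with one strong common factor; the setup is illustrative, not from the article), each eigenvalue of the correlation matrix, divided by the eigenvalue total, is the proportion of total standardized variance that the corresponding PC explains:

```python
import numpy as np

# Four variables sharing one common latent factor.
rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 1))
X = latent @ np.ones((1, 4)) + 0.5 * rng.normal(size=(500, 4))

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize
corr = (Z.T @ Z) / len(Z)                  # correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]   # descending order

# Proportion of total (standardized) variance explained by each PC;
# this is the average "R-squared" of the PC across the variables.
explained = eigvals / eigvals.sum()
```

The eigenvalues sum to the number of variables (the trace of the correlation matrix), so the "variance explained" proportions need no extra normalization, which is the pedagogical point the article builds on.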

3. Abstract Expression Grammar Symbolic Regression

Korns, Michael F.

This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming. The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, aged layered populations, plus discrete and continuous differential evolution are used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial-strength symbolic regression systems.

4. Quantile Regression With Measurement Error

Wei, Ying; Carroll, Raymond J.

2009-01-01

. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a

5. From Rasch scores to regression

Christensen, Karl Bang

2006-01-01

Rasch models provide a framework for measurement and modelling latent variables. Having measured a latent variable in a population a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale propertie....... This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables and latent regression models based on the distribution of the score....

6. Testing Heteroscedasticity in Robust Regression

Kalina, Jan

2011-01-01

Roč. 1, č. 4 (2011), s. 25-28 ISSN 2045-3345 Grant - others:GA ČR(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10300504 Keywords: robust regression * heteroscedasticity * regression quantiles * diagnostics Subject RIV: BB - Applied Statistics, Operational Research http://www.researchjournals.co.uk/documents/Vol4/06%20Kalina.pdf

7. Regression methods for medical research

Tai, Bee Choo

2013-01-01

Regression Methods for Medical Research provides medical researchers with the skills they need to critically read and interpret research using more advanced statistical methods. The statistical requirements of interpreting and publishing in medical journals, together with rapid changes in science and technology, increasingly demand an understanding of more complex and sophisticated analytic procedures. The text explains the application of statistical models to a wide variety of practical medical investigative studies and clinical trials. Regression methods are used to appropriately answer the

8. Forecasting with Dynamic Regression Models

Pankratz, Alan

2012-01-01

One of the most widely used tools in statistical forecasting, the single-equation regression model, is examined here. A companion to the author's earlier work, Forecasting with Univariate Box-Jenkins Models: Concepts and Cases, the present text pulls together recent time series ideas and gives special attention to possible intertemporal patterns, distributed lag responses of output to input series, and the autocorrelation patterns of regression disturbances. It also includes six case studies.

9. Principal Leadership Style and Teacher Commitment among a Sample of Secondary School Teachers in Barbados

Ian Alwyn Marshall

2015-05-01

10. Optimization of fuel recovery through the stepwise co-pyrolysis of palm shell and scrap tire

Abnisa, Faisal; Wan Daud, Wan Mohd Ashri

2015-01-01

Highlights: • The co-pyrolysis of palm shell and scrap tire was studied. • The effect of stepwise co-pyrolysis temperature was investigated. • Co-pyrolysis successfully improved the quantity and quality of product yields. • Stepwise co-pyrolysis slightly increased oil and gas, and decreased char. • The co-pyrolysis of 50% biomass and 50% scrap tire is recommended. - Abstract: This study optimized the use of biomass waste to generate fuel through co-pyrolysis. In this paper, the effects of stepwise co-pyrolysis temperature and different ratios between palm shells and scrap tires in feedstock were studied to observe any improvements in the quantity and quality of the liquid yield and its byproduct. The ratio of palm shells and scrap tires varied at 100:0, 75:25, 50:50, 25:75, and 0:100. The experiment was conducted in a fixed-bed reactor. The study was divided into two scenarios. The first scenario was performed at the optimum temperature of 500 °C with a reaction time of 60 min. In the second scenario, the temperature was set at 500 °C for 60 min before the temperature was increased to 800 °C with a high heating rate. After the temperature reached 800 °C, the condition was maintained for approximately 45 min. Results showed that an increase in the liquid and gas yields was achieved when the temperature increased after optimum conditions. Increased yield was also obtained when the proportion of scrap tire was increased in the feedstock. Several other important findings are discussed in this paper, including the phases of pyrolysis oil, features of the liquid product, and characteristics of the byproducts. All products from both scenarios were analyzed by various methods to understand their fuel characteristics

11. Dissipated energy and entropy production for an unconventional heat engine: the stepwise `circular cycle'

di Liberto, Francesco; Pastore, Raffaele; Peruggi, Fulvio

2011-05-01

When some entropy is transferred, by means of a reversible engine, from a hot heat source to a colder one, the maximum efficiency occurs, i.e. the maximum available work is obtained. Similarly, a reversible heat pump transfers entropy from a cold heat source to a hotter one with the minimum expense of energy. In contrast, if we are faced with non-reversible devices, there is some lost work for heat engines and some extra work for heat pumps. These quantities are both related to entropy production. The lost work is also called 'degraded energy' or 'energy unavailable to do work'. The extra work is the excess of work performed on the system in the irreversible process with respect to the reversible one (or the excess of heat given to the hotter source in the irreversible process). Both quantities are analysed in detail and are evaluated for a complex process, i.e. the stepwise circular cycle, which is similar to the stepwise Carnot cycle. The stepwise circular cycle is a cycle performed by means of N small weights, dw, which are first added and then removed from the piston of the vessel containing the gas, or vice versa. The work performed by the gas can be found as the increase of the potential energy of the dw's. Each single dw is identified and its increase in potential energy evaluated. In such a way it is found how the energy output of the cycle is distributed among the dw's. The size of the dw's affects entropy production and therefore the lost and extra work. The distribution of increases depends on the chosen removal process.

12. Thermodynamic optimization with a finite number of heat intercepts for cryogenic systems with parameters stepwise continuous

Bisio, G.

1992-01-01

The aim of this paper is to study thermodynamic optimization by varying the heat transfer rate at a finite number of points through insulation, for the general case of one-dimensional heat transfer (flat plate, hollow cylinder and hollow sphere) in systems consisting of different materials in series, whose thermal conductivity is a function of temperature and of the coordinate in the heat flux direction. In addition, some parameters or their first derivatives are assumed to be stepwise continuous. For this purpose, the results of previous research by the author on the properties of the entropy production rate in one-dimensional heat transfer are utilized

13. Stepwise optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

Frolov, A.M.

1986-01-01

The problem of exact variational calculations of few-particle systems in the exponential basis of the relative coordinates using nonlinear parameters is studied. The techniques of stepwise optimization and global chaos of nonlinear parameters are used to calculate the S and P states of homonuclear muonic molecules with an error of no more than ±0.001 eV. The global-chaos technique also has proved to be successful in the case of the nuclear systems ³H and ³He

14. In vivo stepwise multi-photon activation fluorescence imaging of melanin in human skin

Lai, Zhenhua; Gu, Zetong; Abbas, Saleh; Lowe, Jared; Sierra, Heidy; Rajadhyaksha, Milind; DiMarzio, Charles

2014-03-01

The stepwise multi-photon activated fluorescence (SMPAF) of melanin is a low-cost and reliable method of detecting melanin because the activation and excitation can be a continuous-wave (CW) mode near infrared (NIR) laser. Our previous work has demonstrated melanin SMPAF images in sepia melanin, mouse hair, and mouse skin. In this study, we show the feasibility of using SMPAF to detect melanin in vivo. In vivo melanin SMPAF images of normal skin and a benign nevus are demonstrated. SMPAF images add specificity to melanin detection compared with MPFM and CRM images. Melanin SMPAF is a promising technology to enable early detection of melanoma for dermatologists.

15. Stepwise decision making for the long-term management of radioactive waste

Pescatore, C.; Vari, A.

2003-01-01

The context of long-term radioactive waste management is being shaped by changes in modern society. Values such as health, environmental protection and safety are increasingly important. These changes in turn necessitate new forms of dialogue and decision-making processes that include a large number of stakeholders. This paper deals with the new features of a stepwise decision-making approach, taking into account public involvement and social learning processes, and showing the complexity of the new situation. (A.L.B.)

16. Aeromagnetic Compensation Algorithm Based on Principal Component Analysis

Peilin Wu

2018-01-01

Full Text Available Aeromagnetic exploration is an important exploration method in geophysics. The data is typically measured by an optically pumped magnetometer mounted on an aircraft. But any aircraft produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is important in aeromagnetic exploration. However, multicollinearity of the aeromagnetic compensation model degrades the performance of the compensation. To address this issue, a novel aeromagnetic compensation method based on principal component analysis is proposed. Using the algorithm, the correlation in the feature matrix is eliminated and the principal components are used to construct the hyperplane to compensate for the platform-generated magnetic fields. The algorithm was tested using a helicopter, and the obtained improvement ratio is 9.86. The compensation quality is almost the same as, or slightly better than, that of ridge regression. The validity of the proposed method was experimentally demonstrated.
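The general recipe this abstract describes, removing multicollinearity by regressing on leading principal components instead of the raw correlated features, can be sketched as follows. This is generic principal-component regression on synthetic data, not the authors' compensation model:

```python
import numpy as np

# Principal-component regression: project correlated features onto
# leading PCs, then regress on the scores; this removes the
# multicollinearity that destabilizes ordinary least squares.
rng = np.random.default_rng(4)
n = 400
base = rng.normal(size=n)
# Three nearly collinear features plus one independent feature.
F = np.column_stack([base + 0.01 * rng.normal(size=n) for _ in range(3)]
                    + [rng.normal(size=n)])
y = F @ np.array([1.0, 1.0, 1.0, 2.0]) + rng.normal(scale=0.1, size=n)

Fc = F - F.mean(axis=0)
# Principal components from the SVD of the centered feature matrix.
U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
k = 2                                  # keep the leading components
scores = Fc @ Vt[:k].T
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), scores]), y,
                           rcond=None)
pred = np.column_stack([np.ones(n), scores]) @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Two components suffice here because the three collinear features collapse onto a single PC; ridge regression, which the abstract uses as its baseline, addresses the same instability by shrinkage instead of projection.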

17. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

2014-12-30

For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
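Fisher's method, used above to couple score statistics across Cox models, merges k independent p-values into one chi-squared statistic on 2k degrees of freedom. A self-contained sketch follows (the p-values are illustrative; the closed-form survival function used here is valid only for even degrees of freedom):

```python
import math

# Fisher's method: X^2 = -2 * sum(log p_i), on 2k degrees of freedom.
def fisher_combine(pvalues):
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    df = 2 * len(pvalues)
    return stat, df

# Survival function of chi-squared with EVEN df has a closed form:
# P(X > x) = exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i!
def chi2_sf_even_df(x, df):
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

# One covariate's p-values from three separate Cox models.
stat, df = fisher_combine([0.04, 0.10, 0.30])
combined_p = chi2_sf_even_df(stat, df)
```

Three individually unremarkable p-values combine to roughly 0.036, illustrating how evidence for a covariate can accumulate across models even when no single model is decisive.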

18. Logistic regression for dichotomized counts.

Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

2016-12-01

Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
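The basic setup, a count outcome dichotomized to positive versus zero and analyzed by ordinary logistic regression, can be sketched with synthetic Poisson data. The coefficients are invented, and the fit uses a plain iteratively reweighted least squares loop rather than any particular package:

```python
import numpy as np

# Dichotomized count: y = 1 if the count is positive. Under a
# Poisson model with log-mean b0 + b1*x, P(y=1) = 1 - exp(-mu),
# so logistic regression on y is a working approximation.
rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)
counts = rng.poisson(np.exp(-0.5 + 0.8 * x))
y = (counts > 0).astype(float)

# Logistic regression by iteratively reweighted least squares.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
```

The logistic fit recovers a positive covariate effect, but it discards the information in the counts above one, which is the efficiency loss the shared-parameter hurdle model is designed to recover.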

19. Consistency and Word-Frequency Effects on Spelling among First- To Fifth-Grade French Children: A Regression-Based Study

Lete, Bernard; Peereman, Ronald; Fayol, Michel

2008-01-01

We describe a large-scale regression study that examines the influence of lexical (word frequency, lexical neighborhood) and sublexical (feedforward and feedback consistency) variables on spelling accuracy among first, second, and third- to fifth-graders. The wordset analyzed contained 3430 French words. Predictors in the stepwise regression…

20. Producing The New Regressive Left

Crone, Christine

members, this thesis investigates a growing political trend and ideological discourse in the Arab world that I have called The New Regressive Left. On the premise that a media outlet can function as a forum for ideology production, the thesis argues that an analysis of this material can help to trace...... the contexture of The New Regressive Left. If the first part of the thesis lays out the theoretical approach and draws the contextual framework, through an exploration of the surrounding Arab media-and ideoscapes, the second part is an analytical investigation of the discourse that permeates the programmes aired...... becomes clear from the analytical chapters is the emergence of the new cross-ideological alliance of The New Regressive Left. This emerging coalition between Shia Muslims, religious minorities, parts of the Arab Left, secular cultural producers, and the remnants of the political,strategic resistance...

1. Reduction of symplectic principal R-bundles

Lacirasella, Ignazio; Marrero, Juan Carlos; Padrón, Edith

2012-01-01

We describe a reduction process for symplectic principal R-bundles in the presence of a momentum map. These types of structures play an important role in the geometric formulation of non-autonomous Hamiltonian systems. We apply this procedure to the standard symplectic principal R-bundle associated with a fibration π:M→R. Moreover, we show a reduction process for non-autonomous Hamiltonian systems on symplectic principal R-bundles. We apply these reduction processes to several examples. (paper)

2. Stepwise integral scaling method and its application to severe accident phenomena

Ishii, M.; Zhang, G.

1993-10-01

Severe accidents in light water reactors are characterized by an occurrence of multiphase flow with complicated phase changes, chemical reaction and various bifurcation phenomena. Because of the inherent difficulties associated with full-scale testing, scaled-down and simulation experiments are an essential part of severe accident analyses. However, one of the most significant shortcomings in the area is the lack of a well-established and reliable scaling method and scaling criteria. In view of this, the stepwise integral scaling method is developed for severe accident analyses. This new scaling method is quite different from the conventional approach. However, its focus on dominant transport mechanisms and use of the integral response of the system make this method relatively simple to apply to very complicated multi-phase flow problems. In order to demonstrate its applicability and usefulness, three case studies have been made. The phenomena considered are (1) corium dispersion in DCH, (2) corium spreading in BWR MARK-I containment, and (3) incore boil-off and heating process. The results of these studies clearly indicate the effectiveness of the stepwise integral scaling method. Such a simple and systematic scaling method has not been previously available to severe accident analyses

3. The systemic management of cutaneous dermatomyositis: Results of a stepwise strategy

C.O. Anyanwu

2017-12-01

Full Text Available Treatment of dermatomyositis (DM) is often achieved with a stepwise algorithm. However, the literature lacks quality evidence to support the use of this therapeutic strategy. The result of a stepwise therapeutic strategy in the management of skin-only DM is presented to better understand the clinical outcomes and allow for future studies. A cohort of 102 patients with DM, 41 of whom had skin-only disease, were seen between July 2009 and April 2013 at a referral-based connective tissue disease clinic. The Cutaneous Dermatomyositis Disease Area and Severity Index was used to prospectively assess disease severity and the outcomes in 41 adult patients with skin-only DM were analyzed. Of the 41 patients with skin-only DM, 23 patients (56.1%) received antimalarial medications alone and 18 patients (43.9%) received second- or third-line agents. Ten patients (24.4%) remained at the first level of the treatment algorithm and received only hydroxychloroquine. Prednisone was included in the treatment regimen for 11 patients with skin-only disease (26.8%). The results show that management of cutaneous DM often requires second-line agents because antimalarial medications alone are insufficient to treat most patients with skin-only disease. Keywords: dermatomyositis, antimalarial, immunosuppressive, CDASI, outcome measures, treatment

4. Stepwise fluorination - a useful approach for the isotopic analysis of hydrous minerals

Haimson, M; Knauth, L P [Arizona State Univ., Tempe (USA). Dept. of Geology]

1983-09-01

Analytical uncertainties in oxygen isotopic studies of hydrous silica have been investigated using a partial fluorination procedure in which fractional oxygen yields are achieved by reducing the amount of fluorine. Stepwise reaction of opaline silica results in a set of sequential oxygen fractions which show a wide range of δ¹⁸O values due to variable amounts of water, organic matter, and other impurities. δ-values for successive fractions in non-biogenic opal systematically increase as water is reacted away and then remain constant to within ±0.2‰ as the remaining silica reacts. δ-values in biogenic silica increase similarly but then decrease when low-¹⁸O oxide impurities begin to react. The troublesome water component in opal is readily removed by stepwise fluorination. This technique allows more precise oxygen isotope analysis of non-biogenic opal-A, and may improve the analytical precision for biogenic silica and any silicate mineral containing a significant water component.

5. Differential growth of the northern Tibetan margin: evidence for oblique stepwise rise of the Tibetan Plateau

Wang, Fei; Shi, Wenbei; Zhang, Weibin; Wu, Lin; Yang, Liekun; Wang, Yinzhi; Zhu, Rixiang

2017-01-01

Models of how high elevations formed across Tibet predict: (a) the continuous thickening of a “viscous sheet”; (b) time-dependent, oblique stepwise growth; and (c) synchronous deformation across Tibet that accompanied collision. Our new observations may shed light on this issue. Here, we use 40Ar/39Ar and (U-Th)/He thermochronology from massifs in the hanging walls of thrust structures along the Kunlun Belt, the first-order orogenic range at the northern Tibetan margin, to elucidate the exhumation history. The results show that these massifs, and hence the plateau margin, were subject to slow, steady exhumation during the Early Cenozoic, followed by a pulse of accelerated exhumation during 40–35 Ma. The exhumation rate increases westward (from ~0.22 to 0.34 and 0.5 mm/yr). The two-fold increase in exhumation in the western part (0.5 mm/yr) compared to the eastern part suggests westward increases in exhumation and compressional stress along the Kunlun Belt. We relate these observations to the mechanisms responsible for the oblique stepwise rise of Tibet. After collision, oblique subduction beneath Kunlun caused stronger compressional deformation in the western part than in the eastern part, resulting in differential growth and lateral extrusion. PMID:28117351

6. Estimating stepwise debromination pathways of polybrominated diphenyl ethers with an analogue Markov Chain Monte Carlo algorithm.

Zou, Yonghong; Christensen, Erik R; Zheng, Wei; Wei, Hua; Li, An

2014-11-01

A stochastic process was developed to simulate the stepwise debromination pathways for polybrominated diphenyl ethers (PBDEs). The stochastic process uses an analogue Markov Chain Monte Carlo (AMCMC) algorithm to generate PBDE debromination profiles. The acceptance or rejection of the randomly drawn stepwise debromination reactions was determined by a maximum likelihood function. The experimental observations at certain time points were used as target profiles; therefore, the stochastic processes are capable of presenting the effects of reaction conditions on the selection of debromination pathways. The application of the model is illustrated by adopting the experimental results of decabromodiphenyl ether (BDE209) in hexane exposed to sunlight. Inferences that were not obvious from experimental data were suggested by model simulations. For example, BDE206 has much higher accumulation during the first 30 min of sunlight exposure. By contrast, model simulation suggests that BDE206 and BDE207 had comparable yields from BDE209. The reason for the higher BDE206 level is that BDE207 has the highest depletion in producing octa products. Compared to a previous version of the stochastic model based on stochastic reaction sequences (SRS), the AMCMC approach was determined to be more efficient and robust. Due to the feature of only requiring experimental observations as input, the AMCMC model is expected to be applicable to a wide range of PBDE debromination processes, e.g. microbial, photolytic, or joint effects in natural environments. Copyright © 2014 Elsevier Ltd. All rights reserved.
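The accept/reject mechanism at the heart of an MCMC algorithm of this kind is the standard Metropolis rule: keep a proposed change if it raises the likelihood, otherwise keep it with probability equal to the likelihood ratio. This sketch shows the rule on a deliberately simple problem (estimating a Gaussian mean), not the PBDE pathway model itself:

```python
import math
import random

# Metropolis accept/reject sketch: propose a random change and
# accept with probability min(1, L(new)/L(old)).
random.seed(6)

data = [random.gauss(2.0, 1.0) for _ in range(500)]

def log_likelihood(mu):
    # Gaussian log-likelihood up to an additive constant.
    return -0.5 * sum((d - mu) ** 2 for d in data)

mu = 0.0
ll = log_likelihood(mu)
for _ in range(5000):
    prop = mu + random.gauss(0.0, 0.2)
    ll_prop = log_likelihood(prop)
    if math.log(random.random()) < ll_prop - ll:  # Metropolis rule
        mu, ll = prop, ll_prop
```

In the AMCMC setting the proposal is a randomly drawn debromination reaction rather than a numeric perturbation, but the acceptance logic against a likelihood target is the same.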

7. A modified GFP facilitates counting membrane protein subunits by step-wise photobleaching in Arabidopsis.

Song, Kai; Xue, Yiqun; Wang, Xiaohua; Wan, Yinglang; Deng, Xin; Lin, Jinxing

2017-06-01

Membrane proteins exert functions by forming oligomers or molecular complexes. Currently, step-wise photobleaching has been applied to count the fluorescently labelled subunits in plant cells, for which an accurate and reliable control is required to distinguish individual subunits and define the basal fluorescence. However, the common procedure using immobilized GFP molecules is obviously not applicable for analysis in living plant cells. Using the spatial intensity distribution analysis (SpIDA), we found that the A206K mutation reduced the dimerization of GFP molecules. Further ectopic expression of Myristoyl-GFP A206K driven by the endogenous AtCLC2 promoter allowed imaging of individual molecules at a low expression level. As a result, the percentage of dimers in the transgenic pCLC2::Myristoyl-mGFP A206K line was significantly reduced in comparison to that of the pCLC2::Myristoyl-GFP line, confirming its application in defining the basal fluorescence intensity of GFP. Taken together, our results demonstrated that pCLC2::Myristoyl-mGFP A206K can be used as a standard control for monomer GFP, facilitating the analysis of the step-wise photobleaching of membrane proteins in Arabidopsis thaliana. Copyright © 2017 Elsevier GmbH. All rights reserved.
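Counting subunits by step-wise photobleaching amounts to counting discrete intensity drops in a fluorescence trace. A toy sketch with a synthetic three-step trace follows (real traces are far noisier and typically need smoothing or change-point detection; the numbers here are invented):

```python
import numpy as np

# Synthetic photobleaching trace: three fluorophores bleaching one
# by one, so the intensity falls 3 -> 2 -> 1 -> 0 in unit steps.
rng = np.random.default_rng(8)
levels = np.concatenate([np.full(100, 3.0), np.full(100, 2.0),
                         np.full(100, 1.0), np.full(100, 0.0)])
trace = levels + rng.normal(0.0, 0.05, levels.size)

# Count frame-to-frame drops larger than half a unit step.
drops = np.diff(trace)
steps = int((drops < -0.5).sum())   # estimated number of subunits
```

Defining the unit step, i.e. the basal single-fluorophore intensity, is exactly what the monomeric GFP control in this study is for: without it, a drop of two units from a transient dimer would be miscounted.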

8. Numerical study of radial stepwise fuel load reshuffling traveling wave reactor

Zhang Dalin; Zheng Meiyin; Tian Wenxi; Qiu Suizheng; Su Guanghui

2015-01-01

The traveling wave reactor is a new conceptual fast breeder reactor that can directly use natural uranium, depleted uranium, and thorium to realize self-sustaining breeding and burning and thus achieve a very high fuel utilization fraction. Based on the mechanism of the traveling wave reactor, a concept of a radial stepwise fuel load reshuffling traveling wave reactor was proposed for realistic application. It was combined with the typical design of sodium-cooled fast reactors, and the asymptotic characteristics of the inward stepwise fuel load reshuffling were studied numerically in two dimensions. The calculated results show that the asymptotic k_eff varies parabolically with the reshuffling cycle length, while the burnup increases linearly. The highest burnup satisfying the reactor criticality condition is 38%. The power peak shifts from the fuel discharging zone (core centre) to the fuel loading zone (core periphery), and correspondingly the power peaking factor decreases with the reshuffling cycle length. In addition, in the high-burnup case the axial power distribution close to the core centre displays an M-shaped deformation. (authors)

9. Vapor permeation-stepwise injection simultaneous determination of methanol and ethanol in biodiesel with voltammetric detection.

Shishov, Andrey; Penkova, Anastasia; Zabrodin, Andrey; Nikolaev, Konstantin; Dmitrenko, Maria; Ermakov, Sergey; Bulatov, Andrey

2016-02-01

A novel vapor permeation-stepwise injection (VP-SWI) method for the determination of methanol and ethanol in biodiesel samples is discussed. In the current study, stepwise injection analysis was successfully combined with voltammetric detection and vapor permeation. The method is based on the separation of methanol and ethanol from a sample using a vapor permeation module (VPM) with a selective polymer membrane based on poly(phenylene isophthalamide) (PA) containing a high amount of residual solvent. After evaporation into the headspace of the VPM, methanol and ethanol were transported, by gas bubbling, through the PA membrane to a mixing chamber equipped with a voltammetric detector. Ethanol was selectively detected at +0.19 V, and both compounds were detected at +1.20 V. Current subtraction (with a correction factor) was used for the selective determination of methanol. A linear range between 0.05 and 0.5% (m/m) was established for each analyte. The limits of detection were estimated at 0.02% (m/m) for both ethanol and methanol. The sample throughput was 5 samples per hour. The method was successfully applied to the analysis of biodiesel samples. Copyright © 2015 Elsevier B.V. All rights reserved.

10. Design of stepwise screening for prediabetes and type 2 diabetes based on costs and cases detected.

de Graaf, Gimon; Postmus, Douwe; Bakker, Stephan J L; Buskens, Erik

2015-09-01

To provide insight into the trade-off between cost per case detected (CPCD) and the detection rate in questionnaire-based stepwise screening for impaired fasting glucose and undiagnosed type 2 diabetes. We considered a stepwise screening in which individuals whose risk score exceeds a predetermined cutoff value are invited for further blood glucose testing. Using individual patient data to determine questionnaire sensitivity and specificity and external sources to determine screening costs and patient response rates, we rolled back a decision tree to estimate the CPCD and the detection rate for all possible cutoffs on the questionnaire. We found a U-shaped relation between CPCD and detection rate, with high costs per case detected at very low and very high detection rates. Changes in patient response rates had a large impact on both the detection rate and the CPCD, whereas screening costs and questionnaire accuracy mainly impacted the CPCD. Our applied method makes it possible to identify a range of efficient cutoffs where higher detection rates can be achieved at an additional cost per detected patient. This enables decision makers to choose an optimal cutoff based on their willingness to pay for additional detected patients. Copyright © 2015 Elsevier Inc. All rights reserved.
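
Rolling back the decision tree amounts to computing, for each questionnaire cutoff, the expected cases detected and the cost per case detected (CPCD). A minimal sketch; the sensitivity, specificity, cost, and response figures below are invented for illustration and are not from the study.

```python
def screening_metrics(cutoffs, sens, spec, prevalence, n, cost_quest, cost_blood, response):
    """For each cutoff, return (expected cases detected, cost per case detected)."""
    out = {}
    for c in cutoffs:
        # questionnaire positives who respond and attend blood glucose testing
        referred = n * (prevalence * sens[c] + (1 - prevalence) * (1 - spec[c])) * response
        detected = n * prevalence * sens[c] * response   # true cases among them
        cost = n * cost_quest + referred * cost_blood    # questionnaires + blood tests
        out[c] = (detected, cost / detected)
    return out

metrics = screening_metrics(
    cutoffs=[10, 15],
    sens={10: 0.9, 15: 0.6}, spec={10: 0.6, 15: 0.85},
    prevalence=0.1, n=10_000, cost_quest=1.0, cost_blood=20.0, response=0.5,
)
# Raising the cutoff detects fewer cases but lowers the cost per case detected,
# which is the efficient-frontier trade-off the paper describes.
```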

11. A stepwise-cluster microbial biomass inference model in food waste composting

Sun Wei; Huang, Guo H.; Zeng Guangming; Qin Xiaosheng; Sun Xueling

2009-01-01

A stepwise-cluster microbial biomass inference (SMI) model was developed by introducing stepwise-cluster analysis (SCA) into composting process modeling to tackle the nonlinear relationships among state variables and microbial activities. The essence of SCA is to form a classification tree through a series of cutting or merging operations according to given statistical criteria. Eight runs of designed experiments in bench-scale laboratory reactors were conducted to demonstrate the feasibility of the proposed method. The results indicated that SMI could help establish a statistical relationship between state variables and composting microbial characteristics where discrete and nonlinear complexities exist. Significance levels of cutting/merging were provided such that the accuracies of the developed forecasting trees were controllable. Through an attempted definition of input effects on the output in SMI, the effects of the state variables on thermophilic bacteria were ranked in descending order as: Time (day) > moisture content (%) > ash content (%, dry) > Lower Temperature (°C) > pH > NH₄⁺-N (mg/kg, dry) > Total N (%, dry) > Total C (%, dry); the effects on mesophilic bacteria were ordered as: Time > Upper Temperature (°C) > Total N > moisture content > NH₄⁺-N > Total C > pH. This study made the first attempt to apply SCA to mapping the nonlinear and discrete relationships in composting processes.

12. Stepwise Assembly and Characterization of DNA Linked Two-Color Quantum Dot Clusters.

Coopersmith, Kaitlin; Han, Hyunjoo; Maye, Mathew M

2015-07-14

The DNA-mediated self-assembly of multicolor quantum dot (QD) clusters via a stepwise approach is described. The CdSe/ZnS QDs were synthesized and functionalized with an amphiphilic copolymer, followed by ssDNA conjugation. At each functionalization step, the QDs were purified via gradient ultracentrifugation, which was found to remove excess polymer and QD aggregates, allowing for improved conjugation yields and assembly reactivity. The QDs were then assembled and disassembled in a stepwise manner at a ssDNA functionalized magnetic colloid, which provided a convenient way to remove unreacted QDs and ssDNA impurities. After assembly/disassembly, the clusters' optical characteristics were studied by fluorescence spectroscopy and the assembly morphology and stoichiometry was imaged via electron microscopy. The results indicate that a significant amount of QD-to-QD energy transfer occurred in the clusters, which was studied as a function of increasing acceptor-to-donor ratios, resulting in increased QD acceptor emission intensities compared to controls.

13. Correlation and simple linear regression.

Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

2003-06-01

In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
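
The contrast the tutorial draws can be illustrated with plain-Python Pearson and Spearman coefficients on a monotone but nonlinear relationship. The data and implementations below are our own illustration, not the article's radiologic data set.

```python
def pearson(x, y):
    """Pearson correlation: strength of *linear* association."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    """Spearman rho: Pearson correlation of the ranks (no ties in this example),
    so it captures any monotone association, linear or not."""
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    return pearson(rank(x), rank(y))

x = [1, 2, 3, 4, 5]
y = [v ** 3 for v in x]   # perfectly monotone, but nonlinear
# Spearman rho is exactly 1 here, while Pearson r falls below 1.
```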

14. Nonparametric Mixture of Regression Models.

Huang, Mian; Li, Runze; Wang, Shaoli

2013-07-01

Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. An empirical analysis of the US house price index data is illustrated for the proposed methodology.

15. Maximum entropy principle for transportation

Bilich, F.; Da Silva, R.

2008-01-01

In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
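
The standard constrained formulation that the authors compare against is commonly solved by iterative proportional fitting, which balances a seed trip matrix to the origin and destination totals. This is a generic sketch of that standard method, not of the paper's dependence-coefficient model; the seed matrix and margins are toy values.

```python
def ipf(seed, row_totals, col_totals, iters=100):
    """Iterative proportional fitting: alternately rescale rows and columns of a
    seed trip matrix until both sets of margin constraints are satisfied."""
    T = [row[:] for row in seed]
    for _ in range(iters):
        for i, rt in enumerate(row_totals):          # match origin totals
            s = sum(T[i])
            T[i] = [t * rt / s for t in T[i]]
        for j, ct in enumerate(col_totals):          # match destination totals
            s = sum(T[i][j] for i in range(len(T)))
            for i in range(len(T)):
                T[i][j] *= ct / s
    return T

balanced = ipf([[1.0, 1.0], [1.0, 1.0]],
               row_totals=[10.0, 20.0], col_totals=[15.0, 15.0])
```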

16. Efficient training of multilayer perceptrons using principal component analysis

Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael

2005-01-01

A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires by far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior

17. Cactus: An Introduction to Regression

Hyde, Hartley

2008-01-01

When the author first used "VisiCalc," the author thought it a very useful tool when he had the formulas. But how could he design a spreadsheet if there was no known formula for the quantities he was trying to predict? A few months later, the author relates he learned to use multiple linear regression software and suddenly it all clicked into…

18. Regression Models for Repairable Systems

Novák, Petr

2015-01-01

Vol. 17, No. 4 (2015), pp. 963-972, ISSN 1387-5841. Institutional support: RVO:67985556. Keywords: Reliability analysis * Repair models * Regression. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.782, year: 2015. http://library.utia.cas.cz/separaty/2015/SI/novak-0450902.pdf

19. Survival analysis II: Cox regression

Stel, Vianda S.; Dekker, Friedo W.; Tripepi, Giovanni; Zoccali, Carmine; Jager, Kitty J.

2011-01-01

In contrast to the Kaplan-Meier method, Cox proportional hazards regression can provide an effect estimate by quantifying the difference in survival between patient groups and can adjust for confounding effects of other variables. The purpose of this article is to explain the basic concepts of the Cox proportional hazards regression model.

20. Kernel regression with functional response

Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

2011-01-01

We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.

1. New pulser for principal PO power

Coudert, G.

1984-01-01

The pulser of the principal power of the PS is the unit that makes it possible to generate the reference function of the voltage of the principal magnet. This function depends on time and on the magnetic field of the magnet. It also generates various synchronization and reference pulses

2. Principals: Human Capital Managers at Every School

Kimball, Steven M.

2011-01-01

Being a principal is more than just being an instructional leader. Principals also must manage their schools' teaching talent in a strategic way so that it is linked to school instructional improvement strategies, to the competencies needed to enact the strategies, and to success in boosting student learning. Teacher acquisition and performance…

3. Constructing principals' professional identities through life stories ...

The Life History approach was used to collect data from six … experience as the most significant leadership factors that influence principals' … ranging from their entry into the teaching profession to their appointment as … teachers. I think I learnt from my principal to be strict but accommodating … Teachers College Press.

4. Integrating Technology: The Principals' Role and Effect

2015-01-01

There are many factors that influence technology integration in the classroom such as teacher willingness, availability of hardware, and professional development of staff. Taking into account these elements, this paper describes research on technology integration with a focus on principals' attitudes. The role of the principal in classroom…

5. Building Leadership Capacity to Support Principal Succession

Escalante, Karen Elizabeth

2016-01-01

This study applies transformational leadership theory practices, specifically inspiring a shared vision, modeling the way and enabling others to act to examine the purposeful ways in which principals work to build the next generation of teacher leaders in response to the dearth of K-12 principals. The purpose of this study was to discover how one…

6. Deformation quantization of principal fibre bundles

Weiss, S.

2007-01-01

Deformation quantization is an algebraic but still geometrical way to define noncommutative spacetimes. In order to investigate corresponding gauge theories on such spaces, the geometrical formulation in terms of principal fibre bundles yields the appropriate framework. In this talk I will explain what should be understood by a deformation quantization of principal fibre bundles and how associated vector bundles arise in this context. (author)

7. Primary School Principals' Self-Monitoring Skills

Konan, Necdet

2015-01-01

The aim of the present study is to identify primary school principals' self-monitoring skills. The study adopted the general survey model and its population comprised primary school principals serving in the city of Diyarbakir, Turkey, while 292 of these constituted the sample. Self-Monitoring Scale was used as the data collection instrument. In…

8. Revising the Role of Principal Supervisor

Saltzman, Amy

2016-01-01

In Washington, D.C., and Tulsa, Okla., districts whose efforts are supported by the Wallace Foundation, principal supervisors concentrate on bolstering their principals' work to improve instruction, as opposed to focusing on the managerial or operational aspects of running a school. Supervisors oversee fewer schools, which enables them to provide…

9. An Examination of Principal Job Satisfaction

Pengilly, Michelle M.

2010-01-01

As education continues to succumb to deficits in budgets and increasingly high levels of student performance to meet the federal and state mandates, the quest to sustain and retain successful principals is imperative. The National Association of School Boards (1999) portrays effective principals as "linchpins" of school improvement and…

10. Do Principals Fire the Worst Teachers?

Jacob, Brian A.

2011-01-01

This article takes advantage of a unique policy change to examine how principals make decisions regarding teacher dismissal. In 2004, the Chicago Public Schools (CPS) and Chicago Teachers Union signed a new collective bargaining agreement that gave principals the flexibility to dismiss probationary teachers for any reason and without the…

11. Artful Dodges Principals Use to Beat Bureaucracy.

Ficklen, Ellen

1982-01-01

A study of Chicago (Illinois) principals revealed many ways principals practiced "creative insubordination"--avoiding following instructions but still getting things done. Among the dodges are deliberately missing deadlines, following orders literally, ignoring channels to procure teachers or materials, and using community members to…

12. Women principals' reflections of curriculum management challenges ...

This study reports the reflections of grade 6 rural primary principals in Mpumalanga province. A qualitative method of inquiry was used in this article, where data were collected using individual interviews with three principals and focus group discussions with the school management teams (SMTs) of three primary schools.

13. The Succession of a School Principal.

Fauske, Janice R.; Ogawa, Rodney T.

Applying theory from organizational and cultural perspectives to succession of principals, this study observes and records the language and culture of a small suburban elementary school. The study's procedures included analyses of shared organizational understandings as well as identification of the principal's influence on the school. Analyses of…

14. Principals' Perceptions of School Public Relations

Morris, Robert C.; Chan, Tak Cheung; Patterson, Judith

2009-01-01

This study was designed to investigate school principals' perceptions on school public relations in five areas: community demographics, parental involvement, internal and external communications, school council issues, and community resources. Findings indicated that principals' concerns were as follows: rapid population growth, change of…

15. Should Principals Know More about Law?

Doctor, Tyrus L.

2013-01-01

Educational law is a critical piece of the education conundrum. Principals reference law books on a daily basis in order to address the wide range of complex problems in the school system. A principal's knowledge of law issues and legal decision-making are essential to provide effective feedback for a successful school.

16. How Not to Prepare School Principals

Davis, Stephen H.; Leon, Ronald J.

2011-01-01

Instead of focusing on how principals should be trained, an contrarian view is offered, grounded upon theoretical perspectives of experiential learning, and in particular, upon the theory of andragogy. A brief parable of the DoNoHarm School of Medicine is used as a descriptive analog for many principal preparation programs in America. The…

17. Social Media Strategies for School Principals

Cox, Dan; McLeod, Scott

2014-01-01

The purpose of this qualitative study was to describe, analyze, and interpret the experiences of school principals who use multiple social media tools with stakeholders as part of their comprehensive communications practices. Additionally, it examined why school principals have chosen to communicate with their stakeholders through social media.…

18. New Principals' Perspectives of Their Multifaceted Roles

Gentilucci, James L.; Denti, Lou; Guaglianone, Curtis L.

2013-01-01

This study utilizes Symbolic Interactionism to explore perspectives of neophyte principals. Findings explain how these perspectives are modified through complex interactions throughout the school year, and they also suggest preparation programs can help new principals most effectively by teaching "soft" skills such as active listening…

19. The Principal's Guide to Grant Success.

Bauer, David G.

This book provides principals of public and private elementary and middle schools with a step-by-step approach for developing a system that empowers faculty, staff, and the school community in attracting grant funds. Following the introduction, chapter 1 discusses the principal's role in supporting grantseeking. Chapter 2 describes how to…

20. Neighborhood social capital and crime victimization: comparison of spatial regression analysis and hierarchical regression analysis.

Takagi, Daisuke; Ikeda, Ken'ichi; Kawachi, Ichiro

2012-11-01

Crime is an important determinant of public health outcomes, including quality of life, mental well-being, and health behavior. A body of research has documented the association between community social capital and crime victimization. The association between social capital and crime victimization has been examined at multiple levels of spatial aggregation, ranging from entire countries, to states, metropolitan areas, counties, and neighborhoods. In multilevel analysis, the spatial boundaries at level 2 are most often drawn from administrative boundaries (e.g., Census tracts in the U.S.). One problem with adopting administrative definitions of neighborhoods is that it ignores spatial spillover. We conducted a study of social capital and crime victimization in one ward of Tokyo, using a spatial Durbin model with an inverse-distance weighting matrix that assigned each respondent a unique level of "exposure" to social capital based on all other residents' perceptions. The study is based on a postal questionnaire sent to residents of Arakawa Ward, Tokyo, aged 20-69 years. The response rate was 43.7%. We examined the contextual influence of generalized trust, perceptions of reciprocity, two types of social network variables, as well as two principal components of social capital (constructed from the above four variables). Our outcome measure was self-reported crime victimization in the last five years. In the spatial Durbin model, we found that neighborhood generalized trust, reciprocity, supportive networks, and two principal components of social capital were each inversely associated with crime victimization. By contrast, a multilevel regression performed with the same data (using administrative neighborhood boundaries) found generally null associations between neighborhood social capital and crime. Spatial regression methods may be more appropriate for investigating the contextual influence of social capital in homogeneous cultural settings such as Japan.
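
The inverse-distance "exposure" described above — each respondent's weighted average of all other residents' perceptions, with weights shrinking with distance — can be sketched in a few lines. The coordinates and perception values below are toy data, not the Arakawa Ward survey.

```python
def idw_exposure(coords, values):
    """Row-standardized inverse-distance weighting, excluding self: each point's
    exposure is a distance-weighted average of all *other* points' values."""
    n = len(coords)
    out = []
    for i in range(n):
        weights = []
        for j in range(n):
            if i == j:
                weights.append(0.0)            # no self-exposure
            else:
                d = ((coords[i][0] - coords[j][0]) ** 2 +
                     (coords[i][1] - coords[j][1]) ** 2) ** 0.5
                weights.append(1.0 / d)
        total = sum(weights)
        out.append(sum(w * v for w, v in zip(weights, values)) / total)
    return out

# Three respondents on a line; the middle one is equidistant from both neighbors.
exposure = idw_exposure([(0, 0), (1, 0), (2, 0)], [1.0, 2.0, 3.0])
```

The middle respondent's exposure is the plain average of its two neighbors, while the end respondents are pulled toward their nearer neighbor, which is exactly the spillover that administrative boundaries ignore.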

1. Quantile Regression With Measurement Error

Wei, Ying

2009-08-27

Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
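Quantile regression minimizes the check ("pinball") loss. A tiny sketch showing that, for tau = 0.5, the sample median minimizes this loss over constant fits; the data are toy values, and the sketch does not include any measurement-error correction, which is the paper's actual contribution.

```python
def pinball_loss(tau, y, c):
    """Quantile-regression check loss of the constant fit c at quantile tau:
    residuals above c are weighted tau, residuals below are weighted (1 - tau)."""
    return sum((yi - c) * (tau - (yi - c < 0)) for yi in y)

y = [1, 2, 3, 4, 100]                       # an outlier barely moves the median
losses = {c: pinball_loss(0.5, y, c) for c in y}
best = min(losses, key=losses.get)          # the sample median, 3
```

With tau = 0.9 instead, the minimizer shifts toward the upper tail, which is why fitting many tau levels jointly (as the paper's estimating equations do) characterizes the whole conditional distribution.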

2. Multivariate and semiparametric kernel regression

Härdle, Wolfgang; Müller, Marlene

1997-01-01

The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting, which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is provided.
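
The Nadaraya-Watson estimator mentioned above is a kernel-weighted local average of the responses. A minimal Gaussian-kernel version, with example data assumed for illustration:

```python
import math

def nadaraya_watson(x0, xs, ys, h):
    """Nadaraya-Watson estimate at x0: average of ys weighted by a Gaussian
    kernel of the distance from x0, with bandwidth h controlling smoothness."""
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]   # toy responses
# Small h -> the estimate at an observed x is dominated by that observation;
# large h -> the estimate is smoothed toward the overall mean.
```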

3. Regression algorithm for emotion detection

Berthelon , Franck; Sander , Peter

2013-01-01

We present here two components of a computational system for emotion detection. PEMs (Personalized Emotion Maps) store links between bodily expressions and emotion values, and are individually calibrated to capture each person's emotion profile. They are an implementation based on aspects of Scherer's theoretical complex-system model of emotion (Scherer 2000; 2009). We also present a regression algorithm that determines a person's emotional feeling from sensor measurements.

4. Directional quantile regression in R

Boček, Pavel; Šiman, Miroslav

2017-01-01

Vol. 53, No. 3 (2017), pp. 480-492, ISSN 0023-5954. R&D Projects: GA ČR GA14-07234S. Institutional support: RVO:67985556. Keywords: multivariate quantile * regression quantile * halfspace depth * depth contour. Subject RIV: BD - Theory of Information. OBOR OECD: Applied mathematics. Impact factor: 0.379, year: 2016. http://library.utia.cas.cz/separaty/2017/SI/bocek-0476587.pdf

5. Polylinear regression analysis in radiochemistry

Kopyrin, A.A.; Terent'eva, T.N.; Khramov, N.N.

1995-01-01

A number of radiochemical problems have been formulated in the framework of polylinear regression analysis, which permits the use of conventional mathematical methods for their solution. The authors have considered features of the use of polylinear regression analysis for estimating the contributions of various sources to atmospheric pollution, for studying irradiated nuclear fuel, for estimating concentrations from spectral data, for measuring neutron fields of a nuclear reactor, for estimating crystal lattice parameters from X-ray diffraction patterns, for interpreting data of X-ray fluorescence analysis, for estimating complex formation constants, and for analyzing results of radiometric measurements. The problem of estimating the target parameters can be ill-posed for certain properties of the system under study. The authors showed the possibility of regularization by adding a fictitious set of data "obtained" from the orthogonal design. To estimate only a part of the parameters under consideration, the authors used incomplete rank models. In this case, it is necessary to take into account the possibility of confounding estimates. An algorithm for evaluating the degree of confounding is presented which is realized using standard software for regression analysis.

6. Accuracy analysis of the thermal diffusivity measurement of molten salts by stepwise heating method

Kato, Yoshio; Furukawa, Kazuo

1976-11-01

The stepwise heating method for measuring thermal diffusivity of molten salts is based on the electrical heating of a thin metal plate as a plane heat source in the molten salt. In this method, the following error estimations are of importance: (1) the thickness effect of the metal plate, (2) the effective length between the plate and a temperature measuring point, and (3) the effect of noise on the temperature rise signal. In this report, a measuring apparatus is proposed and measuring conditions are suggested on the basis of error estimations. The measurements for distilled water and glycerine were made first to test the performance; the results agreed well with standard values. The thermal diffusivities of molten NaNO₃ at 320-380 °C and of molten Li₂BeF₄ at 470-700 °C were measured. (auth.)

7. Examples of Nearly Net Zero Energy Buildings Through One-Step and Stepwise Retrofits

Galiotto, Nicolas; Heiselberg, Per; Knudstrup, Mary-Ann

2012-01-01

This paper presents a review of eight single-family house retrofit projects. The main objective is to collect and classify several approaches to nearly net zero energy building retrofitting. The selection was made on the capacity of reaching a nearly net zero energy level via a one-step or stepwise retrofit process. The review work is part of a broader Ph.D. project and serves as part of the basis for the future research work. The considered approaches have been sorted into two categories. The first approach makes very high use of energy conservation measures and low use of renewable energy production measures. The second approach makes lower use of energy conservation measures (but still high compared to a traditional renovation) and higher use of renewable energy production measures. A third approach to nearly net zero energy building renovation exists but has not been considered...

8. Step-wise kinetics of natural physical ageing in arsenic selenide glasses

Golovchak, R; Kozdras, A; Balitska, V; Shpotyuk, O

2012-01-01

The long-term kinetics of physical ageing at ambient temperature is studied in Se-rich As-Se glasses using the conventional differential scanning calorimetry technique. It is analysed through the changes in the structural relaxation parameters occurring during the glass-to-supercooled liquid transition in the heating mode. Along with the time dependences of the glass transition temperature (Tg) and partial area (A) under the endothermic relaxation peak, the enthalpy losses (ΔH) and calculated fictive temperature (TF) are analysed as key parameters characterizing the kinetics of physical ageing. The latter is shown to have step-wise character, revealing subsequent plateaus and steep regions. A phenomenological description of physical ageing in the investigated glasses is proposed on the basis of an alignment-shrinkage mechanism and first-order kinetic equations.

9. The stepwise evolution of the exome during acquisition of docetaxel resistance in breast cancer cells

Hansen, Stine Ninel; Ehlers, Natasja Spring; Zhu, Shida

2016-01-01

Background: Resistance to taxane-based therapy in breast cancer patients is a major clinical problem that may be addressed through insight into the genomic alterations leading to taxane resistance in breast cancer cells. In the current study we used whole exome sequencing to discover somatic genomic alterations evolving across evolutionary stages during the acquisition of docetaxel resistance in breast cancer cell lines. Results: Two human breast cancer in vitro models (MCF-7 and MDA-MB-231) of the step-wise acquisition of docetaxel resistance were developed by exposing cells to 18 gradually increasing … resistance-relevant genomic variation appeared to arise midway towards fully resistant cells, corresponding to passage 31 (5 nM docetaxel) for MDA-MB-231 and passage 16 (1.2 nM docetaxel) for MCF-7, where the cells also exhibited a period of reduced growth rate or arrest, respectively. MCF-7 cells acquired…

10. A Stepwise Approach: Decreasing Infection in Deep Brain Stimulation for Childhood Dystonic Cerebral Palsy.

Johans, Stephen J; Swong, Kevin N; Hofler, Ryan C; Anderson, Douglas E

2017-09-01

Dystonia is a movement disorder characterized by involuntary muscle contractions, which cause twisting movements or abnormal postures. Deep brain stimulation has been used to improve the quality of life for secondary dystonia caused by cerebral palsy. Despite being a viable treatment option for childhood dystonic cerebral palsy, deep brain stimulation is associated with a high rate of infection in children. The authors present a small series of patients with dystonic cerebral palsy who underwent a stepwise approach for bilateral globus pallidus interna deep brain stimulation placement in order to decrease the rate of infection. Four children with dystonic cerebral palsy who underwent a total of 13 surgical procedures (electrode and battery placement) were identified via a retrospective review. There were zero postoperative infections. Using a multistaged surgical plan for pediatric patients with dystonic cerebral palsy undergoing deep brain stimulation may help to reduce the risk of infection.

11. Stepwise Splitting Growth and Pseudocapacitive Properties of Hierarchical Three-Dimensional Co3O4 Nanobooks

Huilong Chen

2017-04-01

Full Text Available Three-dimensional hierarchical Co3O4 nanobooks have been synthesized successfully on a large scale by calcining orthorhombic Co(CO3)0.5(OH)·0.11H2O precursors with identical morphologies. Based on the influence of reaction time and urea concentration on the nanostructures of the precursors, a stepwise splitting growth mechanism can be proposed to understand the formation of the 3D nanobooks. The 3D Co3O4 nanobooks exhibit excellent pseudocapacitive performances with specific capacitances of 590, 539, 476, 453, and 421 F/g at current densities of 0.5, 1, 2, 4, and 8 A/g, respectively. The devices can retain ca. 97.4% of the original specific capacitances after undergoing charge–discharge cycle tests 1000 times continuously at 4 A/g.

12. Stepwise modularization in the construction industry using a bottom-up approach

Kudsk, Anders; Grønvold, Martin O'Brien; Olsen, Magnus Holo

2013-01-01

The manufacturing industry has experienced a great deal of improvement in efficiency and cost reductions throughout the last centuries. But although there have been improvements in the manufacturing industry, the principles and work methods in the construction industry have stood still for more than a hundred years. Based on principles of mass customization applied in the manufacturing industry, two cases of successful implementation of mass customization and modularization have been investigated as a means of showcasing the possibility to incorporate standardization in parts...... implemented stepwise. The case shows that substantial benefits can be gained through implementing modularized construction. It is especially interesting to note that these benefits are achieved through the development of a module with focus on the internal interfaces. © Kudsk et al.; Licensee Bentham Open....

13. Solid structures of the stepwise self-assembled copillar[5]arene-based supramolecular polymers

Park, Yeon Sil; Hwang, Seong Min; Shin, Jae Yeon; Paek, Kyung Soo [Dept. of Chemistry, Soongsil University, Seoul (Korea, Republic of)

2016-10-15

Development of supramolecular polymers has attracted much interest because of their interesting properties such as stimuli-responsiveness, recycling, self-healing and degradability, and their consequential applications. The essential feature of this class of polymers is the self-assembly of discrete monomeric subunits via non-covalent interactions or dynamic covalent bonds. Among the many monomeric subunits, pillar[n]arenes have been ideal building blocks for the fabrication of polymeric supramolecules because of their intrinsic characteristics. The ring-shaped morphologies in supramolecular polymer P are probably due to the tendency of the end-to-end connection in the solid state of long flexible supramolecular chains. The size increase of nano-rings as the stepwise addition increases might be due to the fact that the linear supramolecular polymer P in solution seems to be maintained until the nano-ring formation by solidification.

14. A Stepwise ISO-Based TQM Implementation Approach Using ISO 9001:2015

Chen Chi-kuang

2016-12-01

Full Text Available The lack of an implementation roadmap always deters enterprises from choosing Total Quality Management (TQM) as their major management approach. This paper proposes a stepwise ISO-based TQM implementation approach which is based on the notion of the new three-dimensional overall business excellence framework developed by Dahlgaard et al. [1]. The proposed approach consists of nine steps comprising three categories: “TQM faith building”, “TQM tools and techniques learning”, and “system development”. The steps in each of the three categories are arranged to span across the proposed nine-step approach. The ISO 9001:2015 standard is used as a case study to demonstrate the proposed approach. The ideas and benefits of the proposed approach are further discussed in relation to this illustration.

15. Solid structures of the stepwise self-assembled copillar[5]arene-based supramolecular polymers

Park, Yeon Sil; Hwang, Seong Min; Shin, Jae Yeon; Paek, Kyung Soo

2016-01-01

Development of supramolecular polymers has attracted much interest because of their interesting properties such as stimuli-responsiveness, recycling, self-healing and degradability, and their consequential applications. The essential feature of this class of polymers is the self-assembly of discrete monomeric subunits via non-covalent interactions or dynamic covalent bonds. Among the many monomeric subunits, pillar[n]arenes have been ideal building blocks for the fabrication of polymeric supramolecules because of their intrinsic characteristics. The ring-shaped morphologies in supramolecular polymer P are probably due to the tendency of the end-to-end connection in the solid state of long flexible supramolecular chains. The size increase of nano-rings as the stepwise addition increases might be due to the fact that the linear supramolecular polymer P in solution seems to be maintained until the nano-ring formation by solidification.

16. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

Erin Scott

2016-01-01

Full Text Available The Stepwise Fitting Procedure automates testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important to evaluate any model that will be used for ecosystem based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically ‘best fit’ model. The novel fitting procedure automates the manual procedure, thereby producing accurate results and letting the modeller concentrate on investigating the ‘best fit’ model for ecological accuracy.

17. A stepwise approach for effective management of chronic pain in autosomal-dominant polycystic kidney disease.

Casteleijn, Niek F; Visser, Folkert W; Drenth, Joost P H; Gevers, Tom J G; Groen, Gerbrand J; Hogan, Marie C; Gansevoort, Ron T

2014-09-01

18. Generating linear regression model to predict motor functions by use of laser range finder during TUG.

Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki

2017-05-01

The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from starting signal to first step, and minimum distance between the foot and a marker placed to 3 in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test (P motor function test. Moreover, the TUG test time, which reflects lower extremity function and mobility, has strong predictive ability in each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
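The stepwise multivariate linear regression used in the study above amounts to a greedy forward-selection loop. The sketch below is illustrative only, not the authors' implementation; the residual-sum-of-squares scoring, the F-to-enter stopping rule, and all names and data are assumptions:

```python
import numpy as np

def ols_rss(X, y):
    # ordinary least-squares fit; return the residual sum of squares
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def forward_stepwise(X, y, f_enter=10.0):
    """Greedy forward selection: at each step add the predictor that most
    reduces the RSS, stopping when its F-to-enter falls below f_enter."""
    n, p = X.shape
    selected = []
    design = np.ones((n, 1))                   # start from intercept only
    best_rss = ols_rss(design, y)
    while len(selected) < p:
        remaining = [j for j in range(p) if j not in selected]
        rss, j = min((ols_rss(np.hstack([design, X[:, [j]]]), y), j)
                     for j in remaining)
        k = len(selected) + 2                  # columns in the candidate model
        f = (best_rss - rss) / (rss / (n - k))
        if f < f_enter:
            break
        selected.append(j)
        design = np.hstack([design, X[:, [j]]])
        best_rss = rss
    return selected

# toy data: y depends on columns 0 and 2 only
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=200)
print(forward_stepwise(X, y))
```

The routine should recover the two informative predictors on this toy data; real stepwise packages differ mainly in the entry/exit criterion (p-values, AIC, and so on).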

19. Gaussian Process Regression Model in Spatial Logistic Regression

Sofro, A.; Oktaviarina, A.

2018-01-01

Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to address this issue. In this paper, we will focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model will be investigated. We discuss inference: how to estimate the parameters and hyper-parameters, and how to predict. Furthermore, simulation studies are explained in the last section.
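A minimal sketch of the GPR machinery referred to above, for plain regression with a squared-exponential kernel. The paper's spatial binomial/logit model is more involved; the kernel choice, hyper-parameter values, and demo data here are assumptions for illustration:

```python
import numpy as np

def sq_exp_kernel(A, B, length=1.0, var=1.0):
    # squared-exponential covariance between point sets A (n,d) and B (m,d)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gpr_predict(Xtr, ytr, Xte, noise=1e-2):
    """Posterior mean and pointwise variance of GP regression at Xte."""
    K = sq_exp_kernel(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = sq_exp_kernel(Xte, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    cov = sq_exp_kernel(Xte, Xte) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# 1-D demo: recover a smooth function between observations
Xtr = np.linspace(0.0, 5.0, 10)[:, None]
ytr = np.sin(Xtr).ravel()
mean, var = gpr_predict(Xtr, ytr, np.array([[2.5]]))
print(mean[0], var[0])
```

The posterior variance is what gives GPR its advantage for spatial prediction: it quantifies uncertainty away from observed sites, which neighbourhood-based methods do not provide directly.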

20. Preparing Principals as Instructional Leaders: Perceptions of University Faculty, Expert Principals, and Expert Teacher Leaders

Taylor Backor, Karen; Gordon, Stephen P.

2015-01-01

Although research has established links between the principal's instructional leadership and student achievement, there is considerable concern in the literature concerning the capacity of principal preparation programs to prepare instructional leaders. This study interviewed educational leadership faculty as well as expert principals and teacher…

1. Exploring the Impact of Applicants' Gender and Religion on Principals' Screening Decisions for Assistant Principal Applicants

Bon, Susan C.

2009-01-01

In this experimental study, a national random sample of high school principals (stratified by gender) were asked to evaluate hypothetical applicants whose resumes varied by religion (Jewish, Catholic, nondenominational) and gender (male, female) for employment as assistant principals. Results reveal that male principals rate all applicants higher…

2. Principal Self-Efficacy and Work Engagement: Assessing a Norwegian Principal Self-Efficacy Scale

Federici, Roger A.; Skaalvik, Einar M.

2011-01-01

One purpose of the present study was to develop and test the factor structure of a multidimensional and hierarchical Norwegian Principal Self-Efficacy Scale (NPSES). Another purpose of the study was to investigate the relationship between principal self-efficacy and work engagement. Principal self-efficacy was measured by the 22-item NPSES. Work…

3. Principal Time Management Skills: Explaining Patterns in Principals' Time Use, Job Stress, and Perceived Effectiveness

Grissom, Jason A.; Loeb, Susanna; Mitani, Hajime

2015-01-01

Purpose: Time demands faced by school principals make principals' work increasingly difficult. Research outside education suggests that effective time management skills may help principals meet job demands, reduce job stress, and improve their performance. The purpose of this paper is to investigate these hypotheses. Design/methodology/approach:…

4. Development of a PROficiency-Based StePwise Endovascular Curricular Training (PROSPECT) Program.

Maertens, Heidi; Aggarwal, Rajesh; Desender, Liesbeth; Vermassen, Frank; Van Herzeele, Isabelle

2016-01-01

Focus on patient safety, work-hour limitations, and cost-effective education is putting pressure to improve curricula to acquire minimally invasive techniques during surgical training. This study aimed to design a structured training program for endovascular skills and validate its assessment methods. A PROficiency-based StePwise Endovascular Curricular Training (PROSPECT) program was developed, consisting of e-learning and hands-on simulation modules, focusing on iliac and superficial femoral artery atherosclerotic disease. Construct validity was investigated. Performances were assessed using multiple-choice questionnaires, valid simulation parameters, global rating scorings, and examiner checklists. Feasibility was assessed by passage of 2 final-year medical students through this PROSPECT program. Ghent University Hospital, a tertiary clinical care and academic center in Belgium with general surgery residency program. Senior-year medical students were recruited at Ghent University Hospital. Vascular surgeons were invited to participate during conferences and meetings if they had performed at least 100 endovascular procedures as the primary operator during the last 2 years. Overall, 29 medical students and 20 vascular surgeons participated. Vascular surgeons obtained higher multiple-choice questionnaire scores (median: 24.5-22.0 vs. 15.0-12.0; p train cognitive, technical, and nontechnical endovascular skills was developed. A structured, stepwise, proficiency-based valid endovascular program to train cognitive, technical, and human factor skills has been developed and proven to be feasible. A randomized controlled trial has been initiated to investigate its effect on performances in real life, patient outcomes, and cost-effectiveness. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

5. Morphological and molecular evidence for a stepwise evolutionary transition from teeth to baleen in mysticete whales.

Deméré, Thomas A; McGowen, Michael R; Berta, Annalisa; Gatesy, John

2008-02-01

The origin of baleen in mysticete whales represents a major transition in the phylogenetic history of Cetacea. This key specialization, a keratinous sieve that enables filter-feeding, permitted exploitation of a new ecological niche and heralded the evolution of modern baleen-bearing whales, the largest animals on Earth. To date, all formally described mysticete fossils conform to two types: toothed species from Oligocene-age rocks (approximately 24 to 34 million years old) and toothless species that presumably utilized baleen to feed (Recent to approximately 30 million years old). Here, we show that several Oligocene toothed mysticetes have nutrient foramina and associated sulci on the lateral portions of their palates; homologous structures in extant mysticetes house vessels that nourish baleen. The simultaneous occurrence of teeth and nutrient foramina implies that both teeth and baleen were present in these early mysticetes. Phylogenetic analyses of a supermatrix that includes extinct taxa and new data for 11 nuclear genes consistently resolve relationships at the base of Mysticeti. The combined data set of 27,340 characters supports a stepwise transition from a toothed ancestor, to a mosaic intermediate with both teeth and baleen, to modern baleen whales that lack an adult dentition but retain developmental and genetic evidence of their ancestral toothed heritage. Comparative sequence data for ENAM (enamelin) and AMBN (ameloblastin) indicate that enamel-specific loci are present in Mysticeti but have degraded to pseudogenes in this group. The dramatic transformation in mysticete feeding anatomy documents an apparently rare, stepwise mode of evolution in which a composite phenotype bridged the gap between primitive and derived morphologies; a combination of fossil and molecular evidence provides a multifaceted record of this macroevolutionary pattern.

6. A stepwise approach to the evaluation and treatment of subclinical hyperthyroidism.

Mai, Vinh Q; Burch, Henry B

2012-01-01

To review a stepwise approach to the evaluation and treatment of subclinical hyperthyroidism. English-language articles regarding clinical management of subclinical hyperthyroidism published between 2007 and 2012 were reviewed. Subclinical hyperthyroidism is encountered on a daily basis in clinical practice. When evaluating patients with a suppressed serum thyrotropin value, it is important to exclude other potential etiologies such as overt triiodothyronine toxicosis, drug effect, nonthyroidal illness, and central hypothyroidism. In younger patients with mild thyrotropin suppression, it is acceptable to perform testing again in 3 to 6 months to assess for persistence before performing further diagnostic testing. In older patients or patients with thyrotropin values less than 0.1 mIU/L, diagnostic testing should proceed without delay. Persistence of thyrotropin suppression is more typical of nodular thyroid autonomy, whereas thyroiditis and mild Graves disease frequently resolve spontaneously. The clinical consequences of subclinical hyperthyroidism, such as atrial dysrhythmia, accelerated bone loss, increased fracture rate, and higher rates of cardiovascular mortality, are dependent on age and severity. The decision to treat subclinical hyperthyroidism is directly tied to an assessment of the potential for clinical consequences in untreated disease. Definitive therapy is generally selected for patients with nodular autonomous function, whereas antithyroid drug therapy is more appropriate for mild, persistent Graves disease. The presented stepwise approach to the care of patients presenting with an isolated suppression of serum thyrotropin focuses on the differential diagnosis, a prediction of the likelihood of persistence, an assessment of potential risks posed to the patient, and, finally, a personalized choice of therapy.

7. Stepwise dynamics of an anionic micellar film - Formation of crown lenses.

Lee, Jongju; Nikolov, Alex; Wasan, Darsh

2017-06-15

We studied the stepwise thinning of a microscopic circular foam film formed from an anionic micellar solution of sodium dodecyl sulfate (SDS). The foam film formed from the SDS micellar solution thins in a stepwise manner by the formation and expansion of a dark spot(s) of one layer less than the film thickness. During the last stages of film thinning (e.g., a film with one micellar layer), the dark spot expansion occurs via two steps. Initially, a small dark circular spot inside a film of several microns in size is formed, which expands at a constant rate. Then, a ridge along the expanding spot is formed. As the ridge grows, it becomes unstable and breaks into regular crown lenses, which are seen as white spots in the reflected light at the border of the dark spot with the surrounding thicker film. The Rayleigh type of instability contributes to the formation of the lenses, which results in the increase of the dark spot expansion rate with time. We applied the two-dimensional micellar-vacancy diffusion model and took into consideration the effects of the micellar layering and film volume on the rate of the dark spot expansion [Lee et al., 2016] to predict the rate of the dark spot expansion for a 0.06M SDS film in the presence of lenses. We briefly discuss the Rayleigh type of instability in the case of a 0.06M SDS foam film. The goals of this study are to reveal why the crown lenses are formed during the foam film stratification and to elucidate their effect on the rate of spot expansion. Copyright © 2017 Elsevier Inc. All rights reserved.

8. A regression approach for Zircaloy-2 in-reactor creep constitutive equations

Yung Liu, Y.; Bement, A.L.

1977-01-01

In this paper the methodology of multiple regression, as applied to Zircaloy-2 in-reactor creep data analysis and the construction of a constitutive equation, is illustrated. While the resulting constitutive equation can be used in creep analysis of in-reactor Zircaloy structural components, the methodology itself is entirely general and can be applied to any creep data analysis. The promising aspects of multiple regression creep data analysis are briefly outlined as follows: (1) When more than one variable is involved, there is no need to assume that each variable affects the response independently. No separate normalizations are required either, and the estimation of parameters is obtained by solving many simultaneous equations. The number of simultaneous equations is equal to the number of data sets. (2) Regression statistics such as R²- and F-statistics provide measures of the significance of the regression creep equation in correlating the overall data. The relative weights of each variable on the response can also be obtained. (3) Special regression techniques such as step-wise, ridge, and robust regressions and residual plots, etc., provide diagnostic tools for model selection. Multiple regression analysis performed on a set of carefully selected Zircaloy-2 in-reactor creep data leads to a model which provides excellent correlations for the data. (Auth.)
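The R²- and F-statistics mentioned in point (2) fall straight out of the least-squares fit. A hedged sketch on synthetic data (not the Zircaloy-2 creep data; all names and values are illustrative):

```python
import numpy as np

def fit_stats(X, y):
    """OLS fit plus R^2 and the overall F-statistic (intercept added here)."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])      # design matrix with intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    rss = float(resid @ resid)                 # residual sum of squares
    tss = float(((y - y.mean()) ** 2).sum())   # total sum of squares
    p = X.shape[1]                             # number of predictors
    r2 = 1.0 - rss / tss
    f = (r2 / p) / ((1.0 - r2) / (n - p - 1))
    return beta, r2, f

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 3.0 + 1.0 * X[:, 0] + rng.normal(scale=0.5, size=100)
beta, r2, f = fit_stats(X, y)
print(round(r2, 2), round(f, 1))
```

A large F relative to the F-distribution's critical value indicates the regression as a whole is significant; comparing fits with and without a variable in the same way is the basis of the stepwise diagnostics in point (3).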

9. Comparative Analysis of Principals' Management Strategies in ...

It was recommended among others that principals of secondary schools should adopt all the management strategies in this study as this will improve school administration and consequently students' academic performance. Keywords: Management Strategies; Secondary Schools; Administrative Effectiveness ...

10. Spatial control of groundwater contamination, using principal ...

probe into the spatial controlling processes of groundwater contamination, using principal component analysis (PCA). ... topography, soil type, depth of water levels, and water usage. Thus, the ... of effective sites for infiltration of recharge water.

11. The Relationship between Principals' Managerial Approaches and ...

Nekky Umera

Egerton University, P. O. Box 16568, NAKURU KENYA bosirej@yahoo.com ... teacher and parental input while it was negatively correlated with the level of .... principal's attitude, gender qualifications, and leadership experience (Green,. 1999 ...

12. First-Year Principal Encounters Homophobia

Retelle, Ellen

2011-01-01

A 1st-year principal encounters homonegativity and an ethical dilemma when she attempts to terminate a teacher because of the teacher's inadequate and ineffective teaching. The teacher responds by threatening to "out" Ms. L. to the parents.

13. Integrating Data Transformation in Principal Components Analysis

Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua

2015-01-01

Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior

14. Spatial control of groundwater contamination, using principal

Spatial control of groundwater contamination, using principal component analysis ... anthropogenic (agricultural activities and domestic wastewaters), and marine ... The PC scores reflect the change of groundwater quality of geogenic origin ...

15. Principal Hawaiian Islands Geoid Heights (GEOID96)

National Oceanic and Atmospheric Administration, Department of Commerce — This 2' geoid height grid for the Principal Hawaiian Islands is distributed as a GEOID96 model. The computation used 61,000 terrestrial and marine gravity data held...

16. Spontaneous regression of pulmonary bullae

Satoh, H.; Ishikawa, H.; Ohtsuka, M.; Sekizawa, K.

2002-01-01

The natural history of pulmonary bullae is often characterized by gradual, progressive enlargement. Spontaneous regression of bullae is, however, very rare. We report a case in which complete resolution of pulmonary bullae in the left upper lung occurred spontaneously. The management of pulmonary bullae is occasionally made difficult because of gradual progressive enlargement associated with abnormal pulmonary function. Some patients have multiple bullae in both lungs and/or have a history of pulmonary emphysema. Others have a giant bulla without emphysematous change in the lungs. Our patient had previously treated lung cancer with no evidence of local recurrence. He had no emphysematous change on lung function testing and had no complaints, although the high resolution CT scan showed evidence of underlying minimal changes of emphysema. Ortin and Gurney presented three cases of spontaneous reduction in the size of bullae. Interestingly, one of them had a marked decrease in the size of a bulla in association with thickening of the wall of the bulla, which was observed in our patient. The case we describe is of interest, not only because of the rarity with which regression of pulmonary bullae has been reported in the literature, but also because of the spontaneous improvements in the radiological picture in the absence of overt infection or tumor. Copyright (2002) Blackwell Science Pty Ltd

17. Quantum algorithm for linear regression

Wang, Guoming

2017-07-01

We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

18. Interpretation of commonly used statistical regression models.

Kasza, Jessica; Wolfe, Rory

2014-01-01

A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
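For logistic regression (one of the models reviewed), the standard interpretation is that exponentiating a coefficient gives the odds ratio per unit change in the predictor. A self-contained sketch on synthetic data; the gradient-ascent fitter and all values are assumptions for illustration:

```python
import math, random

def fit_logistic(xs, ys, lr=0.1, steps=3000):
    """One-predictor logistic regression fitted by plain gradient ascent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p                        # gradient of the log-likelihood
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(500)]
# true model: log-odds = -0.5 + 1.2*x, so a one-unit increase in x
# multiplies the odds of y = 1 by exp(1.2), roughly 3.3
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(0.5 - 1.2 * x)) else 0
      for x in xs]
b0, b1 = fit_logistic(xs, ys)
print(round(b1, 2), round(math.exp(b1), 2))   # slope and its odds ratio
```

The fitted slope is on the log-odds scale; reporting exp(b1) as an odds ratio is what makes the coefficient clinically interpretable in respiratory health studies like those discussed.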

19. Principals' instructional management skills and middle school science teacher job satisfaction

Gibbs-Harper, Nzinga A.

The purpose of this research study was to determine if a relationship exists between teachers' perceptions of principals' instructional leadership behaviors and middle school teacher job satisfaction. Additionally, this study sought to assess whether principal's instructional leadership skills were predictors of middle school teachers' satisfaction with work itself. This study drew from 13 middle schools in an urban Mississippi school district. Participants included teachers who taught science. Each teacher was given the Principal Instructional Management Rating Scale (PIMRS; Hallinger, 2011) and the Teacher Job Satisfaction Questionnaire (TJSQ; Lester, 1987) to answer the research questions. The study was guided by two research questions: (a) Is there a relationship between the independent variables Defining the School's Mission, Managing the Instructional Program, and Developing the School Learning Climate Program and the dependent variable Work Itself?; (b) Are Defining the School's Mission, Managing the Instructional Program, and Developing the School Learning Climate Program predictors of Work Itself? The Pearson's correlation and multiple regression analysis were utilized to examine the relationship between the three dimensions of principals' instructional leadership and teacher satisfaction with work itself. The data revealed that there was a strong, positive correlation between all three dimensions of principals' instructional leadership and teacher satisfaction with work itself. However, the multiple regression analysis determined that, among the three dimensions, only Defining the School's Mission was a slight predictor of teacher satisfaction with work itself.

20. Communication Factors as Predictors of Relationship Quality: A National Study of Principals and School Counselors

Duslak, Mark; Geier, Brett

2017-01-01

This study examined the effects of meeting frequency, structured meeting times, annual agreements, and demographic variables on school counselor perceptions of their relationship with their building principal. Results of a regression analysis indicated that meeting frequency accounted for 26.7% of the variance in school counselor-reported…

1. Stepwise-activable multifunctional peptide-guided prodrug micelles for cancerous cells intracellular drug release

Zhang, Jing, E-mail: zhangjing@zjut.edu.cn; Li, Mengfei [Zhejiang University of Technology, College of Materials Science and Engineering (China); Yuan, Zhefan [Zhejiang University, Key Laboratory of Biomass Chemical Engineering of Ministry of Education, Department of Chemical and Biological Engineering (China); Wu, Dan; Chen, Jia-da; Feng, Jie, E-mail: fengjie@zjut.edu.cn [Zhejiang University of Technology, College of Materials Science and Engineering (China)

2016-10-15

A novel type of stepwise-activable multifunctional peptide-guided prodrug micelles (MPPM) was fabricated for cancerous cells intracellular drug release. Deca-lysine sequence (K{sub 10}), a type of cell-penetrating peptide, was synthesized and terminated with azido-glycine. Then a new kind of molecule, alkyne modified doxorubicin (DOX) connecting through disulfide bond (DOX-SS-alkyne), was synthesized. After coupling via Cu-catalyzed azide–alkyne cycloaddition (CuAAC) click chemistry reaction, reduction-sensitive peptide-guided prodrug was obtained. Due to the amphiphilic property of the prodrug, it can assemble to form micelles. To prevent the nanocarriers from unspecific cellular uptake, the prodrug micelles were subsequently modified with 2,3-dimethyl maleic anhydride to obtain MPPM with a negatively charged outer shell. In vitro studies showed that MPPM could be shielded from cells under physiological environment. However, when arriving at mild acidic tumor site, the cell-penetrating capacity of MPPM would be activated by charge reversal of the micelles via hydrolysis of acid-labile β-carboxylic amides and regeneration of K{sub 10}, which enabled efficient internalization of MPPM by tumor cells as well as following glutathione- and protease-induced drug release inside the cancerous cells. Furthermore, since the guide peptide sequences can be accurately designed and synthesized, it can be easily changed for various functions, such as targeting peptide, apoptotic peptide, even aptamers, only need to be terminated with azido-glycine. This method can be used as a template for reduction-sensitive peptide-guided prodrug for cancer therapy.Graphical abstract: A novel type of stepwise-activable multifunctional peptide-guided prodrug micelles (MPPM) was fabricated for selective drug delivery in cancerous cells. MPPM could be shielded from cells under physiological environment. However, when arriving at mild acidic tumor site, the cell-penetrating capacity of MPPM would

2. Limited hair cell induction from human induced pluripotent stem cells using a simple stepwise method.

Ohnishi, Hiroe; Skerleva, Desislava; Kitajiri, Shin-ichiro; Sakamoto, Tatsunori; Yamamoto, Norio; Ito, Juichi; Nakagawa, Takayuki

2015-07-10

Disease-specific induced pluripotent stem (iPS) cells are expected to contribute to exploring useful tools for studying the pathophysiology of inner ear diseases and to drug discovery for treating inner ear diseases. For this purpose, stable induction methods for the differentiation of human iPS cells into inner ear hair cells are required. In the present study, we examined the efficacy of a simple induction method for inducing the differentiation of human iPS cells into hair cells. The induction of inner ear hair cell-like cells was performed using a stepwise method mimicking inner ear development. Human iPS cells were sequentially transformed into the preplacodal ectoderm, otic placode, and hair cell-like cells. As a first step, preplacodal ectoderm induction, human iPS cells were seeded on a Matrigel-coated plate and cultured in a serum free N2/B27 medium for 8 days according to a previous study that demonstrated spontaneous differentiation of human ES cells into the preplacodal ectoderm. As the second step, the cells after preplacodal ectoderm induction were treated with basic fibroblast growth factor (bFGF) for induction of differentiation into otic-placode-like cells for 15 days. As the final step, cultured cells were incubated in a serum free medium containing Matrigel for 48 days. After preplacodal ectoderm induction, over 90% of cultured cells expressed the genes that express in preplacodal ectoderm. By culture with bFGF, otic placode marker-positive cells were obtained, although their number was limited. Further 48-day culture in serum free media resulted in the induction of hair cell-like cells, which expressed a hair cell marker and had stereocilia bundle-like constructions on their apical surface. Our results indicate that hair cell-like cells are induced from human iPS cells using a simple stepwise method with only bFGF, without the use of xenogeneic cells. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

3. A broadly applicable surgical teaching method: evaluation of a stepwise introduction to cataract surgery.

Kloek, Carolyn E; Borboli-Gerogiannis, Sheila; Chang, Kenneth; Kuperwaser, Mark; Newman, Lori R; Lane, Anne Marie; Loewenstein, John I

2014-01-01

Although cataract surgery is one of the most commonly performed surgeries in the country, it is a microsurgical procedure that is difficult to learn and to teach. This study aims to assess the effectiveness of a new method for introducing postgraduate year (PGY)-3 ophthalmology residents to cataract surgery. Hospital-based ophthalmology residency program. Retrospective cohort study. PGY-3 and PGY-4 residents of the Harvard Medical School Ophthalmology Residency from graduating years 2010 to 2012. In July 2009, a new method of teaching PGY-3 ophthalmology residents cataract surgery was introduced, termed "the stepwise introduction to cataract surgery." This curriculum aimed to train residents to perform the steps of cataract surgery by deliberately practicing each step under a structured curriculum with faculty feedback. Assessment methods included surveys administered to the PGY-4 residents who graduated before the implementation of these measures (n = 7), the residents who participated in the first and second years of the new curriculum (n = 16), and the faculty who teach PGY-4 residents cataract surgery (n = 8), as well as a review of resident Accreditation Council for Graduate Medical Education surgical logs. The resident survey response rate was 100%. Residents who participated in the new curriculum performed more of each step of cataract surgery in the operating room, spent more time practicing each step on a cataract surgery simulator during the PGY-3 year, and performed more primary cataract surgeries during the PGY-3 year than those who did not. The faculty survey response rate was 63%. Faculty noted an increase in resident preparedness following implementation of the new curriculum. There was no statistical difference between the precurriculum and postcurriculum groups in the percentage turnover of cataracts for the first 2 cataract surgery rotations of the PGY-4 year of training. The introduction of cataract surgery to PGY-3 residents

4. Does acid-base equilibrium correlate with remnant liver volume during stepwise liver resection?

Golriz, Mohammad; Abbasi, Sepehr; Fathi, Parham; Majlesara, Ali; Brenner, Thorsten; Mehrabi, Arianeb

2017-10-01

Small for size and flow syndrome (SFSF) is one of the most challenging complications following extended hepatectomy (EH). After EH, hepatic artery flow decreases and portal vein flow increases per 100 g of remnant liver volume (RLV). This causes hypoxia followed by metabolic acidosis. A correlation between acidosis and posthepatectomy liver failure has been postulated but not studied systematically in a large animal model or clinical setting. In our study, we performed stepwise liver resections on nine pigs to define SFSF limits as follows: step 1: segment II/III resection, step 2: segment IV resection, step 3: segment V/VIII resection (RLV: 75, 50, and 25%, respectively). Blood gas values were measured before and after each step using four catheters inserted into the carotid artery, internal jugular vein, hepatic artery, and portal vein. The pH, [Formula: see text], and base excess (BE) decreased, but [Formula: see text] values increased after 75% resection in the portal and jugular veins. EH correlated with reduced BE in the hepatic artery. PCO2 values increased after 75% resection in the jugular vein. In contrast, arterial PO2 increased after every resection, whereas the venous PO2 decreased slightly. There were differences in venous [Formula: see text], BE in the hepatic artery, and PCO2 in the jugular vein after 75% liver resection. Because 75% resection is the limit for SFSF, these noninvasive blood evaluations may be used to predict SFSF. Further studies with long-term follow-up are required to validate this correlation. NEW & NOTEWORTHY This is the first study to evaluate acid-base parameters in major central and hepatic vessels during stepwise liver resection. The pH, [Formula: see text], and base excess (BE) decreased, but [Formula: see text] values increased after 75% resection in the portal and jugular veins. Extended hepatectomy correlated with reduced BE in the hepatic artery. Because 75% resection is the limit for small for size and flow

5. Stepwise magnetic-geochemical approach for efficient assessment of heavy metal polluted sites

Appel, E.; Rösler, W.; Ojha, G.

2012-04-01

Previous studies have shown that magnetometry can outline the distribution of fly ash deposition in the surroundings of coal-burning power plants and steel industries. In particular, easy-to-measure magnetic susceptibility (MS) can act as a proxy for heavy metal (HM) pollution caused by such point sources. Here we present a demonstration project around the coal-burning power plant complex "Schwarze Pumpe" in eastern Germany. Before the reunification of West and East Germany, huge amounts of HM pollutants were emitted from the "Schwarze Pumpe" into the environment, both as fly ash emissions and as dumped clinker. The project has been conducted as part of the TASK Centre of Competence, which aims at bringing innovative techniques closer to the market. Our project combines in situ and laboratory MS measurements with HM analyses in order to demonstrate the efficiency of a stepwise approach to site assessment of HM pollution around point sources of fly ash emission and deposition into soil. The following scenario is played through: we assume that the "true" spatial distribution of HM pollution (given by the pollution load index PLI, comprising Fe, Zn, Pb, and Cu) is represented by our entire set of 85 measured samples (XRF analyses) from forest sites around the "Schwarze Pumpe". Surface MS data (collected with a Bartington MS2D) and in situ vertical MS sections (logged with an SM400 instrument) are used to obtain a qualitative overview of potentially higher and lower polluted areas. A suite of spatial HM distribution maps obtained from random selections of 30 out of the 85 analysed sites is compared to the HM map obtained from a targeted 30-site selection based on prior information from the MS results. The PLI distribution map obtained from the targeted 30-site selection shows all essential details of the "true" pollution map, while the different random 30-site selections miss important features. This comparison shows that, for the same cost
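The pollution load index (PLI) mentioned above is conventionally computed as the geometric mean of per-metal contamination factors (measured concentration over background). A minimal sketch; the concentrations below are invented for illustration, not taken from the study:

```python
import numpy as np

# Contamination factor CF = measured concentration / background concentration;
# PLI = (CF_1 * CF_2 * ... * CF_n) ** (1/n), the geometric mean of the CFs.
# All concentrations are illustrative values in mg/kg.
measured = {"Fe": 31000.0, "Zn": 120.0, "Pb": 45.0, "Cu": 30.0}
background = {"Fe": 25000.0, "Zn": 60.0, "Pb": 20.0, "Cu": 25.0}

cf = np.array([measured[m] / background[m] for m in measured])
pli = cf.prod() ** (1.0 / len(cf))
print(round(pli, 2))  # PLI > 1 indicates pollution above background
```

A site map of such PLI values is what the targeted 30-site selection in the study approximates.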

6. Prediction, Regression and Critical Realism

Næss, Petter

2004-01-01

This paper considers the possibility of prediction in land use planning, and the use of statistical research methods in analyses of relationships between urban form and travel behaviour. Influential writers within the tradition of critical realism reject the possibility of predicting social phenomena. This position is fundamentally problematic for public planning. Without at least some ability to predict the likely consequences of different proposals, the justification for public sector intervention into market mechanisms will be frail. Statistical methods like regression analyses are commonly seen as necessary in order to identify aggregate-level effects of policy measures, but are questioned by many advocates of critical realist ontology. Using research into the relationship between urban structure and travel as an example, the paper discusses relevant research methods and the kinds...

7. On Weighted Support Vector Regression

Han, Xixuan; Clemmensen, Line Katrine Harder

2014-01-01

We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in the prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF-weights). This procedure directly shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing the influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF-weights) and call the combination of weights the doubly weighted SVR. We illustrate the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...
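The OF-weights described above act on the slack variables in the objective, which is what per-observation weights do in standard SVR solvers. A minimal sketch of that outlier down-weighting idea using scikit-learn's sample_weight; the data are synthetic, and the paper's novel CF-weights have no off-the-shelf equivalent, so they are not shown:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(80, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=80)
y[::10] += 3.0  # inject a few large outliers

# Small weights shrink an observation's slack contribution to the
# objective -- the OF-weight mechanism -- so outliers pull less.
w = np.ones(80)
w[::10] = 0.01

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X, y, sample_weight=w)
pred = model.predict(X)
```

With uniform weights instead of `w`, the fitted curve would be dragged toward the injected outliers.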

8. Laplacian embedded regression for scalable manifold regularization.

Chen, Lin; Tsang, Ivor W; Xu, Dong

2012-06-01

Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
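The transformed kernel described above is the sum of the original kernel and a graph kernel capturing the manifold structure. The sketch below illustrates that composition with a generic regularized-Laplacian graph kernel inside plain kernel ridge regression; it is not the paper's exact LapESVR/LapERLS derivation, and the graph-kernel form and all hyperparameters are assumptions:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = X[:, 0] + 0.1 * rng.normal(size=60)

K = rbf_kernel(X, gamma=0.5)                        # original kernel
W = kneighbors_graph(X, n_neighbors=5, mode="connectivity").toarray()
W = np.maximum(W, W.T)                              # symmetrize the kNN graph
L = np.diag(W.sum(axis=1)) - W                      # graph Laplacian
# One common graph kernel: inverse of a regularized Laplacian.
K_graph = np.linalg.inv(L + 1e-2 * np.eye(60))
K_tilde = K + 0.1 * K_graph                         # summed transformed kernel

# Kernel ridge regression with the transformed kernel.
alpha = np.linalg.solve(K_tilde + 1.0 * np.eye(60), y)
y_hat = K_tilde @ alpha
```

The point of the summed form is visible here: K and K_graph are built and regularized independently before being combined.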

9. Stepwise Swelling of a Thin Film of Lamellae-Forming Poly(styrene-b-butadiene) in Cyclohexane Vapor

Di, Zhenyu; Posselt, Dorthe; Smilgies, Detlef-M.

2012-01-01

We investigated the swelling of a thin film of lamellae-forming poly(styrene-b-butadiene) in cyclohexane vapor. The vapor pressure and thus the degree of swelling of the film are increased in a stepwise manner using a custom-built sample cell. The resulting structural changes during and after each...

10. Effects of constant and stepwise changes in temperature on the species abundance dynamics of four cladocera species

Verbitsky V. B.

2011-09-01

Full Text Available Laboratory experiments with natural zooplankton communities were carried out to study the effects of two contrasting temperature regimes on the population dynamics of four dominant species of lake littoral zooplankton: constant temperature (15, 20, and 25 °C) and graded changes in temperature, consisting of repeated sustained (three-week) controlled stepwise temperature changes of 5 or 10 °C within 15–25 °C. The species were Daphnia longispina (Müller, 1785), Diaphanosoma brachyurum (Lievin, 1848), Simocephalus vetulus (Müller, 1776) and Chydorus sphaericus (Müller, 1785). The results show that controlled stepwise changes (positive or negative) in temperature within the ranges of 15–20, 20–25, and 15–25 °C can exert either a stimulating or an inhibitory effect (direct or delayed) on the development of D. longispina and S. vetulus populations. The development of D. brachyurum and Ch. sphaericus, both more steno-thermophile, was only stimulated by a stable elevated temperature (25 °C). These results support the previously formulated hypothesis that, in determining the ecological temperature optimum of a species within a natural community, it is not enough to define its optimum under constant, cyclic or random temperature fluctuations; unidirectional stepwise changes in temperature must also be considered. These stepwise changes may also induce prolonged or delayed effects.

11. Gaussian process regression for tool wear prediction

Kong, Dongdong; Chen, Yongjie; Li, Ning

2018-05-01

To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for the first time for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by the GPR model. Moreover, GPR outperforms artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, the presence of noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, so that the confidence interval is greatly compressed and smoothed, which is conducive to monitoring the tool wear accurately. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than in the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests were conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately by the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
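The GPR prediction-plus-confidence-interval idea can be sketched with scikit-learn; the synthetic "wear" signal and kernel choices below are illustrative stand-ins, not the paper's KPCA_IRBF feature pipeline:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 40).reshape(-1, 1)            # stand-in for fused features
y = 0.05 * X.ravel() ** 2 + rng.normal(0, 0.05, 40)  # synthetic wear curve

# WhiteKernel models the Gaussian noise explicitly, which is what lets the
# GP return a predictive standard deviation alongside the mean.
kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.05)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
gpr.fit(X, y)

mean, std = gpr.predict(X, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # 95% confidence band
```

The paper's point about noise is visible in this band: noisier features widen `std`, which is what the KPCA_IRBF denoising step is meant to suppress.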

12. A Quantile Regression Approach to Estimating the Distribution of Anesthetic Procedure Time during Induction.

Hsin-Lun Wu

Full Text Available Although procedure time analyses are important for operating room management, it is not easy to extract useful information from clinical procedure time data. A novel approach was proposed to analyze procedure time during anesthetic induction. A two-step regression analysis was performed to explore influential factors of anesthetic induction time (AIT). Linear regression with stepwise model selection was used to select significant correlates of AIT, and quantile regression was then employed to illustrate the dynamic relationships between AIT and the selected variables at distinct quantiles. A total of 1,060 patients were analyzed. First- and second-year residents (R1-R2) required longer AIT than third- and fourth-year residents and attending anesthesiologists (p = 0.006). Factors prolonging AIT included American Society of Anesthesiologists physical status ≥ III; arterial, central venous and epidural catheterization; and use of bronchoscopy. The presence of the surgeon before induction decreased AIT (p < 0.001). Type of surgery also had a significant influence on AIT. Quantile regression satisfactorily estimated the extra time needed to complete induction for each influential factor at distinct quantiles. Our analysis of AIT demonstrates the benefit of quantile regression in providing a more comprehensive view of the relationships between procedure time and related factors. This novel two-step regression approach has potential applications to procedure time analysis in operating room management.
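The second step, quantile regression at several quantiles, can be sketched with statsmodels; the covariates and simulated induction times below are hypothetical stand-ins for the paper's clinical data (step 1, stepwise OLS selection, is assumed already done, with both covariates retained):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
# Illustrative binary covariates standing in for "arterial line" and "ASA >= III".
df = pd.DataFrame({
    "arterial_line": rng.integers(0, 2, n),
    "asa3": rng.integers(0, 2, n),
})
# Simulated AIT in minutes: each factor adds time; noise is right-skewed,
# as procedure times typically are.
df["ait"] = (10 + 5 * df.arterial_line + 3 * df.asa3
             + rng.gamma(shape=2.0, scale=2.0, size=n))

# Fitting at several quantiles shows how much extra time each factor
# adds in the fast, typical, and slow parts of the AIT distribution.
for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg("ait ~ arterial_line + asa3", df).fit(q=q)
    print(q, round(fit.params["arterial_line"], 2))
```

In the real analysis, coefficients that grow with the quantile would indicate factors that disproportionately prolong the slowest inductions.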

13. Variable selection in Logistic regression model with genetic algorithm.

Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

2018-02-01

Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database without a previous analysis and preprocessing step is often counterproductive. Variable selection is the process of choosing the most relevant attributes from the database in order to build robust learning models and thus improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the widely used stepwise approach adds the best variable in each cycle and generally produces an acceptable set of variables; nevertheless, it commonly becomes trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, as is typical of present-day clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
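The GA selection loop can be sketched as a population of feature bitmasks evolved by crossover and mutation, with cross-validated model accuracy as the fitness. The paper itself provides R code; the sketch below is an independent Python illustration on synthetic data, and the population and generation sizes are arbitrary:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=12, n_informative=4,
                           n_redundant=2, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a logistic model on the selected features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Tiny GA: truncation selection, one-point crossover, bit-flip mutation.
pop = rng.random((20, 12)) < 0.5          # population of boolean feature masks
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 12)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        flip = rng.random(12) < 0.05                   # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
```

Unlike stepwise selection, each generation recombines whole subsets, which is what lets the search escape local optima.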

14. Wheat flour dough Alveograph characteristics predicted by Mixolab regression models.

Codină, Georgiana Gabriela; Mironeasa, Silvia; Mironeasa, Costel; Popa, Ciprian N; Tamba-Berehoiu, Radiana

2012-02-01

In Romania, the Alveograph is the most widely used device for evaluating the rheological properties of wheat flour dough, but lately the Mixolab device has begun to play an important role in the breadmaking industry. These two instruments are based on different principles, but correlations can be found between the parameters determined by the Mixolab and the rheological properties of wheat dough measured with the Alveograph. Statistical analysis of 80 wheat flour samples using the backward stepwise multiple regression method showed that Mixolab values obtained with the 'Chopin S' protocol (40 samples) and the 'Chopin+' protocol (40 samples) can be used to build predictive models for the rheological properties of wheat dough: baking strength (W), dough tenacity (P) and extensibility (L). The correlation analysis confirmed significant findings (P < 0.05): R²(adjusted) > 0.70 for P, R²(adjusted) > 0.70 for W and R²(adjusted) > 0.38 for L, at a 95% confidence interval. Copyright © 2011 Society of Chemical Industry.

15. Promoting principals' managerial involvement in instructional improvement.

Gillat, A

1994-01-01

Studies of school leadership suggest that visiting classrooms, emphasizing achievement and training, and supporting teachers are important indicators of the effectiveness of school principals. The utility of a behavior-analytic program to support the enhancement of these behaviors in 2 school principals and the impact of their involvement upon teachers' and students' performances in three classes were examined in two experiments, one at an elementary school and another at a secondary school. Treatment conditions consisted of helping the principal or teacher to schedule his or her time and to use goal setting, feedback, and praise. A withdrawal design (Experiment 1) and a multiple baseline across classrooms (Experiment 2) showed that the principal's and teacher's rates of praise, feedback, and goal setting increased during the intervention, and were associated with improvements in the academic performance of the students. In the future, school psychologists might analyze the impact of involving themselves in supporting the principal's involvement in improving students' and teachers' performances or in playing a similar leadership role themselves.

16. Formation of mixed organic layers by stepwise electrochemical reduction of diazonium compounds.

Santos, Luis; Ghilane, Jalal; Lacroix, Jean Christophe

2012-03-28

This work describes the formation of a mixed organic layer covalently attached to a carbon electrode. The strategy adopted is based on two successive electrochemical reductions of diazonium salts. First, bithiophene phenyl (BTB) diazonium salt is reduced using host/guest complexation in a water/cyclodextrin (β-CD) solution. The resulting layer consists of grafted BTB oligomers and cyclodextrin, which can be removed from the surface. The electrochemical response of several outer-sphere redox probes on such BTB/CD electrodes is close to that of a diode, thanks to the easily p-dopable oligo(BTB) moieties. When CD is removed from the surface, pinholes are created and this diode-like behavior is lost. Following this, nitrophenyl (NP) diazonium is reduced to graft a second component. Electrochemical study shows that upon grafting of the insulating NP moieties, the diode-like behavior of the layer is restored, which demonstrates that NP is grafted predominantly in the empty spaces generated by β-CD desorption. As a result, a mixed BTB/NP organic layer covalently attached to a carbon electrode is obtained by stepwise electrochemical reduction of two diazonium compounds.

17. Noble gases from solar energetic particles revealed by closed system stepwise etching of lunar soil minerals

Wieler, R.; Baur, H.; Signer, P.

1986-01-01

He, Ne, and Ar abundances and isotopic ratios in plagioclase and pyroxene separates from lunar soils were determined using a closed system stepwise etching technique. This method of noble gas release allows one to separate solar wind (SW) noble gases from those implanted as solar energetic particles (SEP). SEP-Ne with 20Ne/22Ne = 11.3 ± 0.3 is present in all samples studied. The abundances of SEP-Ne are 2-4 orders of magnitude too high to be explained exclusively as implanted solar flare gas. The major part of SEP-Ne possibly originates from solar 'suprathermal ions' with energies < 0.1 MeV/amu. The isotopic composition of Ne in these lower energy SEP is, however, probably identical to that of real flare Ne. The suggestion that SEP-Ne might have the same isotopic composition as planetary Ne, and thus possibly represent an unfractionated sample of solar Ne, is not tenable. SW-Ne retained in plagioclase and pyroxene is less fractionated than had been deduced from total fusion analyses. Ne-B is a mixture of SW-Ne and SEP-Ne rather than fractionated SW-Ne. In contrast to SEP-Ne, SEP-Ar probably has a composition very similar to that of SW-Ar. (author)

18. Automation of peak-tracking analysis of stepwise perturbed NMR spectra

Banelli, Tommaso; Vuano, Marco [Università di Udine, Dipartimento di Area Medica (Italy); Fogolari, Federico [INBB (Italy); Fusiello, Andrea [Università di Udine, Dipartimento Politecnico di Ingegneria e Architettura (Italy); Esposito, Gennaro [INBB (Italy); Corazza, Alessandra, E-mail: alessandra.corazza@uniud.it [Università di Udine, Dipartimento di Area Medica (Italy)

2017-02-15

We describe a new algorithmic approach able to automatically pick and track the NMR resonances of a large number of 2D NMR spectra acquired during a stepwise variation of a physical parameter. The method has been named Trace in Track (TinT), referring to the idea that a Gaussian decomposition traces peaks within the tracks recognised through 3D mathematical morphology. It is capable of determining the evolution of the chemical shifts, intensity and linewidths of each tracked peak. The performance obtained in terms of track reconstruction and correct assignment on realistic synthetic spectra was well above 90% when a noise level similar to that of experimental data was considered. TinT was applied successfully to several protein systems during a temperature ramp in isotope exchange experiments. A comparison with a state-of-the-art algorithm showed promising results for large numbers of spectra and low signal-to-noise ratios, when the graduality of the perturbation is appropriate. TinT can be applied to different kinds of high-throughput chemical shift mapping experiments with quasi-continuous variations, in which quantitative automated recognition is crucial.
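The Gaussian-decomposition step at the heart of TinT can be illustrated by fitting a Gaussian line shape to a synthetic 1D slice through a peak; the chemical shift, width, and noise level below are invented, and TinT itself works on 2D spectra with morphological track recognition around this core fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    """Gaussian line shape: amplitude, center (chemical shift), width."""
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Synthetic 1D slice through a peak: true center 7.2 ppm, width 0.05 ppm.
rng = np.random.default_rng(0)
x = np.linspace(6.8, 7.6, 200)
y = gaussian(x, 1.0, 7.2, 0.05) + rng.normal(0, 0.02, x.size)

# Nonlinear least-squares fit recovers chemical shift, intensity, linewidth,
# the three quantities TinT tracks across the stepwise-perturbed spectra.
popt, _ = curve_fit(gaussian, x, y, p0=(0.8, 7.1, 0.1))
amp, mu, sigma = popt
```

Repeating such fits along the recognized 3D tracks is what yields the per-peak evolution of shift, intensity, and linewidth.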

19. Multifaceted Modularity: A Key for Stepwise Building of Hierarchical Complexity in Actinide Metal–Organic Frameworks

Dolgopolova, Ekaterina A.; Ejegbavwo, Otega A.; Martin, Corey R.; Smith, Mark D.; Setyawan, Wahyu [Pacific Northwest National Laboratory, Richland, Washington 99352, United States]; Karakalos, Stavros G.; Henager, Charles H. [Pacific Northwest National Laboratory, Richland, Washington 99352, United States]; zur Loye, Hans-Conrad; Shustova, Natalia B.

2017-11-07

The growing necessity for efficient nuclear waste management is a driving force for the development of alternative architectures and for a fundamental understanding of the mechanisms involved in actinide integration inside extended structures. In this manuscript, metal-organic frameworks (MOFs) were investigated as a model system for engineering radionuclide-containing materials by exploiting the unprecedented modularity of MOFs, which cannot be replicated in any other type of material. Through the implementation of recent synthetic advances in the MOF field, the hierarchical complexity of An-materials was built stepwise, which was only feasible owing to the preparation of the first examples of actinide-based frameworks with "unsaturated" metal nodes. The first successful attempts at solid-state metathesis and metal node extension in An-MOFs are reported, and the results of the former approach revealed drastic differences in the chemical behavior of extended structures versus molecular species. Successful utilization of MOF modularity also allowed us to structurally characterize the first example of bimetallic An-An nodes. To the best of our knowledge, through the combination of solid-state metathesis, guest incorporation, and capping linker installation, we achieved the highest Th wt% in mono- and bi-actinide frameworks with minimal structural density. Overall, the combination of a multistep synthetic approach with homogeneous actinide distribution and moderate solvothermal conditions could make MOFs an exceptionally powerful tool for addressing the fundamental questions governing the chemical behavior of An-based extended structures and, therefore, shed light on possible optimization of nuclear waste management.

20. Stepwise decision making and options for retrieval in the Swedish KBS-3 concept

Papp, T.

2000-01-01

The long timescales of radioactive waste management have led Sweden to adopt an ethical principle requiring present generations to manage radioactive waste in a way that does not burden future generations, while avoiding unnecessarily hindering future generations from retrieving the waste or taking actions to change the disposal system. The present Swedish repository design has been developed to achieve high, demonstrable safety and a robust repository. A consequence of this is high barrier integrity, which gives the system high retrievability. The development of a repository system is a stepwise process, as much of the information on repository performance only becomes available as the steps are carried through. In such work, retrievability can be extended to other steps in handling or conditioning, i.e., it indicates a general ability to reverse any step taken. At each decision step, the confidence in achieving a safe repository has to be balanced against the commitments the step involves. An important factor in this balance is the feasibility of stepping back in the process. Thus, reversibility and retrievability are valuable system characteristics in decision-making under uncertainty. It is generally agreed that the retrievability option should not be used as an excuse for developing a repository with lower safety levels than would otherwise have been required; such an action is in practice equivalent to an intentional transfer of burdens to future generations. (author)

1. A stepwise validation of a wearable system for estimating energy expenditure in field-based research

Rumo, Martin; Mäder, Urs; Amft, Oliver; Tröster, Gerhard

2011-01-01

Regular physical activity (PA) is an important contributor to a healthy lifestyle. Currently, standard sensor-based methods to assess PA in field-based research rely on a single accelerometer mounted near the body's center of mass. This paper introduces a wearable system that estimates energy expenditure (EE) based on seven recognized activity types. The system was developed with data from 32 healthy subjects and consists of a chest-mounted heart rate belt and two accelerometers attached to a thigh and the dominant upper arm. The system was validated with 12 other subjects under restricted lab conditions and simulated free-living conditions against indirect calorimetry, as well as in the subjects' habitual environments for 2 weeks against the doubly labeled water method. Our stepwise validation methodology gradually trades reference information from the lab against realistic data from the field. The average accuracy for EE estimation was 88% for restricted lab conditions, 55% for simulated free-living conditions, and 87% and 91% for the estimation of average daily EE over periods of 1 and 2 weeks, respectively

2. The Hexadehydro-Diels-Alder Cycloisomerization Reaction Proceeds by a Stepwise Mechanism.

Wang, Tao; Niu, Dawen; Hoye, Thomas R

2016-06-29

We report here experiments showing that the hexadehydro-Diels-Alder (HDDA) cycloisomerization reaction proceeds in a stepwise manner, i.e., via a diradical intermediate. Judicious use of substituent effects was decisive. We prepared (i) a series of triyne HDDA substrates that differed only in the R group present on the remote terminus of the diynophilic alkyne and (ii) an analogous series of dienophilic alkynes (n-C7H15COC≡CR) for use in classical Diels-Alder (DA) reactions (with 1,3-cyclopentadiene). The R groups were CF3, CHO, COMe/Et, CO2Me, CONMe2/Et2, H, and 1-propynyl. The relative rates of both the HDDA cyclization reactions and the simple DA cycloadditions were measured. The reactivity trends revealed a dramatic difference in the behaviors of the CF3 (slowest HDDA and nearly fastest DA) and 1-propynyl (fastest HDDA and slowest DA) members of each series. These differences can be explained by invoking radical-stabilizing energies rather than electron-withdrawing effects as the dominant feature of the HDDA reaction.

3. Stepwise transformation of the molecular building blocks in a porphyrin-encapsulating metal-organic material

Zhang, ZhenJie

2013-04-24

When immersed in solutions containing Cu(II) cations, the microporous metal-organic material P11 ([Cd4(BPT)4]·[Cd(C44H36N8)(S)]·[S], BPT = biphenyl-3,4′,5-tricarboxylate) undergoes a transformation of its [Cd2(COO)6]2- molecular building blocks (MBBs) into novel tetranuclear [Cu4X2(COO)6(S)2] MBBs to form P11-Cu. The transformation occurs in single-crystal to single-crystal fashion, and its stepwise mechanism was studied by varying the Cd2+/Cu2+ ratio of the solution in which crystals of P11 were immersed. P11-16/1 (Cd in framework retained, Cd in encapsulated porphyrins exchanged) and other intermediate phases were thereby isolated and structurally characterized. P11-16/1 and P11-Cu retain the microporosity of P11, and the relatively larger MBBs in P11-Cu permit a 20% unit cell expansion and afford a higher surface area and a larger pore size. © 2013 American Chemical Society.

4. Rapid stepwise onset of Antarctic glaciation and deeper calcite compensation in the Pacific Ocean.

Coxall, Helen K; Wilson, Paul A; Pälike, Heiko; Lear, Caroline H; Backman, Jan

2005-01-06

The ocean depth at which the rate of calcium carbonate input from surface waters equals the rate of dissolution is termed the calcite compensation depth. At present, this depth is approximately 4,500 m, with some variation between and within ocean basins. The calcite compensation depth is linked to ocean acidity, which is in turn linked to atmospheric carbon dioxide concentrations and hence global climate. Geological records of changes in the calcite compensation depth show a prominent deepening of more than 1 km near the Eocene/Oligocene boundary (approximately 34 million years ago) when significant permanent ice sheets first appeared on Antarctica, but the relationship between these two events is poorly understood. Here we present ocean sediment records of calcium carbonate content as well as carbon and oxygen isotopic compositions from the tropical Pacific Ocean that cover the Eocene/Oligocene boundary. We find that the deepening of the calcite compensation depth was more rapid than previously documented and occurred in two jumps of about 40,000 years each, synchronous with the stepwise onset of Antarctic ice-sheet growth. The glaciation was initiated, after climatic preconditioning, by an interval when the Earth's orbit of the Sun favoured cool summers. The changes in oxygen-isotope composition across the Eocene/Oligocene boundary are too large to be explained by Antarctic ice-sheet growth alone and must therefore also indicate contemporaneous global cooling and/or Northern Hemisphere glaciation.

5. The step-wise pathway of septin hetero-octamer assembly in budding yeast.

Weems, Andrew; McMurray, Michael

2017-05-25

Septin proteins bind guanine nucleotides and form rod-shaped hetero-oligomers. Cells choose from a variety of available septins to assemble distinct hetero-oligomers, but the underlying mechanism was unknown. Using a new in vivo assay, we find that a stepwise assembly pathway produces the two species of budding yeast septin hetero-octamers: Cdc11/Shs1-Cdc12-Cdc3-Cdc10-Cdc10-Cdc3-Cdc12-Cdc11/Shs1. Rapid GTP hydrolysis by monomeric Cdc10 drives assembly of the core Cdc10 homodimer. The extended Cdc3 N terminus autoinhibits Cdc3 association with Cdc10 homodimers until prior Cdc3-Cdc12 interaction. Slow hydrolysis by monomeric Cdc12 and specific affinity of Cdc11 for transient Cdc12•GTP drive assembly of distinct trimers, Cdc11-Cdc12-Cdc3 or Shs1-Cdc12-Cdc3. Decreasing the cytosolic GTP:GDP ratio increases the incorporation of Shs1 vs Cdc11, which alters the curvature of filamentous septin rings. Our findings explain how GTP hydrolysis controls septin assembly, and uncover mechanisms by which cells construct defined septin complexes.

6. Stepwise multiphoton activation fluorescence reveals a new method of melanin detection

Lai, Zhenhua; Kerimo, Josef; Mega, Yair; DiMarzio, Charles A.

2013-06-01

The stepwise multiphoton activated fluorescence (SMPAF) of melanin, activated by a continuous-wave near-infrared (NIR) laser, reveals a broad spectrum extending from the visible to the NIR and has potential application as a low-cost, reliable method of detecting melanin. SMPAF images of melanin in mouse hair and skin are compared with conventional multiphoton fluorescence microscopy and confocal reflectance microscopy (CRM). Combining CRM with SMPAF locates melanin reliably while eliminating background interference from other components inside mouse hair and skin. The melanin SMPAF signal from the mouse hair is a mixture of a two-photon process and a third-order process. The melanin SMPAF emission spectrum, activated by 1505.9-nm laser light, has a peak at 960 nm. The discovery of the emission peak may lead to a more energy-efficient method of background-free melanin detection with less photo-bleaching.

7. Stepwise Approach to Problematic Hypoglycemia in Korea: Educational, Technological, and Transplant Interventions

Sang-Man Jin

2017-06-01

Impaired awareness of hypoglycemia has been found to be prevalent in 20% to 40% of people with type 1 diabetes. If a similar prevalence exists in Koreans with type 1 diabetes, at a minimum, thousands of people with type 1 diabetes suffer at least one unpredicted episode of severe hypoglycemia per year in Korea. For patients with problematic hypoglycemia, an evidence-based stepwise approach was suggested in 2015. The first step is structured education regarding multiple daily injections of an insulin analog, and the second step is adding a technological intervention, such as continuous subcutaneous insulin infusion or real-time continuous glucose monitoring. The next step is a sensor-augmented pump, preferably with a low glucose suspension feature or very frequent contact, and the final step is islet or pancreas transplantation. In Korea, however, none of these treatments are reimbursed by the National Health Insurance, and thus have not been widely implemented. The low prevalence of type 1 diabetes means that Korean physicians are relatively unfamiliar with the new technologies in this field. Therefore, the roles of new technologies and pancreas or islet transplantation in the treatment of problematic hypoglycemia need to be defined in the current clinical setting of Korea.

8. Modelling of volunteer satisfaction and intention to remain in community service: A stepwise approach

Hasan, Hazlin; Wahid, Sharifah Norhuda Syed; Jais, Mohammad; Ridzuan, Arifi

2017-05-01

The purpose of this study is to obtain the most significant model of volunteer satisfaction and intention to remain in community service by using a stepwise approach. Currently, Malaysians, young and old, are showing more interest in involving themselves in community service projects, either locally or internationally. This positive movement of serving the needy is somehow being halted by the lack of human and financial resources. Therefore, organizers of such projects depend heavily on voluntary support, which enables project managers to expand the quantity and diversity of services offered without exhausting the minimal budget available. Volunteers are a valuable commodity, as the available pool of volunteers may be declining due to various reasons, which include volunteer satisfaction. A selected sample of 215 diploma students from one of the public universities in Malaysia, all of whom had been involved in at least one community service project, agreed that everybody should have a volunteering intention in helping others. The findings revealed that the most significant model obtained contains two factors that contributed towards intention to remain in community service: work assignment and organizational support, with work assignment being the more significant factor. Further research on the differences in intention to remain in community service between students' streams and gender will be conducted to contribute to the body of knowledge.

9. Bridge mediated two-electron transfer reactions: Analysis of stepwise and concerted pathways

Petrov, E.G.; May, V.

2004-01-01

A theory of nonadiabatic donor (D)-acceptor (A) two-electron transfer (TET) mediated by a single regular bridge (B) is developed. The presence of different intermediate two-electron states connecting the reactant state D⁻⁻BA with the product state DBA⁻⁻ results in complex multiexponential kinetics. The conditions are discussed at which a reduction to two-exponential as well as single-exponential kinetics becomes possible. For the latter case the rate K_TET is calculated, which describes the bridge-mediated reaction as an effective two-electron D-A transfer. In the limit of small populations of the intermediate TET states D⁻B⁻A, DB⁻⁻A, D⁻BA⁻, and DB⁻A⁻, K_TET is obtained as a sum of the rates K_TET^(step) and K_TET^(sup). The first rate describes stepwise TET originated by transitions of a single electron. It starts at D⁻⁻BA and reaches DBA⁻⁻ via the intermediate state D⁻BA⁻. These transitions cover contributions from sequential as well as superexchange reactions, all including reduced bridge states. In contrast, a specific two-electron superexchange mechanism from D⁻⁻BA to DBA⁻⁻ defines K_TET^(sup). An analytic dependence of K_TET^(step) and K_TET^(sup) on the number of bridging units is presented and different regimes of D-A TET are studied

10. Credit Scoring Problem Based on Regression Analysis

2014-01-01

This thesis provides an explanatory introduction to the regression models of data mining and contains basic definitions of key terms in the linear, multiple, and logistic regression models. The aim of this study is to illustrate fitting models for the credit scoring problem using simple linear, multiple linear, and logistic regression models, and to analyze the fitted model functions with statistical tools. Keywords: data mining, linear regression, logistic regression.

11. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

2016-01-01

Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines vis-à-vis linear piecewise splines, with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts and slopes across subjects. The residual autocorrelation was modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19
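
The fixed-effects part of such a model can be illustrated with a minimal sketch. This uses the standard truncated-power basis for a cubic regression spline fitted by ordinary least squares on invented growth-like data; the basis choice, knot positions, and data are illustrative assumptions, not the authors' exact specification (which adds random effects and autocorrelated errors on top):

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power basis for a cubic regression spline:
    [1, x, x^2, x^3, (x - k1)_+^3, ..., (x - kK)_+^3]."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

# Hypothetical growth-like data: fast early rise that levels off.
rng = np.random.default_rng(2)
age = np.sort(rng.uniform(0, 4, 150))          # ages 0-4 years
height = 50 + 30 * np.log1p(age) + rng.normal(scale=1.0, size=150)

B = cubic_spline_basis(age, knots=[1.0, 2.0, 3.0])
beta, *_ = np.linalg.lstsq(B, height, rcond=None)
fitted = B @ beta
print(np.mean((height - fitted) ** 2))  # residual variance near the noise level
```

Because the truncated cubic terms join smoothly at each knot, the fitted curve has continuous first and second derivatives, which is what makes splines suitable for estimating growth velocity and acceleration.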

12. Evaluating the Effectiveness of Traditional and Alternative Principal Preparation Programs

Pannell, Summer; Peltier-Glaze, Bernnell M.; Haynes, Ingrid; Davis, Delilah; Skelton, Carrie

2015-01-01

This study sought to determine the effectiveness, in terms of increasing student achievement, of principals trained in a traditional principal preparation program and of those trained in an alternate-route principal preparation program within the same Mississippi university. Sixty-six Mississippi principals and assistant principals participated in the study. Of…

13. Riccati transformations and principal solutions of discrete linear systems

Ahlbrandt, C.D.; Hooker, J.W.

1984-01-01

Consider a second-order linear matrix difference equation. Definitions of principal and anti-principal (or recessive and dominant) solutions of the equation are given, and the existence of principal and anti-principal solutions and the essential uniqueness of principal solutions are proven

14. Principals, Trust, and Cultivating Vibrant Schools

Megan Tschannen-Moran

2015-03-01

Although principals are ultimately held accountable for student learning in their buildings, the most consistent research results have suggested that their impact on student achievement is largely indirect. Leithwood, Patten, and Jantzi proposed four paths through which this indirect influence would flow, and the purpose of this special issue is to examine these mediating variables in greater depth. Among mediating variables, we assert that trust is key. In this paper, we explore the evidence that points to the role that faculty trust in the principal plays in student learning and how principals can cultivate trust by attending to the five facets of trust, as well as the correlates of trust that mediate student learning, including academic press, collective teacher efficacy, and teacher professionalism. We argue that trust plays a role in each of the four paths identified by Leithwood, Patten, and Jantzi. Finally, we explore possible new directions for future research.

15. Regularized Label Relaxation Linear Regression.

Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

2018-04-01

Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.
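
As a point of reference for the strict binary label matrix that the method above relaxes, here is a minimal least-squares classifier trained against a rigid one-hot target. This is the baseline setup only, not the paper's relaxation or its class compactness graph; the toy data, ridge term, and class means are invented for illustration:

```python
import numpy as np

def lsq_classifier(X, labels, n_classes, ridge=1e-3):
    """Fit a linear classifier by least squares against a strict one-hot
    (binary) label matrix Y, the rigid target that label-relaxation
    methods loosen into a slack variable matrix."""
    n = len(X)
    Y = np.zeros((n, n_classes))
    Y[np.arange(n), labels] = 1.0
    A = np.column_stack([np.ones(n), X])       # add intercept column
    W = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ Y)
    return W

def predict(W, X):
    A = np.column_stack([np.ones(len(X)), X])
    return np.argmax(A @ W, axis=1)            # largest regressed label score

# Two well-separated toy classes in the plane.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([-2, 0], 0.5, size=(50, 2)),
               rng.normal([2, 0], 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = lsq_classifier(X, y, n_classes=2)
acc = np.mean(predict(W, X) == y)
print(acc)
```

The weakness the paper targets is visible in the construction of Y: every sample of a class must regress to exactly the same 0/1 row, which leaves no freedom to enlarge margins between classes.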

16. Identification of cotton properties to improve yarn count quality by using regression analysis

Amin, M.; Ullah, M.; Akbar, A.

2014-01-01

Identification of raw material characteristics contributing to yarn count variation was studied by using statistical techniques. Regression analysis is used to meet the objective. Stepwise regression is used for model selection, and the coefficient of determination and mean squared error (MSE) criteria are used to identify the contributing factors of cotton properties for yarn count. Statistical assumptions of normality, autocorrelation and multicollinearity are evaluated by using a probability plot, the Durbin-Watson test and the variance inflation factor (VIF), and then model fitting is carried out. It is found that invisible (INV), nepness (Nep), grayness (RD), cotton trash (TR) and uniformity index (UI) are the main contributing cotton properties for yarn count variation. The results are also verified by a Pareto chart. (author)
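
The forward variant of the stepwise selection used in studies like this one can be sketched in a few lines. The toy data are invented, and AIC is used here as the entry/stopping criterion (the study itself relies on R² and MSE); with a pure in-sample MSE criterion a penalty or F-test is needed, since in-sample MSE never increases when a variable is added:

```python
import numpy as np

def forward_stepwise(X, y):
    """Greedy forward selection: at each step, add the predictor that most
    improves the AIC of an OLS fit (intercept always included); stop when
    no remaining candidate improves it."""
    n, p = X.shape

    def aic(cols):
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        mse = np.mean((y - A @ beta) ** 2)
        return n * np.log(mse) + 2 * (len(cols) + 1)   # penalize model size

    remaining, chosen, best = list(range(p)), [], aic([])
    while remaining:
        score, j = min((aic(chosen + [j]), j) for j in remaining)
        if score >= best:      # no candidate improves the criterion
            break
        best = score
        chosen.append(j)
        remaining.remove(j)
    return chosen

# Toy data: only columns 0 and 2 actually drive the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.1, size=200)
print(forward_stepwise(X, y))  # selects the two informative columns
```

A full stepwise procedure would also re-test already-entered variables for removal at each step; the forward-only version above shows the core selection loop.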

17. Geometry of Quantum Principal Bundles. Pt. 1

Durdevic, M.

1996-01-01

A theory of principal bundles possessing quantum structure groups and classical base manifolds is presented. Structural analysis of such quantum principal bundles is performed. A differential calculus is constructed, combining differential forms on the base manifold with an appropriate differential calculus on the structure quantum group. Relations between the calculus on the group and the calculus on the bundle are investigated. A concept of (pseudo)tensoriality is formulated. The formalism of connections is developed. In particular, operators of horizontal projection, covariant derivative and curvature are constructed and analyzed. Generalizations of the first Structure Equation and of the Bianchi identity are found. Illustrative examples are presented. (orig.)

18. Cloning and Characterizing Genes Involved in Monoterpene Induced Mammary Tumor Regression.

1996-10-01

Grant DAMD17-94-J-4041. Title: Cloning and Characterizing Genes Involved in Monoterpene Induced Mammary Tumor Regression. Annual report, October 1996 (1 Sep 95 - 31 Aug 96). Monoterpene-induced/repressed genes were identified in regressing rat mammary carcinomas treated with dietary limonene using a newly developed method

19. Stellar atmospheric parameter estimation using Gaussian process regression

Bu, Yude; Pan, Jingchang

2015-02-01

As is well known, it is necessary to derive stellar parameters from massive amounts of spectral data automatically and efficiently. However, in traditional automatic methods such as artificial neural networks (ANNs) and kernel regression (KR), it is often difficult to optimize the algorithm structure and determine the optimal algorithm parameters. Gaussian process regression (GPR) is a recently developed method that has been proven to be capable of overcoming these difficulties. Here we apply GPR to derive stellar atmospheric parameters from spectra. Through evaluating the performance of GPR on Sloan Digital Sky Survey (SDSS) spectra, Medium resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) spectra, ELODIE spectra and the spectra of member stars of galactic globular clusters, we conclude that GPR can derive stellar parameters accurately and precisely, especially when we use data preprocessed with principal component analysis (PCA). We then compare the performance of GPR with that of several widely used regression methods (ANNs, support-vector regression and KR) and find that with GPR it is easier to optimize structures and parameters and more efficient and accurate to extract atmospheric parameters.
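
The core GPR computation the abstract refers to, prediction via the kernel matrix, can be sketched compactly. This is a generic zero-mean GP with an RBF kernel on invented data, not the authors' pipeline; fixed hyperparameters are assumed, whereas a real implementation would tune them by marginal likelihood and also return predictive variances:

```python
import numpy as np

def gpr_predict(X_train, y_train, X_test, length=1.0, noise=1e-2):
    """Posterior mean of a zero-mean Gaussian process with an RBF kernel."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)        # alpha = K^{-1} y
    return rbf(X_test, X_train) @ alpha        # k(X*, X) K^{-1} y

# Noisy samples of a smooth 1-D function stand in for spectral features.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=40)
X_new = np.array([[0.0], [1.5]])
pred = gpr_predict(X, y, X_new)
print(pred)  # approximately sin(0) and sin(1.5)
```

The PCA preprocessing mentioned in the abstract would simply replace the raw spectral vectors in `X` with their leading principal-component scores before this fit.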

20. The Deputy Principal Instructional Leadership Role and Professional Learning: Perceptions of Secondary Principals, Deputies and Teachers

Leaf, Ann; Odhiambo, George

2017-01-01

Purpose: The purpose of this paper is to report on a study examining the perceptions of secondary principals, deputies and teachers, of deputy principal (DP) instructional leadership (IL), as well as deputies' professional learning (PL) needs. Framed within an interpretivist approach, the specific objectives of this study were: to explore the…

1. Statewide Data on Supply and Demand of Principals after Policy Changes to Principal Preparation in Illinois

Haller, Alicia; Hunt, Erika

2016-01-01

Research has demonstrated that principals have a powerful impact on school improvement and student learning. Principals play a vital role in recruiting, developing, and retaining effective teachers; creating a school-wide culture of learning; and implementing a continuous improvement plan aimed at increasing student achievement. Leithwood, Louis,…

2. Principal Self-Efficacy, Teacher Perceptions of Principal Performance, and Teacher Job Satisfaction

Evans, Molly Lynn

2016-01-01

In public schools, the principal's role is of paramount importance in influencing teachers to excel and to keep their job satisfaction high. The self-efficacy of leaders is an important characteristic of leadership, but this issue has not been extensively explored in school principals. Using internet-based questionnaires, this study obtained…

3. Andragogical Practices of School Principals in Developing the Leadership Capacities of Assistant Principals

McDaniel, Luther

2017-01-01

The purpose of this mixed methods study was to assess school principals' perspectives of the extent to which they apply the principles of andragogy to the professional development of assistant principals in their schools. This study was conducted in school districts that constitute a RESA area in a southeastern state. The schools in these…

4. Sparse Regression by Projection and Sparse Discriminant Analysis

Qi, Xin

2015-04-03

© 2015, © American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America. Recent years have seen active developments of various penalized regression methods, such as LASSO and elastic net, to analyze high-dimensional data. In these approaches, the direction and length of the regression coefficients are determined simultaneously. Due to the introduction of penalties, the length of the estimates can be far from being optimal for accurate predictions. We introduce a new framework, regression by projection, and its sparse version to analyze high-dimensional data. The unique nature of this framework is that the directions of the regression coefficients are inferred first, and the lengths and the tuning parameters are determined by a cross-validation procedure to achieve the largest prediction accuracy. We provide a theoretical result for simultaneous model selection consistency and parameter estimation consistency of our method in high dimension. This new framework is then generalized such that it can be applied to principal components analysis, partial least squares, and canonical correlation analysis. We also adapt this framework for discriminant analysis. Compared with the existing methods, where there is relatively little control of the dependency among the sparse components, our method can control the relationships among the components. We present efficient algorithms and related theory for solving the sparse regression by projection problem. Based on extensive simulations and real data analysis, we demonstrate that our method achieves good predictive performance and variable selection in the regression setting, and the ability to control relationships between the sparse components leads to more accurate classification. In supplementary materials available online, the details of the algorithms and theoretical proofs, and R codes for all simulation studies are provided.

5. Prevalence and characteristics of asthma–COPD overlap syndrome identified by a stepwise approach

Inoue H

2017-06-01

Hiromasa Inoue,1 Takahide Nagase,2 Satoshi Morita,3 Atsushi Yoshida,4 Tatsunori Jinnai,4 Masakazu Ichinose5 1Department of Pulmonary Medicine, Graduate School of Medical and Dental Sciences, Kagoshima University, Kagoshima, 2Department of Respiratory Medicine, Graduate School of Medicine, The University of Tokyo, Tokyo, 3Department of Biomedical Statistics and Bioinformatics, Kyoto University Graduate School of Medicine, Kyoto, 4Medical Department, AstraZeneca K.K., Osaka, 5Department of Respiratory Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan. Background and objective: There is increasing recognition of asthma–COPD overlap syndrome (ACOS), which shares some features of both asthma and COPD; however, the prevalence and characteristics of ACOS are not well understood. The aim of this study was to investigate the prevalence of ACOS among patients with COPD, and its characteristics, using a stepwise approach as stated in the recent report of the Global Initiative for Asthma (GINA) and the Global Initiative for Chronic Obstructive Lung Disease (GOLD). Methods: This multicenter, cross-sectional, observational study enrolled outpatients who were receiving medical treatment for COPD. Clinical data, including spirometry results, were retrieved from medical records. For symptom assessment, patients were asked to complete the Clinical COPD questionnaire and the modified British Medical Research Council questionnaire. Results: Of the 1,008 patients analyzed, 167 (16.6%) had syndromic features of ACOS. Of the total number of patients, 93 (9.2%) and 42 (4.2%) also had a predefined clinical variability of ≥12%/≥200 mL and ≥12%/≥400 mL in forced expiratory volume in 1 second (FEV1), respectively, and therefore were identified as having ACOS. Conversely, the number of patients who had either a syndromic or a spirometric feature of ACOS was 595 (59.0%; ≥12%/≥200 mL FEV1 clinical variability) and 328 patients (32.5%; ≥12%/≥400 m

6. Stepwise Rock-Eval pyrolysis as a tool for typing heterogeneous organic matter in soils

Hetenyi, M.; Nyilas, T.; Toth, T.M. [Department of Mineralogy, Geochemistry and Petrology, University of Szeged, P.O. Box 651, H-6701 Szeged (Hungary)

2005-08-15

This paper presents an application of Rock-Eval pyrolysis for estimating the proportion of the components with different thermal stability in soil organic matter, the maturity of which corresponds to the early stage of diagenesis. For testing the validity of the modified Rock-Eval method, parallel series of pyrolysis were carried out on sedimentary rock samples. The temperature program was selected on the basis of the results obtained from stepwise Rock-Eval pyrolysis and from the mathematical deconvolution of pyrograms. The proportion of the original biomolecules in soil organic matter was calculated by the integration of pyrograms below 350 °C and could be determined rapidly by one single pyrolysis using 350 °C as initial cracking temperature. At 380 °C, both the mathematical and the experimental methods provide reliable information about the proportion of the humic substances. Conversely, for rock samples, mathematical deconvolution of the pyrograms showed the heterogeneity of the sedimentary organic matter, the maturity of which corresponds to late diagenesis, without any estimation of the proportion of the different components. The rate of organic carbon accumulation in the studied soils and the decomposition rate of biopolymers were interpreted as a function of land-use and redox conditions. Differences in the precursor vegetation and in the environmental parameters resulted in markedly reduced carbon storage and higher degree of humification in the agricultural soil than in the adjacent forest soil. Redox conditions strongly affected both the amount and the elemental composition of the stored organic matter. The decomposition rate of biopolymers appeared to be controlled mainly by the contribution of resistant lignin components to the source biomass and, to a lesser extent, by redox conditions.

7. Cerebral Activations Related to Ballistic, Stepwise Interrupted and Gradually Modulated Movements in Parkinson Patients

Toxopeus, Carolien M.; Maurits, Natasha M.; Valsan, Gopal; Conway, Bernard A.; Leenders, Klaus L.; de Jong, Bauke M.

2012-01-01

Patients with Parkinson’s disease (PD) experience impaired initiation and inhibition of movements such as difficulty to start/stop walking. At single-joint level this is accompanied by reduced inhibition of antagonist muscle activity. While normal basal ganglia (BG) contributions to motor control include selecting appropriate muscles by inhibiting others, it is unclear how PD-related changes in BG function cause impaired movement initiation and inhibition at single-joint level. To further elucidate these changes we studied 4 right-hand movement tasks with fMRI, by dissociating activations related to abrupt movement initiation, inhibition and gradual movement modulation. Initiation and inhibition were inferred from ballistic and stepwise interrupted movement, respectively, while smooth wrist circumduction enabled the assessment of gradually modulated movement. Task-related activations were compared between PD patients (N = 12) and healthy subjects (N = 18). In healthy subjects, movement initiation was characterized by antero-ventral striatum, substantia nigra (SN) and premotor activations while inhibition was dominated by subthalamic nucleus (STN) and pallidal activations, in line with the known role of these areas in simple movement. Gradual movement mainly involved antero-dorsal putamen and pallidum. Compared to healthy subjects, patients showed reduced striatal/SN and increased pallidal activation for initiation, whereas for inhibition STN activation was reduced and striatal-thalamo-cortical activation increased. For gradual movement patients showed reduced pallidal and increased thalamo-cortical activation. We conclude that PD-related changes during movement initiation fit the (rather static) model of alterations in direct and indirect BG pathways. Reduced STN activation and regional cortical increased activation in PD during inhibition and gradual movement modulation are better explained by a dynamic model that also takes into account enhanced

8. Stepwise threshold clustering: a new method for genotyping MHC loci using next-generation sequencing technology.

William E Stutz

Genes of the vertebrate major histocompatibility complex (MHC) are of great interest to biologists because of their important role in immunity and disease, and their extremely high levels of genetic diversity. Next generation sequencing (NGS) technologies are quickly becoming the method of choice for high-throughput genotyping of multi-locus templates like MHC in non-model organisms. Previous approaches to genotyping MHC genes using NGS technologies suffer from two problems: (1) a "gray zone" where low frequency alleles and high frequency artifacts can be difficult to disentangle, and (2) a similar sequence problem, where very similar alleles can be difficult to distinguish as two distinct alleles. Here we present a new method for genotyping MHC loci, Stepwise Threshold Clustering (STC), that addresses these problems by taking full advantage of the increase in sequence data provided by NGS technologies. Unlike previous approaches for genotyping MHC with NGS data that attempt to classify individual sequences as alleles or artifacts, STC uses a quasi-Dirichlet clustering algorithm to cluster similar sequences at increasing levels of sequence similarity. By applying frequency- and similarity-based criteria to clusters rather than individual sequences, STC is able to successfully identify clusters of sequences that correspond to individual or similar alleles present in the genomes of individual samples. Furthermore, STC does not require duplicate runs of all samples, increasing the number of samples that can be genotyped in a given project. We show how the STC method works using a single sample library. We then apply STC to 295 threespine stickleback (Gasterosteus aculeatus) samples from four populations and show that neighboring populations differ significantly in MHC allele pools. We show that STC is a reliable, accurate, efficient, and flexible method for genotyping MHC that will be of use to biologists interested in a variety of downstream applications.
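
The general idea of clustering at increasing similarity thresholds (though not the paper's quasi-Dirichlet algorithm) can be illustrated with a toy pass over equal-length reads; all sequences, thresholds, and the greedy match-to-representative rule here are invented for illustration:

```python
def hamming_sim(a, b):
    """Fraction of matching positions (equal-length sequences)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def stepwise_threshold_cluster(seqs, thresholds=(0.6, 0.7, 0.8)):
    """At each successively stricter similarity threshold, re-split every
    cluster so that each member matches its subcluster's first
    (representative) sequence at least that well."""
    clusters = [list(seqs)]
    for t in thresholds:
        refined = []
        for cluster in clusters:
            subclusters = []
            for s in cluster:
                for grp in subclusters:
                    if hamming_sim(s, grp[0]) >= t:
                        grp.append(s)
                        break
                else:                      # no subcluster close enough
                    subclusters.append([s])
            refined.extend(subclusters)
        clusters = refined
    return clusters

# Invented reads: two "alleles", each with a one-base sequencing variant.
reads = ["ACGTACGT", "ACGTACGA", "TTTTCCCC", "TTTTCCCA"]
groups = stepwise_threshold_cluster(reads)
print(len(groups))  # 2
```

Clusters that survive to the strictest threshold play the role of candidate alleles; in the real method, frequency criteria applied to each cluster then separate alleles from artifacts.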

9. Cationic gemini surfactant-assisted synthesis of hollow Au nanostructures by stepwise reductions.

Wang, Wentao; Han, Yuchun; Tian, Maozhang; Fan, Yaxun; Tang, Yongqiang; Gao, Mingyuan; Wang, Yilin

2013-06-26

A novel synthetic approach was developed for creating versatile hollow Au nanostructures by stepwise reductions of Au(III) upon the use of cationic gemini surfactant hexamethylene-1,6-bis(dodecyl dimethylammonium bromide) (C12C6C12Br2) as a template agent. It was observed that the Au(I) ions obtained from the reduction of Au(III) by ascorbic acid can assist the gemini surfactant to form vesicles, capsule-like, and tube-like aggregates that subsequently act as soft templates for hollow Au nanostructures upon further reduction of Au(I) to Au(0) by NaBH4. It was demonstrated that the combination of C12C6C12Br2 and Au(I) plays a key role in regulating the structure of the hollow precursors not only because C12C6C12Br2 has a stronger aggregation ability in comparison with its single chain counterpart but also because the electrostatic repulsion between head groups of C12C6C12Br2 is greatly weakened after Au(III) is converted to Au(I), which is in favor of the construction of vesicles, capsule-like, and tube-like aggregates. Compared with solid Au nanospheres, the resultant hollow nanostructures exhibit enhanced electrocatalytic activities in methanol oxidation, following the order of elongated nanocapsule > nanocapsule > nanosphere. Benefiting from balanced interactions between the gemini surfactant and Au(I), this soft-template method may present a facile and versatile approach for the controlled synthesis of Au nanostructures potentially useful for fuel cells and other Au nanodevices.

10. Cerebral activations related to ballistic, stepwise interrupted and gradually modulated movements in Parkinson patients.

Carolien M Toxopeus

Full Text Available Patients with Parkinson's disease (PD) experience impaired initiation and inhibition of movements such as difficulty to start/stop walking. At single-joint level this is accompanied by reduced inhibition of antagonist muscle activity. While normal basal ganglia (BG) contributions to motor control include selecting appropriate muscles by inhibiting others, it is unclear how PD-related changes in BG function cause impaired movement initiation and inhibition at single-joint level. To further elucidate these changes we studied 4 right-hand movement tasks with fMRI, by dissociating activations related to abrupt movement initiation, inhibition and gradual movement modulation. Initiation and inhibition were inferred from ballistic and stepwise interrupted movement, respectively, while smooth wrist circumduction enabled the assessment of gradually modulated movement. Task-related activations were compared between PD patients (N = 12) and healthy subjects (N = 18). In healthy subjects, movement initiation was characterized by antero-ventral striatum, substantia nigra (SN) and premotor activations while inhibition was dominated by subthalamic nucleus (STN) and pallidal activations, in line with the known role of these areas in simple movement. Gradual movement mainly involved antero-dorsal putamen and pallidum. Compared to healthy subjects, patients showed reduced striatal/SN and increased pallidal activation for initiation, whereas for inhibition STN activation was reduced and striatal-thalamo-cortical activation increased. For gradual movement patients showed reduced pallidal and increased thalamo-cortical activation. We conclude that PD-related changes during movement initiation fit the (rather static) model of alterations in direct and indirect BG pathways. Reduced STN activation and regional cortical increased activation in PD during inhibition and gradual movement modulation are better explained by a dynamic model that also takes into account

11. Evidence for a stepwise program of extrathymic T cell development within the human tonsil

McClory, Susan; Hughes, Tiffany; Freud, Aharon G.; Briercheck, Edward L.; Martin, Chelsea; Trimboli, Anthony J.; Yu, Jianhua; Zhang, Xiaoli; Leone, Gustavo; Nuovo, Gerard; Caligiuri, Michael A.

2012-01-01

The development of a broad repertoire of T cells, which is essential for effective immune function, occurs in the thymus. Although some data suggest that T cell development can occur extrathymically, many researchers remain skeptical that extrathymic T cell development has an important role in generating the T cell repertoire in healthy individuals. However, it may be important in the setting of poor thymic function or congenital deficit and in the context of autoimmunity, cancer, or regenerative medicine. Here, we report evidence that a stepwise program of T cell development occurs within the human tonsil. We identified 5 tonsillar T cell developmental intermediates: (a) CD34+CD38dimLin– cells, which resemble multipotent progenitors in the bone marrow and thymus; (b) more mature CD34+CD38brightLin– cells; (c) CD34+CD1a+CD11c– cells, which resemble committed T cell lineage precursors in the thymus; (d) CD34–CD1a+CD3–CD11c– cells, which resemble CD4+CD8+ double-positive T cells in the thymus; and (e) CD34–CD1a+CD3+CD11c– cells. The phenotype of each subset closely resembled that of its thymic counterpart. The last 4 populations expressed RAG1 and PTCRA, genes required for TCR rearrangement, and all 5 subsets were capable of ex vivo T cell differentiation. TdT+ cells found within the tonsillar fibrous scaffold expressed CD34 and/or CD1a, indicating that this distinct anatomic region contributes to pre–T cell development, as does the subcapsular region of the thymus. Thus, we provide evidence of a role for the human tonsil in a comprehensive program of extrathymic T cell development. PMID:22378041

12. Inferring transcriptional compensation interactions in yeast via stepwise structure equation modeling

Wang Woei-Fuh

2008-03-01

Full Text Available Abstract Background With the abundant information produced by microarray technology, various approaches have been proposed to infer transcriptional regulatory networks. However, few approaches have studied subtle and indirect interaction such as genetic compensation, the existence of which is widely recognized although its mechanism has yet to be clarified. Furthermore, when inferring gene networks most models include only observed variables, whereas latent factors, such as proteins and mRNA degradation that are not measured by microarrays, do participate in networks in reality. Results Motivated by inferring transcriptional compensation (TC) interactions in yeast, a stepwise structural equation modeling algorithm (SSEM) is developed. In addition to observed variables, SSEM also incorporates hidden variables to capture interactions (or regulations) from latent factors. Simulated gene networks are used to determine with which of six possible model selection criteria (MSC) SSEM works best. SSEM with the Bayesian information criterion (BIC) results in the highest true positive rates, the largest percentage of correctly predicted interactions from all existing interactions, and the highest true negative (non-existing interactions) rates. Next, we apply SSEM using real microarray data to infer TC interactions among (1) small groups of genes that are synthetic sick or lethal (SSL) to SGS1, and (2) a group of SSL pairs of 51 yeast genes involved in DNA synthesis and repair that are of interest. For (1), SSEM with BIC is shown to outperform three Bayesian network algorithms and a multivariate autoregressive model, checked against the results of qRT-PCR experiments. The predictions for (2) are shown to coincide with several known pathways of Sgs1 and its partners that are involved in DNA replication, recombination and repair. In addition, experimentally testable interactions of Rad27 are predicted. Conclusion SSEM is a useful tool for inferring genetic networks, and the
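The BIC used by SSEM trades goodness of fit against model complexity. A minimal numpy sketch of BIC-based selection between two nested linear models (synthetic data chosen for illustration; the paper applies the criterion to structural equation models, not to this toy regression):

```python
import numpy as np

def bic_linear(X, y):
    """BIC of an ordinary least-squares fit: n*ln(RSS/n) + k*ln(n)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)       # x genuinely drives y

X_null = np.ones((n, 1))               # intercept-only model
X_full = np.column_stack([np.ones(n), x])  # intercept + predictor
# the informative model wins despite paying ln(n) for its extra parameter
print(bic_linear(X_full, y) < bic_linear(X_null, y))  # True
```

Lower BIC is better; the penalty term k*ln(n) is what lets a stepwise search stop adding edges once they no longer pay for themselves.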

13. Phylogenetic detection of horizontal gene transfer during the step-wise genesis of Mycobacterium tuberculosis

Turenne Christine

2009-08-01

Full Text Available Abstract Background In the past decade, the availability of complete genome sequence data has greatly facilitated comparative genomic research aimed at addressing genetic variability within species. More recently, analysis across species has become feasible, especially in genera where genome sequencing projects of multiple species have been initiated. To understand the genesis of the pathogen Mycobacterium tuberculosis within a genus where the majority of species are harmless environmental organisms, we have used genome sequence data from 16 mycobacteria to look for evidence of horizontal gene transfer (HGT) associated with the emergence of pathogenesis. First, using multi-locus sequence analysis (MLSA) of 20 housekeeping genes across these species, we derived a phylogeny that serves as the basis for HGT assignments. Next, we performed alignment searches for the 3989 proteins of M. tuberculosis H37Rv against 15 other mycobacterial genomes, generating a matrix of 59835 comparisons, to look for genetic elements that were uniquely found in M. tuberculosis and closely-related pathogenic mycobacteria. To assign when foreign genes were likely acquired, we designed a bioinformatic program called mycoHIT (mycobacterial homologue investigation tool) to analyze these data in conjunction with the MLSA-based phylogeny. Results The bioinformatic screen predicted that 137 genes had been acquired by HGT at different phylogenetic strata; these included genes coding for metabolic functions and modification of mycobacterial lipids. For the majority of these genes, corroborating evidence of HGT was obtained, such as the presence of phage or plasmid, and an aberrant GC%. Conclusion M. tuberculosis emerged through vertical inheritance along with the step-wise addition of genes acquired via HGT events, a process that may more generally describe the evolution of other pathogens.
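One of the corroborating signals mentioned above, aberrant GC%, is easy to sketch: flag genes whose GC content deviates strongly from the genome background. The sequences and the 15-point threshold below are made-up illustrations, not values from the paper.

```python
# Minimal sketch of GC%-based HGT screening (illustrative data only).
def gc_percent(seq):
    """Percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(b) for b in "GC") / len(seq)

genome_gc = 65.6  # approximate GC% of the M. tuberculosis H37Rv genome
genes = {
    "native_like": "GCCGATGCGACCGTTGCCGA",   # ~70% GC, near background
    "at_rich":     "ATTATAAATTTGCATTAATA",   # ~10% GC, a candidate HGT signal
}
flagged = [name for name, s in genes.items()
           if abs(gc_percent(s) - genome_gc) > 15.0]
print(flagged)  # → ['at_rich']
```

In practice such a screen is only supporting evidence; the paper's primary assignment of HGT events rests on the MLSA phylogeny.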

14. What Principals Should Know About Food Allergies.

Munoz-Furlong, Anne

2002-01-01

Describes what principals should know about recent research findings on food allergies (peanuts, tree nuts, milk, eggs, soy, wheat) that can produce severe or life-threatening reactions in children. Asserts that every school should have trained staff and written procedures for reacting quickly to allergic reactions. (PKP)

15. A Principal's Guide to Children's Allergies.

Munoz-Furlong, Anne

1999-01-01

Discusses several common children's allergies, including allergic rhinitis, asthma, atopic dermatitis, food allergies, and anaphylactic shock. Principals should become familiar with various medications and should work with children's parents and physicians to determine how to manage their allergies at school. Allergen avoidance is the best…

16. Assessment of School Principals' Reassignment Process

Sezgin-Nartgün, Senay; Ekinci, Serkan

2016-01-01

This study aimed to identify administrators' views related to the assessment of school principals' reassignment in educational organizations. The study utilized qualitative research design and the study group composed of 8 school administrators selected via simple sampling who were employed in the Bolu central district in 2014-2015 academic year.…

17. An Exploration of Principal Instructional Technology Leadership

Townsend, LaTricia Walker

2013-01-01

Nationwide the demand for schools to incorporate technology into their educational programs is great. In response, North Carolina developed the IMPACT model in 2003 to provide a comprehensive model for technology integration in the state. The model is aligned to national educational technology standards for teachers, students, and principals.…

18. Principals' Leadership Styles and Student Achievement

Harnish, David Alan

2012-01-01

Many schools struggle to meet No Child Left Behind's stringent adequate yearly progress standards, although the benchmark has stimulated national creativity and reform. The purpose of this study was to explore teacher perceptions of principals' leadership styles, curriculum reform, and student achievement to ascertain possible factors to improve…

19. How To Select a Good Assistant Principal.

Holman, Linda J.

1997-01-01

Notes that a well-structured job profile and interview can provide insight into the key qualities of an effective assistant principal. These include organizational skills, basic accounting knowledge, interpersonal skills, dependability, strong work ethic, effective problem-solving skills, leadership skills, written communication skills,…

20. Principals' Transformational Leadership in School Improvement

Yang, Yingxiu

2013-01-01

Purpose: This paper aims to contribute experience and ideas on transformational leadership, not only for principals who want to improve their own leadership, but also for schools at a critical period of improvement, by summarizing the formation process, the problems encountered along the way, and the key factors that affect the course.…

1. Imprecise Beliefs in a Principal Agent Model

Rigotti, L.

1998-01-01

This paper presents a principal-agent model where the agent has multiple, or imprecise, beliefs. We model this situation formally by assuming the agent's preferences are incomplete. One can interpret this multiplicity as an agent's limited knowledge of the surrounding environment. In this setting,

2. Bootstrap confidence intervals for principal response curves

Timmerman, Marieke E.; Ter Braak, Cajo J. F.

2008-01-01

The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

4. Islamic Finance between Principles and Reality

Wolters, W.G.

2009-01-01

'The financial crisis would not have taken place if the world had adopted the principles of Islamic banking and finance.' That was one of the characteristic reactions from Islamic bankers in the final months of 2008. That was when the worldwide financial

5. Dealing with Crises: One Principal's Experience.

Foley, Charles F.

1986-01-01

The principal of Concord High School (New Hampshire) recounts the 1985-86 school year's four crises--the visits of teacher-astronaut Christa McAuliffe and Secretary of Education William Bennett, the shooting of a former student, and the Challenger space shuttle explosion. The greatest challenge was resuming the normal schedule and fielding media…

6. Principal Pressure in the Middle of Accountability

Derrington, Mary Lynne; Larsen, Donald E.

2012-01-01

When a new superintendent is hired, Tom Thompson, middle school principal, is squeezed between complying with the demands of the district and cultivating a positive culture in his school. He wrestles with the stress of facing tough leadership choices that take a toll on his physical and mental health. Tom realizes that a career-ending move might…

7. The Relationship between Principals' Managerial Approaches and ...

Students' discipline is critical to the attainment of positive school outcomes. This paper presents and discusses findings of a study on the relationship between principals' management approaches and the level of student discipline in selected public secondary schools in Kenya. The premise of the study was that the level of ...

8. Primary School Principals' Experiences with Smartphone Apps

Çakir, Rahman; Aktay, Sayim

2016-01-01

Smartphones are not just pieces of hardware; they also incorporate software features such as communication systems. The aim of this qualitative study is to examine primary school principals' experiences with smartphone applications. Criterion sampling has been intentionally…

9. Principal normal indicatrices of closed space curves

Røgen, Peter

1999-01-01

A theorem due to J. Weiner, which is also proven by B. Solomon, implies that a principal normal indicatrix of a closed space curve with nonvanishing curvature has integrated geodesic curvature zero and contains no subarc with integrated geodesic curvature pi. We prove that the inverse problem alw...

10. Summer Principals'/Directors' Orientation Training Module.

Mata, Robert L.; Garcia, Richard L.

Intended to provide current or potential project principals/directors with the basic knowledge, skills, abilities, and sensitivities needed to manage a summer migrant school project in the local educational setting, this module provides instruction in the project management areas of planning, preparation, control, and termination. The module…

11. Probabilistic Principal Component Analysis for Metabolomic Data.

2010-11-23

Abstract Background Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight to the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
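The PPCA model underlying this record has a closed-form maximum-likelihood solution (Tipping and Bishop): keep the top q eigenvectors of the sample covariance, estimate the noise variance from the discarded eigenvalues. A numpy sketch on synthetic stand-in data (the paper's own implementation is the R package MetabolAnalyze):

```python
import numpy as np

def ppca_fit(X, q):
    """ML loadings W (p x q) and isotropic noise variance sigma2 for PPCA."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                    # sample covariance
    evals, evecs = np.linalg.eigh(S)     # eigh returns ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]
    sigma2 = evals[q:].mean()            # average of discarded eigenvalues
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2

rng = np.random.default_rng(1)
# synthetic "metabolomic" data: 2 latent factors, 10 observed variables
Z = rng.normal(size=(500, 2))
A = rng.normal(size=(2, 10))
X = Z @ A + 0.1 * rng.normal(size=(500, 10))   # noise sd 0.1
W, sigma2 = ppca_fit(X, q=2)
print(W.shape, sigma2)   # (10, 2); sigma2 close to the true 0.01
```

Because PPCA is a proper statistical model, the fitted likelihood is what lets BIC choose q, as the abstract describes.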

12. Principals in Partnership with Math Coaches

Grant, Catherine Miles; Davenport, Linda Ruiz

2009-01-01

One of the most promising developments in math education is the fact that many districts are hiring math coaches--also called math resource teachers, math facilitators, math lead teachers, or math specialists--to assist elementary-level teachers with math instruction. What must not be lost, however, is that principals play an essential role in…

13. Experimental and principal component analysis of waste ...

The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...

14. Principal Component Analysis as an Efficient Performance ...

This paper uses the principal component analysis (PCA) to examine the possibility of using few explanatory variables (X's) to explain the variation in Y. It applied PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...
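The idea in this record can be shown in a few lines: when eight correlated explanatory variables are driven by one latent factor (synthetic stand-ins for the students' scores, not the Abia State data), the first principal component captures most of the variance, so few X's suffice.

```python
import numpy as np

rng = np.random.default_rng(42)
ability = rng.normal(size=300)                   # one latent driver
X = np.column_stack([ability + 0.3 * rng.normal(size=300)
                     for _ in range(8)])         # 8 correlated X's
Xc = X - X.mean(axis=0)                          # center before PCA
_, svals, _ = np.linalg.svd(Xc, full_matrices=False)
explained = svals**2 / np.sum(svals**2)          # variance share per component
print(explained[0])  # the first component dominates
```

Dropping the trailing components then gives a low-dimensional, decorrelated summary to regress on.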

15. Principal component analysis of psoriasis lesions images

Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

2003-01-01

A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seem to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

16. The Principal as Professional Development Leader

Lindstrom, Phyllis H.; Speck, Marsha

2004-01-01

Individual teachers have the greatest effect on student performance. Principals, as professional development leaders, are in the best position to provide teachers with the professional development strategies they need to improve skills and raise student achievement. This book guides readers through a step-by-step process to formulate, implement,…

17. Burnout And Lifestyle Of Principals And Entrepreneurs

Jasna Lavrenčič

2014-12-01

Full Text Available Research Question (RQ): What kind of lifestyle do principals and entrepreneurs lead? Does the lifestyle of principals and entrepreneurs influence burnout? Purpose: To find out, based on the results of a questionnaire, what kind of lifestyle both researched groups lead, and whether lifestyle has an influence on the occurrence of burnout. Method: We used the method of data collection by questionnaire. Acquired data were analyzed using SPSS with descriptive and inferential statistics. Results: Results showed that both groups lead a similar lifestyle and that lifestyle influences burnout among principals as well as entrepreneurs. Organization: School principals and entrepreneurs are the heads of individual organizations or companies, the goal of which is success. To be successful in their work, they must adapt their lifestyle, which can be healthy or unhealthy. If their lifestyle is unhealthy, it can lead to burnout. Society: With the results of the questionnaire we would like to answer the question about the lifestyle of both groups and its influence on the occurrence of burnout. Originality: The study of lifestyle and the occurrence of burnout in these two groups is the first study in this area. Limitations/Future Research: In continuation, the research groups could be studied in the field of exercise physiology and through tracking of certain haematological parameters, such as cholesterol, blood sugar and stress hormones (adrenaline, noradrenaline, cortisol). Thus, we could carry out an even more in-depth study of the connection between lifestyle and burnout.

18. Principal Connection / Amazon and the Whole Teacher

Hoerr, Thomas R.

2015-01-01

A recent controversy over Amazon's culture has strong implications for the whole child approach, and it offers powerful lessons for principals. A significant difference between the culture of so many businesses today and the culture at good schools is that in good schools, the welfare of the employees is very important. Student success is the…

19. The Gender of Secondary School Principals.

Bonuso, Carl; Shakeshaft, Charol

1983-01-01

A study was conducted to understand why so few of the secondary school principals in New York State are women. Results suggest two possible causes: either sufficient women candidates do not apply for the positions, or sex discrimination still exists. (KH)

20. Regression analysis of growth responses to water depth in three wetland plant species

Sorrell, Brian K; Tanner, Chris C; Brix, Hans

2012-01-01

depths from 0 – 0.5 m. Morphological and growth responses to depth were followed for 54 days before harvest, and then analysed by repeated measures analysis of covariance, and non-linear and quantile regression analysis (QRA), to compare flooding tolerances. Principal results Growth responses to depth...

1. Stepwise Distributed Open Innovation Contests for Software Development: Acceleration of Genome-Wide Association Analysis.

Hill, Andrew; Loh, Po-Ru; Bharadwaj, Ragu B; Pons, Pascal; Shang, Jingbo; Guinan, Eva; Lakhani, Karim; Kilty, Iain; Jelinsky, Scott A

2017-05-01

The association of differing genotypes with disease-related phenotypic traits offers great potential both to help identify new therapeutic targets and to support stratification of patients who would gain the greatest benefit from specific drug classes. Development of low-cost genotyping and sequencing has made collecting large-scale genotyping data routine in population and therapeutic intervention studies. In addition, a range of new technologies is being used to capture numerous new and complex phenotypic descriptors. As a result, genotype and phenotype datasets have grown exponentially. Genome-wide association studies associate genotypes and phenotypes using methods such as logistic regression. As existing tools for association analysis limit the efficiency by which value can be extracted from increasing volumes of data, there is a pressing need for new software tools that can accelerate association analyses on large genotype-phenotype datasets. Using open innovation (OI) and contest-based crowdsourcing, the logistic regression analysis in a leading, community-standard genetics software package (PLINK 1.07) was substantially accelerated. Through this iterative, contest-based process we achieved an end-to-end speedup of 591-fold for a data set size of 6678 subjects by 645,863 variants, compared to PLINK 1.07's logistic regression. This represents a reduction in run time from 4.8 hours to 29 seconds. Accelerated logistic regression code developed in this project has been incorporated into the PLINK2 project. Using iterative competition-based OI, we have developed a new, faster implementation of logistic regression for genome-wide association studies analysis. We present lessons learned and recommendations on running a successful OI process for bioinformatics. © The Author 2017. Published by Oxford University Press.
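The computation being accelerated is a per-variant logistic regression of phenotype on genotype. A toy numpy scan over synthetic 0/1/2 genotypes illustrates the statistical model only; it is not PLINK's implementation, and the Newton solver here is an illustrative assumption.

```python
import numpy as np

def logit_slope(x, y, iters=25):
    """Univariate logistic regression (intercept + slope) via Newton's method."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1 - p))[:, None])    # observed information
        b += np.linalg.solve(H, X.T @ (y - p))    # Newton update
    return b[1]                                    # log-odds per allele copy

rng = np.random.default_rng(7)
n, m = 400, 5                                      # subjects, variants
G = rng.integers(0, 3, size=(n, m)).astype(float)  # genotypes coded 0/1/2
eta = 1.2 * (G[:, 0] - 1.0)                        # only variant 0 is causal
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

betas = [logit_slope(G[:, j], y) for j in range(m)]
print(int(np.argmax(np.abs(betas))))               # variant 0 stands out
```

A genome-wide run repeats this fit hundreds of thousands of times, which is why the 591-fold speedup of the inner loop matters.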

2. Quantifying the Sub-Cellular Distributions of Gold Nanospheres Uptaken by Cells through Stepwise, Site-Selective Etching.

Xia, Younan; Huo, Da

2018-04-10

A quantitative understanding of the sub-cellular distributions of nanoparticles uptaken by cells is important to the development of nanomedicine. With Au nanospheres as a model system, here we demonstrate, for the first time, how to quantify the numbers of nanoparticles bound to plasma membrane, accumulated in cytosol, and entrapped in lysosomes, respectively, through stepwise, site-selective etching. Our results indicate that the chance for nanoparticles to escape from lysosomes is insensitive to the presence of targeting ligand although ligand-receptor binding has been documented as a critical factor in triggering internalization. Furthermore, the presence of serum proteins is shown to facilitate the binding of nanoparticles to plasma membrane lacking the specific receptor. Collectively, these findings confirm the potential of stepwise etching in quantitatively analyzing the sub-cellular distributions of nanoparticles uptaken by cells in an effort to optimize the therapeutic effect. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

3. Stepwise hydrochloric acid extraction of monazite hydroxides for the recovery of cerium lean rare earths, cerium, uranium and thorium

Swaminathan, T.V.; Nair, V.R.; John, C.V.

1988-01-01

Monazite sand is normally processed by the caustic soda route to produce mixed rare earth chloride, thorium hydroxide and trisodium phosphate. The bulk of the mixed rare earth chloride is used for the preparation of FC catalysts. Recently some of the catalyst producers have shown a preference for cerium-depleted (lanthanum-enriched) rare earth chloride rather than the natural rare earth chloride obtained from monazite. Therefore, a process for producing cerium-depleted rare earth chloride, cerium, thorium and uranium from the rare earth + thorium hydroxide obtained by treating monazite, based on stepwise hydrochloric acid extraction, was developed in the authors' laboratory. The process involves drying of the mixed rare earth-thorium hydroxide cake obtained by the monazite-caustic soda process, followed by stepwise extraction of the dried cake with hydrochloric acid under specified conditions

4. Unbalanced Regressions and the Predictive Equation

Osterrieder, Daniela; Ventosa-Santaulària, Daniel; Vera-Valdés, J. Eduardo

Predictive return regressions with persistent regressors are typically plagued by (asymptotically) biased/inconsistent estimates of the slope, non-standard or potentially even spurious statistical inference, and regression unbalancedness. We alleviate the problem of unbalancedness in the theoreti...

5. Semiparametric regression during 2003–2007

Ruppert, David; Wand, M.P.; Carroll, Raymond J.

2009-01-01

Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application.

6. Gaussian process regression analysis for functional data

Shi, Jian Qing

2011-01-01

Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime
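The core of Gaussian process regression is a pair of posterior formulas: the predictive mean K*·K⁻¹·y and its variance. A bare-bones numpy sketch with an RBF kernel (illustrative hyperparameters, not an example from the book):

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and pointwise variance of a zero-mean GP."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x)
mean, var = gp_predict(x, y, np.array([np.pi / 2]))
print(mean[0])  # close to sin(pi/2) = 1
```

The functional-data methods the book covers build on exactly this machinery, with functional covariates replacing the scalar inputs above.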

7. Artificial neural networks and multiple linear regression model using principal components to estimate rainfall over South America

T. Soares dos Santos

2016-01-01

model output and observed monthly precipitation. We used general circulation model (GCM) experiments for the 20th century (RCP historical; 1970–1999) and two scenarios (RCP 2.6 and 8.5; 2070–2100). The model test results indicate that the ANNs significantly outperform the MLR downscaling of monthly precipitation variability.

8. Regression Analysis by Example. 5th Edition

2012-01-01

Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. "Regression Analysis by Example, Fifth Edition" has been expanded and thoroughly…

9. Standards for Standardized Logistic Regression Coefficients

Menard, Scott

2011-01-01

Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
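One common construction, multiplying each raw coefficient by its predictor's standard deviation (a partially standardized coefficient; Menard compares fuller variants that also scale by the standard deviation of the predicted logit), can be sketched with numpy on synthetic data:

```python
import numpy as np

def logistic_slopes(X, y, iters=30):
    """Multiple logistic regression via Newton's method; returns slopes only."""
    Z = np.column_stack([np.ones(len(y)), X])   # prepend intercept
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ b))
        H = Z.T @ (Z * (p * (1 - p))[:, None])
        b += np.linalg.solve(H, Z.T @ (y - p))
    return b[1:]

rng = np.random.default_rng(3)
n = 2000
x1 = rng.normal(size=n)              # unit scale, per-unit effect 1.0
x2 = 100.0 * rng.normal(size=n)      # large scale, per-unit effect 0.02
eta = 1.0 * x1 + 0.02 * x2
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

X = np.column_stack([x1, x2])
b_raw = logistic_slopes(X, y)
b_std = b_raw * X.std(axis=0)        # partially standardized coefficients
# raw coefficients rank x1 first; standardized ones reveal x2's larger per-SD effect
print(abs(b_raw[0]) > abs(b_raw[1]), b_std[1] > b_std[0])
```

The example shows why standardization matters: the predictor with the tiny raw coefficient actually has twice the per-standard-deviation effect.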

10. A Seemingly Unrelated Poisson Regression Model

King, Gary

1989-01-01

This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.

11. High performance yellow organic electroluminescent devices by doping iridium(III) complex into host materials with stepwise energy levels

Cui, Rongzhen; Zhou, Liang, E-mail: zhoul@ciac.ac.cn; Jiang, Yunlong; Li, Yanan; Zhao, Xuesen; Zhang, Hongjie, E-mail: hongjie@ciac.ac.cn

2015-10-15

In this work, we aim to further improve the electroluminescent (EL) performances of a yellow light-emitting iridium(III) complex by designing double light-emitting layers (EMLs) devices having stepwise energy levels. Compared with single-EML devices, these designed double-EML devices showed improved EL efficiency and brightness attributed to better balance in carriers. In addition, the stepwise distribution in energy levels of host materials is instrumental in broadening the recombination zone, thus delaying the roll-off of EL efficiency. Based on the investigation of carriers' distribution, device structure was further optimized by adjusting the thickness of deposited layers. Finally, yellow EL device (Commission Internationale de l'Eclairage (CIE) coordinates of (0.446, 0.542)) with maximum current efficiency, power efficiency and brightness up to 78.62 cd/A (external quantum efficiency (EQE) of 21.1%), 82.28 lm/W and 72,713 cd/m², respectively, was obtained. Even at the high brightness of 1000 cd/m², EL efficiency as high as 65.54 cd/A (EQE=17.6%) can be retained. - Highlights: • Yellow electroluminescent devices were designed and fabricated. • P-type and n-type materials having stepwise energy levels were chosen as host materials. • Better balance of holes and electrons causes the enhanced efficiencies. • Improved carriers' trapping suppresses the emission of host material.

12. Development and implementation of the Caribbean Laboratory Quality Management Systems Stepwise Improvement Process (LQMS-SIP) Towards Accreditation.

Alemnji, George; Edghill, Lisa; Guevara, Giselle; Wallace-Sankarsingh, Sacha; Albalak, Rachel; Cognat, Sebastien; Nkengasong, John; Gabastou, Jean-Marc

2017-01-01

Implementing quality management systems and accrediting laboratories in the Caribbean has been a challenge. We report the development of a stepwise process for quality systems improvement in the Caribbean Region. The Caribbean Laboratory Stakeholders met under a joint Pan American Health Organization/US Centers for Disease Control and Prevention initiative and developed a user-friendly framework called 'Laboratory Quality Management System - Stepwise Improvement Process (LQMS-SIP) Towards Accreditation' to support countries in strengthening laboratory services through a stepwise approach toward fulfilling the ISO 15189:2012 requirements. This approach consists of a three-tiered framework. Tier 1 represents the minimum requirements corresponding to the mandatory criteria for obtaining a licence from the Ministry of Health of the participating country. The next two tiers are quality improvement milestones that are achieved through the implementation of specific quality management system requirements. Laboratories that meet the requirements of the three tiers will be encouraged to apply for accreditation. The Caribbean Regional Organisation for Standards and Quality hosts the LQMS-SIP Secretariat and will work with countries, including their Ministries of Health, and with stakeholders, including laboratory staff, to coordinate and implement LQMS-SIP activities. The Caribbean Public Health Agency will coordinate and advocate for the LQMS-SIP implementation. This article presents the Caribbean LQMS-SIP framework and describes how it will be implemented among various countries in the region to achieve quality improvement.

13. Stepwise crystallization and the layered distribution in crystallization kinetics of ultra-thin poly(ethylene terephthalate) film

Zuo, Biao, E-mail: chemizuo@zstu.edu.cn, E-mail: wxinping@yahoo.com; Xu, Jianquan; Sun, Shuzheng; Liu, Yue; Yang, Juping; Zhang, Li; Wang, Xinping, E-mail: chemizuo@zstu.edu.cn, E-mail: wxinping@yahoo.com [Department of Chemistry, Key Laboratory of Advanced Textile Materials and Manufacturing Technology of the Education Ministry, Zhejiang Sci-Tech University, Hangzhou 310018 (China)

2016-06-21

Crystallization is an important property of polymeric materials. In the conventional view, the transformation of disordered chains into crystals is a spatially homogeneous process (i.e., it occurs simultaneously throughout the sample), so the crystallization rate at each local position within the sample is almost the same. Here, we show that crystallization of ultra-thin poly(ethylene terephthalate) (PET) films can occur in a heterogeneous way, exhibiting a stepwise crystallization process. We found that the layered distribution of glass transition dynamics in the thin film modifies the corresponding crystallization behavior, giving rise to a layered distribution of crystallization kinetics in PET films, with an 11-nm-thick surface layer having a faster crystallization rate and the underlying layer showing bulk-like behavior. This layered distribution in crystallization kinetics results in a particular stepwise crystallization behavior on heating the sample, with the two cold-crystallization temperatures separated by up to 20 K. Meanwhile, interfacial interaction is crucial for the occurrence of the heterogeneous crystallization, as the thin film crystallizes simultaneously if the interfacial interaction is relatively strong. We anticipate that this mechanism of stepwise crystallization in thin polymeric films will allow new insight into chain organization in confined environments and permit independent manipulation of localized properties of nanomaterials.

14. Application of the modified chi-square ratio statistic in a stepwise procedure for cascade impactor equivalence testing.

Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther

2015-03-01

Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products, as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed for low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows definition of critical values as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). Using a 25% difference in MmCSRS as the acceptance criterion, the stepwise CI equivalence test matched the PQRI WG classifications most closely, with the decisions of the two methods agreeing in 75% of the 55 CI profile scenarios.
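The metric described above can be sketched in a few lines. The per-site form used below (squared T/R difference scaled by the pooled site mean, averaged over sites, with the median then taken over all T/R profile pairings) is one published form of the chi-square ratio statistic and is an assumption here, not necessarily the exact modified statistic of this paper; the deposition profiles are invented for illustration.

```python
import numpy as np

def mcsrs(t_profile, r_profile):
    """Chi-square-ratio-type statistic for one T/R profile pair:
    per-site squared difference scaled by the pooled site mean,
    averaged over the CI deposition sites (assumed form)."""
    t = np.asarray(t_profile, float)
    r = np.asarray(r_profile, float)
    site_terms = (t - r) ** 2 / ((t + r) / 2.0)
    return site_terms.mean()

def mmcsrs(t_profiles, r_profiles):
    """Median of the statistic over every T-profile/R-profile pairing."""
    stats = [mcsrs(t, r) for t in t_profiles for r in r_profiles]
    return float(np.median(stats))

# Illustrative 8-site deposition profiles (µg), 3 T and 3 R canisters.
T = np.array([[12, 30, 45, 38, 22, 10, 4, 1.5],
              [11, 31, 44, 39, 21, 11, 4, 1.6],
              [13, 29, 46, 37, 23, 10, 5, 1.4]])
R = T * 1.05  # an R product depositing ~5% more at every site
m_stat = mmcsrs(T, R)
```

In the stepwise test, a value of `m_stat` below a critical value scaled to the R-product variability would pass the profile-shape comparison; the single actuation content and impactor sized mass tests are separate steps.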

15. Multi-layered nanoparticles for penetrating the endosome and nuclear membrane via a step-wise membrane fusion process.

Akita, Hidetaka; Kudo, Asako; Minoura, Arisa; Yamaguti, Masaya; Khalil, Ikramy A; Moriguchi, Rumiko; Masuda, Tomoya; Danev, Radostin; Nagayama, Kuniaki; Kogure, Kentaro; Harashima, Hideyoshi

2009-05-01

Efficient targeting of DNA to the nucleus is a prerequisite for effective gene therapy. The gene-delivery vehicle must penetrate the plasma membrane and the DNA-impermeable double-membraned nuclear envelope, and deposit its DNA cargo in a form ready for transcription. Here we introduce a concept for overcoming intracellular membrane barriers that involves step-wise membrane fusion. To achieve this, a nanotechnology was developed that creates a multi-layered nanoparticle, which we refer to as a Tetra-lamellar Multi-functional Envelope-type Nano Device (T-MEND). The critical structural elements of the T-MEND are a DNA-polycation condensed core coated with two nuclear-membrane-fusogenic inner envelopes and two endosome-fusogenic outer envelopes, which are shed in stepwise fashion. A double-lamellar structure is required for nuclear delivery via stepwise fusion through the double-layered nuclear membrane. Intracellular membrane fusions to endosomes and nuclear membranes were verified by spectral imaging of fluorescence resonance energy transfer (FRET) between donor and acceptor fluorophores that had been dually labeled on the liposome surface. Coating the core with the minimum number of nucleus-fusogenic lipid envelopes (i.e., 2) is essential to facilitate transcription. As a result, the T-MEND achieves dramatic levels of transgene expression in non-dividing cells.

16. Stepwise adsorption of phenanthrene at the fly ash-water interface as affected by solution chemistry: experimental and modeling studies.

An, Chunjiang; Huang, Guohe

2012-11-20

17. Finding Structure in Diversity: A Stepwise Small-N/Medium-N Qualitative Comparative Analysis Approach for Water Resources Management Research

Peter P. Mollinga

2014-02-01

Full Text Available Drawing particularly on recent debates on, and development of, comparative methods in the field of comparative politics, the paper argues that stepwise small-N/medium-N qualitative comparative analysis (QCA) is a particularly suitable methodological approach for water resources studies because it can make use of the rich but fragmented water resources studies literature for accumulation of knowledge and development of theory. It is suggested that taking an explicit critical realist ontological and epistemological stance allows expansion of the scope of stepwise small-N/medium-N QCA beyond what is claimed for it in Ragin’s 'configurational comparative methods' (CCM) perspective for analysing the complexity of causality as 'multiple conjunctural causation'. In addition to explanation of certain sets of 'outcomes' as in CCM’s combinatorial, set-theoretic approach, embedding stepwise small-N/medium-N QCA in a critical realist ontology allows the method to contribute to development of theory on (qualitative) differences between the structures in society that shape water resources use, management and governance.
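The set-theoretic machinery that QCA builds on can be sketched minimally: group cases by their configuration of binary conditions into a truth table and compute outcome consistency per configuration. The cases and condition names below are hypothetical, and the logical minimization step of full crisp-set QCA (e.g., Quine-McCluskey reduction of the consistent rows) is omitted.

```python
from collections import defaultdict

def truth_table(cases, conditions, outcome):
    """Group cases by their configuration of binary conditions and
    report, per configuration, the case count and the outcome
    consistency (share of cases showing the outcome)."""
    rows = defaultdict(lambda: [0, 0])  # config -> [n_cases, n_with_outcome]
    for case in cases:
        config = tuple(case[c] for c in conditions)
        rows[config][0] += 1
        rows[config][1] += case[outcome]
    return {cfg: {"n": n, "consistency": pos / n}
            for cfg, (n, pos) in rows.items()}

# Hypothetical small-N water-governance cases with binary conditions.
cases = [
    {"decentralized": 1, "user_fees": 1, "donor_support": 0, "effective": 1},
    {"decentralized": 1, "user_fees": 1, "donor_support": 1, "effective": 1},
    {"decentralized": 1, "user_fees": 0, "donor_support": 1, "effective": 0},
    {"decentralized": 0, "user_fees": 1, "donor_support": 1, "effective": 0},
    {"decentralized": 1, "user_fees": 1, "donor_support": 0, "effective": 1},
]
table = truth_table(cases,
                    ["decentralized", "user_fees", "donor_support"],
                    "effective")
```

In a stepwise small-N/medium-N design, such tables would be built per step of the comparison, with configurations of high consistency carried forward for cross-case interpretation.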

18. Stepwise decision making for the long-term management of radioactive waste

Pescatore, C.; Vari, A.

2005-01-01

Attention is increasingly being given to the better understanding of concepts such as 'stepwise decision making' and 'adaptive staging', in which the public, and especially the most affected local public, is meaningfully involved in the planning process. (author)

19. Multimodal microscopy and the stepwise multi-photon activation fluorescence of melanin

Lai, Zhenhua

The author's work is divided into three aspects: multimodal microscopy, stepwise multi-photon activation fluorescence (SMPAF) of melanin, and customized-profile lenses (CPL) for on-axis laser scanners, each of which is introduced in turn. A multimodal microscope provides the ability to image samples with multiple modalities on the same stage, which incorporates the benefits of all modalities. The multimodal microscopes developed in this dissertation are the Keck 3D fusion multimodal microscope 2.0 (3DFM 2.0), upgraded from the old 3DFM with improved performance and flexibility, and the multimodal microscope for targeting small particles (the "Target" system). The control systems developed for both microscopes are low-cost and easy-to-build, with all components off-the-shelf. The control systems have not only significantly decreased the complexity and size of the microscopes, but also increased the pixel resolution and flexibility. The SMPAF of melanin, activated by a continuous-wave (CW) mode near-infrared (NIR) laser, has potential applications as a low-cost and reliable method of detecting melanin. The photophysics of melanin SMPAF has been studied by theoretical analysis of the excitation process and investigation of the spectra, activation threshold, and photon number absorption of melanin SMPAF. SMPAF images of melanin in mouse hair and skin, mouse melanoma, and human black and white hairs are compared with images taken by conventional multi-photon fluorescence microscopy (MPFM) and confocal reflectance microscopy (CRM). SMPAF images significantly increase specificity and demonstrate the potential to increase sensitivity for melanin detection compared to MPFM and CRM images. Employing melanin SMPAF imaging to detect melanin inside human skin in vivo has been demonstrated, which proves the effectiveness of melanin detection using SMPAF for medical purposes. Selective melanin ablation with micrometer resolution has been presented using the Target system.

20. A Genealogical Interpretation of Principal Components Analysis

McVean, Gil

2009-01-01

Principal components analysis (PCA) is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's FST and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
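The projection discussed above can be illustrated with a minimal sketch: centre a 0/1/2 allele-count genotype matrix, take its SVD, and read off each individual's PC scores. The two simulated populations and all parameter values are assumptions chosen so that PC1 separates the groups, the kind of structure PCA is used to detect.

```python
import numpy as np

def pca_projection(G, k=2):
    """Project individuals onto the top-k principal components of an
    (individuals x SNPs) genotype matrix of 0/1/2 allele counts."""
    Gc = G - G.mean(axis=0)            # centre each SNP column
    U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
    return U[:, :k] * s[:k]            # PC scores for each individual

# Two simulated populations whose allele frequencies differ slightly.
rng = np.random.default_rng(1)
n, m = 40, 200                         # individuals per pop, SNPs
p1 = rng.uniform(0.1, 0.9, m)
p2 = np.clip(p1 + rng.normal(0, 0.15, m), 0.01, 0.99)
G = np.vstack([rng.binomial(2, p1, (n, m)),
               rng.binomial(2, p2, (n, m))])
scores = pca_projection(G)             # rows 0..39 pop 1, 40..79 pop 2
```

Because each SNP column is centred, the PC scores sum to zero, so the two populations fall on opposite sides of the origin along PC1 when the frequency differences are large enough relative to sampling noise.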