WorldWideScience

Sample records for variable selection procedure

  1. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
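
    As an editorial illustration of the general idea described in this record, the sketch below applies a per-variable weight taken literally as variance divided by range before running k-means. The data, the exact weight formula, and the weighting-to-scaling step are assumptions for illustration only; they are not the published Steinley and Brusco procedure.

```python
# Minimal sketch: weight variables by a variance-to-range ratio before k-means.
# Illustrative reading of the idea only, not the exact published weighting.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, n_features=5, random_state=0)

ranges = X.max(axis=0) - X.min(axis=0)      # per-variable range
weights = X.var(axis=0) / ranges            # variance-to-range ratio (illustrative)
X_weighted = X * np.sqrt(weights)           # scale columns so weighted squared distances apply

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_weighted)
print(labels[:10])
```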

  2. Penalized regression procedures for variable selection in the potential outcomes framework.

    Science.gov (United States)

    Ghosh, Debashis; Zhu, Yeying; Coffman, Donna L

    2015-05-10

    A recent topic of much interest in causal inference is model selection. In this article, we describe a framework in which to consider penalized regression approaches to variable selection for causal effects. The framework leads to a simple 'impute, then select' class of procedures that is agnostic to the type of imputation algorithm as well as penalized regression used. It also clarifies how model selection involves a multivariate regression model for causal inference problems and that these methods can be applied for identifying subgroups in which treatment effects are homogeneous. Analogies and links with the literature on machine learning methods, missing data, and imputation are drawn. A difference least absolute shrinkage and selection operator algorithm is defined, along with its multiple imputation analogs. The procedures are illustrated using a well-known right-heart catheterization dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Chaotic Dynamical State Variables Selection Procedure Based Image Encryption Scheme

    Directory of Open Access Journals (Sweden)

    Zia Bashir

    2017-12-01

    Nowadays, in the modern digital era, the use of computer technologies such as smartphones, tablets and the Internet, together with the enormous quantity of confidential information being converted into digital form, has raised security issues. This, in turn, has led to rapid developments in cryptography, due to the imminent need for system security. Low-dimensional chaotic systems have low complexity and small key spaces, yet they achieve high encryption speed. An image encryption scheme is proposed that, without compromising security, uses reasonable resources. We introduce a chaotic dynamic state variables selection procedure (CDSVSP) to use all state variables of a hyper-chaotic four-dimensional dynamical system. As a result, fewer iterations of the dynamical system are required and resources are saved, making the algorithm fast and suitable for practical use. The simulation results of security and other miscellaneous tests demonstrate that the suggested algorithm excels in robustness, security and high-speed encryption.

  4. Variable Selection via Partial Correlation.

    Science.gov (United States)

    Li, Runze; Liu, Jingyuan; Lou, Lejia

    2017-07-01

    A partial correlation-based variable selection method was proposed for normal linear regression models by Bühlmann, Kalisch and Maathuis (2010) as a comparable alternative to regularization methods for variable selection. This paper addresses two important issues related to partial correlation-based variable selection: (a) whether the method is sensitive to the normality assumption, and (b) whether it is valid when the dimension of the predictors increases at an exponential rate of the sample size. To address issue (a), we systematically study the method for elliptical linear regression models. Our finding indicates that the original proposal may lead to inferior performance when the marginal kurtosis of the predictors is not close to that of the normal distribution. Our simulation results further confirm this finding. To ensure the superior performance of partial correlation-based variable selection, we propose a thresholded partial correlation (TPC) approach to select significant variables in linear regression models. We establish the selection consistency of the TPC in the presence of ultrahigh-dimensional predictors. Since the TPC procedure includes the original proposal as a special case, our theoretical results address issue (b) directly. As a by-product, the sure screening property of the first step of the TPC is obtained. The numerical examples also illustrate that the TPC is competitively comparable to the commonly used regularization methods for variable selection.
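
    The sketch below is an editorial simplification of the thresholded-partial-correlation idea: it keeps predictors whose partial correlation with y, given all the others, exceeds a threshold. The data, the threshold, and the use of full-order partial correlations from the joint precision matrix are assumptions; the actual TPC algorithm of Li, Liu and Lou is a sequential procedure not reproduced here.

```python
# Illustrative thresholded partial-correlation screening (simplified, not the exact TPC).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 3] + rng.standard_normal(n)

Z = np.column_stack([y, X])
prec = np.linalg.inv(np.cov(Z, rowvar=False))     # precision matrix of (y, X)
# Partial correlation between variables i and j given the rest:
#   -prec[i, j] / sqrt(prec[i, i] * prec[j, j])
partial_corr = -prec[0, 1:] / np.sqrt(prec[0, 0] * np.diag(prec)[1:])

threshold = 0.15                                  # hypothetical cut-off
selected = np.flatnonzero(np.abs(partial_corr) > threshold)
print("selected predictors:", selected)
```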

  5. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

    Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have
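
    A compressed editorial sketch of the staged idea in this record follows: rank candidates by univariate logistic-regression AIC, then count how often each is picked by a forward AIC search on bootstrap resamples. The simulated data, the number of resamples, and the "improve AIC by at least 2" rule are assumptions; the Bayesian geo-statistical stage is omitted.

```python
# Simplified staged variable selection: univariate AIC ranking + bootstrap forward AIC frequency.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 500, 8
X = rng.standard_normal((n, p))
prob = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 2])))
y = rng.binomial(1, prob)

def fit_aic(cols, Xb, yb):
    return sm.Logit(yb, sm.add_constant(Xb[:, cols])).fit(disp=0).aic

# Stage 1: rank variables by univariate logistic-regression AIC.
ranking = sorted(range(p), key=lambda j: fit_aic([j], X, y))

# Stage 2: forward AIC selection on bootstrap samples; record selection frequency.
freq = np.zeros(p)
for _ in range(100):
    idx = rng.integers(0, n, n)
    Xb, yb = X[idx], y[idx]
    chosen, best_aic = [], np.inf
    for j in ranking:                      # consider variables in rank order
        aic = fit_aic(chosen + [j], Xb, yb)
        if aic < best_aic - 2:             # keep only clear AIC improvements (hypothetical rule)
            chosen.append(j)
            best_aic = aic
    freq[chosen] += 1

print("bootstrap selection frequency:", freq / 100)
```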

  6. Using variable combination population analysis for variable selection in multivariate calibration.

    Science.gov (United States)

    Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song

    2015-03-03

    Variable (wavelength or feature) selection techniques have become a critical step in the analysis of datasets with a high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, an exponentially decreasing function (EDF), which implements the simple and effective 'survival of the fittest' principle from Darwin's theory of natural selection, is employed to determine the number of variables to keep and continuously shrink the variable space. Second, in each EDF run, a binary matrix sampling (BMS) strategy, which gives each variable the same chance to be selected and generates different variable combinations, is used to produce a population of subsets and thus a population of sub-models. Model population analysis (MPA) is then employed to find the variable subsets with the lowest root mean square error of cross-validation (RMSECV). The frequency with which each variable appears in the best 10% of sub-models is computed: the higher the frequency, the more important the variable. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high-performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retaining informative variables (IRIV). The MATLAB source code of VCPA is available for academic research at http://www.mathworks.com/matlabcentral/fileexchange/authors/498750. Copyright © 2015 Elsevier B.V. All rights reserved.
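
    The sketch below illustrates only the BMS / model-population-analysis step of this record: random variable subsets are scored by PLS cross-validated RMSE, and variables are ranked by how often they appear in the best 10% of sub-models. The synthetic data, the number of subsets, and the omission of the EDF-driven shrinking loop are assumptions for brevity; this is not the authors' MATLAB implementation.

```python
# Binary matrix sampling + sub-model frequency ranking (illustrative VCPA fragment).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 120, 40
X = rng.standard_normal((n, p))
y = X[:, :5] @ np.array([1.0, -0.5, 0.8, 0.3, -1.2]) + 0.1 * rng.standard_normal(n)

n_subsets = 500
B = rng.integers(0, 2, size=(n_subsets, p)).astype(bool)   # binary sampling matrix
B[B.sum(axis=1) < 2] = True                                 # ensure at least two variables per row

rmsecv = np.empty(n_subsets)
for i, mask in enumerate(B):
    pls = PLSRegression(n_components=min(3, int(mask.sum())))
    scores = cross_val_score(pls, X[:, mask], y, cv=5,
                             scoring="neg_root_mean_squared_error")
    rmsecv[i] = -scores.mean()

best = np.argsort(rmsecv)[: n_subsets // 10]                # best 10% of sub-models
frequency = B[best].sum(axis=0)                             # appearance count per variable
print("top variables by frequency:", np.argsort(frequency)[::-1][:5])
```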

  7. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and show that, for the adaptive lasso, if the predictors are normalised after the introduction of adaptive weights, the performance of the adaptive lasso becomes identical to that of the lasso.
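
    A minimal sketch of the standard adaptive-lasso construction referred to above: initial coefficients define weights 1/|beta|^gamma, the columns are rescaled by those weights, and an ordinary lasso is fitted to the rescaled design. The data, the OLS initializer, and gamma = 1 are illustrative choices, not the paper's specific algorithm.

```python
# Adaptive lasso via column rescaling: minimize ||y - Xb||^2 + lam * sum_j w_j |b_j|.
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

rng = np.random.default_rng(0)
n, p = 200, 12
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 4] + rng.standard_normal(n)

beta_init = LinearRegression().fit(X, y).coef_   # initial (OLS) estimates
gamma = 1.0
w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)    # adaptive weights

X_tilde = X / w                                  # substitute b_j = c_j / w_j
lasso = LassoCV(cv=5).fit(X_tilde, y)
beta_adaptive = lasso.coef_ / w                  # map back to the original scale
print("selected:", np.flatnonzero(beta_adaptive != 0))
```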

  8. Purposeful selection of variables in logistic regression

    Directory of Open Access Journals (Sweden)

    Williams David Keith

    2008-12-01

    Background The main problem in many model-building situations is to choose from a large set of covariates those that should be included in the "best" model. A decision to keep a variable in the model might be based on clinical or statistical significance. Several variable selection algorithms exist; those methods are mechanical and as such carry some limitations. Hosmer and Lemeshow describe a purposeful selection of covariates within which an analyst makes a variable selection decision at each step of the modeling process. Methods In this paper we introduce an algorithm which automates that process. We conduct a simulation study to compare the performance of this algorithm with three well-documented variable selection procedures in SAS PROC LOGISTIC: FORWARD, BACKWARD, and STEPWISE. Results We show that the advantage of this approach arises when the analyst is interested in risk factor modeling and not just prediction. In addition to significant covariates, this variable selection procedure is capable of retaining important confounding variables, resulting potentially in a slightly richer model. Application of the macro is further illustrated with the Hosmer and Lemeshow Worcester Heart Attack Study (WHAS) data. Conclusion If an analyst needs an algorithm that will help guide the retention of significant covariates as well as confounding ones, they should consider this macro as an alternative tool.
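
    The sketch below illustrates the purposeful-selection idea in Python rather than the SAS macro described in the record: drop covariates that are not significant unless their removal shifts a remaining coefficient by more than a set percentage (a confounding check). The simulated data and the 0.1 / 20% thresholds are assumptions and may differ from the published macro's defaults.

```python
# Simplified purposeful selection: significance-driven removal with a confounding check.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
X = rng.standard_normal((n, 5))
prob = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
y = rng.binomial(1, prob)

cols = list(range(X.shape[1]))
while cols:
    fit = sm.Logit(y, sm.add_constant(X[:, cols])).fit(disp=0)
    pvals = fit.pvalues[1:]                       # skip the intercept
    worst = int(np.argmax(pvals))
    if pvals[worst] <= 0.1:
        break                                     # everything left is significant
    reduced_cols = cols[:worst] + cols[worst + 1:]
    reduced = sm.Logit(y, sm.add_constant(X[:, reduced_cols])).fit(disp=0)
    # Confounding check: does removal shift any remaining coefficient by > 20%?
    old = np.delete(fit.params[1:], worst)
    shift = np.abs((reduced.params[1:] - old) / (old + 1e-12))
    if np.any(shift > 0.2):
        break                                     # keep the candidate as a confounder
    cols = reduced_cols

print("retained columns:", cols)
```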

  9. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high

  10. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Full Text Available Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. Quantitative structure–activity relationship (QSAR method was used as an effective assessing tool for the placental transfer study of drugs, while in vitro human placental perfusion is the most widely used method. In this study, the partial least squares (PLS variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance indices (CI. The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting CI values of 19 compounds. It was shown that the model developed is robust and has a good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14. The mechanistic interpretation of the final model was given by the high variable importance in projection values of descriptors. Using PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.
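
    Since the record highlights variable importance in projection (VIP) values from a PLS model, the sketch below shows one common way to compute VIP scores and keep descriptors with VIP > 1. The synthetic data, the number of components, and the VIP > 1 cut-off are assumptions; the study's exact selection procedure may differ.

```python
# PLS descriptor selection via VIP (variable importance in projection) scores.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p = 65, 30                                   # e.g. 65 compounds, 30 descriptors
X = rng.standard_normal((n, p))
y = X[:, 0] - 0.7 * X[:, 5] + 0.1 * rng.standard_normal(n)

pls = PLSRegression(n_components=3).fit(X, y)

def vip_scores(pls_model, X_train):
    t = pls_model.transform(X_train)            # scores of the training data, (n, A)
    w = pls_model.x_weights_                    # (p, A)
    q = pls_model.y_loadings_                   # (1, A) for a single response
    ssy = np.sum(t ** 2, axis=0) * q[0] ** 2    # y-variance explained per component
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(w.shape[0] * (w_norm ** 2 @ ssy) / ssy.sum())

vip = vip_scores(pls, X)
selected = np.flatnonzero(vip > 1.0)            # common VIP > 1 rule of thumb
print("selected descriptors:", selected)
```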

  11. A Simple K-Map Based Variable Selection Scheme in the Direct ...

    African Journals Online (AJOL)

    A multiplexer with (n-1) data select inputs can directly realise a function of n variables. In this paper, a simple K-map based variable selection scheme is proposed such that an n-variable logic function can be synthesised using a multiplexer with (n-q) data input variables and q data select variables. The procedure is based on ...

  12. A survey of variable selection methods in two Chinese epidemiology journals

    Directory of Open Access Journals (Sweden)

    Lynn Henry S

    2010-09-01

    Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria, like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.

  13. Variable selection and estimation for longitudinal survey data

    KAUST Repository

    Wang, Li

    2014-09-01

    There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys have been used in many areas today, for example, in the health and social sciences, to explore relationships or to identify significant variables in regression settings. This paper develops a general strategy for the model selection problem in longitudinal sample surveys. A survey-weighted penalized estimating equation approach is proposed to select significant variables and estimate the coefficients simultaneously. The proposed estimators are design consistent and perform as well as the oracle procedure in which the correct submodel is known. The estimating function bootstrap is applied to obtain the standard errors of the estimated parameters with good accuracy. A fast and efficient variable selection algorithm is developed to identify significant variables for complex longitudinal survey data. Simulated examples illustrate the usefulness of the proposed methodology under various model settings and sampling designs. © 2014 Elsevier Inc.

  14. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    Science.gov (United States)

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  15. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  16. Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Long, Qi

    2018-01-01

    Ultra-high dimensional variable selection has become increasingly important in analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of the autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.

  17. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like the lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting the lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in the lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
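
    A minimal stability-selection sketch follows: the base learner is fitted on many subsamples over a regularization grid Λ and variables are kept when their selection frequency exceeds a threshold. For brevity the base learner here is an ordinary lasso on simulated data; the cited work uses a lasso-penalized Cox model, which would be substituted where Lasso appears. Grid, subsample count, and threshold are illustrative.

```python
# Stability selection with a lasso base learner (Cox base learner omitted for brevity).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 30
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 0] - X[:, 7] + rng.standard_normal(n)

lambdas = np.logspace(-2, 0, 10)                  # the regularization region Lambda
n_subsamples, threshold = 100, 0.6                # illustrative settings
freq = np.zeros(p)
for _ in range(n_subsamples):
    idx = rng.choice(n, size=n // 2, replace=False)   # subsample half the data
    selected_any = np.zeros(p, dtype=bool)
    for lam in lambdas:
        coef = Lasso(alpha=lam, max_iter=10000).fit(X[idx], y[idx]).coef_
        selected_any |= coef != 0                 # selected anywhere along the path
    freq += selected_any

stable = np.flatnonzero(freq / n_subsamples >= threshold)
print("stable variables:", stable)
```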

  18. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    We propose a two-step variable selection procedure for high-dimensional quantile regression, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite-sample performance of the proposed approach.
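
    An editorial sketch of the two-step idea using scikit-learn's L1-penalized QuantileRegressor: step 1 screens with a plain ℓ1 penalty, step 2 applies adaptive weights (1/|beta|) to the variables retained in step 1. The data, penalty levels, and the choice of this particular solver are assumptions, not the authors' implementation.

```python
# Two-step penalized quantile regression: LASSO screening, then adaptive-LASSO refit.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p = 300, 20
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 3] + rng.standard_normal(n)

tau = 0.5                                              # quantile of interest
step1 = QuantileRegressor(quantile=tau, alpha=0.05, solver="highs").fit(X, y)
kept = np.flatnonzero(step1.coef_ != 0)                # reduced model from step 1

w = 1.0 / (np.abs(step1.coef_[kept]) + 1e-8)           # adaptive weights
step2 = QuantileRegressor(quantile=tau, alpha=0.05,
                          solver="highs").fit(X[:, kept] / w, y)
final = kept[np.flatnonzero(step2.coef_ != 0)]         # adaptive-lasso-type selection
print("final variables:", final)
```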

  19. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Streams Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
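
    A minimal sketch of the backward-elimination loop discussed in this record, with accuracy tracked by cross-validation at each step. The synthetic data, step size (drop the least important 10%), and stopping rule are assumptions; the study also examined out-of-bag estimates and prediction stability, which are not reproduced here.

```python
# Backward elimination for a random forest, tracking cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=50, n_informative=8,
                           random_state=0)
cols = np.arange(X.shape[1])
history = []
while len(cols) > 5:
    rf = RandomForestClassifier(n_estimators=200, random_state=0, n_jobs=-1)
    acc = cross_val_score(rf, X[:, cols], y, cv=5).mean()   # CV accuracy of current set
    rf.fit(X[:, cols], y)
    history.append((len(cols), acc))
    drop = np.argsort(rf.feature_importances_)[: max(1, len(cols) // 10)]
    cols = np.delete(cols, drop)                             # remove least important ~10%

for n_vars, acc in history:
    print(f"{n_vars:3d} variables  CV accuracy = {acc:.3f}")
```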

  20. Measuring variability of procedure progression in proceduralized scenarios

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea

    2012-01-01

    Highlights: ► The VPP measure was developed to quantify how differently operators follow the procedures. ► Sources that cause variability in the ways a given procedure is followed were identified. ► The VPP values for the scenarios are positively related to the scenario performance time. ► The VPP measure is meaningful for explaining characteristics of several PSFs. -- Abstract: Various performance shaping factors (PSFs) have been presented to explain the contributors to unsafe acts in a human failure event or to predict the human error probability of new human performance. However, because most of these parameters of an HRA depend on the subjective knowledge and experience of HRA analyzers, the results of an HRA insufficiently provide unbiased standards to explain human performance variations or to compare collected data with other data from different analyzers. To secure the validity of the HRA results, we propose a quantitative measure, which represents the variability of procedure progression (VPP) in proceduralized scenarios. The VPP measure shows how differently the operators follow the steps of the procedures. This paper introduces the sources of the VPP measure and its relevance to PSFs. The assessment method of the VPP measure is also proposed, and application examples are shown with a comparison of the performance time. Although more empirical studies should be conducted to reveal the relationship between the VPP measure and other PSFs, it is believed that the VPP measure provides evidence to quantitatively evaluate human performance variations and to cross-culturally compare the collected data.

  1. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Combining epidemiologic and biostatistical tools to enhance variable selection in HIV cohort analyses.

    Directory of Open Access Journals (Sweden)

    Christopher Rentsch

    BACKGROUND: Variable selection is an important step in building a multivariate regression model, for which several methods and statistical packages are available. A comprehensive approach for variable selection in complex multivariate regression analyses within HIV cohorts is explored by utilizing both epidemiological and biostatistical procedures. METHODS: Three different methods for variable selection were illustrated in a study comparing survival time between subjects in the Department of Defense's National History Study and the Atlanta Veterans Affairs Medical Center's HIV Atlanta VA Cohort Study. The first two methods were stepwise selection procedures, based either on significance tests (Score test) or on information theory (Akaike Information Criterion), while the third method employed a Bayesian argument (Bayesian Model Averaging). RESULTS: All three methods resulted in a similar parsimonious survival model. Three of the covariates previously used in the multivariate model were not included in the final model suggested by the three approaches. When comparing the parsimonious model to the previously published model, there was evidence of less variance in the main survival estimates. CONCLUSIONS: The variable selection approaches considered in this study allowed building a model based on significance tests, on an information criterion, and on averaging models using their posterior probabilities. A parsimonious model that balanced these three approaches was found to provide a better fit than the previously reported model.

  3. Surgeon and type of anesthesia predict variability in surgical procedure times.

    Science.gov (United States)

    Strum, D P; Sampson, A R; May, J H; Vargas, L G

    2000-05-01

    Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. This research identifies factors associated

  4. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  5. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    Science.gov (United States)

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods including conventional LASSO, Bolasso, stepwise and stability selection models were evaluated using intensive simulation. In addition, methods were compared by using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. Overall, the two newly proposed procedures were stable with respect to various scenarios of simulation, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In the empirical analysis, the proposed procedures, yielding a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance and showed that the procedures were able to select a more stringent set of factors. The individual history of hepatitis B vaccination, family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions The newly proposed procedures improve the identification of

  6. SELECTING QUASARS BY THEIR INTRINSIC VARIABILITY

    International Nuclear Information System (INIS)

    Schmidt, Kasper B.; Rix, Hans-Walter; Jester, Sebastian; Hennawi, Joseph F.; Marshall, Philip J.; Dobler, Gregory

    2010-01-01

    We present a new and simple technique for selecting extensive, complete, and pure quasar samples, based on their intrinsic variability. We parameterize the single-band variability by a power-law model for the light-curve structure function, with amplitude A and power-law index γ. We show that quasars can be efficiently separated from other non-variable and variable sources by the location of the individual sources in the A-γ plane. We use ∼60 epochs of imaging data, taken over ∼5 years, from the SDSS stripe 82 (S82) survey, where extensive spectroscopy provides a reference sample of quasars, to demonstrate the power of variability as a quasar classifier in multi-epoch surveys. For UV-excess selected objects, variability performs just as well as the standard SDSS color selection, identifying quasars with a completeness of 90% and a purity of 95%. In the redshift range 2.5 < z < 3, where color selection is known to be problematic, variability can select quasars with a completeness of 90% and a purity of 96%. This is a factor of 5-10 times more pure than existing color selection of quasars in this redshift range. Selecting objects from a broad griz color box without u-band information, variability selection in S82 can afford completeness and purity of 92%, despite a factor of 30 more contaminants than quasars in the color-selected feeder sample. This confirms that the fraction of quasars hidden in the 'stellar locus' of color space is small. To test variability selection in the context of Pan-STARRS 1 (PS1), we created mock PS1 data by down-sampling the S82 data to just six epochs over 3 years. Even with this much sparser time sampling, variability is an encouragingly efficient classifier. For instance, a 92% pure and 44% complete quasar candidate sample is attainable from the above griz-selected catalog. Finally, we show that the presented A-γ technique, besides selecting clean and pure samples of quasars (which are stochastically varying objects), is also
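
    The sketch below illustrates the A-γ parameterization named in this record: estimate a light curve's structure function from pairwise magnitude differences and fit the power law SF(Δt) = A (Δt / 1 yr)^γ. The simulated light curve, binning, and fit starting values are assumptions; the actual selection cut in the A-γ plane is not reproduced.

```python
# Fit a power-law structure function SF(dt) = A * (dt / 1 yr)^gamma to a toy light curve.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 5 * 365.25, 60))          # ~60 epochs over ~5 years (days)
mag = np.cumsum(rng.normal(0, 0.05, t.size))         # toy stochastic light curve

# Pairwise time lags and magnitude differences.
i, j = np.triu_indices(t.size, k=1)
dt = (t[j] - t[i]) / 365.25                          # lags in years
dm = np.abs(mag[j] - mag[i])

# Bin the lags; mean |dm| per bin is the empirical structure function.
bins = np.logspace(np.log10(dt.min()), np.log10(dt.max()), 12)
centers, sf = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (dt >= lo) & (dt < hi)
    if mask.sum() > 5:
        centers.append(np.sqrt(lo * hi))
        sf.append(dm[mask].mean())

power_law = lambda x, A, gamma: A * x ** gamma
(A, gamma), _ = curve_fit(power_law, np.array(centers), np.array(sf), p0=(0.1, 0.3))
print(f"A = {A:.3f}, gamma = {gamma:.3f}")
```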

  7. Data re-arranging techniques leading to proper variable selections in high energy physics

    Science.gov (United States)

    Kůs, Václav; Bouř, Petr

    2017-12-01

    We introduce a new data-based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly the Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique is called 'data re-arranging' and it enables variable selection performed by means of classical statistical homogeneity tests such as the Kolmogorov-Smirnov, Anderson-Darling, or Pearson's chi-square divergence test. P-values of our variants of homogeneity tests are investigated and the empirical verification through 46-dimensional high energy particle physics data sets is accomplished under newly proposed (equiprobable) quantile binning. In particular, the procedure of homogeneity testing is applied to re-arranged Monte Carlo samples and real DATA sets measured at the Tevatron particle accelerator at Fermilab in the DØ experiment, originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for the Tevatron top-antitop quark data sets.

  8. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction. (paper)

  9. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm-1) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic

  10. Predicate Transformers for Recursive Procedures with Local Variables

    NARCIS (Netherlands)

    Hesselink, Wim H.

    1999-01-01

    The weakest precondition semantics of recursive procedures with local variables are developed for an imperative language with demonic and angelic operators for unbounded nondeterminate choice. This does not require stacking of local variables. The formalism serves as a foundation for a proof rule

  11. The effects of predictor method factors on selection outcomes: A modular approach to personnel selection procedures.

    Science.gov (United States)

    Lievens, Filip; Sackett, Paul R

    2017-01-01

    Past reviews and meta-analyses typically conceptualized and examined selection procedures as holistic entities. We draw on the product design literature to propose a modular approach as a complementary perspective to conceptualizing selection procedures. A modular approach means that a product is broken down into its key underlying components. Therefore, we start by presenting a modular framework that identifies the important measurement components of selection procedures. Next, we adopt this modular lens for reviewing the available evidence regarding each of these components in terms of affecting validity, subgroup differences, and applicant perceptions, as well as for identifying new research directions. As a complement to the historical focus on holistic selection procedures, we posit that the theoretical contributions of a modular approach include improved insight into the isolated workings of the different components underlying selection procedures and greater theoretical connectivity among different selection procedures and their literatures. We also outline how organizations can put a modular approach into operation to increase the variety in selection procedures and to enhance the flexibility in designing them. Overall, we believe that a modular perspective on selection procedures will provide the impetus for programmatic and theory-driven research on the different measurement components of selection procedures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Eye bank procedures: donor selection criteria.

    Science.gov (United States)

    Sousa, Sidney Júlio de Faria E; Sousa, Stella Barretto de Faria E

    2018-01-01

    Eye banks use sterile procedures to manipulate the eye, antiseptic measures for ocular surface decontamination, and rigorous criteria for donor selection to minimize the possibility of disease transmission due to corneal grafting. Donor selection focuses on analysis of medical records and specific post-mortem serological tests. To guide and standardize procedures, eye bank associations and government agencies provide lists of absolute and relative contraindications for use of the tissue based on donor health history. These lists are guardians of the Hippocratic principle "primum non nocere." However, each transplantation carries risk of transmission of potentially harmful agents to the recipient. The aim of the procedures is not to eliminate risk, but limit it to a reasonable level. The balance between safety and corneal availability needs to be maintained by exercising prudence without disproportionate rigor.

  13. Continuously variable rating: a new, simple and logical procedure to evaluate original scientific publications

    Directory of Open Access Journals (Sweden)

    Mauricio Rocha e Silva

    2011-01-01

    OBJECTIVE: Impact Factors (IF) are widely used surrogates to evaluate single articles, in spite of known shortcomings imposed by the skewness of citation distributions. We quantify this asymmetry and propose a simple computer-based procedure for evaluating individual articles. METHOD: (a) Analysis of symmetry. Journals clustered around nine Impact Factor points were selected from the medical "Subject Categories" in Journal Citation Reports 2010. Citable items published in 2008 were retrieved and ranked by citations granted over the Jan/2008 - Jun/2011 period. The frequency distribution of cites, normalized cumulative cites and absolute cites/decile were determined for each journal cluster. (b) Positive Predictive Value. Three arbitrarily established evaluation classes were generated: LOW (<1.3), MEDIUM (1.3-3.9) and HIGH (>3.9). The Positive Predictive Value for journal clusters within each class range was estimated. (c) Continuously Variable Rating. An alternative evaluation procedure is proposed to allow the rating of individually published articles in comparison with all articles published in the same journal within the same year of publication. The general guidelines for the construction of a fully dedicated software program are delineated. RESULTS AND CONCLUSIONS: Skewness followed the Pareto distribution for the selected journals in the ISI database. Continuously Variable Rating is shown to be a simple computer-based procedure capable of accurately providing a valid rating for each article within the journal and time frame in which it was published.
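
    A minimal sketch of the rating idea described in this record: score one article by the percentile of its citation count among all articles published in the same journal and year. The citation counts below are hypothetical, and the percentile convention is one plausible implementation of "continuously variable rating".

```python
# Rate an article by its citation percentile within its journal-year cohort.
from scipy.stats import percentileofscore

journal_year_citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 20, 41]  # all 2008 articles (toy data)
article_citations = 7

rating = percentileofscore(journal_year_citations, article_citations)
print(f"article rated at the {rating:.0f}th percentile of its journal-year cohort")
```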

  14. Variable selection in multivariate calibration based on clustering of variable concept.

    Science.gov (United States)

    Farrokhnia, Maryam; Karimi, Sadegh

    2016-01-01

    Recently we proposed a new variable selection algorithm, based on the clustering of variables concept (CLoVA), for classification problems. With the same idea, this concept has now been applied to a regression problem, and the obtained results have been compared with conventional variable selection strategies for PLS. The basic idea behind the clustering of variables is that the instrument channels are clustered into different clusters via clustering algorithms. Then, the spectral data of each cluster are subjected to PLS regression. Different real data sets (Cargill corn, Biscuit dough, ACE QSAR, Soy, and Tablet) have been used to evaluate the influence of the clustering of variables on the prediction performance of PLS. In almost all cases, the statistical parameters, especially the prediction error, show the superiority of CLoVA-PLS with respect to other variable selection strategies. Finally, synergy clustering of variables (sCLoVA-PLS), which uses a combination of clusters, has been proposed as an efficient modification of the CLoVA algorithm. The obtained statistical parameters indicate that variable clustering can separate the useful part from the redundant one, so that a stable model can be reached based on the informative clusters. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision-making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this sense, variable selection is the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best-subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case for present-day clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
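
    The tutorial itself walks through R code; as an editorial illustration in Python, the compact GA below encodes candidate subsets as binary masks and uses cross-validated logistic-regression accuracy as the fitness. Population size, crossover, mutation rate, and the simulated data are assumptions, not the paper's settings.

```python
# Compact genetic algorithm for variable selection in logistic regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)
p, pop_size, n_gen = X.shape[1], 30, 25

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

pop = rng.random((pop_size, p)) < 0.5                      # initial random masks
for _ in range(n_gen):
    scores = np.array([fitness(m) for m in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: pop_size // 2]]                  # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, p)                           # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(p) < 0.05                        # mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))
```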

  16. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Background Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to complex human diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), however, involves difficulty in setting the overall p-value due to the complicated correlation structure, namely, the multiple testing problem that causes unacceptable false negative results. A larger number of SNP-SNP pairs than the sample size, the so-called large p small n problem, precludes simultaneous analysis using multiple regression. A method that overcomes the above issues is thus needed. Results We adopt an up-to-date method for ultrahigh-dimensional variable selection termed sure independence screening (SIS) for appropriate handling of the numerous SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods and a following variable selection procedure in the SIS method suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using the cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case-Control Consortium) data. Conclusions Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
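
    A toy-scale sketch of the screening step named in this record: score each SNP-SNP product term by its marginal association with the phenotype and keep only the top-ranked terms for a later penalized regression. The genotype coding, phenotype model, and the n/log(n) cut-off are assumptions; the cited software performs this exhaustively on GPUs with its own dummy codings.

```python
# Sure-independence-screening-style ranking of pairwise SNP interactions.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, n_snps = 500, 50
G = rng.integers(0, 3, size=(n, n_snps)).astype(float)    # genotypes coded 0/1/2
y = (G[:, 2] * G[:, 17] > 2).astype(float) + rng.normal(0, 0.5, n)

pairs = list(combinations(range(n_snps), 2))
scores = np.empty(len(pairs))
yc = y - y.mean()
for k, (i, j) in enumerate(pairs):
    x = G[:, i] * G[:, j]                                  # interaction (product) term
    xc = x - x.mean()
    denom = np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
    scores[k] = np.abs(xc @ yc) / denom if denom > 0 else 0.0   # |marginal correlation|

d = int(n / np.log(n))                                     # a common SIS cut-off
top = np.argsort(scores)[::-1][:d]
print("top interaction pairs:", [pairs[k] for k in top[:5]])
```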

  17. Variable and subset selection in PLS regression

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2001-01-01

    The purpose of this paper is to present some useful methods for introductory analysis of variables and subsets in relation to PLS regression. We present here methods that are efficient in finding the appropriate variables or subset to use in the PLS regression. The general conclusion … is that variable selection is important for successful analysis of chemometric data. An important aspect of the results presented is that lack of variable selection can spoil the PLS regression, and that cross-validation measures using a test set can show larger variation, when we use different subsets of X, than

  18. Comparison of selected variables of gaming performance in football

    OpenAIRE

    Parachin, Jiří

    2014-01-01

    Title: Comparison of selected variables of gaming performance in football Objectives: Analysis of selected variables of gaming performance in the matches of professional Czech football teams in the Champions League and UEFA Europa League in 2013; during the observation, to register the set variables, then to evaluate the obtained results and compare them. Methods: The use of observational analysis and comparison of selected variables of gaming performance in competitive matches of professional football. ...

  19. Isoenzymatic variability in tropical maize populations under reciprocal recurrent selection

    Directory of Open Access Journals (Sweden)

    Pinto Luciana Rossini

    2003-01-01

    Full Text Available Maize (Zea mays L.) is one of the crops in which the genetic variability has been extensively studied at isoenzymatic loci. The genetic variability of the maize populations BR-105 and BR-106, and the synthetics IG-3 and IG-4, obtained after one cycle of a high-intensity reciprocal recurrent selection (RRS), was investigated at seven isoenzymatic loci. A total of twenty alleles were identified, and most of the private alleles were found in the BR-106 population. One cycle of reciprocal recurrent selection (RRS) caused reductions of 12% in the number of alleles in both populations. Changes in allele frequencies were also observed between populations and synthetics, mainly for the Est 2 locus. Populations presented similar values for the number of alleles per locus, percentage of polymorphic loci, and observed and expected heterozygosities. A decrease of the genetic variation values was observed for the synthetics as a consequence of genetic drift effects and reduction of the effective population sizes. The distribution of the genetic diversity within and between populations revealed that most of the diversity was maintained within them, i.e. BR-105 x BR-106 (G ST = 3.5%) and IG-3 x IG-4 (G ST = 4.0%). The genetic distances between populations and synthetics increased approximately 21%. An increase in the genetic divergence between the populations occurred without limiting new selection procedures.

  20. Relation between task complexity and variability of procedure progression during an emergency operation

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea

    2013-01-01

    Highlights: • The relation between task complexity and the variability of procedure progression was investigated. • The two quantitative measures, TACOM and VPP, were applied to this study. • The task complexity was positively related to the operator's procedural variability. • The VPP measure can be useful for explaining the operator's behaviors. - Abstract: In this study, the relation between task complexity and variability of procedure progression during an emergency operation was investigated by comparing two quantitative measures. To this end, the TACOM measure and the VPP measure were applied to evaluate the complexity of tasks and the variability of procedure progression, respectively. The TACOM scores and VPP scores were obtained for 60 tasks in the OPERA database, and a correlation analysis between the two measures and a multiple regression analysis between the sub-measures of the TACOM measure and the VPP measure were conducted. The results showed that the TACOM measure is positively associated with the VPP measure, and that the abstraction hierarchy complexity mainly affected the variability among the sub-measures of TACOM. From these findings, it was discussed that the task complexity is related to an operator's procedural variability and that the VPP measure can be useful for explaining the operator's behaviors.

  1. STEPWISE SELECTION OF VARIABLES IN DEA USING CONTRIBUTION LOADS

    Directory of Open Access Journals (Sweden)

    Fernando Fernandez-Palacin

    Full Text Available ABSTRACT In this paper, we propose a new methodology for variable selection in Data Envelopment Analysis (DEA). The methodology is based on an internal measure which evaluates the contribution of each variable in the calculation of the efficiency scores of DMUs. In order to apply the proposed method, an algorithm, known as “ADEA”, was developed and implemented in R. Step by step, the algorithm maximizes the load of the variable (input or output) which contributes least to the calculation of the efficiency scores, redistributing the weights of the variables without altering the efficiency scores of the DMUs. Once the weights have been redistributed, if the lowest contribution does not reach a previously given critical value, a variable with minimum contribution will be removed from the model and, as a result, the DEA will be solved again. The algorithm will stop when all variables reach a given contribution load to the DEA or when no more variables can be removed. In this way, and contrary to what is usual, the algorithm provides a clear stopping rule. In both cases, the efficiencies obtained from the DEA will be considered suitable and rightly interpreted in terms of the remaining variables, indicating their loads; moreover, the algorithm will provide a sequence of alternative nested models - potential solutions - that could be evaluated according to an external criterion. To illustrate the procedure, we have applied the proposed methodology to obtain a research ranking of Spanish public universities. In this case, at each step of the algorithm, the critical value is obtained based on a simulation study.

  2. Robust cluster analysis and variable selection

    CERN Document Server

    Ritter, Gunter

    2014-01-01

    Clustering remains a vibrant area of research in statistics. Although there are many books on this topic, there are relatively few that are well founded in the theoretical aspects. In Robust Cluster Analysis and Variable Selection, Gunter Ritter presents an overview of the theory and applications of probabilistic clustering and variable selection, synthesizing the key research results of the last 50 years. The author focuses on the robust clustering methods he found to be the most useful on simulated data and real-time applications. The book provides clear guidance for the varying needs of bot

  3. A modification of the successive projections algorithm for spectral variable selection in the presence of unknown interferents.

    Science.gov (United States)

    Soares, Sófacles Figueredo Carreiro; Galvão, Roberto Kawakami Harrop; Araújo, Mário César Ugulino; da Silva, Edvan Cirino; Pereira, Claudete Fernandes; de Andrade, Stéfani Iury Evangelista; Leite, Flaviano Carvalho

    2011-03-09

    This work proposes a modification to the successive projections algorithm (SPA) aimed at selecting spectral variables for multiple linear regression (MLR) in the presence of unknown interferents not included in the calibration data set. The modified algorithm favours the selection of variables in which the effect of the interferent is less pronounced. The proposed procedure can be regarded as an adaptive modelling technique, because the spectral features of the samples to be analyzed are considered in the variable selection process. The advantages of this new approach are demonstrated in two analytical problems, namely (1) ultraviolet-visible spectrometric determination of tartrazine, allura red and sunset yellow in aqueous solutions under the interference of erythrosine, and (2) near-infrared spectrometric determination of ethanol in gasoline under the interference of toluene. In these case studies, the performance of conventional MLR-SPA models is substantially degraded by the presence of the interferent. This problem is circumvented by applying the proposed Adaptive MLR-SPA approach, which results in prediction errors smaller than those obtained by three other multivariate calibration techniques, namely stepwise regression, full-spectrum partial-least-squares (PLS) and PLS with variables selected by a genetic algorithm. An inspection of the variable selection results reveals that the Adaptive approach successfully avoids spectral regions in which the interference is more intense. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Bayesian Group Bridge for Bi-level Variable Selection.

    Science.gov (United States)

    Mallick, Himel; Yi, Nengjun

    2017-06-01

    A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.

  5. Selection procedures in sports: Improving predictions of athletes’ future performance

    NARCIS (Netherlands)

    den Hartigh, Jan Rudolf; Niessen, Anna; Frencken, Wouter; Meijer, Rob R.

    The selection of athletes has been a central topic in sports sciences for decades. Yet, little consideration has been given to the theoretical underpinnings and predictive validity of the procedures. In this paper, we evaluate current selection procedures in sports given what we know from the

  6. THE TIME DOMAIN SPECTROSCOPIC SURVEY: VARIABLE SELECTION AND ANTICIPATED RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, Eric; Green, Paul J. [Harvard Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 (United States); Anderson, Scott F.; Ruan, John J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Eracleous, Michael; Brandt, William Nielsen [Department of Astronomy and Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802 (United States); Kelly, Brandon [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Badenes, Carlos [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 O’Hara St, Pittsburgh, PA 15260 (United States); Bañados, Eduardo [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Blanton, Michael R. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Bershady, Matthew A. [Department of Astronomy, University of Wisconsin, 475 N. Charter St., Madison, WI 53706 (United States); Borissova, Jura [Instituto de Física y Astronomía, Universidad de Valparaíso, Av. Gran Bretaña 1111, Playa Ancha, Casilla 5030, and Millennium Institute of Astrophysics (MAS), Santiago (Chile); Burgett, William S. [GMTO Corp, Suite 300, 251 S. Lake Ave, Pasadena, CA 91101 (United States); Chambers, Kenneth, E-mail: emorganson@cfa.harvard.edu [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); and others

    2015-06-20

    We present the selection algorithm and anticipated results for the Time Domain Spectroscopic Survey (TDSS). TDSS is a Sloan Digital Sky Survey (SDSS)-IV Extended Baryon Oscillation Spectroscopic Survey (eBOSS) subproject that will provide initial identification spectra of approximately 220,000 luminosity-variable objects (variable stars and active galactic nuclei) across 7500 deg² selected from a combination of SDSS and multi-epoch Pan-STARRS1 photometry. TDSS will be the largest spectroscopic survey to explicitly target variable objects, avoiding pre-selection on the basis of colors or detailed modeling of specific variability characteristics. Kernel Density Estimate analysis of our target population performed on SDSS Stripe 82 data suggests our target sample will be 95% pure (meaning 95% of objects we select have genuine luminosity variability of a few magnitudes or more). Our final spectroscopic sample will contain roughly 135,000 quasars and 85,000 stellar variables, approximately 4000 of which will be RR Lyrae stars which may be used as outer Milky Way probes. The variability-selected quasar population has a smoother redshift distribution than a color-selected sample, and variability measurements similar to those we develop here may be used to make more uniform quasar samples in large surveys. The stellar variable targets are distributed fairly uniformly across color space, indicating that TDSS will obtain spectra for a wide variety of stellar variables including pulsating variables, stars with significant chromospheric activity, cataclysmic variables, and eclipsing binaries. TDSS will serve as a pathfinder mission to identify and characterize the multitude of variable objects that will be detected photometrically in even larger variability surveys such as the Large Synoptic Survey Telescope.

  7. Mathematical actions as procedural resources: An example from the separation of variables

    Directory of Open Access Journals (Sweden)

    Michael C. Wittmann

    2015-09-01

    Full Text Available [This paper is part of the Focused Collection on Upper Division Physics Courses.] Students learning to separate variables in order to solve a differential equation have multiple ways of correctly doing so. The procedures involved in separation include division or multiplication after properly grouping terms in an equation, moving terms (again, at times grouped) from one location on the page to another, or simply carrying out separation as a single act without showing any steps. We describe student use of these procedures in terms of Hammer’s resources, showing that each of the previously listed procedures is its own “piece” of a larger problem solving activity. Our data come from group examinations of students separating variables while solving an air resistance problem in an intermediate mechanics class. Through detailed analysis of four groups of students, we motivate that the mathematical procedures are resources and show the issues that students must resolve in order to successfully separate variables. We use this analysis to suggest ways in which new resources (such as separation) come to be.
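
    As a concrete instance of the separation procedures discussed in this record, the worked example below uses a linear-drag air-resistance equation (assumed here purely for illustration; the class problem may use a different drag law): group the velocity terms, divide, move dt to the other side, integrate, and solve for v(t).

```latex
m\frac{dv}{dt} = mg - bv
\;\Longrightarrow\;
\frac{dv}{g - (b/m)v} = dt
\;\Longrightarrow\;
\int \frac{dv}{g - (b/m)v} = \int dt
\;\Longrightarrow\;
-\frac{m}{b}\ln\left|g - \frac{b}{m}v\right| = t + C,
\qquad
v(t) = \frac{mg}{b}\left(1 - e^{-bt/m}\right)\ \text{for } v(0)=0.
```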

  8. The alternative site selection procedure as covered in the report by the Repository Site Selection Procedures Working Group

    International Nuclear Information System (INIS)

    Brenner, M.

    2005-01-01

    The 2002 Act on the Regulated Termination of the Use of Nuclear Power for Industrial Electricity Generation declared Germany's opting out of the peaceful uses of nuclear power. The problem of the permanent management of radioactive residues is becoming more and more important also in the light of that political decision. At the present time, there are no repositories offering the waste management capacities required. Such facilities need to be created. At the present stage, eligible repository sites are the Konrad mine, a former iron ore mine near Salzgitter, and the Gorleben salt dome. While the fate of the Konrad mine as a repository for waste generating negligible amounts of heat continues to be uncertain, despite a plan approval decision of June 2002, the Gorleben repository is still in the planning phase, at present in a dormant state, so to speak. The federal government expressed doubt about the suitability of the Gorleben site. Against this backdrop, the Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety in February 1999 established AkEnd, the Working Group on Repository Site Selection Procedures. The Group was charged with developing, based on sound scientific criteria, a transparent site selection procedure in order to facilitate the search for repository sites. The Working Group presented its final report in December 2002 after approximately four years of work. The Group's proposals about alternative site selection procedures are explained in detail and, above all, reviewed critically. (orig.)

  9. Procedures for Selecting Items for Computerized Adaptive Tests.

    Science.gov (United States)

    Kingsbury, G. Gage; Zara, Anthony R.

    1989-01-01

    Several classical approaches and alternative approaches to item selection for computerized adaptive testing (CAT) are reviewed and compared. The study also describes procedures for constrained CAT that may be added to classical item selection approaches to allow them to be used for applied testing. (TJH)

  10. Selecting minimum dataset soil variables using PLSR as a regressive multivariate method

    Science.gov (United States)

    Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.

    2017-04-01

    Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management, especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step for the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection in extracting relevant information from multivariate datasets. Principal component analysis (PCA) has been mainly used; however, supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean centered and variance scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP
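
    The record is cut off at the variable importance in projection (VIP) criterion. The sketch below shows how VIP scores are commonly computed from a fitted PLS model and how a "VIP > 1" rule of thumb can shortlist candidate minimum-dataset variables; it uses Python/scikit-learn on synthetic data in place of the SAS PLS procedure, and the threshold of 1 is a conventional default rather than necessarily the authors' choice.

```python
# Sketch of variable-importance-in-projection (VIP) scores from a PLS regression,
# using scikit-learn and synthetic data in place of the SAS PLS procedure.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 15))                                   # e.g. autoscaled soil variables
y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=60)    # e.g. wheat yield

pls = PLSRegression(n_components=3, scale=True).fit(X, y)
T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_

ssy = np.sum(T ** 2, axis=0) * Q.ravel() ** 2          # y-variance explained per component
w_norm = W / np.linalg.norm(W, axis=0)                 # normalized weight vectors
# VIP_j = sqrt( p * sum_a ssy_a * (w_ja / ||w_a||)^2 / sum_a ssy_a )
vip = np.sqrt(X.shape[1] * (w_norm ** 2 @ ssy) / ssy.sum())

print("VIP scores:", np.round(vip, 2))
print("candidate minimum data set:", np.flatnonzero(vip > 1.0))   # "VIP > 1" rule of thumb
```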

  11. Penalized variable selection in competing risks regression.

    Science.gov (United States)

    Fu, Zhixuan; Parikh, Chirag R; Zhou, Bingqing

    2017-07-01

    Penalized variable selection methods have been extensively studied for standard time-to-event data. Such methods cannot be directly applied when subjects are at risk of multiple mutually exclusive events, known as competing risks. The proportional subdistribution hazard (PSH) model proposed by Fine and Gray (J Am Stat Assoc 94:496-509, 1999) has become a popular semi-parametric model for time-to-event data with competing risks. It allows for direct assessment of covariate effects on the cumulative incidence function. In this paper, we propose a general penalized variable selection strategy that simultaneously handles variable selection and parameter estimation in the PSH model. We rigorously establish the asymptotic properties of the proposed penalized estimators and modify the coordinate descent algorithm for implementation. Simulation studies are conducted to demonstrate the good performance of the proposed method. Data from deceased donor kidney transplants from the United Network of Organ Sharing illustrate the utility of the proposed method.

  12. Simple multicomponent batch distillation procedure with a variable reflux policy

    Directory of Open Access Journals (Sweden)

    A. N. García

    2014-06-01

    Full Text Available This paper describes a shortcut procedure for batch distillation simulation with a variable reflux policy. The procedure starts from a shortcut method developed by Sundaram and Evans in 1993 and uses an iterative cycle to calculate the reflux ratio at each moment. The functional relationship between the concentrations at the bottom and the dome is evaluated using the Fenske equation and is complemented with the equations proposed by Underwood and Gilliland. The results of this procedure are consistent with those obtained using a fast method widely validated in the relevant literature.
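
    For reference, the shortcut relation usually meant by the Fenske equation links distillate (dome) and bottoms (still) compositions of a component i and a reference component r through the relative volatility α and the number of equilibrium stages N; the form below is the standard textbook statement, quoted here as background rather than from the paper itself.

```latex
\frac{x_{D,i}}{x_{D,r}} \;=\; \alpha_{i,r}^{\,N}\,\frac{x_{B,i}}{x_{B,r}}
\qquad\Longleftrightarrow\qquad
N \;=\; \frac{\ln\!\left[\dfrac{x_{D,i}\,x_{B,r}}{x_{D,r}\,x_{B,i}}\right]}{\ln \alpha_{i,r}}
```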

  13. A numeric comparison of variable selection algorithms for supervised learning

    International Nuclear Information System (INIS)

    Palombo, G.; Narsky, I.

    2009-01-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about data is therefore an important task in analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets such as, for instance, imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.
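
    The wrapper idea behind the best-performing method above can be illustrated with a plain sequential forward selection loop: repeatedly add the variable that most improves cross-validated accuracy and stop when no candidate helps. The Python sketch below is not SPR's 'Add N Remove R' algorithm (it omits the removal back-steps), and the classifier, dataset, and stopping tolerance are illustrative stand-ins.

```python
# Plain sequential forward selection (wrapper) sketch; SPR's "Add N Remove R"
# additionally removes R variables after each block of N additions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
clf = GradientBoostingClassifier(random_state=0)

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:
    # score every candidate variable when added to the current subset
    trial = {j: cross_val_score(clf, X[:, selected + [j]], y, cv=3).mean() for j in remaining}
    j_best, s_best = max(trial.items(), key=lambda kv: kv[1])
    if s_best <= best_score + 1e-4:            # stop once accuracy no longer improves
        break
    selected.append(j_best)
    remaining.remove(j_best)
    best_score = s_best

print("selected variables:", selected, "cv accuracy: %.3f" % best_score)
```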

  14. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets, while model performance was comparable for both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  15. Exhaustive Search for Sparse Variable Selection in Linear Regression

    Science.gov (United States)

    Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato

    2018-04-01

    We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables such as relaxation and sampling. For large problems where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
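
    A brute-force version of the ES-K idea is straightforward to write down: score every K-subset of variables by cross-validated error and keep the full table of results, which plays the role of the density of states over subsets. The Python sketch below does this for a tiny synthetic problem; the AES-K replica-exchange machinery used for large numbers of variables is not reproduced.

```python
# Brute-force K-sparse exhaustive search (the ES-K idea) on a small synthetic problem;
# the approximate AES-K / replica-exchange machinery is not reproduced here.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, p, K = 80, 12, 3
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[1, 4, 9]] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(scale=0.5, size=n)

def cv_error(cols, n_folds=5):
    """Cross-validated squared error of least squares restricted to `cols`."""
    idx, err = np.arange(n), 0.0
    for f in range(n_folds):
        test = idx[f::n_folds]
        train = np.setdiff1d(idx, test)
        coef, *_ = np.linalg.lstsq(X[np.ix_(train, cols)], y[train], rcond=None)
        err += np.sum((y[test] - X[np.ix_(test, cols)] @ coef) ** 2)
    return err / n

results = {cols: cv_error(list(cols)) for cols in combinations(range(p), K)}
best = min(results, key=results.get)
print("best K-sparse subset:", best, "CV error: %.3f" % results[best])
# the full dictionary `results` plays the role of the "density of states" over subsets
```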

  16. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    Due to the high number of variables in data sets (both raw data and after peak picking) the selection of important variables in an explorative analysis is difficult, especially when different data sets of metabolomics data need to be related. Variable selection (or removal of irrelevant...... different strategies for variable selection on PLSR method were considered and compared with respect to selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with Jack-knifing [2] was applied to data in order to achieve variable selection prior...... integrated approach. The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions with special attention to the metabolism of lignans using LC-MS based metabolomic approach. References 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when......

  17. Effects of carprofen or meloxicam on selected haemostatic variables in miniature pigs after orthopaedic surgery

    Directory of Open Access Journals (Sweden)

    Petr Raušer

    2011-01-01

    Full Text Available The aim of the study was to detect and compare the haemostatic variables and bleeding after 7-day administration of carprofen or meloxicam in clinically healthy miniature pigs. Twenty-one clinically healthy Göttingen miniature pigs were divided into 3 groups. Selected haemostatic variables such as platelet count, prothrombin time, activated partial thromboplastin time, thrombin time, fibrinogen, serum biochemical variables such as total protein, bilirubin, urea, creatinine, alkaline phosphatase, alanine aminotransferase and gamma-glutamyltransferase, and haemoglobin, haematocrit, red blood cells, white blood cells and buccal mucosal bleeding time were assessed before and 7 days after daily intramuscular administration of saline (1.5 ml per animal, control group), carprofen (2 mg·kg-1) or meloxicam (0.1 mg·kg-1). In pigs receiving carprofen or meloxicam, the thrombin time was significantly increased (p < 0.05) compared to the control group. Significant differences were not detected in other haemostatic or biochemical variables or in bleeding time compared to other groups or to the pretreatment values. Intramuscular administration of carprofen or meloxicam in healthy miniature pigs for 7 days causes sporadic, but not clinically important changes of selected haemostatic variables. Therefore, we can recommend them for perioperative use, e.g. for their analgesic effects, in orthopaedic or other surgical procedures without increased bleeding.

  18. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J. [Astronomy Department, University of California, Berkeley, CA 94720-7450 (United States); Brink, Henrik [Dark Cosmology Centre, Juliane Maries Vej 30, 2100 Copenhagen O (Denmark); Long, James P.; Rice, John, E-mail: jwrichar@stat.berkeley.edu [Statistics Department, University of California, Berkeley, CA 94720-7450 (United States)

    2012-01-10

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL-where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up-is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  19. ACTIVE LEARNING TO OVERCOME SAMPLE SELECTION BIAS: APPLICATION TO PHOTOMETRIC VARIABLE STAR CLASSIFICATION

    International Nuclear Information System (INIS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Berian James, J.; Brink, Henrik; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  20. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    Science.gov (United States)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.

  1. Android application for determining surgical variables in brain-tumor resection procedures.

    Science.gov (United States)

    Vijayan, Rohan C; Thompson, Reid C; Chambless, Lola B; Morone, Peter J; He, Le; Clements, Logan W; Griesenauer, Rebekah H; Kang, Hakmook; Miga, Michael I

    2017-01-01

    The fidelity of image-guided neurosurgical procedures is often compromised due to the mechanical deformations that occur during surgery. In recent work, a framework was developed to predict the extent of this brain shift in brain-tumor resection procedures. The approach uses preoperatively determined surgical variables to predict brain shift and then subsequently corrects the patient's preoperative image volume to more closely match the intraoperative state of the patient's brain. However, a clinical workflow difficulty with the execution of this framework is the preoperative acquisition of surgical variables. To simplify and expedite this process, an Android, Java-based application was developed for tablets to provide neurosurgeons with the ability to manipulate three-dimensional models of the patient's neuroanatomy and determine an expected head orientation, craniotomy size and location, and trajectory to be taken into the tumor. These variables can then be exported for use as inputs to the biomechanical model associated with the correction framework. A multisurgeon, multicase mock trial was conducted to compare the accuracy of the virtual plan to that of a mock physical surgery. It was concluded that the Android application was an accurate, efficient, and timely method for planning surgical variables.

  2. Machine learning techniques to select variable stars

    Directory of Open Access Journals (Sweden)

    García-Varela Alejandro

    2017-01-01

    Full Text Available In order to perform a supervised classification of variable stars, we propose and evaluate a set of six features extracted from the magnitude density of the light curves. They are used to train automatic classification systems using state-of-the-art classifiers implemented in the R statistical computing environment. We find that random forests is the most successful method to select variables.
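
    A minimal sketch of the supervised step only: train a random-forest classifier on a per-star feature table and report cross-validated accuracy and feature importances. The six magnitude-density features are replaced by synthetic placeholders here, so only the classification plumbing is illustrated, not the feature extraction.

```python
# Random-forest classification sketch; the feature table is a synthetic placeholder
# standing in for the six magnitude-density features described in this record.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 500
features = rng.normal(size=(n, 6))                              # stand-in light-curve features
labels = (features[:, 0] + features[:, 3] > 0).astype(int)      # 1 = variable, 0 = non-variable

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cv accuracy: %.3f" % cross_val_score(rf, features, labels, cv=5).mean())
print("feature importances:", np.round(rf.fit(features, labels).feature_importances_, 3))
```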

  3. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Full Text Available Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of variables which contribute to group classification is a crucial step, especially in metabolomics studies which are focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (RH and PH study). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) without and with multiple testing correction as well as the least absolute shrinkage and selection operator (LASSO) were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant in terms of Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria in terms of Pareto and UV scaling, respectively. Additionally, the concept and fundaments of the least absolute shrinkage and selection operator (LASSO) with a bootstrap procedure evaluating the reproducibility of results were demonstrated. In the RH and PH study, the LASSO selected 14 and 4 variables with reproducibility between 99.3% and 100%. However, apart from the popularity of PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings
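
    The LASSO-with-bootstrap step described above can be sketched as follows: choose the penalty once, refit the LASSO on many bootstrap resamples, and report for each variable the fraction of resamples in which it is selected. The Python example below uses synthetic data and an illustrative 99% reproducibility cut-off; it mirrors the idea rather than the study's exact protocol.

```python
# Bootstrap check of LASSO selection reproducibility (illustrative data and threshold).
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

rng = np.random.default_rng(5)
n, p = 120, 200                                   # many metabolomic variables, few samples
X = rng.normal(size=(n, p))
y = X[:, :4] @ np.array([1.5, -1.0, 0.8, 0.6]) + rng.normal(scale=0.5, size=n)

alpha = LassoCV(cv=5, random_state=0).fit(X, y).alpha_       # penalty chosen once by CV

n_boot = 200
counts = np.zeros(p)
for _ in range(n_boot):
    idx = rng.integers(0, n, size=n)                          # bootstrap resample
    fit = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx])
    counts[fit.coef_ != 0] += 1

reproducibility = counts / n_boot
stable = np.flatnonzero(reproducibility >= 0.99)              # selected in >= 99% of resamples
print("stable variables:", stable)
print("their reproducibility:", np.round(reproducibility[stable], 3))
```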

  4. A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism

    Science.gov (United States)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-03-01

    In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in permutation and diffusion stages respectively. Chaotic state variables produced with high computation complexity are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable against known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out and the results prove the superior security and high efficiency of the scheme.
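
    To make the permutation-diffusion architecture concrete, the toy Python sketch below drives both stages from a single logistic-map orbit: the sort order of one segment permutes the pixels, and a second segment is quantized into a keystream for chained XOR diffusion. This only illustrates the fixed-parameter architecture that the record sets out to improve; it is not the paper's dynamic state-variable selection scheme and is not secure.

```python
# Toy permutation-diffusion cipher driven by a logistic map; for illustration only,
# NOT the paper's dynamic state-variable selection scheme and NOT secure.
import numpy as np

def logistic_stream(x0, mu, length, burn_in=200):
    """Iterate x <- mu*x*(1-x) and return `length` post-burn-in values."""
    x, out = x0, np.empty(length)
    for i in range(length + burn_in):
        x = mu * x * (1.0 - x)
        if i >= burn_in:
            out[i - burn_in] = x
    return out

def encrypt(img, key=(0.3456, 3.99)):
    flat = img.astype(np.uint8).ravel()
    stream = logistic_stream(key[0], key[1], 2 * flat.size)
    perm = np.argsort(stream[:flat.size])                 # permutation stage
    keystream = (stream[flat.size:] * 256).astype(np.uint8)
    shuffled = flat[perm]
    cipher = np.empty_like(shuffled)
    prev = np.uint8(0)
    for i, pix in enumerate(shuffled):                    # diffusion stage: chain pixels
        cipher[i] = pix ^ keystream[i] ^ prev
        prev = cipher[i]
    return cipher.reshape(img.shape), perm

img = np.random.default_rng(6).integers(0, 256, size=(8, 8), dtype=np.uint8)
cipher, perm = encrypt(img)
print(cipher)
```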

  5. A Variable-Selection Heuristic for K-Means Clustering.

    Science.gov (United States)

    Brusco, Michael J.; Cradit, J. Dennis

    2001-01-01

    Presents a variable selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. Subjected the heuristic to Monte Carlo testing across more than 2,200 datasets. Results indicate that the heuristic is extremely effective at eliminating masking variables. (SLD)

  6. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2011-03-01

    We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

  7. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    Summary We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
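
    One way to read the permutation-selection idea is: permute the response many times, record for each permutation the smallest penalty at which the LASSO selects nothing, and use a quantile of those values as the working penalty. The Python sketch below implements that reading under scikit-learn's parametrization; the median quantile and the synthetic data are illustrative, not necessarily the authors' exact recipe.

```python
# Sketch of permutation-based choice of the LASSO penalty: for each permuted (null)
# response, find the smallest penalty that zeroes every coefficient, then summarize.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n, p = 150, 100
X = StandardScaler().fit_transform(rng.normal(size=(n, p)))
y = X[:, :3] @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)
y = y - y.mean()

# In scikit-learn's parametrization the LASSO selects nothing once alpha >= max_j |x_j' y| / n.
def alpha_null(y_vec):
    return np.max(np.abs(X.T @ y_vec)) / n

n_perm = 200
alphas = np.array([alpha_null(rng.permutation(y)) for _ in range(n_perm)])
alpha_perm = np.quantile(alphas, 0.5)                 # e.g. the median over permutations

fit = Lasso(alpha=alpha_perm, max_iter=10000).fit(X, y)
print("chosen alpha: %.4f" % alpha_perm, "selected variables:", np.flatnonzero(fit.coef_))
```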

  8. ENSEMBLE VARIABILITY OF NEAR-INFRARED-SELECTED ACTIVE GALACTIC NUCLEI

    International Nuclear Information System (INIS)

    Kouzuma, S.; Yamaoka, H.

    2012-01-01

    We present the properties of the ensemble variability V for nearly 5000 near-infrared active galactic nuclei (AGNs) selected from the catalog of Quasars and Active Galactic Nuclei (13th Edition) and the SDSS-DR7 quasar catalog. From three near-infrared point source catalogs, namely, Two Micron All Sky Survey (2MASS), Deep Near Infrared Survey (DENIS), and UKIDSS/LAS catalogs, we extract 2MASS-DENIS and 2MASS-UKIDSS counterparts for cataloged AGNs by cross-identification between catalogs. We further select variable AGNs based on an optimal criterion for selecting the variable sources. The sample objects are divided into subsets according to whether near-infrared light originates by optical emission or by near-infrared emission in the rest frame; and we examine the correlations of the ensemble variability with the rest-frame wavelength, redshift, luminosity, and rest-frame time lag. In addition, we also examine the correlations of variability amplitude with optical variability, radio intensity, and radio-to-optical flux ratio. The rest-frame optical variability of our samples shows negative correlations with luminosity and positive correlations with rest-frame time lag (i.e., the structure function, SF), and this result is consistent with previous analyses. However, no well-known negative correlation exists between the rest-frame wavelength and optical variability. This inconsistency might be due to a biased sampling of high-redshift AGNs. Near-infrared variability in the rest frame is anticorrelated with the rest-frame wavelength, which is consistent with previous suggestions. However, correlations of near-infrared variability with luminosity and rest-frame time lag are the opposite of these correlations of the optical variability; that is, the near-infrared variability is positively correlated with luminosity but negatively correlated with the rest-frame time lag. Because these trends are qualitatively consistent with the properties of radio-loud quasars reported

  9. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  10. Site selection procedure for high level radioactive waste disposal in Bulgaria

    International Nuclear Information System (INIS)

    Evstatiev, D.; Vachev, B.

    1993-01-01

    A combined site selection approach is implemented. Bulgaria's territory has been classified into three categories, presented on a 1:500000 scale map. The number of suitable sites has been reduced to 20 using the method of successive screening. The formulated site selection problem is a typical discrete multi-criteria decision making problem under uncertainty. A 5-level procedure using Expert Choice Rating and relative models is created. It is part of a common procedure for the evaluation and choice of variants for high level radwaste disposal construction. On this basis, 7-8 more preferable sites are identified. New knowledge and information are gained about the relative importance of the criteria and their subsets, about the level of criteria uncertainty, and about reliability. This is very useful for planning and managing the next, final stages of the site selection procedure. 7 figs., 8 refs., 4 suppls. (author)

  11. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha

    2012-12-01

    The reduced-rank regression is an effective method in predicting multiple response variables from the same set of predictor variables. It reduces the number of model parameters and takes advantage of interrelations between the response variables and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group and show that this penalty satisfies certain desirable invariance properties. We develop two numerical algorithms to solve the penalized regression problem and establish the asymptotic consistency of the proposed method. In particular, the manifold structure of the reduced-rank regression coefficient matrix is considered and studied in our theoretical analysis. In our simulation study and real data analysis, the new method is compared with several existing variable selection methods for multivariate regression and exhibits competitive performance in prediction and variable selection. © 2012 American Statistical Association.

  12. Characterizing the Optical Variability of Bright Blazars: Variability-based Selection of Fermi Active Galactic Nuclei

    Science.gov (United States)

    Ruan, John J.; Anderson, Scott F.; MacLeod, Chelsea L.; Becker, Andrew C.; Burnett, T. H.; Davenport, James R. A.; Ivezić, Željko; Kochanek, Christopher S.; Plotkin, Richard M.; Sesar, Branimir; Stuart, J. Scott

    2012-11-01

    We investigate the use of optical photometric variability to select and identify blazars in large-scale time-domain surveys, in part to aid in the identification of blazar counterparts to the ~30% of γ-ray sources in the Fermi 2FGL catalog still lacking reliable associations. Using data from the optical LINEAR asteroid survey, we characterize the optical variability of blazars by fitting a damped random walk model to individual light curves with two main model parameters, the characteristic timescales of variability τ, and driving amplitudes on short timescales σ̂. Imposing cuts on minimum τ and σ̂ allows for blazar selection with high efficiency E and completeness C. To test the efficacy of this approach, we apply this method to optically variable LINEAR objects that fall within the several-arcminute error ellipses of γ-ray sources in the Fermi 2FGL catalog. Despite the extreme stellar contamination at the shallow depth of the LINEAR survey, we are able to recover previously associated optical counterparts to Fermi active galactic nuclei with E ≥ 88% and C = 88% in Fermi 95% confidence error ellipses having semimajor axis r < 8'. We find that the suggested radio counterpart to Fermi source 2FGL J1649.6+5238 has optical variability consistent with other γ-ray blazars and is likely to be the γ-ray source. Our results suggest that the variability of the non-thermal jet emission in blazars is stochastic in nature, with unique variability properties due to the effects of relativistic beaming. After correcting for beaming, we estimate that the characteristic timescale of blazar variability is ~3 years in the rest frame of the jet, in contrast with the ~320 day disk flux timescale observed in quasars. The variability-based selection method presented will be useful for blazar identification in time-domain optical surveys and is also a probe of jet physics.

  13. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

    Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors were modelled by CoMFA. First of all, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so for each data set 9 CoMFA models were built. The obtained results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS runs need only a few seconds. Also, applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS preserves CoMFA contour map information for both fields. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  14. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant....

  15. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Science.gov (United States)

    2013-04-03

    NUCLEAR REGULATORY COMMISSION [NRC-2013-0062] Reporting Procedure for Mathematical Models Selected... Regulatory Guide (RG) 4.4, ``Reporting Procedure for Mathematical Models Selected to Predict Heated Effluent... a procedure acceptable to the NRC staff for providing summary details of mathematical modeling methods used in...

  16. CHARACTERIZING THE OPTICAL VARIABILITY OF BRIGHT BLAZARS: VARIABILITY-BASED SELECTION OF FERMI ACTIVE GALACTIC NUCLEI

    International Nuclear Information System (INIS)

    Ruan, John J.; Anderson, Scott F.; MacLeod, Chelsea L.; Becker, Andrew C.; Davenport, James R. A.; Ivezić, Željko; Burnett, T. H.; Kochanek, Christopher S.; Plotkin, Richard M.; Sesar, Branimir; Stuart, J. Scott

    2012-01-01

    We investigate the use of optical photometric variability to select and identify blazars in large-scale time-domain surveys, in part to aid in the identification of blazar counterparts to the ∼30% of γ-ray sources in the Fermi 2FGL catalog still lacking reliable associations. Using data from the optical LINEAR asteroid survey, we characterize the optical variability of blazars by fitting a damped random walk model to individual light curves with two main model parameters, the characteristic timescales of variability τ, and driving amplitudes on short timescales σ̂. Imposing cuts on minimum τ and σ̂ allows for blazar selection with high efficiency E and completeness C. To test the efficacy of this approach, we apply this method to optically variable LINEAR objects that fall within the several-arcminute error ellipses of γ-ray sources in the Fermi 2FGL catalog. Despite the extreme stellar contamination at the shallow depth of the LINEAR survey, we are able to recover previously associated optical counterparts to Fermi active galactic nuclei with E ≥ 88% and C = 88% in Fermi 95% confidence error ellipses having semimajor axis r < 8'. We find that the suggested radio counterpart to Fermi source 2FGL J1649.6+5238 has optical variability consistent with other γ-ray blazars and is likely to be the γ-ray source. Our results suggest that the variability of the non-thermal jet emission in blazars is stochastic in nature, with unique variability properties due to the effects of relativistic beaming. After correcting for beaming, we estimate that the characteristic timescale of blazar variability is ∼3 years in the rest frame of the jet, in contrast with the ∼320 day disk flux timescale observed in quasars. The variability-based selection method presented will be useful for blazar identification in time-domain optical surveys and is also a probe of jet physics.

  17. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  18. Meta-Statistics for Variable Selection: The R Package BioMark

    Directory of Open Access Journals (Sweden)

    Ron Wehrens

    2012-11-01

Full Text Available Biomarker identification is an ever more important topic in the life sciences. With the advent of measurement methodologies based on microarrays and mass spectrometry, thousands of variables are routinely being measured on complex biological samples. Often, the question is what makes two groups of samples different. Classical hypothesis testing suffers from the multiple testing problem; however, correcting for this often leads to a lack of power. In addition, choosing α cutoff levels remains somewhat arbitrary. Also in a regression context, a model depending on few but relevant variables will be more accurate and precise, and easier to interpret biologically. We propose an R package, BioMark, implementing two meta-statistics for variable selection. The first, higher criticism, presents a data-dependent selection threshold for significance, instead of a cookbook value of α = 0.05. It is applicable in all cases where two groups are compared. The second, stability selection, is more general, and can also be applied in a regression context. This approach uses repeated subsampling of the data in order to assess the variability of the model coefficients and selects those that remain consistently important. It is shown using experimental spike-in data from the field of metabolomics that both approaches work well with real data. BioMark also contains functionality for simulating data with specific characteristics for algorithm development and testing.
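
    Of the two meta-statistics, stability selection is easy to illustrate outside of R: refit a sparse model on repeated subsamples and keep the variables that are selected consistently. The sketch below is a generic Python rendition of that idea, not the BioMark implementation; the penalty, subsample fraction, and selection threshold are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.05, n_subsamples=100,
                        subsample_frac=0.5, threshold=0.6, seed=0):
    """Fit a Lasso on repeated subsamples and keep the variables that are
    selected in at least `threshold` of the fits."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(subsample_frac * n), replace=False)
        model = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx])
        counts += (model.coef_ != 0)
    freq = counts / n_subsamples
    return np.flatnonzero(freq >= threshold), freq
```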

  19. An improved and explicit surrogate variable analysis procedure by coefficient adjustment.

    Science.gov (United States)

    Lee, Seunggeun; Sun, Wei; Wright, Fred A; Zou, Fei

    2017-06-01

    Unobserved environmental, demographic, and technical factors can negatively affect the estimation and testing of the effects of primary variables. Surrogate variable analysis, proposed to tackle this problem, has been widely used in genomic studies. To estimate hidden factors that are correlated with the primary variables, surrogate variable analysis performs principal component analysis either on a subset of features or on all features, but weighting each differently. However, existing approaches may fail to identify hidden factors that are strongly correlated with the primary variables, and the extra step of feature selection and weight calculation makes the theoretical investigation of surrogate variable analysis challenging. In this paper, we propose an improved surrogate variable analysis using all measured features that has a natural connection with restricted least squares, which allows us to study its theoretical properties. Simulation studies and real data analysis show that the method is competitive to state-of-the-art methods.
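
    A rough illustration of the residual-based idea behind surrogate variable analysis: regress each feature on the primary variables and take the leading singular vectors of the residual matrix as estimated hidden factors, which can then be added to downstream models as covariates. The paper's coefficient-adjusted estimator differs in detail; the function below is only a hypothetical sketch.

```python
import numpy as np

def surrogate_variables(Y, X, n_sv=2):
    """Y: features x samples data matrix; X: samples x covariates design matrix
    of primary variables. Returns samples x n_sv estimated hidden factors."""
    # Residualize every feature on the primary design
    beta, *_ = np.linalg.lstsq(X, Y.T, rcond=None)
    residuals = Y.T - X @ beta                      # samples x features
    # Leading left-singular vectors of the residuals serve as surrogate variables
    U, s, _ = np.linalg.svd(residuals, full_matrices=False)
    return U[:, :n_sv] * s[:n_sv]
```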

  20. The Econometric Procedures of Specific Transaction Identification

    Directory of Open Access Journals (Sweden)

    Doszyń Mariusz

    2017-06-01

Full Text Available The paper presents the econometric procedures for identifying specific transactions, in which atypical conditions or attributes may occur. These procedures are based on studentized and predictive residuals of the accordingly specified econometric models. The dependent variable is the unit transaction price, and the explanatory variables are both the real properties’ attributes and accordingly defined artificial binary variables. The utility of the proposed method has been verified by means of a real market database. The proposed procedures can be helpful during the property valuation process, making it possible to reject real properties that are specific (both from the point of view of the transaction conditions and the properties’ attributes) and, consequently, to select an appropriate set of similar attributes that are essential for the valuation process.
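
    A minimal version of the residual-based screening described above might look as follows; the use of internally studentized residuals (rather than predictive ones), the OLS model form, and the cutoff of 2 are assumptions made for illustration.

```python
import numpy as np

def flag_specific_transactions(X, price, cutoff=2.0):
    """Flag atypical transactions via studentized residuals of an OLS model of
    unit price on property attributes; `cutoff` is an illustrative threshold."""
    X1 = np.column_stack([np.ones(len(price)), X])
    beta, *_ = np.linalg.lstsq(X1, price, rcond=None)
    resid = price - X1 @ beta
    n, k = X1.shape
    hat = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T     # hat (projection) matrix
    leverage = np.diag(hat)
    s2 = resid @ resid / (n - k)
    studentized = resid / np.sqrt(s2 * (1.0 - leverage))
    return np.flatnonzero(np.abs(studentized) > cutoff)
```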

  1. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
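
    A greedy sketch in the spirit of the approach, assuming scikit-learn's mutual information estimator: descriptors are ranked by MI with the property of interest, and a candidate that is nearly collinear with an already chosen descriptor is skipped in favour of the next one. The names, correlation cutoff, and subset size are illustrative; the published MIMRCV algorithm differs in its details.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_select_skip_collinear(X, y, n_select=10, corr_cut=0.9):
    """Rank descriptors by mutual information with y, skipping candidates that
    are nearly collinear with a descriptor already in the selected subset."""
    mi = mutual_info_regression(X, y)
    order = np.argsort(mi)[::-1]
    selected = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_cut for k in selected):
            selected.append(j)
        if len(selected) == n_select:
            break
    return selected
```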

  2. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  3. The procedure of alternative site selection within the report of the study group on the radioactive waste final repository selection process (AKEnd)

    International Nuclear Information System (INIS)

    Brenner, M.

    2005-01-01

The paper discusses the results of the report of the study group on the radioactive waste final repository selection process with respect to the alternative site selection procedure. Key points of the report are long-term safety, the consideration of alternative sites, and the concept of a single repository. The critique of the report focuses on site selection and licensing procedures, public participation, the time factor, and the question of costs.

  4. Portfolio Selection Based on Distance between Fuzzy Variables

    Directory of Open Access Journals (Sweden)

    Weiyi Qian

    2014-01-01

Full Text Available This paper studies the portfolio selection problem in a fuzzy environment. We introduce a new, simple method in which the distance between fuzzy variables is used to measure the divergence of the fuzzy investment return from a prior one. Firstly, two new mathematical models are proposed by expressing divergence as distance, investment return as expected value, and risk as variance and semivariance, respectively. Secondly, the crisp forms of the new models are also provided for different types of fuzzy variables. Finally, several numerical examples are given to illustrate the effectiveness of the proposed approach.

  5. Applicant Personality and Procedural Justice Perceptions of Group Selection Interviews.

    Science.gov (United States)

    Bye, Hege H; Sandal, Gro M

    2016-01-01

We investigated how job applicants' personalities influence perceptions of the structural and social procedural justice of group selection interviews (i.e., a group of several applicants being evaluated simultaneously). We especially addressed trait interactions between neuroticism and extraversion (the affective plane) and extraversion and agreeableness (the interpersonal plane). Data on personality (pre-interview) and justice perceptions (post-interview) were collected in a field study among job applicants (N = 97) attending group selection interviews for positions as teachers in a Norwegian high school. Interaction effects in hierarchical regression analyses showed that perceptions of social and structural justice increased with levels of extraversion among high scorers on neuroticism. Among emotionally stable applicants, however, being introverted or extraverted did not matter to justice perceptions. Extraversion did not impact on the perception of social justice for applicants low in agreeableness. Agreeable applicants, however, experienced the group interview as more socially fair when they were also extraverted. The impact of applicant personality on justice perceptions may be underestimated if trait interactions are not considered. Procedural fairness ratings for the group selection interview were high, contrary to the negative reactions predicted by other researchers. There was no indication that applicants with desirable traits (i.e., traits predictive of job performance) reacted negatively to this selection tool. Despite the widespread use of interviews in selection, previous studies of applicant personality and fairness reactions have not included interviews. The study demonstrates the importance of previously ignored trait interactions in understanding applicant reactions.

  6. Genome-wide prediction of traits with different genetic architecture through efficient variable selection.

    Science.gov (United States)

    Wimmer, Valentin; Lehermeier, Christina; Albrecht, Theresa; Auinger, Hans-Jürgen; Wang, Yu; Schön, Chris-Carolin

    2013-10-01

    In genome-based prediction there is considerable uncertainty about the statistical model and method required to maximize prediction accuracy. For traits influenced by a small number of quantitative trait loci (QTL), predictions are expected to benefit from methods performing variable selection [e.g., BayesB or the least absolute shrinkage and selection operator (LASSO)] compared to methods distributing effects across the genome [ridge regression best linear unbiased prediction (RR-BLUP)]. We investigate the assumptions underlying successful variable selection by combining computer simulations with large-scale experimental data sets from rice (Oryza sativa L.), wheat (Triticum aestivum L.), and Arabidopsis thaliana (L.). We demonstrate that variable selection can be successful when the number of phenotyped individuals is much larger than the number of causal mutations contributing to the trait. We show that the sample size required for efficient variable selection increases dramatically with decreasing trait heritabilities and increasing extent of linkage disequilibrium (LD). We contrast and discuss contradictory results from simulation and experimental studies with respect to superiority of variable selection methods over RR-BLUP. Our results demonstrate that due to long-range LD, medium heritabilities, and small sample sizes, superiority of variable selection methods cannot be expected in plant breeding populations even for traits like FRIGIDA gene expression in Arabidopsis and flowering time in rice, assumed to be influenced by a few major QTL. We extend our conclusions to the analysis of whole-genome sequence data and infer upper bounds for the number of causal mutations which can be identified by LASSO. Our results have major impact on the choice of statistical method needed to make credible inferences about genetic architecture and prediction accuracy of complex traits.
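
    The contrast drawn here between variable-selection methods and methods that distribute effects across all markers can be illustrated with a small cross-validated comparison of the LASSO and ridge regression (a rough stand-in for RR-BLUP). This is a generic sketch, not the simulation framework of the study; hyperparameter grids and names are placeholders.

```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.model_selection import cross_val_score

def compare_selection_vs_shrinkage(X, y, cv=5):
    """Compare a variable-selection method (LASSO) against a method that
    shrinks all marker effects (ridge) by cross-validated R^2."""
    models = {
        "lasso": LassoCV(cv=cv, max_iter=50000),
        "ridge": RidgeCV(alphas=np.logspace(-3, 3, 25)),
    }
    return {name: cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
            for name, model in models.items()}
```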

  7. Site selection under the underground geologic store plan. Procedures of selecting underground geologic stores as disputed by society, science, and politics. Site selection rules

    International Nuclear Information System (INIS)

    Aebersold, M.

    2008-01-01

    The new Nuclear Power Act and the Nuclear Power Ordinance of 2005 are used in Switzerland to select a site of an underground geologic store for radioactive waste in a substantive planning procedure. The ''Underground Geologic Store Substantive Plan'' is to ensure the possibility to build underground geologic stores in an independent, transparent and fair procedure. The Federal Office for Energy (BFE) is the agency responsible for this procedure. The ''Underground Geologic Store'' Substantive Plan comprises these principles: - The long term protection of people and the environment enjoys priority. Aspects of regional planning, economics and society are of secondary importance. - Site selection is based on the waste volumes arising from the five nuclear power plants currently existing in Switzerland. The Substantive Plan is no precedent for or against future nuclear power plants. - A transparent and fair procedure is an indispensable prerequisite for achieving the objectives of a Substantive Plan, i.e., finding accepted sites for underground geologic stores. The Underground Geologic Stores Substantive Plan is arranged in two parts, a conceptual part defining the rules of the selection process, and an implementation part documenting the selection process step by step and, in the end, naming specific sites of underground geologic stores in Switzerland. The objective is to be able to commission underground geologic stores in 25 or 35 years' time. In principle, 2 sites are envisaged, one for low and intermediate level waste, and one for high level waste. The Swiss Federal Council approved the conceptual part on April 2, 2008. This marks the beginning of the implementation phase and the site selection process proper. (orig.)

  8. A novel peak-hopping stepwise feature selection method with application to Raman spectroscopy

    International Nuclear Information System (INIS)

    McShane, M.J.; Cameron, B.D.; Cote, G.L.; Motamedi, M.; Spiegelman, C.H.

    1999-01-01

    A new stepwise approach to variable selection for spectroscopy that includes chemical information and attempts to test several spectral regions producing high ranking coefficients has been developed to improve on currently available methods. Existing selection techniques can, in general, be placed into two groups: the first, time-consuming optimization approaches that ignore available information about sample chemistry and require considerable expertise to arrive at appropriate solutions (e.g. genetic algorithms), and the second, stepwise procedures that tend to select many variables in the same area containing redundant information. The algorithm described here is a fast stepwise procedure that uses multiple ranking chains to identify several spectral regions correlated with known sample properties. The multiple-chain approach allows the generation of a final ranking vector that moves quickly away from the initial selection point, testing several areas exhibiting correlation between spectra and composition early in the stepping procedure. Quantitative evidence of the success of this approach as applied to Raman spectroscopy is given in terms of processing speed, number of selected variables, and prediction error in comparison with other selection methods. In this respect, the procedure described here may be considered as a significant evolutionary step in variable selection algorithms. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
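
    The idea of moving quickly away from the initial selection point can be pictured with the toy routine below: after each pick, neighbouring spectral channels are masked so that the next variable must come from a different region. The exclusion window, the simple correlation ranking, and the function name are assumptions; the published method uses multiple ranking chains and chemical information.

```python
import numpy as np

def peak_hopping_selection(X, y, n_vars=10, exclusion_window=5):
    """Pick spectral channels one at a time by |correlation| with y, excluding
    a window of neighbours around each pick so that successive picks 'hop'
    between different spectral regions."""
    corr = np.abs(np.corrcoef(X.T, y)[-1, :-1])   # |corr| of each channel with y
    available = np.ones(X.shape[1], dtype=bool)
    picks = []
    for _ in range(n_vars):
        if not available.any():
            break
        j = int(np.argmax(np.where(available, corr, -np.inf)))
        picks.append(j)
        lo, hi = max(0, j - exclusion_window), j + exclusion_window + 1
        available[lo:hi] = False
    return picks
```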

  9. Punishment induced behavioural and neurophysiological variability reveals dopamine-dependent selection of kinematic movement parameters

    Science.gov (United States)

    Galea, Joseph M.; Ruge, Diane; Buijink, Arthur; Bestmann, Sven; Rothwell, John C.

    2013-01-01

    Action selection describes the high-level process which selects between competing movements. In animals, behavioural variability is critical for the motor exploration required to select the action which optimizes reward and minimizes cost/punishment, and is guided by dopamine (DA). The aim of this study was to test in humans whether low-level movement parameters are affected by punishment and reward in ways similar to high-level action selection. Moreover, we addressed the proposed dependence of behavioural and neurophysiological variability on DA, and whether this may underpin the exploration of kinematic parameters. Participants performed an out-and-back index finger movement and were instructed that monetary reward and punishment were based on its maximal acceleration (MA). In fact, the feedback was not contingent on the participant’s behaviour but pre-determined. Blocks highly-biased towards punishment were associated with increased MA variability relative to blocks with either reward or without feedback. This increase in behavioural variability was positively correlated with neurophysiological variability, as measured by changes in cortico-spinal excitability with transcranial magnetic stimulation over the primary motor cortex. Following the administration of a DA-antagonist, the variability associated with punishment diminished and the correlation between behavioural and neurophysiological variability no longer existed. Similar changes in variability were not observed when participants executed a pre-determined MA, nor did DA influence resting neurophysiological variability. Thus, under conditions of punishment, DA-dependent processes influence the selection of low-level movement parameters. We propose that the enhanced behavioural variability reflects the exploration of kinematic parameters for less punishing, or conversely more rewarding, outcomes. PMID:23447607

  10. Joint Variable Selection and Classification with Immunohistochemical Data

    Directory of Open Access Journals (Sweden)

    Debashis Ghosh

    2009-01-01

    Full Text Available To determine if candidate cancer biomarkers have utility in a clinical setting, validation using immunohistochemical methods is typically done. Most analyses of such data have not incorporated the multivariate nature of the staining profiles. In this article, we consider modelling such data using recently developed ideas from the machine learning community. In particular, we consider the joint goals of feature selection and classification. We develop estimation procedures for the analysis of immunohistochemical profiles using the least absolute selection and shrinkage operator. These lead to novel and flexible models and algorithms for the analysis of compositional data. The techniques are illustrated using data from a cancer biomarker study.

  11. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
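
    Knot elimination can be sketched with a truncated power basis and a backward pass that repeatedly drops the knot contributing least, judged by a partial F statistic. The cited programs use the B-spline basis, so the basis choice, the removal threshold, and the names below are assumptions made for illustration.

```python
import numpy as np

def truncated_power_basis(x, knots, degree=3):
    """Spline design matrix: [1, x, ..., x^degree, (x - k)_+^degree for each knot]."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

def residual_sum_of_squares(B, y):
    beta, *_ = np.linalg.lstsq(B, y, rcond=None)
    return float(np.sum((y - B @ beta) ** 2))

def backward_knot_elimination(x, y, knots, degree=3, f_to_remove=4.0):
    """Repeatedly drop the knot whose removal costs the least (smallest partial F),
    as long as that F statistic stays below the removal threshold."""
    knots = list(knots)
    while knots:
        full = truncated_power_basis(x, knots, degree)
        rss_full = residual_sum_of_squares(full, y)
        dof = len(x) - full.shape[1]
        f_stats = []
        for i in range(len(knots)):
            reduced = truncated_power_basis(x, knots[:i] + knots[i + 1:], degree)
            f_stats.append((residual_sum_of_squares(reduced, y) - rss_full) / (rss_full / dof))
        worst = int(np.argmin(f_stats))
        if f_stats[worst] >= f_to_remove:
            break
        knots.pop(worst)
    return knots
```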

  12. The use of vector bootstrapping to improve variable selection precision in Lasso models

    NARCIS (Netherlands)

    Laurin, C.; Boomsma, D.I.; Lubke, G.H.

    2016-01-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections.

  13. Weighted overlap dominance – a procedure for interactive selection on multidimensional interval data

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Nielsen, Kurt

    2011-01-01

    We present an outranking procedure that supports selection of alternatives represented by multiple attributes with interval valued data. The procedure is interactive in the sense that the decision maker directs the search for preferred alternatives by providing weights of the different attributes...

  14. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated, based on the ordering of the data, into an integrated research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to deal directly with the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, whether applied with the selected variables or with the full set of variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection helps the five forecasting methods used here to improve their forecasting capability.
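
    The three foci map naturally onto a pipeline of imputation, variable selection, and a random forest. The concrete imputer, univariate selector, and hyperparameters below are placeholders, since the paper compares several alternatives at each stage.

```python
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def water_level_pipeline(n_features=5):
    """Imputation -> variable selection -> random forest regressor."""
    return make_pipeline(
        SimpleImputer(strategy="median"),
        SelectKBest(score_func=f_regression, k=n_features),
        RandomForestRegressor(n_estimators=300, random_state=0),
    )

def evaluate(X, y, cv=5):
    """Cross-validated mean absolute error of the pipeline on a water-level dataset."""
    scores = cross_val_score(water_level_pipeline(), X, y, cv=cv,
                             scoring="neg_mean_absolute_error")
    return -scores.mean()
```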

  15. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

Full Text Available Achieving relatively high-accuracy short-term wind speed forecasting estimates is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected, and random forests (RF) is employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of the RF-based input variable selection. The results of the case study show that by removing the uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model’s accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
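
    A compact sketch of the ranking step, assuming scikit-learn: a random forest is fitted to the candidate inputs and the variables are ordered by impurity-based importance. The paper additionally searches for the best-scoring feature subset and feeds it to a kernel extreme learning machine; the subset size and names here are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_input_selection(X, y, n_keep=8):
    """Rank candidate input variables by random-forest importance and return
    the indices of the top n_keep variables."""
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return order[:n_keep]
```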

  16. A Rapid Selection Procedure for Simple Commercial Implementation of omega-Transaminase Reactions

    DEFF Research Database (Denmark)

    Gundersen Deslauriers, Maria; Tufvesson, Pär; Rackham, Emma J.

    2016-01-01

A stepwise selection procedure is presented to quickly evaluate whether a given omega-transaminase reaction is suitable for a so-called "simple" scale-up for fast industrial implementation. Here "simple" is defined as a system without the need for extensive process development or specialized..., and (3) determination of product inhibition. The method is exemplified with experimental work focused on two products: 1-(4-bromophenyl)ethylamine and (S)-(+)3-amino-1-Boc-piperidine, synthesized from their corresponding pro-chiral ketones, each with two alternative amine donors, propan-2-amine and 1-phenylethylamine. Each step of the method has a threshold value, which must be surpassed to allow "simple" implementation, helping select suitable combinations of substrates, enzymes, and donors. One reaction pair, 1-Boc-3-piperidone with propan-2-amine, met the criteria of the three-step selection procedure...

  17. Ethnic variability in adiposity and cardiovascular risk: the variable disease selection hypothesis.

    Science.gov (United States)

    Wells, Jonathan C K

    2009-02-01

    Evidence increasingly suggests that ethnic differences in cardiovascular risk are partly mediated by adipose tissue biology, which refers to the regional distribution of adipose tissue and its differential metabolic activity. This paper proposes a novel evolutionary hypothesis for ethnic genetic variability in adipose tissue biology. Whereas medical interest focuses on the harmful effect of excess fat, the value of adipose tissue is greatest during chronic energy insufficiency. Following Neel's influential paper on the thrifty genotype, proposed to have been favoured by exposure to cycles of feast and famine, much effort has been devoted to searching for genetic markers of 'thrifty metabolism'. However, whether famine-induced starvation was the primary selective pressure on adipose tissue biology has been questioned, while the notion that fat primarily represents a buffer against starvation appears inconsistent with historical records of mortality during famines. This paper reviews evidence for the role played by adipose tissue in immune function and proposes that adipose tissue biology responds to selective pressures acting through infectious disease. Different diseases activate the immune system in different ways and induce different metabolic costs. It is hypothesized that exposure to different infectious disease burdens has favoured ethnic genetic variability in the anatomical location of, and metabolic profile of, adipose tissue depots.

  18. ILK statement on the recommendations by the working group on procedures for the selection of repository sites

    International Nuclear Information System (INIS)

    Anon.

    2003-01-01

The Working Group on Procedures for the Selection of Repository Sites (AkEnd) had been appointed by the German Federal Ministry for the Environment (BMU) to develop procedures and criteria for the search for, and selection of, a repository site for all kinds of radioactive waste in deep geologic formations in Germany. ILK in principle welcomes the attempt on the part of AkEnd to develop a systematic procedure. On the other hand, ILK considers the two constraints imposed by BMU inappropriate: AkEnd was not to take into account the two existing sites of Konrad and Gorleben and, instead, work from a so-called white map of Germany. ILK recommends performing a comprehensive safety analysis of Gorleben and defining a selection procedure that includes the facts about Gorleben and, in addition, commissioning the Konrad repository as soon as possible. The one-repository concept established as a precondition by BMU greatly restricts the selection procedure. There are no technical or scientific reasons for such a concept. ILK recommends planning for separate repositories, which would also correspond to international practice. The geoscientific criteria proposed by AkEnd should be examined and revised. With respect to the site selection procedure proposed, ILK feels that the procedure is unable to define a targeted approach. Great importance must be attributed to public participation. The final site selection must be made under the responsibility of the government or the parliament. (orig.)

  19. Random forest variable selection in spatial malaria transmission modelling in Mpumalanga Province, South Africa

    Directory of Open Access Journals (Sweden)

    Thandi Kapwata

    2016-11-01

Full Text Available Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the variable importance of potentially influential parameters through the percent increase of the mean squared error. As this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the selected variables based on the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. From the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor variable due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capabilities in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated from predictions made by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts with respect to malaria.
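
    A sketch of the ranking idea using permutation importance, the scikit-learn analogue of the percent-increase-in-MSE measure reported for random forests in R; the model settings and function names are assumptions, and the study's spatial modelling steps are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

def rank_environmental_predictors(X, y, feature_names, n_repeats=20):
    """Rank environmental predictors of malaria counts by permutation importance
    (mean decrease in score when a predictor is randomly shuffled)."""
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    result = permutation_importance(rf, X, y, n_repeats=n_repeats, random_state=0)
    order = np.argsort(result.importances_mean)[::-1]
    return [(feature_names[i], result.importances_mean[i]) for i in order]
```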

  20. Uninformative variable elimination assisted by Gram-Schmidt Orthogonalization/successive projection algorithm for descriptor selection in QSAR

    DEFF Research Database (Denmark)

    Omidikia, Nematollah; Kompany-Zareh, Mohsen

    2013-01-01

Employment of Uninformative Variable Elimination (UVE) as a robust variable selection method is reported in this study. Each regression coefficient represents the contribution of the corresponding variable in the established model, but in the presence of uninformative variables as well as collinearity, the reliability of the regression coefficient's magnitude is suspect. Successive Projection Algorithm (SPA) and Gram-Schmidt Orthogonalization (GSO) were implemented as pre-selection techniques for removing collinearity and redundancy among the variables in the model before applying uninformative variable elimination...
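
    The UVE step itself can be sketched as follows: append artificial noise variables at negligible magnitude, collect leave-one-out PLS regression coefficients, and keep only the real variables whose stability (mean over standard deviation of the coefficient) exceeds the worst-case stability of the noise block. This is a generic sketch, not the authors' code; the number of components and the noise scaling are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def uve_select(X, y, n_components=2, seed=0):
    """Uninformative variable elimination for a PLS model: variables whose
    coefficient stability does not beat that of artificial noise are dropped."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    noise = rng.uniform(size=(n, p)) * 1e-10          # tiny, so it cannot affect the fit
    Xa = np.hstack([X, noise])
    coefs = []
    for i in range(n):                                 # leave-one-out sub-models
        keep = np.arange(n) != i
        pls = PLSRegression(n_components=n_components, scale=False).fit(Xa[keep], y[keep])
        coefs.append(pls.coef_.ravel())
    coefs = np.array(coefs)
    stability = coefs.mean(axis=0) / coefs.std(axis=0)
    cutoff = np.abs(stability[p:]).max()               # worst-case noise stability
    return np.flatnonzero(np.abs(stability[:p]) > cutoff)
```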

  1. 78 FR 72878 - Integration of Variable Energy Resources; Notice Of Filing Procedures for Order No. 764...

    Science.gov (United States)

    2013-12-04

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RM10-11-000] Integration of Variable Energy Resources; Notice Of Filing Procedures for Order No. 764 Electronic Compliance Filings Take... Variable Energy Resources, Order No. 764, FERC Stats. & Regs. ] 31,331, order on reh'g, Order No. 764-A...

  2. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated, based on the ordering of the data, into an integrated research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to deal directly with the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, whether applied with the selected variables or with the full set of variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection helps the five forecasting methods used here to improve their forecasting capability.

  3. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) penalties, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  4. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R(2)=0.94 and mean absolute error (MAE)=1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Variability-based active galactic nucleus selection using image subtraction in the SDSS and LSST era

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yumi; Gibson, Robert R.; Becker, Andrew C.; Ivezić, Željko; Connolly, Andrew J.; Ruan, John J.; Anderson, Scott F. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); MacLeod, Chelsea L., E-mail: ymchoi@astro.washington.edu [Physics Department, U.S. Naval Academy, 572 Holloway Road, Annapolis, MD 21402 (United States)

    2014-02-10

    With upcoming all-sky surveys such as LSST poised to generate a deep digital movie of the optical sky, variability-based active galactic nucleus (AGN) selection will enable the construction of highly complete catalogs with minimum contamination. In this study, we generate g-band difference images and construct light curves (LCs) for QSO/AGN candidates listed in Sloan Digital Sky Survey Stripe 82 public catalogs compiled from different methods, including spectroscopy, optical colors, variability, and X-ray detection. Image differencing excels at identifying variable sources embedded in complex or blended emission regions such as Type II AGNs and other low-luminosity AGNs that may be omitted from traditional photometric or spectroscopic catalogs. To separate QSOs/AGNs from other sources using our difference image LCs, we explore several LC statistics and parameterize optical variability by the characteristic damping timescale (τ) and variability amplitude. By virtue of distinguishable variability parameters of AGNs, we are able to select them with high completeness of 93.4% and efficiency (i.e., purity) of 71.3%. Based on optical variability, we also select highly variable blazar candidates, whose infrared colors are consistent with known blazars. One-third of them are also radio detected. With the X-ray selected AGN candidates, we probe the optical variability of X-ray detected optically extended sources using their difference image LCs for the first time. A combination of optical variability and X-ray detection enables us to select various types of host-dominated AGNs. Contrary to the AGN unification model prediction, two Type II AGN candidates (out of six) show detectable variability on long-term timescales like typical Type I AGNs. This study will provide a baseline for future optical variability studies of extended sources.

  6. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
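
    The coupling step itself, combining per-covariate evidence across several Cox models, is straightforward to sketch once each model's score-test p-values are available (here assumed to be supplied by whatever survival package is used):

```python
import numpy as np
from scipy.stats import chi2

def fisher_combined_pvalues(pvals_per_model):
    """Combine, per covariate, the score-test p-values from several Cox models
    (e.g., one per competing risk or landmark time) with Fisher's method.
    `pvals_per_model` has shape (n_models, n_covariates)."""
    P = np.asarray(pvals_per_model)
    stat = -2.0 * np.log(P).sum(axis=0)          # Fisher's statistic per covariate
    return chi2.sf(stat, df=2 * P.shape[0])      # combined p-value per covariate
```

    Covariates can then be ranked or entered into a stepwise forward selection on the combined p-values, which is the spirit of the coupled approach described above.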

  7. A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Aiqian Zhang

    2012-05-01

    Full Text Available A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI with leave-multiple-out cross validation (LMOCV to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD principles for regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple-linear regression (MLR algorithm to build the model, using several validation methods to assessment the performance of the model, giving the define of applicability domain and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that the VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from large molecular descriptors.

  8. Operant Variability: Procedures and Processes

    Science.gov (United States)

    Machado, Armando; Tonneau, Francois

    2012-01-01

    Barba's (2012) article deftly weaves three main themes in one argument about operant variability. From general theoretical considerations on operant behavior (Catania, 1973), Barba derives methodological guidelines about response differentiation and applies them to the study of operant variability. In the process, he uncovers unnoticed features of…

  9. 45 CFR 660.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false What procedures apply to the selection of programs... Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION INTERGOVERNMENTAL REVIEW OF THE NATIONAL SCIENCE FOUNDATION PROGRAMS AND ACTIVITIES § 660.6 What procedures apply to the selection of programs and activities...

  10. Protein construct storage: Bayesian variable selection and prediction with mixtures.

    Science.gov (United States)

    Clyde, M A; Parmigiani, G

    1998-07-01

    Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
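
    For a small designed experiment, the model-uncertainty idea can be illustrated by enumerating candidate models, weighting them, and averaging; the BIC-based weights below are a crude stand-in for the mixture priors and posterior model probabilities used in the paper, and all names are hypothetical.

```python
import numpy as np
from itertools import combinations

def bma_inclusion_probabilities(X, y, max_size=None):
    """Enumerate predictor subsets (feasible for a small designed experiment),
    weight each linear model by exp(-BIC/2), and return approximate posterior
    inclusion probabilities for each predictor."""
    n, p = X.shape
    max_size = p if max_size is None else max_size
    bics, members = [], []
    for k in range(max_size + 1):
        for subset in combinations(range(p), k):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = max(float(np.sum((y - Xs @ beta) ** 2)), 1e-12)
            bics.append(n * np.log(rss / n) + Xs.shape[1] * np.log(n))
            members.append(subset)
    bics = np.array(bics)
    weights = np.exp(-0.5 * (bics - bics.min()))
    weights /= weights.sum()
    inclusion = np.zeros(p)
    for w, subset in zip(weights, members):
        inclusion[list(subset)] += w
    return inclusion
```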

  11. Procedures for the selection of stopping power ratios for electron beams: Comparison of IAEA TRS procedures and of DIN procedures with Monte Carlo results

    International Nuclear Information System (INIS)

    Roos, M.; Christ, G.

    2000-01-01

    In the International Code of Practice IAEA TRS-381 the stopping power ratios water/air are selected according to the half-value depth and the depth of measurement. In the German Standard DIN 6800-2 a different procedure is recommended, which, in addition, takes the practical electron range into account; the stopping power data for monoenergetic beams from IAEA TRS-381 are used. Both procedures are compared with recent Monte Carlo calculations carried out for various beams of clinical accelerators. It is found that the DIN procedure shows a slightly better agreement. In addition, the stopping power ratios in IAEA TRS-381 are compared with those in DIN 6800-2 for the reference conditions of the beams from the PTB linac; the maximum deviation is not larger than 0.6%. (author)

  12. The XRF spectrometer and the selection of analysis conditions (instrumental variables)

    International Nuclear Information System (INIS)

    Willis, J.P.

    2002-01-01

    Full text: This presentation will begin with a brief discussion of EDXRF and flat- and curved-crystal WDXRF spectrometers, contrasting the major differences between the three types. The remainder of the presentation will contain a detailed overview of the choice and settings of the many instrumental variables contained in a modern WDXRF spectrometer, and will discuss critically the choices facing the analyst in setting up a WDXRF spectrometer for different elements and applications. In particular it will discuss the choice of tube target (when a choice is possible), the kV and mA settings, tube filters, collimator masks, collimators, analyzing crystals, secondary collimators, detectors, pulse height selection, X-ray path medium (air, nitrogen, vacuum or helium), counting times for peak and background positions and their effect on counting statistics and lower limit of detection (LLD). The use of Figure of Merit (FOM) calculations to objectively choose the best combination of instrumental variables also will be discussed. This presentation will be followed by a shorter session on a subsequent day entitled - A Selection of XRF Conditions - Practical Session, where participants will be given the opportunity to discuss in groups the selection of the best instrumental variables for three very diverse applications. Copyright (2002) Australian X-ray Analytical Association Inc

  13. Procedural advice on self-assessment and task selection in learner-controlled education

    NARCIS (Netherlands)

    Taminiau, Bettine; Corbalan, Gemma; Kester, Liesbeth; Van Merriënboer, Jeroen; Kirschner, Paul A.

    2011-01-01

    Taminiau, E. M. C., Corbalan, G., Kester, L., Van Merriënboer, J. J. G., & Kirschner, P. A. (2010, March). Procedural advice on self-assessment and task selection in learner-controlled education. Presentation at the ICO Springschool, Niederalteich, Germany.

  14. The alternative site selection procedure as covered in the report by the Repository Site Selection Procedures Working Group; Das Verfahren der alternativen Standortsuche im Bericht des Arbeitskreises Auswahlverfahren Endlagerstandorte

    Energy Technology Data Exchange (ETDEWEB)

    Brenner, M. [Jena Univ. (Germany). Juristische Fakultaet

    2005-01-01

    The 2002 Act on the Regulated Termination of the Use of Nuclear Power for Industrial Electricity Generation declared Germany's opting out of the peaceful uses of nuclear power. The problem of the permanent management of radioactive residues is becoming more and more important also in the light of that political decision. At the present time, there are no repositories offering the waste management capacities required. Such facilities need to be created. At the present stage, eligible repository sites are the Konrad mine, a former iron ore mine near Salzgitter, and the Gorleben salt dome. While the fate of the Konrad mine as a repository for waste generating negligible amounts of heat continues to be uncertain, despite a plan approval decision of June 2002, the Gorleben repository is still in the planning phase, at present in a dormant state, so to speak. The federal government expressed doubt about the suitability of the Gorleben site. Against this backdrop, the Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety in February 1999 established AkEnd, the Working Group on Repository Site Selection Procedures. The Group was charged with developing, based on sound scientific criteria, a transparent site selection procedure in order to facilitate the search for repository sites. The Working Group presented its final report in December 2002 after approximately four years of work. The Group's proposals about alternative site selection procedures are explained in detail and, above all, reviewed critically. (orig.)

  15. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.

  16. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    Science.gov (United States)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions, and then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are also given to illustrate the results.

  17. A Robust Supervised Variable Selection for Noisy High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan; Schlenker, Anna

    2015-01-01

    Roč. 2015, Article 320385 (2015), s. 1-10 ISSN 2314-6133 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : dimensionality reduction * variable selection * robustness Subject RIV: BA - General Mathematics Impact factor: 2.134, year: 2015

  18. Recruitment and Selection of Foreign Professionals In the South African Job Market: Procedures and Processes

    Directory of Open Access Journals (Sweden)

    Chao Nkhungulu Mulenga

    2007-07-01

    Full Text Available This study investigated procedures and processes used in the selection of prospective foreign applicants by recruitment agencies in South Africa. An electronic survey was distributed to the accessible population of 244 agencies on a national employment website, yielding 57 respondents. The results indicate that the recruitment industry does not have standard, well articulated procedures for identifying and selecting prospective foreign employees and considered processing foreign applicants difficult. Difficulties with the Department of Home Affairs were a major hindrance to recruiting foreign applicants.

  19. Variable selection in the explorative analysis of several data blocks in metabolomics

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Nørskov, Natalja; Yde, Christian Clement

…highly correlated data sets in one integrated approach. Due to the high number of variables in data sets from metabolomics (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different metabolomics data sets need to be related. Tools for handling this mental overflow, minimising false discovery rates by using both statistical and biological validation in an integrative approach, are needed. In this paper, different strategies for variable selection were considered with respect to false discovery and the possibility for biological validation. The data set used in this study is metabolomics data from an animal intervention study. The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using NMR- and LC-MS-based...

  20. EFFECT OF CORE TRAINING ON SELECTED HEMATOLOGICAL VARIABLES AMONG BASKETBALL PLAYERS

    OpenAIRE

    K. Rejinadevi; Dr. C. Ramesh

    2017-01-01

The purpose of the study was to find out the effect of core training on selected haematological variables among basketball players. For the purpose of the study, forty male basketball players were selected at random as subjects from S.V.N College and Arul Anandar College, Madurai, Tamilnadu, and their ages ranged from 18 to 25 years. The selected subjects were divided into two groups of twenty subjects each. Group I acted as the core training group and Group II acted as the control group. The experimenta...

  1. Target-matched insertion gain derived from three different hearing aid selection procedures.

    Science.gov (United States)

    Punch, J L; Shovels, A H; Dickinson, W W; Calder, J H; Snead, C

    1995-11-01

    Three hearing aid selection procedures were compared to determine if any one was superior in producing prescribed real-ear insertion gain. For each of three subject groups, 12 in-the-ear style hearing aids with Class D circuitry and similar dispenser controls were ordered from one of three manufacturers. Subject groups were classified based on the type of information included on the hearing aid order form: (1) the subject's audiogram, (2) a three-part matrix specifying the desired maximum output, full-on gain, and frequency response slope of the hearing aid, or (3) the desired 2-cc coupler full-on gain of the hearing aid, based on real-ear coupler difference (RECD) measurements. Following electroacoustic adjustments aimed at approximating a commonly used target insertion gain formula, results revealed no significant differences among any of the three selection procedures with respect to obtaining acceptable insertion gain values.

  2. A selective overview of feature screening for ultrahigh-dimensional data.

    Science.gov (United States)

    JingYuan, Liu; Wei, Zhong; RunZe, Li

    2015-10-01

    High-dimensional data have frequently been collected in many scientific areas including genomewide association study, biomedical imaging, tomography, tumor classifications, and finance. Analysis of high-dimensional data poses many challenges for statisticians. Feature selection and variable selection are fundamental for high-dimensional data analysis. The sparsity principle, which assumes that only a small number of predictors contribute to the response, is frequently adopted and deemed useful in the analysis of high-dimensional data. Following this general principle, a large number of variable selection approaches via penalized least squares or likelihood have been developed in the recent literature to estimate a sparse model and select significant variables simultaneously. While the penalized variable selection methods have been successfully applied in many high-dimensional analyses, modern applications in areas such as genomics and proteomics push the dimensionality of data to an even larger scale, where the dimension of data may grow exponentially with the sample size. This has been called ultrahigh-dimensional data in the literature. This work aims to present a selective overview of feature screening procedures for ultrahigh-dimensional data. We focus on insights into how to construct marginal utilities for feature screening on specific models and motivation for the need of model-free feature screening procedures.
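
    As a toy illustration of the marginal-utility idea mentioned above, the sketch below ranks predictors by their absolute marginal correlation with the response and keeps the top n/log(n) of them, in the spirit of sure independence screening. The screening size and the use of Pearson correlation as the utility are illustrative assumptions; real screening procedures use model-specific marginal utilities.

```python
# Minimal SIS-style marginal screening sketch (illustrative, not a specific
# published implementation).  Predictors are ranked by |marginal correlation|
# with the response and the top d are kept.
import numpy as np

def sis_screen(X, y, d=None):
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))              # a common default screening size
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    utility = np.abs(Xs.T @ ys) / n         # absolute marginal correlations
    keep = np.sort(np.argsort(utility)[::-1][:d])
    return keep, utility

# Synthetic ultrahigh-dimensional example (p >> n): the two active predictors
# should survive the screen.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))
y = 2 * X[:, 10] - 1.5 * X[:, 200] + rng.normal(size=200)
selected, _ = sis_screen(X, y)
print(10 in selected, 200 in selected, len(selected))
```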

  3. Reducing Covert Self-Injurious Behavior Maintained by Automatic Reinforcement through a Variable Momentary DRO Procedure

    Science.gov (United States)

    Toussaint, Karen A.; Tiger, Jeffrey H.

    2012-01-01

    Covert self-injurious behavior (i.e., behavior that occurs in the absence of other people) can be difficult to treat. Traditional treatments typically have involved sophisticated methods of observation and often have employed positive punishment procedures. The current study evaluated the effectiveness of a variable momentary differential…

  4. Procedural advice on self-assessment and task selection in learner-controlled education

    NARCIS (Netherlands)

    Taminiau, Bettine; Kester, Liesbeth; Corbalan, Gemma; Van Merriënboer, Jeroen; Kirschner, Paul A.

    2010-01-01

    Taminiau, E. M. C., Kester, L., Corbalan, G., Van Merriënboer, J. J. G., & Kirschner, P. A. (2010, July). Procedural advice on self-assessment and task selection in learner-controlled education. Paper presented at the Junior Researchers of EARLI Conference 2010, Frankfurt, Germany.

  5. Variable selection in PLSR and extensions to a multi-block setting for metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    When applying LC-MS or NMR spectroscopy in metabolomics studies, high-dimensional data are generated and effective tools for variable selection are needed in order to detect the important metabolites. Methods based on sparsity combined with PLSR have recently attracted attention in the field...... of genomics [1]. They became quickly well established in the field of statistics because a close relationship to elastic net has been established. In sparse variable selection combined with PLSR, a soft thresholding is applied on each loading weight separately. In the field of chemometrics Jack-knifing has...... been introduced for variable selection in PLSR [2]. Jack-knifing has been frequently applied in the field of spectroscopy and is implemented in software tools like The Unscrambler. In Jack-knifing uncertainty estimates of regression coefficients are estimated and a t-test is applied on these estimates...

  6. Improving the Classification Accuracy for Near-Infrared Spectroscopy of Chinese Salvia miltiorrhiza Using Local Variable Selection

    Directory of Open Access Journals (Sweden)

    Lianqing Zhu

    2018-01-01

    Full Text Available In order to improve the classification accuracy of Chinese Salvia miltiorrhiza using near-infrared spectroscopy, a novel local variable selection strategy is proposed. Combining the strengths of the local algorithm and interval partial least squares, the spectral data are first divided into several pairs of classes in the sample direction and equidistant subintervals in the variable direction. Then, a local classification model is built, and the most appropriate spectral region is selected based on a new evaluation criterion considering both the classification error rate and the best predictive ability under the leave-one-out cross validation scheme for each pair of classes. Finally, each observation is assigned to a class according to the statistical analysis of the classification results of the local classification model built on the selected variables. The performance of the proposed method was demonstrated through near-infrared spectra of cultivated or wild Salvia miltiorrhiza collected from 8 geographical origins in 5 provinces of China. For comparison, soft independent modelling of class analogy and partial least squares discriminant analysis methods were, respectively, employed as the classification model. Experimental results showed that the classification performance of the model with local variable selection was obviously better than that without variable selection.

  7. Selected Macroeconomic Variables and Stock Market Movements: Empirical evidence from Thailand

    Directory of Open Access Journals (Sweden)

    Joseph Ato Forson

    2014-06-01

    Full Text Available This paper investigates and analyzes the long-run equilibrium relationship between the Thai Stock Exchange Index (SETI) and selected macroeconomic variables using monthly time series data that cover a 20-year period from January 1990 to December 2009. The following macroeconomic variables are included in our analysis: money supply (MS), the consumer price index (CPI), the interest rate (IR) and the industrial production index (IP, as a proxy for GDP). Our findings show that the SET Index and the selected macroeconomic variables are cointegrated at I(1) and have a significant equilibrium relationship over the long run. Money supply demonstrates a strong positive relationship with the SET Index over the long run, whereas the industrial production index and consumer price index show negative long-run relationships with the SET Index. Furthermore, in non-equilibrium situations, the error correction mechanism suggests that the consumer price index, industrial production index and money supply each contribute in some way to restore equilibrium. In addition, using Toda and Yamamoto's augmented Granger causality test, we identify a bi-causal relationship between industrial production and money supply and unilateral causal relationships between CPI and IR, IP and CPI, MS and CPI, and IP and SETI, indicating that all of these variables are sensitive to Thai stock market movements. The policy implications of these findings are also discussed.

  8. The procedure of alternative site selection within the report of the study group on the radioactive waste final repository selection process (AKEnd)

    International Nuclear Information System (INIS)

    Nies, A.

    2005-01-01

    The study group on the selection procedure for radioactive waste final repository sites presented its report in December 2002. The author discusses the consequences of this report with respect to site selection, focusing on two topics: the search for the best possible site and the prevention of prejudices

  9. Does excellence have a gender? A national research on recruitment and selection procedures for professional appointments in the Netherlands

    NARCIS (Netherlands)

    Brink, M.C.L. van den; Brouns, M.L.M.; Waslander, S.

    2006-01-01

    Purpose – The purpose of this research is to show that upward mobility of female academics in regular selection procedures is evolving extremely slowly, especially in The Netherlands. This paper aims at a more profound understanding of professorial recruitment and selection procedures in relation to

  10. The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.

    Science.gov (United States)

    Rich, Joseph R.; Boudreau, John W.

    1987-01-01

    Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…

  11. Mahalanobis distance and variable selection to optimize dose response

    International Nuclear Information System (INIS)

    Moore, D.H. II; Bennett, D.E.; Wyrobek, A.J.; Kranzler, D.

    1979-01-01

    A battery of statistical techniques is combined to improve detection of low-level dose response. First, Mahalanobis distances are used to classify objects as normal or abnormal. Then the proportion classified abnormal is regressed on dose. Finally, a subset of regressor variables is selected which maximizes the slope of the dose response line. Use of the techniques is illustrated by application to mouse sperm damaged by low doses of x-rays
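
    A minimal sketch of the pipeline in this record, assuming a control group is available to estimate the normal centroid and covariance: objects are flagged abnormal when their squared Mahalanobis distance exceeds a chi-square cutoff, the proportion flagged is regressed on dose, and a forward search keeps the variable subset that maximises the slope of the dose-response line. The cutoff, the stopping rule and the data layout are illustrative choices, not the authors' settings.

```python
# Illustrative sketch: Mahalanobis classification + dose-response slope
# maximisation by forward variable selection.
import numpy as np
from scipy import stats

def mahalanobis_abnormal(X_ctrl, X, cols, alpha=0.05):
    """Flag rows of X (restricted to `cols`) whose squared Mahalanobis distance
    from the control-group centroid exceeds a chi-square cutoff."""
    mu = X_ctrl[:, cols].mean(axis=0)
    cov = np.atleast_2d(np.cov(X_ctrl[:, cols], rowvar=False))
    inv = np.linalg.pinv(cov)
    d = X[:, cols] - mu
    d2 = np.einsum('ij,jk,ik->i', d, inv, d)
    return d2 > stats.chi2.ppf(1 - alpha, df=len(cols))

def dose_response_slope(doses, groups, X_ctrl, cols):
    """Regress the proportion classified abnormal on dose; return the slope."""
    props = [mahalanobis_abnormal(X_ctrl, Xd, cols).mean() for Xd in groups]
    return stats.linregress(doses, props).slope

def select_regressors(doses, groups, X_ctrl, p):
    """Forward selection of the variable subset that maximises the slope."""
    remaining, chosen, best = list(range(p)), [], -np.inf
    while remaining:
        scores = {j: dose_response_slope(doses, groups, X_ctrl, chosen + [j])
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:          # stop once the slope no longer improves
            break
        best, chosen = scores[j_best], chosen + [j_best]
        remaining.remove(j_best)
    return chosen, best
```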

  12. r2VIM: A new variable selection method for random forests in genome-wide association studies.

    Science.gov (United States)

    Szymczak, Silke; Holzinger, Emily; Dasgupta, Abhijit; Malley, James D; Molloy, Anne M; Mills, James L; Brody, Lawrence C; Stambolian, Dwight; Bailey-Wilson, Joan E

    2016-01-01

    Machine learning methods and in particular random forests (RFs) are a promising alternative to standard single SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score for several runs of RF and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false-positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects it is only slightly less powerful than logistic regression. In an experimental GWAS data set, the same strong signal is identified while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.
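
    A hedged sketch of the recurrent relative importance idea described above: the forest is grown several times, each run's importances are scaled by the absolute value of that run's minimal importance score, and only variables whose relative importance exceeds a factor in every run are kept. The factor of 3, the number of runs, and the use of scikit-learn's permutation importance (rather than the out-of-bag importance of the original implementation) are assumptions made for illustration.

```python
# Illustrative r2VIM-style selection with scikit-learn (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def r2vim_select(X, y, n_runs=5, factor=3.0):
    """Keep variables whose importance, relative to the most negative
    importance observed in the same run, exceeds `factor` in every run."""
    relative = []
    for seed in range(n_runs):
        rf = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X, y)
        imp = permutation_importance(rf, X, y, n_repeats=5,
                                     random_state=seed).importances_mean
        floor = abs(imp.min())                      # observed minimal importance
        relative.append(imp / max(floor, 1e-12))
    relative = np.vstack(relative)
    return np.flatnonzero((relative >= factor).all(axis=0))
```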

  13. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS

    Science.gov (United States)

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-10-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by the FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was then applied to build identification models for solid state fermentation degree using the wavelength variables selected by CARS and SCARS. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Moreover, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results demonstrate that a PLS-DA model constructed on wavelength variables selected by a proper variable selection method can identify solid state fermentation degree more accurately.
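
    The sketch below illustrates the competitive elimination idea behind CARS in a stripped-down, deterministic form: a PLS model is refitted repeatedly, wavelengths are ranked by the absolute value of their regression coefficients, an exponentially shrinking subset is retained, and the subset with the best cross-validated score is returned. The Monte Carlo adaptive reweighted sampling of CARS and the stability analysis of SCARS are omitted; class membership is assumed to be coded numerically (PLS-DA style) and all parameters are illustrative.

```python
# Stripped-down, deterministic sketch of CARS-style competitive wavelength
# elimination with PLS (the Monte Carlo reweighted sampling of the real
# algorithm is omitted).  y holds class membership coded numerically.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def cars_like_selection(X, y, n_components=5, n_steps=30, cv=5):
    p = X.shape[1]
    # exponentially decreasing number of retained wavelengths, from p down to 2
    sizes = np.unique(np.round(
        p * np.exp(-np.linspace(0, np.log(p / 2), n_steps))).astype(int))[::-1]
    keep = np.arange(p)
    best_score, best_keep = -np.inf, keep
    for size in sizes:
        pls = PLSRegression(n_components=min(n_components, len(keep)))
        pls.fit(X[:, keep], y)
        coef = np.abs(pls.coef_).ravel()
        top = np.argsort(coef)[::-1][:min(size, len(keep))]
        keep = keep[np.sort(top)]                   # retain the strongest wavelengths
        score = cross_val_score(
            PLSRegression(n_components=min(n_components, len(keep))),
            X[:, keep], y, cv=cv).mean()
        if score > best_score:
            best_score, best_keep = score, keep.copy()
    return best_keep, best_score
```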

  14. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.

  15. Procedure for Selection of Suitable Resources in Interactions in Complex Dynamic Systems Using Artificial Immunity

    Directory of Open Access Journals (Sweden)

    Naors Y. anadalsaleem

    2017-03-01

    Full Text Available Using results from the theory of artificial immune systems, the dynamic optimization procedure for a -dimensional vector function of a system, whose state is interpreted as an adaptable immune cell, is considered. Procedures for estimating the monitoring results are discussed. A procedure for assessing the entropy is recommended as a general recursive estimation algorithm. The results are focused on solving the optimization problems of cognitive selection of suitable physical resources, which expands the scope of electromagnetic compatibility.

  16. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2012-01-01

    and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group

  17. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  18. Endovascular repair of abdominal aortic aneurysms: vascular anatomy, device selection, procedure, and procedure-specific complications.

    Science.gov (United States)

    Bryce, Yolanda; Rogoff, Philip; Romanelli, Donald; Reichle, Ralph

    2015-01-01

    Abdominal aortic aneurysm (AAA) is abnormal dilatation of the aorta, carrying a substantial risk of rupture and thereby marked risk of death. Open repair of AAA involves lengthy surgery time, anesthesia, and substantial recovery time. Endovascular aneurysm repair (EVAR) provides a safer option for patients with advanced age and pulmonary, cardiac, and renal dysfunction. Successful endovascular repair of AAA depends on correct selection of patients (on the basis of their vascular anatomy), choice of the correct endoprosthesis, and familiarity with the technique and procedure-specific complications. The type of aneurysm is defined by its location with respect to the renal arteries, whether it is a true or false aneurysm, and whether the common iliac arteries are involved. Vascular anatomy can be divided more technically into aortic neck, aortic aneurysm, pelvic perfusion, and iliac morphology, with grades of difficulty with respect to EVAR, aortic neck morphology being the most common factor to affect EVAR appropriateness. When choosing among the devices available on the market, one must consider the patient's vascular anatomy and choose between devices that provide suprarenal fixation versus those that provide infrarenal fixation. A successful technique can be divided into preprocedural imaging, ancillary procedures before AAA stent-graft placement, the procedure itself, postprocedural medical therapy, and postprocedural imaging surveillance. Imaging surveillance is important in assessing complications such as limb thrombosis, endoleaks, graft migration, enlargement of the aneurysm sac, and rupture. Last, one must consider the issue of radiation safety with regard to EVAR. (©)RSNA, 2015.

  19. 5 CFR 335.106 - Special selection procedures for certain veterans under merit promotion.

    Science.gov (United States)

    2010-01-01

    ... veterans under merit promotion. 335.106 Section 335.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PROMOTION AND INTERNAL PLACEMENT General Provisions § 335.106 Special selection procedures for certain veterans under merit promotion. Preference eligibles or veterans who have...

  20. Variability in the Use of Simulation for Procedural Training in Radiology Residency: Opportunities for Improvement.

    Science.gov (United States)

    Matalon, Shanna A; Chikarmane, Sona A; Yeh, Eren D; Smith, Stacy E; Mayo-Smith, William W; Giess, Catherine S

    2018-03-19

    Increased attention to quality and safety has led to a re-evaluation of the classic apprenticeship model for procedural training. Many have proposed simulation as a supplementary teaching tool. The purpose of this study was to assess radiology resident exposure to procedural training and procedural simulation. An IRB-exempt online survey was distributed to current radiology residents in the United States by e-mail. Survey results were summarized using frequency and percentages. Chi-square tests were used for statistical analysis where appropriate. A total of 353 current residents completed the survey. 37% (n = 129/353) of respondents had never used procedure simulation. Of the residents who had used simulation, most did not do so until after having already performed procedures on patients (59%, n = 132/223). The presence of a dedicated simulation center was reported by over half of residents (56%, n = 196/353) and was associated with prior simulation experience (P = 0.007). Residents who had not had procedural simulation were somewhat likely or highly likely (3 and 4 on a 4-point Likert-scale) to participate if it were available (81%, n = 104/129). Simulation training was associated with higher comfort levels in performing procedures. Although simulation training is associated with higher comfort levels when performing procedures, there is variable use in radiology resident training and its use is not currently optimized. Given the increased emphasis on patient safety, these results suggest the need to increase procedural simulation use during residency, including an earlier introduction to simulation before patient exposure. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Multivariate fault isolation of batch processes via variable selection in partial least squares discriminant analysis.

    Science.gov (United States)

    Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan

    2017-09-01

    In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address such a problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Different from traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  2. 49 CFR 542.2 - Procedures for selecting low theft light duty truck lines with a majority of major parts...

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Procedures for selecting low theft light duty... TRUCK LINES TO BE COVERED BY THE THEFT PREVENTION STANDARD § 542.2 Procedures for selecting low theft... a low theft rate have major parts interchangeable with a majority of the covered major parts of a...

  3. Seleção de variáveis em QSAR Variable selection in QSAR

    Directory of Open Access Journals (Sweden)

    Márcia Miguel Castro Ferreira

    2002-05-01

    Full Text Available The process of building mathematical models in quantitative structure-activity relationship (QSAR) studies is generally limited by the size of the dataset used to select variables from. For huge datasets, the task of selecting a given number of variables that produces the best linear model can be enormous, if not infeasible. In this case, some methods can be used to separate good parameter combinations from the bad ones. In this paper three methodologies are analyzed: systematic search, genetic algorithm and chemometric methods. These methods are presented and discussed through practical examples.

  4. About hidden influence of predictor variables: Suppressor and mediator variables

    Directory of Open Access Journals (Sweden)

    Milovanović Boško

    2013-01-01

    Full Text Available In this paper, a procedure for investigating the hidden influence of predictor variables in regression models and for identifying suppressor variables and mediator variables is presented. It is also shown that the detection of suppressor variables and mediator variables can provide refined information about the research problem. As an example of applying this procedure, the relation between Atlantic atmospheric centers and air temperature and precipitation amounts in Serbia is chosen. [Project of the Ministry of Science of the Republic of Serbia, no. 47007]

  5. Selection of controlled variables in bioprocesses. Application to a SHARON-Anammox process for autotrophic nitrogen removal

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Valverde Perez, Borja; Sin, Gürkan

    Selecting the right controlled variables in a bioprocess is challenging since the objectives of the process (yields, product or substrate concentration) are difficult to relate to a given actuator. We apply here process control tools that can be used to assist in the selection of controlled variables to the case of the SHARON-Anammox process for autotrophic nitrogen removal....

  6. Understanding Farmers' Response to Climate Variability in Nigeria ...

    African Journals Online (AJOL)

    In this study, farmers' response to climate variability was examined. Primary and secondary data were used. A multi-stage sampling procedure was adopted in the collection of the primary data using structured questionnaires. Four vegetation zones out of seven where farming is mainly carried out were selected for the study.

  7. Comparison of Sparse and Jack-knife partial least squares regression methods for variable selection

    DEFF Research Database (Denmark)

    Karaman, Ibrahim; Qannari, El Mostafa; Martens, Harald

    2013-01-01

    The objective of this study was to compare two different techniques of variable selection, Sparse PLSR and Jack-knife PLSR, with respect to their predictive ability and their ability to identify relevant variables. Sparse PLSR is a method that is frequently used in genomics, whereas Jack-knife PLSR...
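
    A small sketch of the Jack-knife idea referred to above, assuming a leave-one-out scheme: the PLS regression coefficients are re-estimated with each sample left out, a jack-knife variance estimate is formed, and a t-test flags coefficients that differ from zero. The variance formula and the leave-one-out resampling are common choices and not necessarily the exact implementation compared in the paper.

```python
# Illustrative Jack-knife uncertainty estimates for PLS regression coefficients.
import numpy as np
from scipy import stats
from sklearn.cross_decomposition import PLSRegression

def jackknife_pls_pvalues(X, y, n_components=3):
    """Leave-one-out jack-knife t-tests on PLS regression coefficients."""
    n, p = X.shape
    b_full = PLSRegression(n_components=n_components).fit(X, y).coef_.ravel()
    b_loo = np.empty((n, p))
    for i in range(n):
        mask = np.arange(n) != i
        b_loo[i] = PLSRegression(n_components=n_components).fit(
            X[mask], y[mask]).coef_.ravel()
    # jack-knife variance estimate of each coefficient
    var_b = (n - 1) / n * np.sum((b_loo - b_full) ** 2, axis=0)
    t = b_full / np.sqrt(var_b + 1e-12)
    return 2 * (1 - stats.t.cdf(np.abs(t), df=n - 1))
```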

  8. Not accounting for interindividual variability can mask habitat selection patterns: a case study on black bears.

    Science.gov (United States)

    Lesmerises, Rémi; St-Laurent, Martin-Hugues

    2017-11-01

    Habitat selection studies conducted at the population scale commonly aim to describe general patterns that could improve our understanding of the limiting factors in species-habitat relationships. Researchers often consider interindividual variation in selection patterns to control for its effects and avoid pseudoreplication by using mixed-effect models that include individuals as random factors. Here, we highlight common pitfalls and possible misinterpretations of this strategy by describing habitat selection of 21 black bears Ursus americanus. We used Bayesian mixed-effect models and compared results obtained when using random intercept (i.e., population level) versus calculating individual coefficients for each independent variable (i.e., individual level). We then related interindividual variability to individual characteristics (i.e., age, sex, reproductive status, body condition) in a multivariate analysis. The assumption of comparable behavior among individuals was verified only in 40% of the cases in our seasonal best models. Indeed, we found strong and opposite responses among sampled bears and individual coefficients were linked to individual characteristics. For some covariates, contrasted responses canceled each other out at the population level. In other cases, interindividual variability was concealed by the composition of our sample, with the majority of the bears (e.g., old individuals and bears in good physical condition) driving the population response (e.g., selection of young forest cuts). Our results stress the need to consider interindividual variability to avoid misinterpretation and uninformative results, especially for a flexible and opportunistic species. This study helps to identify some ecological drivers of interindividual variability in bear habitat selection patterns.

  9. HEART RATE VARIABILITY CLASSIFICATION USING SADE-ELM CLASSIFIER WITH BAT FEATURE SELECTION

    Directory of Open Access Journals (Sweden)

    R Kavitha

    2017-07-01

    Full Text Available The electrical activity of the human heart is measured by the vital biomedical signal called the ECG. The electrocardiogram is employed as a crucial source of diagnostic information about a patient's cardiopathy. Cardiac disease is monitored and diagnosed by recording and processing the electrocardiogram (ECG) impulses. In recent years, much research has been devoted to developing enhanced methods to identify risks in the patient's condition by processing and analysing the ECG signal. This analysis of the signal helps to find cardiac abnormalities, arrhythmias, and many other heart problems. The ECG signal is processed to detect the variability in heart rhythm; heart rate variability is calculated based on the time interval between heart beats. Heart rate variability (HRV) is measured by the variation in the beat-to-beat interval and is an essential aspect in diagnosing the properties of the heart. Recent developments enhance this potential with the aid of non-linear metrics combined with feature selection. In this paper, the fundamental elements are extracted from the ECG signal for the feature selection process, where the Bat algorithm is employed to select the best features, which are then presented to the classifier for accurate classification. The popular machine learning algorithm ELM is used for classification, integrated with an evolutionary algorithm, the Self-Adaptive Differential Evolution Extreme Learning Machine (SADE-ELM), to improve the reliability of classification. It is combined with an Effective Fuzzy Kohonen clustering network (EFKCN) to increase the accuracy of HRV classification. The experiments carried out show that precision is improved by the SADE-ELM method while the computation time is concurrently optimized.

  10. [Application of characteristic NIR variables selection in portable detection of soluble solids content of apple by near infrared spectroscopy].

    Science.gov (United States)

    Fan, Shu-Xiang; Huang, Wen-Qian; Li, Jiang-Bo; Guo, Zhi-Ming; Zhao, Chun-Jiang

    2014-10-01

    In order to detect the soluble solids content (SSC) of apple conveniently and rapidly, a ring fiber probe and a portable spectrometer were applied to obtain the spectroscopy of apple. Different wavelength variable selection methods, including uninformative variable elimination (UVE), competitive adaptive reweighted sampling (CARS) and genetic algorithm (GA), were proposed to select effective wavelength variables of the NIR spectroscopy of the SSC in apple based on PLS. The back interval LS-SVM (BiLS-SVM) and GA were used to select effective wavelength variables based on LS-SVM. Selected wavelength variables and the full wavelength range were set as input variables of the PLS model and LS-SVM model, respectively. The results indicated that the PLS model built using GA-CARS on 50 characteristic variables selected from the full spectrum, which had 1512 wavelengths, achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for prediction sets were 0.962 and 0.403°Brix, respectively, for SSC. The proposed method of GA-CARS could effectively simplify the portable detection model of SSC in apple based on near infrared spectroscopy and enhance the predictive precision. The study can provide a reference for the development of a portable apple soluble solids content spectrometer.

  11. BSL-3 laboratory practices in the United States: comparison of select agent and non-select agent facilities.

    Science.gov (United States)

    Richards, Stephanie L; Pompei, Victoria C; Anderson, Alice

    2014-01-01

    New construction of biosafety level 3 (BSL-3) laboratories in the United States has increased in the past decade to facilitate research on potential bioterrorism agents. The Centers for Disease Control and Prevention inspect BSL-3 facilities and review commissioning documentation, but no single agency has oversight over all BSL-3 facilities. This article explores the extent to which standard operating procedures in US BSL-3 facilities vary between laboratories with select agent or non-select agent status. Comparisons are made for the following variables: personnel training, decontamination, personal protective equipment (PPE), medical surveillance, security access, laboratory structure and maintenance, funding, and pest management. Facilities working with select agents had more complex training programs and decontamination procedures than non-select agent facilities. Personnel working in select agent laboratories were likely to use powered air purifying respirators, while non-select agent laboratories primarily used N95 respirators. More rigorous medical surveillance was carried out in select agent workers (although not required by the select agent program) and a higher level of restrictive access to laboratories was found. Most select agent and non-select agent laboratories reported adequate structural integrity in facilities; however, differences were observed in personnel perception of funding for repairs. Pest management was carried out by select agent personnel more frequently than non-select agent personnel. Our findings support the need to promote high quality biosafety training and standard operating procedures in both select agent and non-select agent laboratories to improve occupational health and safety.

  12. 41 CFR 60-3.6 - Use of selection procedures which have not been validated.

    Science.gov (United States)

    2010-07-01

    ... EMPLOYMENT OPPORTUNITY, DEPARTMENT OF LABOR 3-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978... validation techniques contemplated by these guidelines. In such circumstances, the user should utilize... techniques contemplated by these guidelines usually should be followed if technically feasible. Where the...

  13. Single center experience in selecting the laparoscopic Frey procedure for chronic pancreatitis.

    Science.gov (United States)

    Tan, Chun-Lu; Zhang, Hao; Li, Ke-Zhou

    2015-11-28

    To share our experience regarding the laparoscopic Frey procedure for chronic pancreatitis (CP) and patient selection. All consecutive patients undergoing duodenum-preserving pancreatic head resection from July 2013 to July 2014 were reviewed and those undergoing the Frey procedure for CP were included in this study. Data on age, gender, body mass index (BMI), American Society of Anesthesiologists score, imaging findings, inflammatory index (white blood cells, interleukin (IL)-6, and C-reaction protein), visual analogue score score during hospitalization and outpatient visit, history of CP, operative time, estimated blood loss, and postoperative data (postoperative mortality and morbidity, postoperative length of hospital stay) were obtained for patients undergoing laparoscopic surgery. The open surgery cases in this study were analyzed for risk factors related to extensive bleeding, which was the major reason for conversion during the laparoscopic procedure. Age, gender, etiology, imaging findings, amylase level, complications due to pancreatitis, functional insufficiency, and history of CP were assessed in these patients. Nine laparoscopic and 37 open Frey procedures were analyzed. Of the 46 patients, 39 were male (85%) and seven were female (16%). The etiology of CP was alcohol in 32 patients (70%) and idiopathic in 14 patients (30%). Stones were found in 38 patients (83%). An inflammatory mass was found in five patients (11%). The time from diagnosis of CP to the Frey procedure was 39 ± 19 (9-85) mo. The BMI of patients in the laparoscopic group was 20.4 ± 1.7 (17.8-22.4) kg/m(2) and was 20.6 ± 2.9 (15.4-27.7) kg/m(2) in the open group. All patients required analgesic medication for abdominal pain. Frequent acute pancreatitis or severe abdominal pain due to acute exacerbation occurred in 20 patients (43%). Pre-operative complications due to pancreatitis were observed in 18 patients (39%). Pancreatic functional insufficiency was observed in 14 patients (30

  14. Rejecting escape events in large volume Ge detectors by a pulse shape selection procedure

    International Nuclear Information System (INIS)

    Del Zoppo, A.; Agodi, C.; Alba, R.; Bellia, G.; Coniglione, R.; Loukachine, K.; Maiolino, C.; Migneco, E.; Piattelli, P.; Santonocito, D.; Sapienza, P.

    1993-01-01

    The dependence of the response to γ-rays of a large volume Ge detector on the interval width of a selected initial rise pulse slope is investigated. The number of escape events associated with a small pulse slope is found to be greater than the corresponding number of full energy events. An escape event rejection procedure based on the observed correlation between energy deposition and pulse shape is discussed. Such a procedure seems particularly suited for the design of highly granular large volume Ge detector arrays. (orig.)

  15. Occupational exposures from selected interventional radiological procedures

    International Nuclear Information System (INIS)

    Janeczek, J.; Beal, A.; James, D.

    2001-01-01

    The number of radiology and cardiology interventional procedures has significantly increased in recent years due to better diagnostic equipment resulting in an increase in radiation dose to the staff and patients. The assessment of staff doses was performed for cardiac catheterization and for three other non-cardiac procedures. The scattered radiation distribution resulting from the cardiac catheterization procedure was measured prior to the staff dose measurements. Staff dose measurements included those of the left shoulder, eye, thyroid and hand doses of the cardiologist. In non-cardiac procedures doses to the hands of the radiologist were measured for nephrostomy, fistulogram and percutaneous transluminal angioplasty procedures. Doses to the radiologist or cardiologist were found to be relatively high if correct protection was not observed. (author)

  16. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability

    NARCIS (Netherlands)

    Vermeulen, M.I.; Tromp, F.; Zuithoff, N.P.; Pieters, R.H.; Damoiseaux, R.A.; Kuyvenhoven, M.M.

    2014-01-01

    Abstract Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the

  17. Analogous Mechanisms of Selection and Updating in Declarative and Procedural Working Memory: Experiments and a Computational Model

    Science.gov (United States)

    Oberauer, Klaus; Souza, Alessandra S.; Druey, Michel D.; Gade, Miriam

    2013-01-01

    The article investigates the mechanisms of selecting and updating representations in declarative and procedural working memory (WM). Declarative WM holds the objects of thought available, whereas procedural WM holds representations of what to do with these objects. Both systems consist of three embedded components: activated long-term memory, a…

  18. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    Science.gov (United States)

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkopf, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.
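
    A hedged sketch of comparing candidate environmental predictors at an upper quantile of a discrete activity (capture count) distribution: a uniform jitter is added so that continuous quantile regression applies, all combinations of candidate variables are fitted, and models are ranked by pseudo R-squared. The predictor names, the jittering shortcut and the ranking criterion are illustrative assumptions rather than the authors' exact quantile count procedure.

```python
# Illustrative model comparison for an upper quantile of a count response.
import itertools
import numpy as np
import pandas as pd  # data are assumed to arrive as a DataFrame
import statsmodels.formula.api as smf

def rank_quantile_models(df, response="captures", q=0.9,
                         candidates=("rain", "wind", "temp")):
    rng = np.random.default_rng(1)
    work = df.copy()
    # jitter the discrete counts so conditional quantiles are well defined
    work["z"] = work[response] + rng.uniform(size=len(work))
    results = []
    for k in range(1, len(candidates) + 1):
        for combo in itertools.combinations(candidates, k):
            fit = smf.quantreg("z ~ " + " + ".join(combo), work).fit(q=q)
            results.append((combo, fit.prsquared))
    return sorted(results, key=lambda t: t[1], reverse=True)

# Usage with synthetic data:
# df = pd.DataFrame({"captures": np.random.poisson(3, 200),
#                    "rain": np.random.gamma(2, 2, 200),
#                    "wind": np.random.gamma(2, 1, 200),
#                    "temp": np.random.normal(25, 3, 200)})
# print(rank_quantile_models(df)[:3])
```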

  19. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    Science.gov (United States)

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Abstract Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICCs of PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.

  20. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    Science.gov (United States)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    The procedure of selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features). However, linear features, due to their unique characteristics, are of interest in this investigation. This method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. The ones indicate the presence of a pair of conjugate lines as a GCL and zeros specify the absence. The chromosome length is considered equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until reaching a stopping criterion. The number and position of ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, a GeoEye and an Ikonos image are used over different areas of Iran. Comparing the results obtained by the proposed method in a traditional RFM with conventional methods that use all conjugate lines as GCLs shows a fivefold accuracy improvement (to pixel-level accuracy) as well as the strength of the proposed method. To prevent an over-parametrization error in a traditional RFM due to the selection of a high number of improperly correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, reliability, and reducing systematic
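
    A compact sketch of the chromosome-based selection idea described above, simplified to ground control points and an affine transformation in place of conjugate lines and a rational function model: each binary chromosome marks which control observations are used, and fitness is the RMSE of the fitted transform on held-out check points. Population size, mutation rate and the truncation-selection scheme are illustrative choices.

```python
# Illustrative GA selection of ground control observations (simplified to
# points and an affine model; the published method uses lines and an RFM).
import numpy as np

def fit_affine(src, dst):
    A = np.hstack([src, np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coef

def rmse(coef, src, dst):
    A = np.hstack([src, np.ones((len(src), 1))])
    return np.sqrt(np.mean(np.sum((A @ coef - dst) ** 2, axis=1)))

def ga_select(src, dst, src_chk, dst_chk, pop=40, gens=60, pmut=0.02, seed=0):
    rng = np.random.default_rng(seed)
    n = len(src)
    population = rng.integers(0, 2, size=(pop, n))

    def fitness(chrom):
        idx = np.flatnonzero(chrom)
        if len(idx) < 3:                      # an affine transform needs >= 3 points
            return np.inf
        return rmse(fit_affine(src[idx], dst[idx]), src_chk, dst_chk)

    for _ in range(gens):
        scores = np.array([fitness(c) for c in population])
        parents = population[np.argsort(scores)[:pop // 2]]   # truncation selection
        cuts = rng.integers(1, n, size=pop // 2)
        kids = np.array([np.r_[parents[i % len(parents)][:c],
                               parents[(i + 1) % len(parents)][c:]]
                         for i, c in enumerate(cuts)])         # one-point crossover
        kids ^= (rng.random(kids.shape) < pmut)                # bit-flip mutation
        population = np.vstack([parents, kids])
    scores = np.array([fitness(c) for c in population])
    best = population[np.argmin(scores)]
    return np.flatnonzero(best), scores.min()
```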

  1. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    Science.gov (United States)

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will generate lower quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
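
    A hedged sketch of the kind of stepwise checks described above: an F-test on replicate variances at the two ends of the calibration range to decide whether weighting is needed, and a partial F-test of a quadratic versus a linear fit. The thresholds, replicate handling and weighted fitting via numpy.polyfit are illustrative simplifications; the published procedure should be consulted for the exact decision rules.

```python
# Illustrative F-test checks for calibration model selection.
import numpy as np
from scipy import stats

def needs_weighting(reps_lloq, reps_uloq, alpha=0.05):
    """Two-sided F-test of equal variances at the two ends of the curve."""
    lo, hi = np.asarray(reps_lloq, float), np.asarray(reps_uloq, float)
    v_lo, v_hi = lo.var(ddof=1), hi.var(ddof=1)
    if v_hi >= v_lo:
        f, dfn, dfd = v_hi / v_lo, len(hi) - 1, len(lo) - 1
    else:
        f, dfn, dfd = v_lo / v_hi, len(lo) - 1, len(hi) - 1
    return 2 * stats.f.sf(f, dfn, dfd) < alpha

def prefer_quadratic(x, y, w=None, alpha=0.05):
    """Partial F-test: does the quadratic term significantly improve the fit?"""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)

    def wrss(deg):
        # polyfit applies weights to the residuals, so pass sqrt(w)
        coef = np.polyfit(x, y, deg, w=np.sqrt(w))
        return np.sum(w * (y - np.polyval(coef, x)) ** 2)

    rss_lin, rss_quad = wrss(1), wrss(2)
    f = (rss_lin - rss_quad) / (rss_quad / (len(x) - 3))
    return stats.f.sf(f, 1, len(x) - 3) < alpha   # True -> use the quadratic model
```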

  2. Improving breast cancer classification with mammography, supported on an appropriate variable selection analysis

    Science.gov (United States)

    Pérez, Noel; Guevara, Miguel A.; Silva, Augusto

    2013-02-01

    This work addresses the issue of variable selection within the context of breast cancer classification with mammography. A comprehensive repository of feature vectors was used, including a hybrid subset combining image-based and clinical features. The aim was to gather experimental evidence on variable selection in terms of cardinality and type, and to find a classification scheme that provides the best performance in terms of Area Under the Receiver Operating Characteristic Curve (AUC) scores using the ranked feature subset. We evaluated and classified a total of 300 subsets of features formed by the application of Chi-Square Discretization, Information-Gain, One-Rule and RELIEF methods in association with Feed-Forward Backpropagation Neural Network (FFBP), Support Vector Machine (SVM) and Decision Tree J48 (DTJ48) Machine Learning Algorithms (MLA) for a comparative performance evaluation based on AUC scores. A variable selection analysis was performed for Single-View Ranking and Multi-View Ranking groups of features. Feature subsets representing Microcalcifications (MCs), Masses and both MCs and Masses lesions achieved AUC scores of 0.91, 0.954 and 0.934, respectively. Experimental evidence demonstrated that classification performance was improved by combining image-based and clinical features. The most important clinical and image-based features were StromaDistortion and Circularity, respectively. Other features, less important but worth using due to their consistency, were Contrast, Perimeter, Microcalcification, Correlation and Elongation.

  3. Variable structure unit vector control of electric power generation ...

    African Journals Online (AJOL)

    A variable structure Automatic Generation Control (VSAGC) scheme is proposed in this paper for the control of a single-area power system model dominated by steam-powered electric generating plants. Unlike existing VSAGC schemes, where the selection of the control function is based on a trial-and-error procedure, the ...

  4. Sparse supervised principal component analysis (SSPCA) for dimension reduction and variable selection

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara; Ghodsi, Ali; Clemmensen, Line H.

    2017-01-01

    Principal component analysis (PCA) is one of the main unsupervised pre-processing methods for dimension reduction. When the training labels are available, it is worth using a supervised PCA strategy. In cases where both dimension reduction and variable selection are required, sparse PCA (SPCA...

  5. Inference for feature selection using the Lasso with high-dimensional data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper; Ekstrøm, Claus Thorn

    2014-01-01

    Penalized regression models such as the Lasso have proved useful for variable selection in many fields - especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference for the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios...
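
    In the spirit of the randomization idea described above (though not the authors' exact procedure), the sketch below fits a cross-validated Lasso and then assesses each selected feature by how often a Lasso refitted on a permuted response produces a coefficient at least as large in absolute value.

```python
# Illustrative permutation-based p-values for Lasso-selected features.
import numpy as np
from sklearn.linear_model import LassoCV, Lasso

def lasso_permutation_pvalues(X, y, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    cv_fit = LassoCV(cv=5, random_state=seed).fit(X, y)
    alpha, coef = cv_fit.alpha_, np.abs(cv_fit.coef_)
    selected = np.flatnonzero(coef)
    exceed = np.zeros(X.shape[1])
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        perm_coef = np.abs(Lasso(alpha=alpha).fit(X, y_perm).coef_)
        exceed += (perm_coef >= coef)        # permuted coefficient as large as observed
    pvals = (exceed + 1) / (n_perm + 1)
    return selected, pvals[selected]
```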

  6. Regional regression models of percentile flows for the contiguous United States: Expert versus data-driven independent variable selection

    Directory of Open Access Journals (Sweden)

    Geoffrey Fouad

    2018-06-01

    New hydrological insights for the region: A set of three variables selected based on an expert assessment of factors that influence percentile flows performed similarly to larger sets of variables selected using a data-driven method. Expert assessment variables included mean annual precipitation, potential evapotranspiration, and baseflow index. Larger sets of up to 37 variables contributed little, if any, additional predictive information. Variables used to describe the distribution of basin data (e.g. standard deviation) were not useful, and average values were sufficient to characterize physical and climatic basin conditions. Effectiveness of the expert assessment variables may be due to the high degree of multicollinearity (i.e. cross-correlation) among additional variables. A tool is provided in the Supplementary material to predict percentile flows based on the three expert assessment variables. Future work should develop new variables with a strong understanding of the processes related to percentile flows.

  7. Current Debates on Variability in Child Welfare Decision-Making: A Selected Literature Review

    Directory of Open Access Journals (Sweden)

    Emily Keddell

    2014-11-01

    Full Text Available This article considers selected drivers of decision variability in child welfare decision-making and explores current debates in relation to these drivers. Covering the related influences of national orientation, risk and responsibility, inequality and poverty, evidence-based practice, constructions of abuse and its causes, domestic violence and cognitive processes, it discusses the literature with regard to how each of these influences decision variability. It situates these debates in relation to the ethical issue of variability and the equity issues that variability raises. I propose that, despite the ecological complexity that drives decision variability, improving internal (within-country) decision consistency is still a valid goal. It may be that the use of annotated case examples, kind learning systems, and continued commitments to the social justice issues of inequality and individualisation can contribute to this goal.

  8. Evaluation of 'out-of-specification' CliniMACS CD34-selection procedures of hematopoietic progenitor cell-apheresis products.

    NARCIS (Netherlands)

    Braakman, E.; Schuurhuis, G.J.; Preijers, F.W.M.B.; Voermans, C.; Theunissen, K.; Riet, I. van; Fibbe, W.E.; Slaper-Cortenbach, I.C.M.

    2008-01-01

    BACKGROUND: Immunomagnetic selection of CD34(+) hematopoietic progenitor cells (HPC) using CliniMACS CD34 selection technology is widely used to provide high-purity HPC grafts. However, the number of nucleated cells and CD34+ cells recommended by the manufacturer for processing in a single procedure

  9. Evaluation of 'out-of-specification' CliniMACS CD34-selection procedures of hematopoietic progenitor cell-apheresis products

    NARCIS (Netherlands)

    Braakman, E.; Schuurhuis, G. J.; Preijers, F. W. M. B.; Voermans, C.; Theunissen, K.; van Riet, I.; Fibbe, W. E.; Slaper-Cortenbach, I.

    2008-01-01

    BACKGROUND: Immunomagnetic selection of CD34(+) hematopoietic progenitor cells (HPC) using CliniMACS CD34 selection technology is widely used to provide high-purity HPC grafts. However, the number of nucleated cells and CD34+ cells recommended by the manufacturer for processing in a single procedure

  10. Variations in Carabidae assemblages across the farmland habitats in relation to selected environmental variables including soil properties

    Directory of Open Access Journals (Sweden)

    Beáta Baranová

    2018-03-01

    Full Text Available The variations in ground beetle (Coleoptera: Carabidae) assemblages across three types of farmland habitats, arable land, meadows and woody vegetation, were studied in relation to vegetation cover structure, intensity of agrotechnical interventions and selected soil properties. Material was pitfall-trapped in 2010 and 2011 at twelve sites in the agricultural landscape of the town of Prešov and its near vicinity, Eastern Slovakia. A total of 14,763 ground beetle individuals were trapped. The collection yielded 92 Carabidae species, with the following six species dominating: Poecilus cupreus, Pterostichus melanarius, Pseudoophonus rufipes, Brachinus crepitans, Anchomenus dorsalis and Poecilus versicolor. The studied habitats differed significantly in the number of trapped individuals and activity abundance, as well as in the representation of carabids according to their habitat preferences and ability to fly. However, no significant differences were observed in diversity, evenness or dominance. The environmental variables that most significantly affected species variability in the Carabidae assemblages were soil moisture and the 0-20 cm herb layer. Other important variables selected by forward selection were intensity of agrotechnical interventions, humus content and shrub vegetation. The remaining soil properties studied appear to have only secondary importance for adult carabids. Environmental variables have the strongest effect on habitat specialists, whereas ground beetles without special requirements for habitat quality seem to be only slightly affected by the studied environmental variables.

  11. DESIGN OF AN ACCOUNTING PROCEDURE FOR THE REGISTRATION OF ENVIRONMENTAL VARIABLES

    Directory of Open Access Journals (Sweden)

    Elier Eugenio Rabanal-Arencibia

    2016-01-01

    Full Text Available Many companies report environmental matters in their annual reports, but few are able to account for the environmental events that ultimately affect their financial statements. One of the challenges for the Cuban business sector is to integrate the environment into decision-making processes and business strategies. An accounting system that incorporates the environmental dimension in its chart of accounts will have information available on environmental costs and revenues, which is indispensable for long-term company development, especially for companies involved in the exploitation of natural resources. The purpose of this work is to design an accounting procedure for the registration of environmental variables, as a support for the continuous improvement of environmental accounting.

  12. Quantile selection procedure and associated distribution of ratios of order statistics from a restricted family of probability distributions

    International Nuclear Information System (INIS)

    Gupta, S.S.; Panchapakesan, S.

    1975-01-01

    A quantile selection procedure in reliability problems pertaining to a restricted family of probability distributions is discussed. This family is assumed to be star-ordered with respect to the standard normal distribution folded at the origin. Motivation for this formulation of the problem is described. Both exact and asymptotic results dealing with the distribution of the maximum of ratios of order statistics from such a family are obtained and tables of the appropriate constants, percentiles of this statistic, are given in order to facilitate the use of the selection procedure

  13. Variable selection based near infrared spectroscopy quantitative and qualitative analysis on wheat wet gluten

    Science.gov (United States)

    Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua

    2017-10-01

    Wet gluten is a useful quality indicator for wheat, and short-wave near infrared spectroscopy (NIRS) is a high-performance technique with the advantages of being economical, rapid, and nondestructive. To study the feasibility of short-wave NIRS for analyzing wet gluten directly from wheat seed, 54 representative wheat seed samples were collected and scanned by spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization was the optimal pretreatment method, 17 wet-gluten-sensitive variables were selected by GA, and the GA model performed better than the all-variable model, with R2V=0.88 and RMSEV=1.47. For qualitative analysis, automatic weighted least squares baseline correction was the optimal pretreatment method, and the all-variable models performed better than the GA models. The correct classification rates of the three classes of 30% wet gluten content were 95.45%, 84.52%, and 90.00%, respectively. The short-wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten in wheat seed.
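
    As an illustration of the kind of GA-based wavelength selection described above, the sketch below couples a simple genetic algorithm with a cross-validated PLS model. It is not the authors' implementation: the spectra, wet gluten values, and GA settings (population size, mutation rate, number of generations) are placeholder assumptions.

```python
# Minimal sketch of GA-based wavelength selection for a PLS calibration model.
# This is NOT the authors' code: the spectra X, wet gluten values y, and the GA
# settings (population size, generations, mutation rate) are placeholder
# assumptions chosen only to make the example run.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wl = 54, 256                         # hypothetical data dimensions
X = rng.normal(size=(n_samples, n_wl))            # placeholder NIR spectra
y = rng.normal(loc=30.0, scale=3.0, size=n_samples)  # placeholder wet gluten (%)

def fitness(mask):
    """Cross-validated R^2 of a PLS model restricted to the selected wavelengths."""
    if mask.sum() < 2:
        return -np.inf
    pls = PLSRegression(n_components=min(5, int(mask.sum())))
    return cross_val_score(pls, X[:, mask], y, cv=5, scoring="r2").mean()

pop = rng.random((30, n_wl)) < 0.1                # random subsets, ~10% selected
for generation in range(50):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_wl)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(n_wl) < 0.01              # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("number of selected wavelengths:", int(best.sum()))
print("selected wavelength indices:", np.flatnonzero(best))
```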

  14. The Selection, Use, and Reporting of Control Variables in International Business Research

    DEFF Research Database (Denmark)

    Nielsen, Bo Bernhard; Raswant, Arpit

    2018-01-01

    This study explores the selection, use, and reporting of control variables in studies published in the leading international business (IB) research journals. We review a sample of 246 empirical studies published in the top five IB journals over the period 2012–2015 with particular emphasis on selection, use, and reporting of controls. Approximately 83% of studies included only half of what we consider Minimum Standard of Practice with regards to controls, whereas only 38% of the studies met the 75% threshold. We provide recommendations on how to effectively identify, use and report controls...

  15. Radiation load of the extremities and eye lenses of the staff during selected interventional radiology procedures

    International Nuclear Information System (INIS)

    Nikodemova, Denisa; Trosanova, Dominika

    2010-01-01

    The Slovak Medical University in Bratislava is involved in the ORAMED (Optimization of Radiation Protection for Medical Staff) research project, aimed at developing a unified methodology for a more accurate assessment of professional exposure of interventional radiology staff, with focus on extremity and eye lens dosimetry in selected procedures. Three cardiac procedures and 5 angiography examinations were selected: all technical parameters were monitored and the dose equivalent levels were measured by TL dosimetry at 9 anatomic sites of the body. Preliminary results were obtained for the radiation burden of the eyes and extremities during digital subtraction angiography of the lower limbs, collected from 7 hospital departments in partner EU states. Correlations between the evaluated data and the influence of some parameters are shown

  16. Methodological development for selection of significant predictors explaining fatal road accidents.

    Science.gov (United States)

    Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco

    2016-05-01

    Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still ongoing research. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator experienced the largest reduction internationally during the indicated years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.

  17. Bayesian inference for the genetic control of water deficit tolerance in spring wheat by stochastic search variable selection.

    Science.gov (United States)

    Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi

    2018-06-02

    Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy for developing tolerant cultivars and populations. In this study, generation means analysis to identify the genetic effects controlling grain yield inheritance under water deficit and normal conditions was treated as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best-fitting models using different generations obtained from two crosses under two water regimes in two growing seasons. SSVS evaluates the effect of each variable on the dependent variable via posterior variable inclusion probabilities, and the model with the highest posterior probability is selected as the best model. In this study, grain yield was controlled by main effects (additive and non-additive effects) and epistasis. The results demonstrate that breeding methods such as recurrent selection with a subsequent pedigree method, as well as hybrid production, can be useful for improving grain yield.

  18. Joint Bayesian variable and graph selection for regression models with network-structured predictors

    Science.gov (United States)

    Peterson, C. B.; Stingo, F. C.; Vannucci, M.

    2015-01-01

    In this work, we develop a Bayesian approach to perform selection of predictors that are linked within a network. We achieve this by combining a sparse regression model relating the predictors to a response variable with a graphical model describing conditional dependencies among the predictors. The proposed method is well-suited for genomic applications since it allows the identification of pathways of functionally related genes or proteins which impact an outcome of interest. In contrast to previous approaches for network-guided variable selection, we infer the network among predictors using a Gaussian graphical model and do not assume that network information is available a priori. We demonstrate that our method outperforms existing methods in identifying network-structured predictors in simulation settings, and illustrate our proposed model with an application to inference of proteins relevant to glioblastoma survival. PMID:26514925

  19. Resiliency and subjective health assessment. Moderating role of selected psychosocial variables

    Directory of Open Access Journals (Sweden)

    Michalina Sołtys

    2015-12-01

    Full Text Available Background Resiliency is defined as a relatively permanent personality trait, which may be assigned to the category of health resources. The aim of this study was to determine the conditions under which resiliency constitutes a significant health resource (moderation), thereby broadening knowledge of the specifics of the relationship between resiliency and subjective health assessment. Participants and procedure The study included 142 individuals. In order to examine the level of resiliency, the Assessment Resiliency Scale (SPP-25) by N. Ogińska-Bulik and Z. Juczyński was used. Participants evaluated their subjective health state by means of a visual-analogue scale. Additionally, the following moderating variables were controlled in the research: sex, objective health status, having a partner, professional activity and age. These data were obtained by personal survey. Results The results confirmed the relationship between resiliency and subjective health assessment. Multiple regression analysis revealed that sex, having a partner and professional activity are significant moderators of the association between level of resiliency and subjective health evaluation. However, statistically significant interaction effects with health status and age as moderators were not observed. Conclusions Resiliency is associated with subjective health assessment among adults, and selected socio-demographic features (such as sex, having a partner and professional activity) moderate this relationship. This confirms the significant role of resiliency as a health resource and is a reason to emphasize the benefits of enhancing individuals' potential for their psychophysical wellbeing. However, the research requires replication in a more homogeneous sample.

  20. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low-and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
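
    The following toy simulation illustrates the pitfall discussed above: including a pure cause of the exposure (an instrument) in the propensity model used for inverse probability of treatment weighting. It is a hedged sketch, not the authors' C-TMLE code; the data-generating process, variable names, and effect sizes are invented for the example.

```python
# Illustrative simulation (not the authors' C-TMLE code) of why adjusting for a
# pure cause of exposure harms IPTW: Z affects treatment A only, C is a true
# confounder. Variable names and effect sizes are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
Z = rng.normal(size=n)                           # instrument: causes A, not Y
C = rng.normal(size=n)                           # true confounder
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-(2.0 * Z + 1.0 * C))))
Y = 1.0 * A + 1.0 * C + rng.normal(size=n)       # true treatment effect = 1.0

def iptw_effect(covariates):
    """Hajek-style IPTW estimate of the average treatment effect."""
    ps = LogisticRegression().fit(covariates, A).predict_proba(covariates)[:, 1]
    w = A / ps + (1 - A) / (1 - ps)
    treated = np.sum(w * A * Y) / np.sum(w * A)
    control = np.sum(w * (1 - A) * Y) / np.sum(w * (1 - A))
    return treated - control

print("propensity score on confounder only:        ", iptw_effect(C.reshape(-1, 1)))
print("propensity score on confounder + instrument:", iptw_effect(np.column_stack([C, Z])))
# The second estimate is typically noisier (and can be more biased in finite
# samples) because Z predicts treatment strongly but carries no outcome information.
```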

  1. Heuristic and probabilistic wind power availability estimation procedures: Improved tools for technology and site selection

    Energy Technology Data Exchange (ETDEWEB)

    Nigim, K.A. [University of Waterloo, Waterloo, Ont. (Canada). Department of Electrical and Computer Engineering; Parker, Paul [University of Waterloo, Waterloo, Ont. (Canada). Department of Geography, Environmental Studies

    2007-04-15

    The paper describes two investigative procedures to estimate wind power from measured wind velocities. Wind velocity data are manipulated to visualize the site potential by investigating the probable wind power availability and its capacity to meet a targeted demand. The first procedure is an availability procedure that looks at the wind characteristics and its probable energy capturing profile. This profile of wind enables the probable maximum operating wind velocity profile for a selected wind turbine design to be predicted. The structured procedures allow for a consequent adjustment, sorting and grouping of the measured wind velocity data taken at different time intervals and hub heights. The second procedure is the adequacy procedure that investigates the probable degree of availability and the application consequences. Both procedures are programmed using MathCAD symbolic mathematical software. The math tool is used to generate a visual interpolation of the data as well as numerical results from extensive data sets that exceed the capacity of conventional spreadsheet tools. Two sites located in Southern Ontario, Canada are investigated using the procedures. Successful implementation of the procedures supports informed decision making where a hill site is shown to have much higher wind potential than that measured at the local airport. The process is suitable for a wide spectrum of users who are considering the energy potential for either a grid-tied or off-grid wind energy system. (author)
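
    The sketch below gives a rough sense of the availability idea in open-source terms rather than the authors' MathCAD procedures: fit a Weibull distribution to measured wind speeds and estimate how often the wind falls within a turbine's operating range. The hourly speed series, the cut-in/rated/cut-out speeds, and the simplified power curve are illustrative assumptions.

```python
# Rough, open-source stand-in for the availability idea (the paper itself uses
# MathCAD): fit a Weibull distribution to measured wind speeds and estimate how
# often the wind lies within a turbine's operating range. The hourly speed
# series, turbine limits, and the simplified power curve are assumptions.
import numpy as np
from scipy import stats

v = stats.weibull_min.rvs(2.0, scale=7.0, size=8760, random_state=3)  # mock hourly data (m/s)

shape, loc, scale = stats.weibull_min.fit(v, floc=0)   # fix the location at zero
cut_in, rated, cut_out = 3.5, 12.0, 25.0               # example turbine limits (m/s)

dist = stats.weibull_min(shape, loc, scale)
p_operating = dist.cdf(cut_out) - dist.cdf(cut_in)
print(f"Weibull k = {shape:.2f}, c = {scale:.2f} m/s")
print(f"probability wind is in the operating range: {p_operating:.1%}")

# crude annual energy estimate for a hypothetical 1 MW turbine with a cubic
# ramp between cut-in and rated speed (power in kW, one value per hour)
power = np.where(v < cut_in, 0.0,
         np.where(v < rated, 1000.0 * ((v - cut_in) / (rated - cut_in)) ** 3,
          np.where(v < cut_out, 1000.0, 0.0)))
print(f"estimated annual energy capture: {power.sum() / 1000:.0f} MWh")
```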

  2. Selective Sequential Zero-Base Budgeting Procedures Based on Total Factor Productivity Indicators

    OpenAIRE

    A. Ishikawa; E. F. Sudit

    1981-01-01

    The authors' purpose in this paper is to develop productivity-based sequential budgeting procedures designed to expedite identification of major problem areas in budgetary performance, as well as to reduce the costs associated with comprehensive zero-base analyses. The concept of total factor productivity is reviewed and its relations to ordinary and zero-based budgeting are discussed in detail. An outline for a selective sequential analysis based on monitoring of three key indicators of (a) i...

  3. Selection of variables for neural network analysis. Comparisons of several methods with high energy physics data

    International Nuclear Information System (INIS)

    Proriol, J.

    1994-01-01

    Five different methods are compared for selecting the most important variables with a view to classifying high energy physics events with neural networks. The different methods are: the F-test, Principal Component Analysis (PCA), a decision tree method: CART, weight evaluation, and Optimal Cell Damage (OCD). The neural networks use the variables selected with the different methods. We compare the percentages of events properly classified by each neural network. The learning set and the test set are the same for all the neural networks. (author)
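
    A minimal sketch of one of the compared strategies (F-test ranking of variables before training a classifier) is given below. The event sample is synthetic, the number of retained variables is arbitrary, and a small scikit-learn MLP stands in for the neural networks used in the paper.

```python
# Hedged sketch of one of the compared strategies: rank variables with a one-way
# ANOVA F-test and train a classifier on the top-ranked ones. The event sample is
# synthetic, eight retained variables is arbitrary, and a small MLP stands in for
# the paper's neural networks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)
F, _ = f_classif(X, y)                      # F-score per input variable
top = np.argsort(F)[::-1][:8]               # keep the 8 highest-ranked variables

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print("fraction of events properly classified:", clf.score(X_te, y_te))
```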

  4. Demographic Variables and Selective, Sustained Attention and Planning through Cognitive Tasks among Healthy Adults

    Directory of Open Access Journals (Sweden)

    Afsaneh Zarghi

    2011-04-01

    Full Text Available Introduction: Cognitive tasks are considered applicable and appropriate for assessing cognitive domains. The purpose of our study is to determine whether relationships exist between the variables of age, sex and education and selective attention, sustained attention and planning abilities, assessed by means of computerized cognitive tasks among healthy adults. Methods: A cross-sectional study was carried out over 6 months, from June to November 2010, on 84 healthy adults (42 male and 42 female). All participants performed the computerized CPT, STROOP and TOL tests after giving consent and receiving training. Results: The obtained data indicate significant correlations with the age, sex and education variables (p<0.05). Discussion: The above-mentioned tests can be used to assess selective and sustained attention and planning.

  5. Preoperative testing and risk assessment: perspectives on patient selection in ambulatory anesthetic procedures

    Directory of Open Access Journals (Sweden)

    Stierer TL

    2015-08-01

    Full Text Available Tracey L Stierer,1,2 Nancy A Collop3,4; 1Department of Anesthesiology, 2Department of Critical Care Medicine, Otolaryngology Head and Neck Surgery, Johns Hopkins Medicine, Baltimore, MD, USA; 3Department of Medicine, 4Department of Neurology, Emory University, Emory Sleep Center, Wesley Woods Center, Atlanta, GA, USA. Abstract: With recent advances in surgical and anesthetic technique, there has been a growing emphasis on the delivery of care to patients undergoing ambulatory procedures of increasing complexity. Appropriate patient selection and meticulous preparation are vital to the provision of a safe, quality perioperative experience. It is not unusual for patients with complex medical histories and substantial systemic disease to be scheduled for discharge on the same day as their surgical procedure. The trend to “push the envelope” by triaging progressively sicker patients to ambulatory surgical facilities has resulted in a number of challenges for the anesthesia provider who will assume their care. It is well known that certain patient diseases are associated with increased perioperative risk. It is therefore important to define clinical factors that warrant more extensive testing of the patient and medical conditions that present a prohibitive risk for an adverse outcome. The preoperative assessment is an opportunity for the anesthesia provider to determine the status and stability of the patient’s health, provide preoperative education and instructions, and offer support and reassurance to the patient and the patient’s family members. Communication between the surgeon/proceduralist and the anesthesia provider is critical in achieving optimal outcome. A multifaceted approach is required when considering whether a specific patient will be best served having their procedure on an outpatient basis. Not only should the patient's comorbidities be stable and optimized, but details regarding the planned procedure and the resources available

  6. Effects of musical tempo on physiological, affective, and perceptual variables and performance of self-selected walking pace.

    Science.gov (United States)

    Almeida, Flávia Angélica Martins; Nunes, Renan Felipe Hartmann; Ferreira, Sandro Dos Santos; Krinski, Kleverton; Elsangedy, Hassan Mohamed; Buzzachera, Cosme Franklin; Alves, Ragami Chaves; Gregorio da Silva, Sergio

    2015-06-01

    [Purpose] This study investigated the effects of musical tempo on physiological, affective, and perceptual responses as well as the performance of self-selected walking pace. [Subjects] The study included 28 adult women between 29 and 51 years old. [Methods] The subjects were divided into three groups: a no-musical-stimulation group (control), and 90 and 140 beats per minute musical tempo groups. Each subject underwent three experimental sessions: familiarization with the equipment, an incremental test to exhaustion, and a 30-min walk on a treadmill at a self-selected pace, respectively. During the self-selected walking session, physiological, perceptual, and affective variables were evaluated, and walking performance was evaluated at the end. [Results] There were no significant differences in physiological variables or affective response among groups. However, there were significant differences in perceptual response and walking performance among groups. [Conclusion] Fast music (140 beats per minute) promotes a higher rating of perceived exertion and greater performance in self-selected walking pace without significantly altering physiological variables or affective response.

  7. Statistical identification of effective input variables

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications
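
    A rough sketch of the stagewise-correlation screening idea, not the SCREEN code itself, is shown below: sample input combinations, run the model, and rank inputs by how strongly each correlates with the residual of a regression on the inputs picked so far. The "computer code" is a cheap stand-in function and the run count is arbitrary.

```python
# Rough sketch of the stagewise screening idea, not the SCREEN code itself:
# sample input combinations, run the "code", and rank inputs by how strongly
# each correlates with the residual of a regression on the inputs picked so far.
# The model function, run count, and number of ranked inputs are placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_runs, n_inputs = 60, 10                       # few runs relative to input count
X = rng.uniform(-1.0, 1.0, size=(n_runs, n_inputs))

def code_output(x):
    """Cheap stand-in for an expensive simulation code."""
    return 3.0 * x[:, 0] - 2.0 * x[:, 3] + 0.5 * x[:, 3] * x[:, 7] \
           + 0.1 * rng.normal(size=len(x))

y = code_output(X)

selected, residual = [], y.copy()
for _ in range(3):                              # rank the 3 most important inputs
    corr = [0.0 if j in selected else abs(np.corrcoef(X[:, j], residual)[0, 1])
            for j in range(n_inputs)]
    selected.append(int(np.argmax(corr)))
    beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    residual = y - X[:, selected] @ beta        # what remains unexplained so far

print("inputs ranked by sensitivity-importance:", selected)
```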

  8. Variable Selection in Heterogeneous Datasets: A Truncated-rank Sparse Linear Mixed Model with Applications to Genome-wide Association Studies.

    Science.gov (United States)

    Wang, Haohan; Aragam, Bryon; Xing, Eric P

    2018-04-26

    A fundamental and important challenge in modern datasets of ever increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naïvely applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and human, and discuss the knowledge we discover with our method. Copyright © 2018. Published by Elsevier Inc.

  9. Considerable variability of procedural sedation and analgesia practices for gastrointestinal endoscopic procedures in Europe

    NARCIS (Netherlands)

    Vaessen, Hermanus H B; Knape, Johannes T A

    2016-01-01

    Background/Aims: The use of moderate to deep sedation for gastrointestinal endoscopic procedures has increased in Europe considerably. Because this level of sedation is a risky medical procedure, a number of international guidelines have been developed. This survey aims to review if, and if so

  10. Concurrent variable-interval variable-ratio schedules in a dynamic choice environment.

    Science.gov (United States)

    Bell, Matthew C; Baum, William M

    2017-11-01

    Most studies of operant choice have focused on presenting subjects with a fixed pair of schedules across many experimental sessions. Using these methods, studies of concurrent variable-interval variable-ratio schedules helped to evaluate theories of choice. More recently, a growing literature has focused on dynamic choice behavior. Those dynamic choice studies have analyzed behavior on a number of different time scales using concurrent variable-interval schedules. Following the dynamic choice approach, the present experiment examined performance on concurrent variable-interval variable-ratio schedules in a rapidly changing environment. Our objectives were to compare performance on concurrent variable-interval variable-ratio schedules with extant data on concurrent variable-interval variable-interval schedules using a dynamic choice procedure and to extend earlier work on concurrent variable-interval variable-ratio schedules. We analyzed performances at different time scales, finding strong similarities between concurrent variable-interval variable-interval and concurrent variable-interval variable-ratio performance within dynamic choice procedures. Time-based measures revealed almost identical performance in the two procedures compared with response-based measures, supporting the view that choice is best understood as time allocation. Performance at the smaller time scale of visits accorded with the tendency seen in earlier research toward developing a pattern of strong preference for and long visits to the richer alternative paired with brief "samples" at the leaner alternative ("fix and sample"). © 2017 Society for the Experimental Analysis of Behavior.

  11. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a

  12. Cataclysmic variables from a ROSAT/2MASS selection - I. Four new intermediate polars

    NARCIS (Netherlands)

    Gänsicke, B.T.; Marsh, T.R.; Edge, A.; Rodríguez-Gil, P.; Steeghs, D.; Araujo-Betancor, S.; Harlaftis, E.; Giannakis, O.; Pyrzas, S.; Morales-Rueda, L.; Aungwerojwit, A.

    2005-01-01

    We report the first results from a new search for cataclysmic variables (CVs) using a combined X-ray (ROSAT)/infrared (2MASS) target selection that discriminates against background active galactic nuclei. Identification spectra were obtained at the Isaac Newton Telescope for a total of 174 targets,

  13. A Proposed Model for Selecting Measurement Procedures for the Assessment and Treatment of Problem Behavior.

    Science.gov (United States)

    LeBlanc, Linda A; Raetz, Paige B; Sellers, Tyra P; Carr, James E

    2016-03-01

    Practicing behavior analysts frequently assess and treat problem behavior as part of their ongoing job responsibilities. Effective measurement of problem behavior is critical to success in these activities because some measures of problem behavior provide more accurate and complete information about the behavior than others. However, not every measurement procedure is appropriate for every problem behavior and therapeutic circumstance. We summarize the most commonly used measurement procedures, describe the contexts for which they are most appropriate, and propose a clinical decision-making model for selecting measurement procedures given certain features of the behavior and constraints of the therapeutic environment.

  14. Selecting aesthetic gynecologic procedures for plastic surgeons: a review of target methodology.

    Science.gov (United States)

    Ostrzenski, Adam

    2013-04-01

    The objective of this article was to assist cosmetic-plastic surgeons in selecting aesthetic cosmetic gynecologic-plastic surgical interventions. Target methodological analyses of pertinent evidence-based scientific papers and anecdotal information linked to surgical techniques for cosmetic-plastic female external genitalia were examined. A search of the existing literature from 1900 through June 2011 was performed by utilizing electronic and manual databases. A total of 87 articles related to cosmetic-plastic gynecologic surgeries were identified in peer-review journals. Anecdotal information was identified in three sources (Barwijuk, Obstet Gynecol J 9(3):2178-2179, 2011; Benson, 5th annual congress on aesthetic vaginal surgery, Tucson, AZ, USA, November 14-15, 2010; Scheinberg, Obstet Gynecol J 9(3):2191, 2011). Among those articles on cosmetic-plastic gynecologic surgical technique that were reviewed, three articles met the criteria for evidence-based medicine level II, one article was level II-1 and two papers were level II-2. The remaining papers were classified as level III. The pertinent 25 papers met the inclusion criteria and were analyzed. There was no documentation on the safety and effectiveness of cosmetic-plastic gynecologic procedures in the scientific literature. All published surgical interventions are not suitable for a cosmetic-plastic practice. The absence of documentation on safety and effectiveness related to cosmetic-plastic gynecologic procedures prevents the establishment of a standard of practice. Traditional gynecologic surgical procedures cannot be labeled and used as cosmetic-plastic procedures, it is a deceptive practice. Obtaining legal trademarks on traditional gynecologic procedures and creating a business model that tries to control clinical-scientific knowledge dissemination is unethical. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings

  15. Social variables exert selective pressures in the evolution and form of primate mimetic musculature.

    Science.gov (United States)

    Burrows, Anne M; Li, Ly; Waller, Bridget M; Micheletta, Jerome

    2016-04-01

    Mammals use their faces in social interactions more so than any other vertebrates. Primates are an extreme among most mammals in their complex, direct, lifelong social interactions and their frequent use of facial displays is a means of proximate visual communication with conspecifics. The available repertoire of facial displays is primarily controlled by mimetic musculature, the muscles that move the face. The form of these muscles is, in turn, limited by and influenced by phylogenetic inertia but here we use examples, both morphological and physiological, to illustrate the influence that social variables may exert on the evolution and form of mimetic musculature among primates. Ecomorphology is concerned with the adaptive responses of morphology to various ecological variables such as diet, foliage density, predation pressures, and time of day activity. We present evidence that social variables also exert selective pressures on morphology, specifically using mimetic muscles among primates as an example. Social variables include group size, dominance 'style', and mating systems. We present two case studies to illustrate the potential influence of social behavior on adaptive morphology of mimetic musculature in primates: (1) gross morphology of the mimetic muscles around the external ear in closely related species of macaque (Macaca mulatta and Macaca nigra) characterized by varying dominance styles and (2) comparative physiology of the orbicularis oris muscle among select ape species. This muscle is used in both facial displays/expressions and in vocalizations/human speech. We present qualitative observations of myosin fiber-type distribution in this muscle of siamang (Symphalangus syndactylus), chimpanzee (Pan troglodytes), and human to demonstrate the potential influence of visual and auditory communication on muscle physiology. In sum, ecomorphologists should be aware of social selective pressures as well as ecological ones, and that observed morphology might

  16. Parent-Implemented Procedural Modification of Escape Extinction in the Treatment of Food Selectivity in a Young Child with Autism

    Science.gov (United States)

    Tarbox, Jonathan; Schiff, Averil; Najdowski, Adel C.

    2010-01-01

    Food selectivity is characterized by the consumption of an inadequate variety of foods. The effectiveness of behavioral treatment procedures, particularly nonremoval of the spoon, is well validated by research. The role of parents in the treatment of feeding disorders and the feasibility of behavioral procedures for parent implementation in the…

  17. The procedure of alternative site selection within the report of the study group on the radioactive waste final repository selection process (AKEnd); Das Verfahren der alternativen Standortsuche im Bericht des Arbeitskreises Auswahlverfahren Endlagerstandorte (AKEnd)

    Energy Technology Data Exchange (ETDEWEB)

    Brenner, M. [Jena Univ. (Germany)

    2005-07-01

    The paper discusses the results of the report of the study group on the radioactive waste final repository selection process (AKEnd) with respect to the alternative site selection procedure. Key points of the report are long-term safety, the consideration of alternative sites and the concept of a single repository. The critique of this report focuses on site selection and licensing procedures, public participation, the time factor and the question of cost.

  18. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Full Text Available Abstract Background We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures which incorporate the relationship among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it does a smart search over the entire model space.
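
    For readers unfamiliar with SSVS, the toy Gibbs sampler below follows the George and McCulloch (1993) spike-and-slab formulation and reports marginal posterior inclusion probabilities, the quantity used for ranking above. It is not the authors' GAW13 implementation; the simulated markers, the hyperparameters, and the assumption of a known noise variance are simplifications.

```python
# Toy SSVS Gibbs sampler in the spirit of George & McCulloch (1993), reporting
# marginal posterior inclusion probabilities as in the study above. It is not the
# authors' GAW13 implementation: the simulated markers, the hyperparameters, and
# the assumption of a known noise variance are simplifications.
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[2, 7]] = [1.0, -0.8]                        # two truly "linked" markers
y = X @ beta_true + rng.normal(size=n)

sigma2, tau0, c, prior_p = 1.0, 0.05, 10.0, 0.2        # SSVS hyperparameters
gamma = np.zeros(p, dtype=int)
inclusion = np.zeros(p)
n_iter, burn_in = 2000, 500

for it in range(n_iter):
    # 1) beta | gamma, y  ~  multivariate normal full conditional
    tau2 = np.where(gamma == 1, (c * tau0) ** 2, tau0 ** 2)
    cov = np.linalg.inv(X.T @ X / sigma2 + np.diag(1.0 / tau2))
    beta = rng.multivariate_normal(cov @ X.T @ y / sigma2, cov)
    # 2) gamma_j | beta_j  ~  Bernoulli full conditional (slab vs. spike density)
    slab = np.exp(-0.5 * beta ** 2 / (c * tau0) ** 2) / (c * tau0)
    spike = np.exp(-0.5 * beta ** 2 / tau0 ** 2) / tau0
    prob = prior_p * slab / (prior_p * slab + (1 - prior_p) * spike)
    gamma = rng.binomial(1, prob)
    if it >= burn_in:
        inclusion += gamma

print("marginal posterior inclusion probabilities:")
print(np.round(inclusion / (n_iter - burn_in), 2))
```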

  19. Bayesian variable selection for post-analytic interrogation of susceptibility loci.

    Science.gov (United States)

    Chen, Siying; Nunez, Sara; Reilly, Muredach P; Foulkes, Andrea S

    2017-06-01

    Understanding the complex interplay among protein coding genes and regulatory elements requires rigorous interrogation with analytic tools designed for discerning the relative contributions of overlapping genomic regions. To this aim, we offer a novel application of Bayesian variable selection (BVS) for classifying genomic class level associations using existing large meta-analysis summary level resources. This approach is applied using the expectation maximization variable selection (EMVS) algorithm to typed and imputed SNPs across 502 protein coding genes (PCGs) and 220 long intergenic non-coding RNAs (lncRNAs) that overlap 45 known loci for coronary artery disease (CAD) using publicly available Global Lipids Genetics Consortium (GLGC) (Teslovich et al., 2010; Willer et al., 2013) meta-analysis summary statistics for low-density lipoprotein cholesterol (LDL-C). The analysis reveals 33 PCGs and three lncRNAs across 11 loci with >50% posterior probabilities for inclusion in an additive model of association. The findings are consistent with previous reports, while providing some new insight into the architecture of LDL-cholesterol to be investigated further. As genomic taxonomies continue to evolve, additional classes, such as enhancer elements and splicing regions, can easily be layered into the proposed analysis framework. Moreover, application of this approach to alternative publicly available meta-analysis resources, or more generally as a post-analytic strategy to further interrogate regions that are identified through single point analysis, is straightforward. All coding examples are implemented in R version 3.2.1 and provided as supplemental material. © 2016, The International Biometric Society.

  20. QUASI-STELLAR OBJECT SELECTION ALGORITHM USING TIME VARIABILITY AND MACHINE LEARNING: SELECTION OF 1620 QUASI-STELLAR OBJECT CANDIDATES FROM MACHO LARGE MAGELLANIC CLOUD DATABASE

    International Nuclear Information System (INIS)

    Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni

    2011-01-01

    We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
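
    The snippet below sketches only the classification step: a support vector machine trained on pre-extracted light-curve features and evaluated by cross-validation. The feature table is randomly generated rather than the MACHO training set, and the class sizes only loosely mimic the 58 QSOs versus 1629 variable and 4288 non-variable stars quoted above.

```python
# Sketch of the classification step only: an RBF support vector machine trained
# on pre-extracted light-curve features and evaluated by cross-validation. The
# feature table is random, not the MACHO training set; class sizes loosely mimic
# the 58 QSOs versus 1629 variable and 4288 non-variable stars quoted above.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n_qso, n_other = 58, 1629 + 4288
X = np.vstack([rng.normal(loc=1.0, size=(n_qso, 4)),     # QSO-like feature values
               rng.normal(loc=0.0, size=(n_other, 4))])  # everything else
y = np.r_[np.ones(n_qso), np.zeros(n_other)]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
pred = cross_val_predict(model, X, y, cv=5)

recall = (pred[y == 1] == 1).mean()         # fraction of known QSOs recovered
flagged = (pred[y == 0] == 1).mean()        # fraction of non-QSOs flagged as QSOs
print(f"QSO recovery rate: {recall:.0%}, non-QSOs flagged as QSOs: {flagged:.1%}")
```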

  1. Multivariate modeling of complications with data driven variable selection: Guarding against overfitting and effects of data set size

    International Nuclear Information System (INIS)

    Schaaf, Arjen van der; Xu Chengjian; Luijk, Peter van; Veld, Aart A. van’t; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    Purpose: Multivariate modeling of complications after radiotherapy is frequently used in conjunction with data-driven variable selection. This study quantifies the risk of overfitting in a data-driven modeling method using bootstrapping for data with typical clinical characteristics, and estimates the minimum amount of data needed to obtain models with relatively high predictive power. Materials and methods: To facilitate repeated modeling and cross-validation with independent datasets for the assessment of true predictive power, a method was developed to generate simulated data with statistical properties similar to real clinical data sets. Characteristics of three clinical data sets from radiotherapy treatment of head and neck cancer patients were used to simulate data with set sizes between 50 and 1000 patients. A logistic regression method using bootstrapping and forward variable selection was used for complication modeling, resulting for each simulated data set in a selected number of variables and an estimated predictive power. The true optimal number of variables and true predictive power were calculated using cross-validation with very large independent data sets. Results: For all simulated data set sizes the number of variables selected by the bootstrapping method was on average close to the true optimal number of variables, but showed considerable spread. Bootstrapping is more accurate in selecting the optimal number of variables than the AIC and BIC alternatives, but this did not translate into a significant difference in the true predictive power. The true predictive power asymptotically converged toward a maximum predictive power for large data sets, and the estimated predictive power converged toward the true predictive power. More than half of the potential predictive power is gained after approximately 200 samples. Our simulations demonstrated severe overfitting (a predictive power lower than that of predicting 50% probability) in a number of small
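
    A schematic version of the modeling loop described above, bootstrap resampling combined with greedy forward selection for a logistic complication model, is sketched below. It is a simplified stand-in, not the authors' method: the simulated data, the training log-loss criterion, and the fixed model size are assumptions made for brevity.

```python
# Schematic stand-in (not the authors' method) for bootstrapped forward variable
# selection in logistic complication modeling: refit the greedy selection on each
# bootstrap resample and record how often each variable is chosen. The simulated
# data, the training log-loss criterion, and the fixed model size are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.utils import resample

rng = np.random.default_rng(7)
n, p = 200, 10                                   # small "clinical" data set
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.6 * X[:, 1]))))

def forward_select(Xb, yb, n_vars=2):
    """Greedy forward selection minimizing training log-loss on one resample."""
    chosen = []
    for _ in range(n_vars):
        losses = []
        for j in range(p):
            if j in chosen:
                losses.append(np.inf)
                continue
            cand = chosen + [j]
            m = LogisticRegression(max_iter=1000).fit(Xb[:, cand], yb)
            losses.append(log_loss(yb, m.predict_proba(Xb[:, cand])[:, 1]))
        chosen.append(int(np.argmin(losses)))
    return chosen

counts = np.zeros(p)
for b in range(50):                              # bootstrap replicates
    Xb, yb = resample(X, y, random_state=b)
    for j in forward_select(Xb, yb):
        counts[j] += 1

print("selection frequency per candidate variable:", counts / 50)
```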

  2. Stochastic weather inputs for improved urban water demand forecasting: application of nonlinear input variable selection and machine learning methods

    Science.gov (United States)

    Quilty, J.; Adamowski, J. F.

    2015-12-01

    Urban water supply systems are often stressed during seasonal outdoor water use as water demands related to the climate are variable in nature making it difficult to optimize the operation of the water supply system. Urban water demand forecasts (UWD) failing to include meteorological conditions as inputs to the forecast model may produce poor forecasts as they cannot account for the increase/decrease in demand related to meteorological conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions and accounts for relevancy, conditional relevancy, and redundancy from a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.

  3. Doubly sparse factor models for unifying feature transformation and feature selection

    International Nuclear Information System (INIS)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato; Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko

    2010-01-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  4. Doubly sparse factor models for unifying feature transformation and feature selection

    Energy Technology Data Exchange (ETDEWEB)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato [ERATO, Okanoya Emotional Information Project, Japan Science Technology Agency, Saitama (Japan); Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko, E-mail: okada@k.u-tokyo.ac.j [Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki (Japan)

    2010-06-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  5. Determination of main fruits in adulterated nectars by ATR-FTIR spectroscopy combined with multivariate calibration and variable selection methods.

    Science.gov (United States)

    Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho

    2018-07-15

    Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruits. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity to quantify the main fruit contents. The selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and, the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.
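
    The sketch below illustrates the interval-selection idea behind iPLS in its simplest form: split the spectrum into equal intervals and score each with a cross-validated PLS model. The spectra and fruit contents are simulated placeholders, and the OPS and GA refinements used in the paper are not reproduced.

```python
# Minimal interval-PLS (iPLS) style sketch: split the spectrum into equal
# intervals and score each interval with a cross-validated PLS model. The
# spectra and fruit contents are simulated placeholders, and the OPS and GA
# refinements used in the paper are not reproduced here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_samples, n_vars, n_intervals = 60, 400, 20
X = rng.normal(size=(n_samples, n_vars))            # placeholder ATR-FTIR spectra
y = X[:, 120:140].mean(axis=1) * 5.0 + 40.0 \
    + rng.normal(scale=0.5, size=n_samples)         # main fruit content (%)

edges = np.linspace(0, n_vars, n_intervals + 1, dtype=int)
rmsecv = []
for k in range(n_intervals):
    Xk = X[:, edges[k]:edges[k + 1]]
    score = cross_val_score(PLSRegression(n_components=3), Xk, y,
                            cv=5, scoring="neg_root_mean_squared_error")
    rmsecv.append(-score.mean())

best = int(np.argmin(rmsecv))
print(f"best interval: variables {edges[best]}-{edges[best + 1] - 1}, "
      f"RMSECV = {rmsecv[best]:.2f}%")
```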

  6. IMPROVED VARIABLE STAR SEARCH IN LARGE PHOTOMETRIC DATA SETS: NEW VARIABLES IN CoRoT FIELD LRa02 DETECTED BY BEST II

    International Nuclear Information System (INIS)

    Fruth, T.; Cabrera, J.; Csizmadia, Sz.; Eigmüller, P.; Erikson, A.; Kirste, S.; Pasternacki, T.; Rauer, H.; Titz-Weider, R.; Kabath, P.; Chini, R.; Lemke, R.; Murphy, M.

    2012-01-01

    The CoRoT field LRa02 has been observed with the Berlin Exoplanet Search Telescope II (BEST II) during the southern summer 2007/2008. A first analysis of stellar variability led to the publication of 345 newly discovered variable stars. Now, a deeper analysis of this data set was used to optimize the variability search procedure. Several methods and parameters have been tested in order to improve the selection process compared to the widely used J index for variability ranking. This paper describes an empirical approach to treat systematic trends in photometric data based upon the analysis of variance statistics that can significantly decrease the rate of false detections. Finally, the process of reanalysis and method improvement has virtually doubled the number of variable stars compared to the first analysis by Kabath et al. A supplementary catalog of 272 previously unknown periodic variables plus 52 stars with suspected variability is presented. Improved ephemerides are given for 19 known variables in the field. In addition, the BEST II results are compared with CoRoT data and its automatic variability classification.

  7. Calibration Variable Selection and Natural Zero Determination for Semispan and Canard Balances

    Science.gov (United States)

    Ulbrich, Norbert M.

    2013-01-01

    Independent calibration variables for the characterization of semispan and canard wind tunnel balances are discussed. It is shown that the variable selection for a semispan balance is determined by the location of the resultant normal and axial forces that act on the balance. These two forces are the first and second calibration variable. The pitching moment becomes the third calibration variable after the normal and axial forces are shifted to the pitch axis of the balance. Two geometric distances, i.e., the rolling and yawing moment arms, are the fourth and fifth calibration variable. They are traditionally substituted by corresponding moments to simplify the use of calibration data during a wind tunnel test. A canard balance is related to a semispan balance. It also only measures loads on one half of a lifting surface. However, the axial force and yawing moment are of no interest to users of a canard balance. Therefore, its calibration variable set is reduced to the normal force, pitching moment, and rolling moment. The combined load diagrams of the rolling and yawing moment for a semispan balance are discussed. They may be used to illustrate connections between the wind tunnel model geometry, the test section size, and the calibration load schedule. Then, methods are reviewed that may be used to obtain the natural zeros of a semispan or canard balance. In addition, characteristics of three semispan balance calibration rigs are discussed. Finally, basic requirements for a full characterization of a semispan balance are reviewed.

  8. Selection for altruism through random drift in variable size populations

    Directory of Open Access Journals (Sweden)

    Houchmandzadeh Bahram

    2012-05-01

    Full Text Available Abstract Background Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have ‘hidden’ advantages if the ‘common good’ produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency-dependent fitness. Results Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of “selfish” alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists’ advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained both by numerical and analytical calculations. Analytical results are obtained in the limit of large populations. Conclusions The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models where the carrying capacity depends on the number of altruists.

  9. Classification and quantitation of milk powder by near-infrared spectroscopy and mutual information-based variable selection and partial least squares

    Science.gov (United States)

    Chen, Hui; Tan, Chao; Lin, Zan; Wu, Tong

    2018-01-01

    Milk is among the most popular nutrient sources worldwide and is of great interest due to its beneficial medicinal properties. The feasibility of classifying milk powder samples with respect to their brands and of determining protein concentration is investigated by NIR spectroscopy along with chemometrics. Two datasets were prepared for the experiment. One contains 179 samples of four brands for classification and the other contains 30 samples for quantitative analysis. Principal component analysis (PCA) was used for exploratory analysis. Based on an effective model-independent variable selection method, i.e., minimal-redundancy maximal-relevance (MRMR), only 18 variables were selected to construct a partial least-squares discriminant analysis (PLS-DA) model. On the test set, the PLS-DA model based on the selected variable set was compared with the full-spectrum PLS-DA model, both of which achieved 100% accuracy. In quantitative analysis, the partial least-squares regression (PLSR) model constructed from the selected subset of 260 variables significantly outperforms the full-spectrum model. It seems that the combination of NIR spectroscopy, MRMR and PLS-DA or PLSR is a powerful tool for classifying different brands of milk and determining the protein content.
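
    A hedged sketch of an mRMR-style greedy selection followed by PLS-DA is given below. It is only meant to convey the workflow: the spectra and brand labels are simulated, redundancy is approximated by the mean absolute correlation with the already selected variables, and 18 variables are kept only to mirror the figure quoted above.

```python
# Hedged sketch of an mRMR-style greedy selection followed by PLS-DA. The NIR
# spectra and brand labels are simulated, redundancy is approximated by the mean
# absolute correlation with the already selected variables, and 18 variables are
# kept only to mirror the figure quoted above.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n, p, n_select = 179, 200, 18
y = rng.integers(0, 4, size=n)                        # four milk powder brands
X = rng.normal(size=(n, p)) + np.outer(y, np.r_[np.ones(10), np.zeros(p - 10)])

relevance = mutual_info_classif(X, y, random_state=0)
corr = np.abs(np.corrcoef(X, rowvar=False))

selected = [int(np.argmax(relevance))]
while len(selected) < n_select:
    redundancy = corr[:, selected].mean(axis=1)
    score = relevance - redundancy                    # relevance minus redundancy
    score[selected] = -np.inf
    selected.append(int(np.argmax(score)))

# PLS-DA: regress one-hot brand membership on the selected variables
Y = np.eye(4)[y]
X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(X[:, selected], Y, y,
                                                      random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1)
print("test-set classification accuracy:", (pred == y_te).mean())
```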

  10. Expert and non-expert groups perception of LILW repository site selection procedure

    International Nuclear Information System (INIS)

    Zeleznik, N.; Polic, M.

    2001-01-01

    Slovenia is now in the process of the site selection for a low and intermediate level radioactive waste (LILW) repository. Earlier searches for the LILW repository site confronted the Agency for radwaste management (ARAO) with a number of problems, mainly concerning the contacts with the local communities and their willingness to accept the repository. Therefore the Agency started with a new, so-called mixed mode approach to the site selection, where the special role of a mediator is introduced. The mediator represents the link between the investor and the local community, and facilitates the communication and negotiations between both. In this study we try to find out how people perceive the mediating process and conditions under which the LILW repository would be accepted in the local community. Therefore a special survey was conducted. The results showed some of the conditions under which participants would possibly accept the LILW repository. Differences in the perception between non-expert and expert groups were demonstrated and analysed, especially in the assessment of the consequences of LILW repository construction on the environment. Also the socio-psychological influences of the LILW repository were noted and examined. Consequences and recommendations for future work on the site selection procedure were prepared on the basis of the research results.(author)

  11. [Costing nuclear medicine diagnostic procedures].

    Science.gov (United States)

    Markou, Pavlos

    2005-01-01

    To the Editor: Referring to a recent special report about the cost analysis of twenty-nine nuclear medicine procedures, I would like to clarify some basic aspects of determining the costs of nuclear medicine procedures with various costing methodologies. The Activity Based Costing (ABC) method is a new approach in imaging services costing that can provide the most accurate cost data, but it is difficult to perform in nuclear medicine diagnostic procedures, because ABC requires determining and analyzing all direct and indirect costs of each procedure, according to all its activities. Traditional costing methods, such as estimating incomes and expenses per procedure or fixed and variable costs per procedure (widely used in break-even point analysis), and the method of ratio-of-costs-to-charges per procedure, may be easily performed in nuclear medicine departments to evaluate the variability and differences between costs and reimbursement (charges).
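
    As a concrete illustration of the fixed/variable cost split and break-even analysis mentioned above, a minimal calculation is sketched below. All figures are hypothetical placeholders, not data from the letter.

```python
# Illustrative break-even calculation for a single nuclear medicine procedure.
fixed_costs = 120_000.0          # annual fixed costs allocated to the procedure (equipment, staff)
variable_cost_per_exam = 85.0    # consumables and radiopharmaceutical per exam
reimbursement_per_exam = 160.0   # charge received per exam

contribution_margin = reimbursement_per_exam - variable_cost_per_exam
break_even_volume = fixed_costs / contribution_margin
print(f"Break-even at {break_even_volume:.0f} exams per year")
```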

  12. Total sulfur determination in residues of crude oil distillation using FT-IR/ATR and variable selection methods

    Science.gov (United States)

    Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes

    2012-04-01

    Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of the models. The pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm-1). This model produced an RMSECV of 400 mg kg-1 S and an RMSEP of 420 mg kg-1 S, with a correlation coefficient of 0.990.
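
    The siPLS search described above amounts to scoring every combination of a few spectral intervals by cross-validated error. A minimal sketch of that search is given below; the data, interval count and component number are placeholders, not the article's dataset.

```python
# Synergy-interval PLS (siPLS) style search: split the spectrum into 20 intervals
# and score every combination of 3 intervals by cross-validated RMSE.
import itertools
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def sipls_search(X, y, n_intervals=20, n_combine=3, n_components=5, cv=5):
    splits = np.array_split(np.arange(X.shape[1]), n_intervals)
    best = (np.inf, None)
    for combo in itertools.combinations(range(n_intervals), n_combine):
        cols = np.concatenate([splits[i] for i in combo])
        model = PLSRegression(n_components=min(n_components, len(cols)))
        y_hat = cross_val_predict(model, X[:, cols], y, cv=cv)
        rmsecv = np.sqrt(np.mean((y - np.ravel(y_hat)) ** 2))
        if rmsecv < best[0]:
            best = (rmsecv, combo)
    return best  # (lowest RMSECV, indices of the selected intervals)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 600))                      # stand-in for FT-IR/ATR spectra
y = X[:, 500:530].sum(axis=1) + rng.normal(scale=0.1, size=40)
print(sipls_search(X, y))
```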

  13. Sensor combination and chemometric variable selection for online monitoring of Streptomyces coelicolor fed-batch cultivations

    DEFF Research Database (Denmark)

    Ödman, Peter; Johansen, C.L.; Olsson, L.

    2010-01-01

    Fed-batch cultivations of Streptomyces coelicolor, producing the antibiotic actinorhodin, were monitored online by multiwavelength fluorescence spectroscopy and off-gas analysis. Partial least squares (PLS), locally weighted regression, and multilinear PLS (N-PLS) models were built for prediction of biomass and substrate (casamino acids) concentrations, respectively. The effect of combination of fluorescence and gas analyzer data as well as of different variable selection methods was investigated. Improved prediction models were obtained by combination of data from the two sensors and by variable selection.

  14. Developing Characterization Procedures for Qualifying both Novel Selective Laser Sintering Polymer Powders and Recycled Powders

    Energy Technology Data Exchange (ETDEWEB)

    Bajric, Sendin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-12

    Selective laser sintering (SLS) is an additive technique which is showing great promise over conventional manufacturing techniques. SLS requires certain key material properties for a polymer powder to be successfully processed into an end-use part, and therefore only a limited selection of materials is available. Furthermore, there has been evidence of a powder's quality deteriorating following each SLS processing cycle. The current investigation serves to build a path forward in identifying new SLS powder materials by developing characterization procedures for identifying key material properties as well as for detecting changes in a powder's quality. Thermogravimetric analyses, differential scanning calorimetry, and bulk density measurements were investigated.

  15. Cholinergic enhancement reduces functional connectivity and BOLD variability in visual extrastriate cortex during selective attention.

    Science.gov (United States)

    Ricciardi, Emiliano; Handjaras, Giacomo; Bernardi, Giulio; Pietrini, Pietro; Furey, Maura L

    2013-01-01

    Enhancing cholinergic function improves performance on various cognitive tasks and alters neural responses in task specific brain regions. We have hypothesized that the changes in neural activity observed during increased cholinergic function reflect an increase in neural efficiency that leads to improved task performance. The current study tested this hypothesis by assessing neural efficiency based on cholinergically-mediated effects on regional brain connectivity and BOLD signal variability. Nine subjects participated in a double-blind, placebo-controlled crossover fMRI study. Following an infusion of physostigmine (1 mg/h) or placebo, echo-planar imaging (EPI) was conducted as participants performed a selective attention task. During the task, two images comprised of superimposed pictures of faces and houses were presented. Subjects were instructed periodically to shift their attention from one stimulus component to the other and to perform a matching task using hand held response buttons. A control condition included phase-scrambled images of superimposed faces and houses that were presented in the same temporal and spatial manner as the attention task; participants were instructed to perform a matching task. Cholinergic enhancement improved performance during the selective attention task, with no change during the control task. Functional connectivity analyses showed that the strength of connectivity between ventral visual processing areas and task-related occipital, parietal and prefrontal regions reduced significantly during cholinergic enhancement, exclusively during the selective attention task. Physostigmine administration also reduced BOLD signal temporal variability relative to placebo throughout temporal and occipital visual processing areas, again during the selective attention task only. Together with the observed behavioral improvement, the decreases in connectivity strength throughout task-relevant regions and BOLD variability within stimulus

  16. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
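
    The abstract of this record is truncated, but the question it raises — how accurate and stable a variable ranking is for random forest models — can be illustrated with a short sketch. The resampling scheme, permutation-importance criterion and synthetic data below are illustrative assumptions, not the authors' protocol.

```python
# Rank predictors by random-forest permutation importance and check how stable
# the ranking is across repeated bootstrap resamples.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))
y = 2 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=300)

rankings = []
for seed in range(10):                                  # repeat on bootstrap resamples
    idx = rng.integers(0, len(y), len(y))
    rf = RandomForestRegressor(n_estimators=200, random_state=seed).fit(X[idx], y[idx])
    imp = permutation_importance(rf, X[idx], y[idx], n_repeats=5, random_state=seed)
    rankings.append(np.argsort(-imp.importances_mean))

rankings = np.array(rankings)
# stability: how often each variable appears in the top 3 across resamples
top3_freq = [(rankings[:, :3] == j).any(axis=1).mean() for j in range(X.shape[1])]
print(np.round(top3_freq, 2))
```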

  17. Fuzzy target selection using RFM variables

    NARCIS (Netherlands)

    Kaymak, U.

    2001-01-01

    An important data mining problem from the world of direct marketing is target selection. The main task in target selection is the determination of potential customers for a product from a client database. Target selection algorithms identify the profiles of customer groups for a particular product,

  18. Role of maturity timing in selection procedures and in the specialisation of playing positions in youth basketball

    NARCIS (Netherlands)

    te Wierike, Sanne Cornelia Maria; Elferink-Gemser, Marije Titia; Tromp, Eveline Jenny Yvonne; Vaeyens, Roel; Visscher, Chris

    2015-01-01

    This study investigated the role of maturity timing in selection procedures and in the specialisation of playing positions in youth male basketball. Forty-three talented Dutch players (14.66 +/- 1.09years) participated in this study. Maturity timing (age at peak height velocity), anthropometric,

  19. Selecting a Risk-Based SQC Procedure for a HbA1c Total QC Plan.

    Science.gov (United States)

    Westgard, Sten A; Bayat, Hassan; Westgard, James O

    2017-09-01

    Recent US practice guidelines and laboratory regulations for quality control (QC) emphasize the development of QC plans and the application of risk management principles. The US Clinical Laboratory Improvement Amendments (CLIA) now includes an option to comply with QC regulations by developing an individualized QC plan (IQCP) based on a risk assessment of the total testing process. The Clinical and Laboratory Standards Institute (CLSI) has provided new practice guidelines for application of risk management to QC plans and statistical QC (SQC). We describe an alternative approach for developing a total QC plan (TQCP) that includes a risk-based SQC procedure. CLIA compliance is maintained by analyzing at least 2 levels of controls per day. A Sigma-Metric SQC Run Size nomogram provides a graphical tool to simplify the selection of risk-based SQC procedures. Current HbA1c method performance, as demonstrated by published method validation studies, is estimated to be 4-Sigma quality at best. Optimal SQC strategies require more QC than the CLIA minimum requirement of 2 levels per day. More complex control algorithms, more control measurements, and a bracketed mode of operation are needed to assure the intended quality of results. A total QC plan with a risk-based SQC procedure provides a simpler alternative to an individualized QC plan. A Sigma-Metric SQC Run Size nomogram provides a practical tool for selecting appropriate control rules, numbers of control measurements, and run size (or frequency of SQC). Applications demonstrate the need for continued improvement of analytical performance of HbA1c laboratory methods.
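
    The sigma-metric that underlies the run-size nomogram mentioned above is computed from the allowable total error (TEa), the observed bias and the imprecision (CV), all expressed in percent. The numbers below are illustrative only; they simply reproduce the 4-Sigma order of magnitude cited for current HbA1c methods.

```python
# Sigma-metric for a laboratory method: (TEa - |bias|) / CV, all in percent.
def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. HbA1c with TEa = 6%, bias = 1%, CV = 1.25%  ->  sigma = 4.0
print(sigma_metric(6.0, 1.0, 1.25))
```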

  20. Diagnostic Value of Selected Echocardiographic Variables to Identify Pulmonary Hypertension in Dogs with Myxomatous Mitral Valve Disease.

    Science.gov (United States)

    Tidholm, A; Höglund, K; Häggström, J; Ljungvall, I

    2015-01-01

    Pulmonary hypertension (PH) is commonly associated with myxomatous mitral valve disease (MMVD). Because dogs with PH may present without measurable tricuspid regurgitation (TR), it would be useful to investigate echocardiographic variables that can identify PH. To investigate associations between estimated systolic TR pressure gradient (TRPG) and dog characteristics and selected echocardiographic variables. 156 privately owned dogs. Prospective observational study comparing the estimations of TRPG with dog characteristics and selected echocardiographic variables in dogs with MMVD and measurable TR. Tricuspid regurgitation pressure gradient was significantly (P modeled as linear variables LA/Ao (P modeled as second order polynomial variables: AT/DT (P = .0039) and LVIDDn (P value for the final model was 0.45 and receiver operating characteristic curve analysis suggested the model's performance to predict PH, defined as 36, 45, and 55 mmHg as fair (area under the curve [AUC] = 0.80), good (AUC = 0.86), and excellent (AUC = 0.92), respectively. In dogs with MMVD, the presence of PH might be suspected with the combination of decreased PA AT/DT, increased RVIDDn and LA/Ao, and a small or great LVIDDn. Copyright © 2015 The Authors Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  1. Extreme precipitation variability, forage quality and large herbivore diet selection in arid environments

    Science.gov (United States)

    Cain, James W.; Gedir, Jay V.; Marshal, Jason P.; Krausman, Paul R.; Allen, Jamison D.; Duff, Glenn C.; Jansen, Brian; Morgart, John R.

    2017-01-01

    Nutritional ecology forms the interface between environmental variability and large herbivore behaviour, life history characteristics, and population dynamics. Forage conditions in arid and semi-arid regions are driven by unpredictable spatial and temporal patterns in rainfall. Diet selection by herbivores should be directed towards overcoming the most pressing nutritional limitation (i.e. energy, protein [nitrogen, N], moisture) within the constraints imposed by temporal and spatial variability in forage conditions. We investigated the influence of precipitation-induced shifts in forage nutritional quality and subsequent large herbivore responses across widely varying precipitation conditions in an arid environment. Specifically, we assessed seasonal changes in diet breadth and forage selection of adult female desert bighorn sheep Ovis canadensis mexicana in relation to potential nutritional limitations in forage N, moisture and energy content (as proxied by dry matter digestibility, DMD). Succulents were consistently high in moisture but low in N and grasses were low in N and moisture until the wet period. Nitrogen and moisture content of shrubs and forbs varied among seasons and climatic periods, whereas trees had consistently high N and moderate moisture levels. Shrubs, trees and succulents composed most of the seasonal sheep diets but had little variation in DMD. Across all seasons during drought and during summer with average precipitation, forages selected by sheep were higher in N and moisture than that of available forage. Differences in DMD between sheep diets and available forage were minor. Diet breadth was lowest during drought and increased with precipitation, reflecting a reliance on few key forage species during drought. Overall, forage selection was more strongly associated with N and moisture content than energy content. Our study demonstrates that unlike north-temperate ungulates which are generally reported to be energy-limited, N and moisture

  2. Impact of strong selection for the PrP major gene on genetic variability of four French sheep breeds (Open Access publication

    Directory of Open Access Journals (Sweden)

    Pantano Thais

    2008-11-01

    Full Text Available Abstract Effective selection on the PrP gene has been implemented since October 2001 in all French sheep breeds. After four years, the ARR "resistant" allele frequency increased by about 35% in young males. The aim of this study was to evaluate the impact of this strong selection on genetic variability. It is focussed on four French sheep breeds and based on the comparison of two groups of 94 animals within each breed: the first group of animals was born before the selection began, and the second, 3–4 years later. Genetic variability was assessed using genealogical and molecular data (29 microsatellite markers). The expected loss of genetic variability on the PrP gene was confirmed. Moreover, among the five markers located in the PrP region, only the three closest ones were affected. The evolution of the number of alleles, heterozygote deficiency within population, expected heterozygosity and the Reynolds distances agreed with the criteria from pedigree and pointed out that neutral genetic variability was not much affected. This trend depended on the breed, i.e. on their initial states (population size, PrP frequencies) and on the selection strategies for improving scrapie resistance while carrying out selection for production traits.

  3. Selectivity assessment of an arsenic sequential extraction procedure for evaluating mobility in mine wastes

    International Nuclear Information System (INIS)

    Drahota, Petr; Grösslová, Zuzana; Kindlová, Helena

    2014-01-01

    Highlights: • Extraction efficiency and selectivity of phosphate and oxalate were tested. • Pure As-bearing mineral phases and mine wastes were used. • The reagents were found to be specific and selective for most major forms of As. • An optimized sequential extraction scheme for mine wastes has been developed. • It has been tested over model mineral mixtures and natural mine waste materials. - Abstract: An optimized sequential extraction (SE) scheme for mine waste materials has been developed and tested for As partitioning over a range of pure As-bearing mineral phases, their model mixtures, and natural mine waste materials. This optimized SE procedure employs five extraction steps: (1) nitrogen-purged deionized water, 10 h; (2) 0.01 M NH4H2PO4, 16 h; (3) 0.2 M NH4-oxalate in the dark, pH 3, 2 h; (4) 0.2 M NH4-oxalate, pH 3/80 °C, 4 h; (5) KClO3/HCl/HNO3 digestion. Selectivity and specificity tests on natural mine wastes and major pure As-bearing mineral phases showed that these As fractions appear to be primarily associated with: (1) readily soluble; (2) adsorbed; (3) amorphous and poorly-crystalline arsenates, oxides and hydroxosulfates of Fe; (4) well-crystalline arsenates, oxides, and hydroxosulfates of Fe; as well as (5) sulfides and arsenides. The specificity and selectivity of extractants, and the reproducibility of the optimized SE procedure were further verified by artificial model mineral mixtures and different natural mine waste materials. Partitioning data for extraction steps 3, 4, and 5 showed good agreement with those calculated in the model mineral mixtures (<15% difference), as well as that expected in different natural mine waste materials. The sum of the As recovered in the different extractant pools was not significantly different (89–112%) from the results for acid digestion. This suggests that the optimized SE scheme can reliably be employed for As partitioning in mine waste materials

  4. A power set-based statistical selection procedure to locate susceptible rare variants associated with complex traits with sequencing data.

    Science.gov (United States)

    Sun, Hokeun; Wang, Shuang

    2014-08-15

    Existing association methods for rare variants from sequencing data have focused on aggregating variants in a gene or a genetic region because of the fact that analysing individual rare variants is underpowered. However, these existing rare variant detection methods are not able to identify which rare variants in a gene or a genetic region of all variants are associated with the complex diseases or traits. Once phenotypic associations of a gene or a genetic region are identified, the natural next step in the association study with sequencing data is to locate the susceptible rare variants within the gene or the genetic region. In this article, we propose a power set-based statistical selection procedure that is able to identify the locations of the potentially susceptible rare variants within a disease-related gene or a genetic region. The selection performance of the proposed selection procedure was evaluated through simulation studies, where we demonstrated the feasibility and superior power over several comparable existing methods. In particular, the proposed method is able to handle the mixed effects when both risk and protective variants are present in a gene or a genetic region. The proposed selection procedure was also applied to the sequence data on the ANGPTL gene family from the Dallas Heart Study to identify potentially susceptible rare variants within the trait-related genes. An R package 'rvsel' can be downloaded from http://www.columbia.edu/∼sw2206/ and http://statsun.pusan.ac.kr. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. A Quantile Regression Approach to Estimating the Distribution of Anesthetic Procedure Time during Induction.

    Directory of Open Access Journals (Sweden)

    Hsin-Lun Wu

    Full Text Available Although procedure time analyses are important for operating room management, it is not easy to extract useful information from clinical procedure time data. A novel approach was proposed to analyze procedure time during anesthetic induction. A two-step regression analysis was performed to explore influential factors of anesthetic induction time (AIT). Linear regression with stepwise model selection was used to select significant correlates of AIT, and quantile regression was then employed to illustrate the dynamic relationships between AIT and the selected variables at distinct quantiles. A total of 1,060 patients were analyzed. First- and second-year residents (R1-R2) required longer AIT than third- and fourth-year residents and attending anesthesiologists (p = 0.006). Factors prolonging AIT included American Society of Anesthesiologists physical status ≧ III, arterial, central venous and epidural catheterization, and use of bronchoscopy. Presence of the surgeon before induction decreased AIT (p < 0.001). Types of surgery also had a significant influence on AIT. Quantile regression satisfactorily estimated the extra time needed to complete induction for each influential factor at distinct quantiles. Our analysis of AIT demonstrated the benefit of quantile regression analysis in providing a more comprehensive view of the relationships between procedure time and related factors. This novel two-step regression approach has potential applications to procedure time analysis in operating room management.
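
    A compact sketch of the two-step approach (ordinary least squares screening followed by quantile regression at several quantiles) is given below. The column names, effect sizes and data are hypothetical placeholders, not the study's dataset.

```python
# Step 1: screen correlates with OLS; Step 2: quantile regression of induction time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ait_min": rng.gamma(shape=4, scale=3, size=n),   # anesthetic induction time, minutes
    "junior_resident": rng.integers(0, 2, n),          # R1-R2 vs senior staff
    "asa_3_or_higher": rng.integers(0, 2, n),
    "arterial_line": rng.integers(0, 2, n),
})
df["ait_min"] += 3 * df["junior_resident"] + 4 * df["arterial_line"]

formula = "ait_min ~ junior_resident + asa_3_or_higher + arterial_line"
print(smf.ols(formula, df).fit().pvalues.round(3))     # step 1: candidate screening
for q in (0.25, 0.5, 0.75, 0.9):                       # step 2: quantile regression
    fit = smf.quantreg(formula, df).fit(q=q)
    print(q, round(float(fit.params["arterial_line"]), 2))
```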

  6. 28 CFR 30.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration, § 30.6 — What procedures apply to the selection of programs and activities under these regulations? DEPARTMENT OF... consult with local elected officials. (b) Each state that adopts a process shall notify the Attorney...

  7. 49 CFR 17.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation, § 17.6 — What procedures apply to the selection of programs and activities under these regulations? Office of the Secretary of Transportation, INTERGOVERNMENTAL REVIEW OF DEPARTMENT OF TRANSPORTATION PROGRAMS AND ACTIVITIES. § 17.6 What...

  8. The usefulness of the Basic Question Procedure for determining non-response bias in substantive variables - A test of four telephone questionnaires

    NARCIS (Netherlands)

    van Goor, H.; van Goor, A.

    2007-01-01

    The Basic Question Procedure (BQP) is a method for determining non-response bias. The BQP involves asking one basic question - that is, the question relating to the central substantive variable of the study - of those persons who refuse to participate in the survey. We studied the usefulness of this

  9. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Full Text Available Market segmentation presents one of the key concepts of modern marketing. The main goal of market segmentation is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and therefore gain a short- or long-term competitive advantage in the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used for variable selection, identifying, from the initial pool of eleven variables, those which are statistically significant for explaining the dependent variable. The selected variables were afterwards included in the CHAID procedure that generated the predictive market segmentation model. The model results are presented on a concrete empirical example in the following form: summary model results, CHAID tree, Gain chart, Index chart, risk and classification tables.
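
    A minimal sketch of the two-stage workflow (regression-based screening, then a tree that forms the segments) is shown below. scikit-learn has no CHAID implementation, so a CART DecisionTreeClassifier stands in for it, and the data and thresholds are synthetic placeholders rather than the paper's survey data.

```python
# Stage 1: screen predictors with logistic regression; Stage 2: grow a tree on
# the retained variables to define customer segments.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 11))                    # initial pool of eleven variables
y = (X[:, 0] + 0.8 * X[:, 4] + rng.normal(size=1000) > 0).astype(int)

logit = LogisticRegression(max_iter=1000).fit(X, y)
keep = np.where(np.abs(logit.coef_[0]) > 0.3)[0]   # crude stand-in for significance testing
print("retained variables:", keep)

segment_tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X[:, keep], y)
print("number of segments (leaves):", segment_tree.get_n_leaves())
```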

  10. Selective Intra-procedural AAA sac Embolization During EVAR Reduces the Rate of Type II Endoleak.

    Science.gov (United States)

    Mascoli, C; Freyrie, A; Gargiulo, M; Gallitto, E; Pini, R; Faggioli, G; Serra, C; De Molo, C; Stella, A

    2016-05-01

    The pre-treatment presence of at least six efferent patent vessels (EPV) from the AAA sac and/or AAA thrombus volume ratio (VR%) AAA sac embolization (Group A, 2012-2013) were retrospectively selected and compared with a control group of patients with the same p-MRF, who underwent EVAR without intra-procedural sac embolization (Group B, 2008-2010). The presence of ELIIp was evaluated by duplex ultrasound at 0 and 6 months, and by contrast enhanced ultrasound at 12 months. The association between AAA diameter, age, COPD, smoking, anticoagulant therapy, and AAA sac embolization with ELIIp was evaluated using multiple logistic regression. The primary endpoint was the effectiveness of the intra-procedural AAA sac embolization for ELIIp prevention. Secondary endpoints were AAA sac evolution and freedom from ELIIp and embolization related re-interventions at 6-12 months. Seventy patients were analyzed: 26 Group A and 44 Group B; the groups were homogeneous for clinical/morphological characteristics. In Group A the median number of coils positioned in AAA sac was 4.1 (IQR 1). There were no complications related to the embolization procedures. A significantly lower number of ELIIp was detected in Group A than in Group B (8/26 vs. 33/44, respectively, p AAA sac embolization was the only factor independently associated with freedom from ELIIp at 6 (OR 0.196, 95% CI 0.06-0.63; p = .007) and 12 months (OR 0.098, 95% CI 0.02-0.35; p AAA sac diameter shrinkage were detected between the two groups at 6-12 months (p = .42 and p = .58, respectively). Freedom from ELIIp related and embolization related re-interventions was 100% in both groups, at 6 and 12 months. Selective intra-procedural AAA sac embolization in patients with p-MRF is safe and could be an effective method to reduce ELIIp. Further studies are mandatory to support these results at long-term follow up. Copyright © 2015 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  11. A materials selection procedure for sandwiched beams via parametric optimization with applications in automotive industry

    International Nuclear Information System (INIS)

    Aly, Mohamed F.; Hamza, Karim T.; Farag, Mahmoud M.

    2014-01-01

    Highlights: • Sandwich panels optimization model. • Sandwich panels design procedure. • Study of sandwich panels for automotive vehicle flooring. • Study of sandwich panels for truck cabin exterior. - Abstract: The future of the automotive industry faces many challenges in meeting increasingly strict restrictions on emissions, energy usage and recyclability of components, alongside the need to maintain cost competitiveness. Weight reduction through innovative design of components and proper material selection can have a profound impact towards attaining such goals, since most of the lifecycle energy usage occurs during the operation phase of a vehicle. In electric and hybrid vehicles, weight reduction has the additional important effect of extending the electric-mode driving range between stops or gasoline-mode operation. This paper adopts parametric models for design optimization and material selection of sandwich panels with the objective of weight and cost minimization subject to structural integrity constraints such as strength, stiffness and buckling resistance. The proposed design procedure employs a pre-compiled library of candidate sandwich panel material combinations, for which optimization of the layered thicknesses is conducted and the best one is reported. Example demonstration studies from the automotive industry are presented for the replacement of aluminum and steel panels with polypropylene-filled sandwich panel alternatives

  12. Ultrasonic variables affecting inspection

    International Nuclear Information System (INIS)

    Lautzenheiser, C.E.; Whiting, A.R.; McElroy, J.T.

    1977-01-01

    There are many variables which affect the detection of defects and the reproducibility of results when utilizing ultrasonic techniques. The most important variable is the procedure, as this document specifies, to a great extent, the controls that are exercised over the other variables. Equally important is personnel, with regard to training, qualification, integrity, data recording, and data analysis. Although the data are very limited, they indicate that, if the procedure is carefully controlled, reliability of defect detection and reproducibility of results are both approximately 90 percent. For reliability of detection, this applies to relatively small defects, as reliability increases substantially as defect size increases above the recording limit. (author)

  13. Intra-individual variability as a predictor of learning

    Directory of Open Access Journals (Sweden)

    Matija Svetina

    2004-05-01

    Full Text Available Learning is one of the most important aspects of children's behaviour. A new theory that emerged from evolutionary principles and information-processing models assumes learning to be run by two basic mechanisms: variability and selection. The theory is based on the underlying assumption that intra-individual variability of the strategies that children use to solve a problem is a core mechanism of learning change. This assumption was tested in the case of the multiple classification (MC) task. Thirty 6-year-old children were tested for intelligence, short-term memory, and MC. The procedure followed the classical pre-test/learning/post-test scheme. The amount of learning was measured as the percentage of correct answers before and after the learning sessions, whereas intra-individual variability was assessed through children's explanations of their answers on MC problems. The results showed that intra-individual variability explained learning changes beyond inter-individual differences in intelligence or short-term memory. Although the results raised some new questions to be considered in further research, the data supported the hypothesis of intra-individual variability as a predictor of learning change.

  14. Field screening procedures for determining the presence of volatile organic compounds in soil

    International Nuclear Information System (INIS)

    Crockett, A.B.; DeHaan, M.S.

    1991-01-01

    Many field screening procedures have been used to detect the presence of volatile organic compounds (VOC) in soils but almost none have been documented and verified. Users of these procedures have not really known whether their objectives in screening were met. A reliable VOC screening procedure could significantly reduce the number of samples currently being submitted to laboratories, thereby reducing costs and improving site characterization. The Environmental Protection Agency's Environmental Monitoring Systems Laboratory in Las Vegas (EMSL-LV) has therefore sponsored a research effort to evaluate and improve headspace methods for screening soils for VOC in the field. The research involved comparing several extraction procedures using soils from actual waste sites, and determining the agitation and mixing necessary to achieve equilibrium. Headspace was analyzed using a relatively simple portable gas chromatograph with a short column. The results were variable and show that several procedures should be attempted and the results evaluated before selecting a screening procedure. 10 refs., 6 tabs

  15. 49 CFR 542.1 - Procedures for selecting new light duty truck lines that are likely to have high or low theft rates.

    Science.gov (United States)

    2010-10-01

    ... § 542.1 — Procedures for selecting new light duty truck lines that are likely to have high or low theft rates. (49 Transportation; PROCEDURES FOR SELECTING LIGHT DUTY TRUCK LINES TO BE COVERED BY THE THEFT...) (a) Scope. This section sets forth the procedures for motor vehicle manufacturers...

  16. A sequential extraction procedure to determine Ra and U isotopes by alpha-particle spectrometry in selective leachates

    International Nuclear Information System (INIS)

    Aguado, J.L.; Bolivar, J.P.; San-Miguel, E.G.; Garcia-Tenorio, R.

    2003-01-01

    A radiochemical sequential extraction procedure has been developed in our laboratory to determine 226Ra and 234,238U by alpha spectrometry in environmental samples. This method has been validated for both radionuclides by comparing, in selected samples, the values obtained through its application with the results obtained by applying alternative procedures. The recoveries obtained, counting periods applied and background levels found in the alpha spectra give detection limits suitable for determining Ra and U in the operational forms defined in contaminated riverbed sediments. Results obtained in these speciation studies show that 226Ra and 234,238U contamination tends to be associated with precipitated forms of the sediments. (author)

  17. Patient radiation dose audits for fluoroscopically guided interventional procedures

    International Nuclear Information System (INIS)

    Balter, Stephen; Rosenstein, Marvin; Miller, Donald L.; Schueler, Beth; Spelic, David

    2011-01-01

    Purpose: Quality management for any use of medical x-ray imaging should include monitoring of radiation dose. Fluoroscopically guided interventional (FGI) procedures are inherently clinically variable and have the potential for inducing deterministic injuries in patients. The use of a conventional diagnostic reference level is not appropriate for FGI procedures. A similar but more detailed quality process for management of radiation dose in FGI procedures is described. Methods: A method that takes into account both the inherent variability of FGI procedures and the risk of deterministic injuries from these procedures is suggested. The substantial radiation dose level (SRDL) is an absolute action level (with regard to patient follow-up) below which skin injury is highly unlikely and above which skin injury is possible. The quality process for FGI procedures collects data from all instances of a given procedure from a number of facilities into an advisory data set (ADS). An individual facility collects a facility data set (FDS) comprised of all instances of the same procedure at that facility. The individual FDS is then compared to the multifacility ADS with regard to the overall shape of the dose distributions and the percent of instances in both the ADS and the FDS that exceed the SRDL. Results: Samples of an ADS and FDS for percutaneous coronary intervention, using the dose metric of reference air kerma (K_a,r) (i.e., the cumulative air kerma at the reference point), are used to illustrate the proposed quality process for FGI procedures. Investigation is warranted whenever the FDS is noticeably different from the ADS for the specific FGI procedure and particularly in two circumstances: (1) when the facility's local median K_a,r exceeds the 75th percentile of the ADS and (2) when the percent of instances where K_a,r exceeds the facility-selected SRDL is greater for the FDS than for the ADS. Conclusions: Analysis of the two data sets (ADS and FDS) and of the
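
    The two investigation triggers listed above reduce to a few lines of arithmetic. The synthetic K_a,r values and the 5 Gy threshold below are illustrative assumptions, not values from the paper.

```python
# Compare a facility data set (FDS) against an advisory data set (ADS):
# (1) local median vs the ADS 75th percentile, (2) fraction of cases above the SRDL.
import numpy as np

rng = np.random.default_rng(7)
ads = rng.lognormal(mean=0.0, sigma=0.6, size=5000)   # advisory data set (many facilities), Gy
fds = rng.lognormal(mean=0.2, sigma=0.6, size=200)    # one facility's data set, Gy
srdl = 5.0                                            # substantial radiation dose level, Gy

flag_median = np.median(fds) > np.percentile(ads, 75)
flag_srdl = (fds > srdl).mean() > (ads > srdl).mean()
print("local median exceeds ADS 75th percentile:", flag_median)
print("SRDL exceedance fraction higher locally: ", flag_srdl)
```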

  18. Shade selection performed by novice dental professionals and colorimeter.

    Science.gov (United States)

    Klemetti, E; Matela, A-M; Haag, P; Kononen, M

    2006-01-01

    The objective of this study was to test inter-observer variability in shade selection for porcelain restorations, using three different shade guides: Vita Lumin Vacuum, Vita 3D-Master and Procera. Nineteen young dental professionals acted as observers. The results were also compared with those of a digital colorimeter (Shade Eye Ex; Shofu, Japan). Regarding repeatability, no significant differences were found between the three shade guides, although repeatability was relatively low (33-43%). Agreement with the colorimetric results was also low (8-34%). In conclusion, shade selection shows moderate to great inter-observer variation. In teaching and standardizing the shade selection procedure, a digital colorimeter may be a useful educational tool.

  19. Genotype-by-environment interactions leads to variable selection on life-history strategy in Common Evening Primrose (Oenothera biennis).

    Science.gov (United States)

    Johnson, M T J

    2007-01-01

    Monocarpic plant species, where reproduction is fatal, frequently exhibit variation in the length of their prereproductive period prior to flowering. If this life-history variation in flowering strategy has a genetic basis, genotype-by-environment interactions (G x E) may maintain phenotypic diversity in flowering strategy. The native monocarpic plant Common Evening Primrose (Oenothera biennis L., Onagraceae) exhibits phenotypic variation for annual vs. biennial flowering strategies. I tested whether there was a genetic basis to variation in flowering strategy in O. biennis, and whether environmental variation causes G x E that imposes variable selection on flowering strategy. In a field experiment, I randomized more than 900 plants from 14 clonal families (genotypes) into five distinct habitats that represented a natural productivity gradient. G x E strongly affected the lifetime fruit production of O. biennis, with the rank-order in relative fitness of genotypes changing substantially between habitats. I detected genetic variation in annual vs. biennial strategies in most habitats, as well as a G x E effect on flowering strategy. This variation in flowering strategy was correlated with genetic variation in relative fitness, and phenotypic and genotypic selection analyses revealed that environmental variation resulted in variable directional selection on annual vs. biennial strategies. Specifically, a biennial strategy was favoured in moderately productive environments, whereas an annual strategy was favoured in low-productivity environments. These results highlight the importance of variable selection for the maintenance of genetic variation in the life-history strategy of a monocarpic plant.

  20. On the Computation of the Efficient Frontier of the Portfolio Selection Problem

    Directory of Open Access Journals (Sweden)

    Clara Calvo

    2012-01-01

    Full Text Available An easy-to-use procedure is presented for improving the ε-constraint method for computing the efficient frontier of the portfolio selection problem endowed with additional cardinality and semicontinuous variable constraints. The proposed method provides not only a numerical plotting of the frontier but also an analytical description of it, including the explicit equations of the arcs of parabola it comprises and the change points between them. This information is useful for performing a sensitivity analysis as well as for providing additional criteria to the investor in order to select an efficient portfolio. Computational results are provided to test the efficiency of the algorithm and to illustrate its applications. The procedure has been implemented in Mathematica.
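
    The ε-constraint idea described above (sweep a target return and minimize variance subject to it) can be sketched numerically as below. The article's contribution is an analytical description of the frontier under cardinality and semicontinuous-variable constraints; this sketch omits those constraints, uses random data, and is written in Python rather than the authors' Mathematica implementation.

```python
# Basic epsilon-constraint sweep for the mean-variance efficient frontier.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
returns = rng.normal(0.001, 0.01, size=(500, 6))      # synthetic daily returns, 6 assets
mu, cov = returns.mean(axis=0), np.cov(returns.T)

def min_variance_for_target(eps):
    n = len(mu)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1},
            {"type": "ineq", "fun": lambda w: w @ mu - eps}]   # epsilon-constraint on return
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                   bounds=[(0, 1)] * n, constraints=cons, method="SLSQP")
    return res.fun

for eps in np.linspace(mu.min(), mu.max(), 5):
    print(f"target return {eps:.5f} -> portfolio variance {min_variance_for_target(eps):.3e}")
```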

  1. Effects of selected design variables on three ramp, external compression inlet performance. [boundary layer control bypasses, and mass flow rate

    Science.gov (United States)

    Kamman, J. H.; Hall, C. L.

    1975-01-01

    Two inlet performance tests and one inlet/airframe drag test were conducted in 1969 at the NASA-Ames Research Center. The basic inlet system was two-dimensional, three ramp (overhead), external compression, with variable capture area. The data from these tests were analyzed to show the effects of selected design variables on the performance of this type of inlet system. The inlet design variables investigated include inlet bleed, bypass, operating mass flow ratio, inlet geometry, and variable capture area.

  2. Selective versus routine patch metal allergy testing to select bar material for the Nuss procedure in 932 patients over 10years.

    Science.gov (United States)

    Obermeyer, Robert J; Gaffar, Sheema; Kelly, Robert E; Kuhn, M Ann; Frantz, Frazier W; McGuire, Margaret M; Paulson, James F; Kelly, Cynthia S

    2018-02-01

    The aim of the study was to determine the role of patch metal allergy testing to select bar material for the Nuss procedure. An IRB-approved (11-04-WC-0098) single institution retrospective, cohort study comparing selective versus routine patch metal allergy testing to select stainless steel or titanium bars for Nuss repair was performed. In Cohort A (9/2004-1/2011), selective patch testing was performed based on clinical risk factors. In Cohort B (2/2011-9/2014), all patients were patch tested. The cohorts were compared for incidence of bar allergy and resultant premature bar loss. Risk factors for stainless steel allergy or positive patch test were evaluated. Cohort A had 628 patients with 63 (10.0%) selected for patch testing, while all 304 patients in Cohort B were tested. Over 10years, 15 (1.8%) of the 842 stainless steel Nuss repairs resulted in a bar allergy, and 5 had a negative preoperative patch test. The incidence of stainless steel bar allergy (1.8% vs 1.7%, p=0.57) and resultant bar loss (0.5% vs 1.3%, p=0.23) was not statistically different between cohorts. An allergic reaction to a stainless steel bar or a positive patch test was more common in females (OR=2.3, pbar allergies occur at a low incidence with either routine or selective patch metal allergy testing. If selective testing is performed, it is advisable in females and patients with a personal or family history of metal sensitivity. A negative preoperative patch metal allergy test does not preclude the possibility of a postoperative stainless steel bar allergy. Level III Treatment Study and Study of Diagnostic Test. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Realism of procedural task trainers in a pediatric emergency medicine procedures course.

    Science.gov (United States)

    Shefrin, Allan; Khazei, Afshin; Cheng, Adam

    2015-01-01

    Pediatric emergency medicine (PEM) physicians have minimal experience in life saving procedures and have turned to task trainers to learn these skills. Realism of these models is an important consideration that has received little study. PEM physicians and trainees participated in a day long procedural training course that utilized commercially available and homemade task trainers to teach pericardiocentesis, chest tube insertion, cricothyroidotomy and central line insertion. Participants rated the realism of the task trainers as part of a post-course survey. The homemade task trainers received variable realism ratings, with 91% of participants rating the pork rib chest tube model as realistic, 82% rating the gelatin pericardiocentesis mold as realistic and 36% rating the ventilator tubing cricothyroidotomy model as realistic. Commercial trainers also received variable ratings, with 45% rating the chest drain and pericardiocentesis simulator as realistic, 74% rating the crichotracheotomy trainer as realistic and 80% rating the central line insertion trainer as realistic. Task training models utilized in our course received variable realism ratings. When deciding what type of task trainer to use future courses should carefully consider the desired aspect of realism, and how it aligns with the procedural skill, balanced with cost considerations.

  4. Online Monitoring of Copper Damascene Electroplating Bath by Voltammetry: Selection of Variables for Multiblock and Hierarchical Chemometric Analysis of Voltammetric Data

    Directory of Open Access Journals (Sweden)

    Aleksander Jaworski

    2017-01-01

    Full Text Available The Real Time Analyzer (RTA utilizing DC- and AC-voltammetric techniques is an in situ, online monitoring system that provides a complete chemical analysis of different electrochemical deposition solutions. The RTA employs multivariate calibration when predicting concentration parameters from a multivariate data set. Although the hierarchical and multiblock Principal Component Regression- (PCR- and Partial Least Squares- (PLS- based methods can handle data sets even when the number of variables significantly exceeds the number of samples, it can be advantageous to reduce the number of variables to obtain improvement of the model predictions and better interpretation. This presentation focuses on the introduction of a multistep, rigorous method of data-selection-based Least Squares Regression, Simple Modeling of Class Analogy modeling power, and, as a novel application in electroanalysis, Uninformative Variable Elimination by PLS and by PCR, Variable Importance in the Projection coupled with PLS, Interval PLS, Interval PCR, and Moving Window PLS. Selection criteria of the optimum decomposition technique for the specific data are also demonstrated. The chief goal of this paper is to introduce to the community of electroanalytical chemists numerous variable selection methods which are well established in spectroscopy and can be successfully applied to voltammetric data analysis.
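
    Of the selection criteria listed above, Variable Importance in the Projection (VIP) is the simplest to reproduce. Below is a short sketch of VIP scores computed from a fitted scikit-learn PLS model; the voltammetric data are synthetic stand-ins and the VIP > 1 cut-off is the usual rule of thumb, not a value from the article.

```python
# VIP scores from a PLS regression model; variables with VIP > 1 are commonly retained.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = w.shape[0]
    ss = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)   # variance explained per component
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

rng = np.random.default_rng(11)
X = rng.normal(size=(60, 120))                              # stand-in for voltammetric scans
y = X[:, 10] - 0.5 * X[:, 40] + rng.normal(scale=0.1, size=60)
pls = PLSRegression(n_components=4).fit(X, y)
vip = vip_scores(pls)
print("variables with VIP > 1:", np.where(vip > 1)[0])
```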

  5. Statistical test data selection for reliability evalution of process computer software

    International Nuclear Information System (INIS)

    Volkmann, K.P.; Hoermann, H.; Ehrenberger, W.

    1976-01-01

    The paper presents a concept for converting knowledge about the characteristics of process states into practicable procedures for the statistical selection of test cases in testing process computer software. Process states are defined as vectors whose components consist of values of input variables lying in discrete positions or within given limits. Two approaches for test data selection, based on knowledge about cases of demand, are outlined referring to a purely probabilistic method and to the mathematics of stratified sampling. (orig.) [de
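
    The stratified-sampling idea outlined in the abstract can be illustrated as follows; the strata, their probabilities and the number of input variables are hypothetical placeholders, not the paper's demand model.

```python
# Stratified selection of software test cases: process states are grouped into
# demand strata and test vectors are drawn in proportion to each stratum's
# assumed probability of occurrence.
import numpy as np

rng = np.random.default_rng(13)
strata = {"normal_operation": 0.90, "disturbance": 0.08, "accident_demand": 0.02}
n_tests, n_inputs = 1000, 5

test_cases = {}
for name, prob in strata.items():
    n_stratum = int(round(prob * n_tests))
    # each test case is a vector of input-variable values within the stratum's limits
    test_cases[name] = rng.uniform(0.0, 1.0, size=(n_stratum, n_inputs))
    print(f"{name}: {n_stratum} test cases")
```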

  6. INDUCED GENETIC VARIABILITY AND SELECTION FOR HIGH YIELDING MUTANTS IN BREAD WHEAT(TRITICUM AESTIVUM L.)

    International Nuclear Information System (INIS)

    SOBIEH, S.EL-S.S.

    2007-01-01

    This study was conducted during the two winter seasons of 2004/2005 and 2005/2006 at the experimental farm of the Plant Research Department, Nuclear Research Centre, AEA, Egypt. The aim of this study was to determine the effect of gamma rays (150, 200 and 250 Gy) on the means of yield and its attributes for the exotic wheat variety (vir-25) and to induce genetic variability that permits visual selection within the irradiated populations, as well as to determine differences in seed protein patterns between the vir-25 parent variety and some selectants in the M2 generation. The results showed that the different doses of gamma rays had a non-significant effect on the mean value of yield/plant and a significant effect on the mean values of its attributes. On the other hand, considerable genetic variability was generated as a result of applying gamma irradiation. The highest amount of induced genetic variability was detected for number of grains/spike, spike length and number of spikes/plant. Additionally, these three traits exhibited strong association with grain yield/plant; hence, they were used as criteria for selection. Some variant plants were selected from the 250 Gy radiation treatment, with 2-10 spikes per plant. These variant plants exhibited increases in spike length and number of grains/spike. The results also revealed that protein electrophoresis patterns varied in the number and position of bands from one genotype to another, and various genotypes share bands with molecular weights of 31.4 and 3.2 KD. Many bands were found to be specific to a genotype, and the nine wheat mutants were characterized by the presence of bands of molecular weights: 151.9, 125.7, 14.1 and 5.7 KD at M-1; 67.4, 21.7 and 8.2 at M-2; 99.7 KD at M-3; 136.1, 97.6, 49.8, 27.9 and 20.6 KD at M-4; 135.2, 95.3 and 28.1 KD at M-5; 135.5, 67.7, 47.1, 32.3, 21.9 and 9.6 KD at M-6; 126.1, 112.1, 103.3, 58.8, 20.9 and 12.1 KD at M-7; 127.7, 116.6, 93.9, 55.0 and 47.4 KD at M-8; 141.7, 96.1, 79.8, 68.9, 42.1, 32.7, 22.0 and 13

  7. Procedure to select test organisms for environmental risk assessment of genetically modified crops in aquatic systems.

    Science.gov (United States)

    Hilbeck, Angelika; Bundschuh, Rebecca; Bundschuh, Mirco; Hofmann, Frieder; Oehen, Bernadette; Otto, Mathias; Schulz, Ralf; Trtikova, Miluse

    2017-11-01

    For a long time, the environmental risk assessment (ERA) of genetically modified (GM) crops focused mainly on terrestrial ecosystems. This changed when it was scientifically established that aquatic ecosystems are exposed to GM crop residues that may negatively affect aquatic species. To assist the risk assessment process, we present a tool to identify ecologically relevant species usable in tiered testing prior to authorization or for biological monitoring in the field. The tool is derived from a selection procedure for terrestrial ecosystems with substantial but necessary changes to adequately consider the differences in the type of ecosystems. By using available information from the Water Framework Directive (2000/60/EC), the procedure can draw upon existing biological data on aquatic systems. The proposed procedure for aquatic ecosystems was tested for the first time during an expert workshop in 2013, using the cultivation of Bacillus thuringiensis (Bt) maize as the GM crop and 1 stream type as the receiving environment in the model system. During this workshop, species executing important ecological functions in aquatic environments were identified in a stepwise procedure according to predefined ecological criteria. By doing so, we demonstrated that the procedure is practicable with regard to its goal: From the initial long list of 141 potentially exposed aquatic species, 7 species and 1 genus were identified as the most suitable candidates for nontarget testing programs. Integr Environ Assess Manag 2017;13:974-979. © 2017 SETAC. © 2017 SETAC.

  8. Large Variability in the Diversity of Physiologically Complex Surgical Procedures Exists Nationwide Among All Hospitals Including Among Large Teaching Hospitals.

    Science.gov (United States)

    Dexter, Franklin; Epstein, Richard H; Thenuwara, Kokila; Lubarsky, David A

    2017-11-22

    Multiple previous studies have shown that having a large diversity of procedures has a substantial impact on quality management of hospital surgical suites. At hospitals with substantial diversity, unless sophisticated statistical methods suitable for rare events are used, anesthesiologists working in surgical suites will have inaccurate predictions of surgical blood usage, case durations, cost accounting and price transparency, times remaining in late running cases, and use of intraoperative equipment. What is unknown is whether large diversity is a feature of only a few very unique set of hospitals nationwide (eg, the largest hospitals in each state or province). The 2013 United States Nationwide Readmissions Database was used to study heterogeneity among 1981 hospitals in their diversities of physiologically complex surgical procedures (ie, the procedure codes). The diversity of surgical procedures performed at each hospital was quantified using a summary measure, the number of different physiologically complex surgical procedures commonly performed at the hospital (ie, 1/Herfindahl). A total of 53.9% of all hospitals commonly performed 3-fold larger diversity (ie, >30 commonly performed physiologically complex procedures). Larger hospitals had greater diversity than the small- and medium-sized hospitals (P 30 procedures (lower 99% CL, 71.9% of hospitals). However, there was considerable variability among the large teaching hospitals in their diversity (interquartile range of the numbers of commonly performed physiologically complex procedures = 19.3; lower 99% CL, 12.8 procedures). The diversity of procedures represents a substantive differentiator among hospitals. Thus, the usefulness of statistical methods for operating room management should be expected to be heterogeneous among hospitals. Our results also show that "large teaching hospital" alone is an insufficient description for accurate prediction of the extent to which a hospital sustains the
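
    The diversity summary used above is simply the reciprocal of the Herfindahl index of a hospital's procedure mix. A tiny worked example (with made-up case counts) is shown below.

```python
# Number of commonly performed procedures = 1 / Herfindahl index of the case mix.
import numpy as np

procedure_counts = np.array([120, 80, 60, 40, 20, 10, 5, 5])   # cases per procedure code
shares = procedure_counts / procedure_counts.sum()
herfindahl = np.sum(shares ** 2)
print(f"commonly performed procedures (1/Herfindahl): {1 / herfindahl:.1f}")
```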

  9. Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.

    Science.gov (United States)

    Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui

    2017-07-15

    New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically-feasible manner. Developing computational methods and tools for analysis and translation of such genomic data into clinically-relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis by using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while penalty could be added for feature selection. A more powerful approach, however, would be to incorporate data by considering relationships among omics datatypes. Here we developed two methods: a SKI-Cox method and a wLASSO-Cox method to incorporate the association among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox models select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma. Copyright © 2017. Published by Elsevier
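
    One plausible reading of the wLASSO-Cox fitting step described above is a penalized Cox partial likelihood in which each mRNA feature carries its own penalty factor derived from the other omics layers; the exact weighting scheme belongs to the authors, and the form below is only a generic sketch:

        \hat{\beta} = \arg\min_{\beta} \; -\ell(\beta) \; + \; \lambda \sum_{j=1}^{p} w_j \, |\beta_j|

    where \ell(\beta) is the Cox partial log-likelihood over the mRNA expression features, \lambda controls the overall amount of shrinkage, and a smaller penalty factor w_j (e.g., for a gene supported by the methylation or copy-number data) shrinks that coefficient less; setting all w_j = 1 recovers the ordinary LASSO-Cox model.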

  10. Energy-efficient relay selection and optimal power allocation for performance-constrained dual-hop variable-gain AF relaying

    KAUST Repository

    Zafar, Ammar; Radaydeh, Redha Mahmoud Mesleh; Chen, Yunfei; Alouini, Mohamed-Slim

    2013-01-01

    This paper investigates the energy-efficiency enhancement of a variable-gain dual-hop amplify-and-forward (AF) relay network utilizing selective relaying. The objective is to minimize the total consumed power while keeping the end-to-end signal

  11. Reliable selection of earthquake ground motions for performance-based design

    DEFF Research Database (Denmark)

    Katsanos, Evangelos; Sextos, A.G.

    2016-01-01

    A decision support process is presented to accommodate selecting and scaling of earthquake motions as required for the time domain analysis of structures. Prequalified code-compatible suites of seismic motions are provided through a multi-criterion approach to satisfy prescribed reduced variability...... of the method, by being subjected to numerous suites of motions that were highly ranked according to both the proposed approach (δsv-sc) and the conventional index (δconv), already used by most existing code-based earthquake records selection and scaling procedures. The findings reveal the superiority...

  12. Bounding the conservatism in flaw-related variables for pressure vessel integrity analyses

    International Nuclear Information System (INIS)

    Foulds, J.R.; Kennedy, E.L.

    1993-01-01

    The fracture mechanics-based integrity analysis of a pressure vessel, whether performed deterministically or probabilistically, requires use of one or more flaw-related input variables, such as flaw size, number of flaws, flaw location, and flaw type. The specific values of these variables are generally selected with the intent to ensure conservative predictions of vessel integrity. These selected values, however, are largely independent of vessel-specific inspection results, or are, at best, deduced by "conservative" interpretation of vessel-specific inspection results without adequate consideration of the pertinent inspection system performance (reliability). In either case, the conservatism associated with the flaw-related variables chosen for analysis remains unquantified. Nondestructive examination (NDE) technology and the recently formulated ASME Code procedures for qualifying NDE system capability and performance (as applied to selected nuclear power plant components) now provide a systematic means of bounding the conservatism in flaw-related input variables for pressure vessel integrity analyses. This is essentially achieved by establishing probabilistic (risk)-based limits on the assigned variable values, dependent upon the vessel inspection results and on the inspection system unreliability. Described herein is this probabilistic method and its potential application to: (i) defining a vessel-specific "reference" flaw for calculating pressure-temperature limit curves in the deterministic evaluation of pressurized water reactor (PWR) reactor vessels, and (ii) limiting the flaw distribution input to a PWR reactor vessel-specific, probabilistic integrity analysis for pressurized thermal shock loads

  13. Biocontrol of Phytophthora Blight and Anthracnose in Pepper by Sequentially Selected Antagonistic Rhizobacteria against Phytophthora capsici.

    Science.gov (United States)

    Sang, Mee Kyung; Shrestha, Anupama; Kim, Du-Yeon; Park, Kyungseok; Pak, Chun Ho; Kim, Ki Deok

    2013-06-01

    We previously developed a sequential screening procedure to select antagonistic bacterial strains against Phytophthora capsici in pepper plants. In this study, we used a modified screening procedure to select effective biocontrol strains against P. capsici; we evaluated the effect of selected strains on Phytophthora blight and anthracnose occurrence and fruit yield in pepper plants under field and plastic house conditions from 2007 to 2009. We selected four potential biocontrol strains (Pseudomonas otitidis YJR27, P. putida YJR92, Tsukamurella tyrosinosolvens YJR102, and Novosphingobium capsulatum YJR107) among 239 bacterial strains. In the 3-year field tests, all the selected strains significantly (P < 0.05) reduced anthracnose incidence in at least one of the test years, but their biocontrol activities were variable. In addition, strains YJR27, YJR92, and YJR102, in certain harvests, increased pepper fruit numbers in field tests and red fruit weights in plastic house tests. Taken together, these results indicate that the screening procedure is rapid and reliable for the selection of potential biocontrol strains against P. capsici in pepper plants. In addition, these selected strains exhibited biocontrol activities against anthracnose, and some of the strains showed plant growth-promotion activities on pepper fruit.

  14. A Bayesian variable selection procedure for ranking overlapping gene sets

    DEFF Research Database (Denmark)

    Skarman, Axel; Mahdi Shariati, Mohammad; Janss, Luc

    2012-01-01

    Background Genome-wide expression profiling using microarrays or sequence-based technologies allows us to identify genes and genetic pathways whose expression patterns influence complex traits. Different methods to prioritize gene sets, such as the genes in a given molecular pathway, have been de...

  15. Are the results of questionnaires measuring non-cognitive characteristics during the selection procedure for medical school application biased by social desirability?

    Science.gov (United States)

    Obst, Katrin U; Brüheim, Linda; Westermann, Jürgen; Katalinic, Alexander; Kötter, Thomas

    2016-01-01

    Introduction: A stronger consideration of non-cognitive characteristics in Medical School application procedures is desirable. Psychometric tests could be used as an economic supplement to face-to-face interviews which are frequently conducted during university internal procedures for Medical School applications (AdH, Auswahlverfahren der Hochschulen). This study investigates whether the results of psychometric questionnaires measuring non-cognitive characteristics such as personality traits, empathy, and resilience towards stress are vulnerable to distortions of social desirability when used in the context of selection procedures at Medical Schools. Methods: This study took place during the AdH of Lübeck University in August 2015. The following questionnaires have been included: NEO-FFI, SPF, and AVEM. In a 2x1 between-subject experiment we compared the answers from an alleged application condition and a control condition. In the alleged application condition we told applicants that these questionnaires were part of the application procedure. In the control condition applicants were informed about the study prior to completing the questionnaires. Results: All included questionnaires showed differences which can be regarded as social-desirability effects. These differences did not affect the entire scales but, rather, single subscales. Conclusion: These results challenge the informative value of these questionnaires when used for Medical School application procedures. Future studies may investigate the extent to which the differences influence the actual selection of applicants and what implications can be drawn from them for the use of psychometric questionnaires as part of study-place allocation procedures at Medical Schools.

  16. Are the results of questionnaires measuring non-cognitive characteristics during the selection procedure for medical school application biased by social desirability?

    Directory of Open Access Journals (Sweden)

    Obst, Katrin U.

    2016-11-01

    Full Text Available Introduction: A stronger consideration of non-cognitive characteristics in Medical School application procedures is desirable. Psychometric tests could be used as an economic supplement to face-to-face interviews which are frequently conducted during university internal procedures for Medical School applications (AdH, Auswahlverfahren der Hochschulen). This study investigates whether the results of psychometric questionnaires measuring non-cognitive characteristics such as personality traits, empathy, and resilience towards stress are vulnerable to distortions of social desirability when used in the context of selection procedures at Medical Schools. Methods: This study took place during the AdH of Lübeck University in August 2015. The following questionnaires have been included: NEO-FFI, SPF, and AVEM. In a 2x1 between-subject experiment we compared the answers from an alleged application condition and a control condition. In the alleged application condition we told applicants that these questionnaires were part of the application procedure. In the control condition applicants were informed about the study prior to completing the questionnaires. Results: All included questionnaires showed differences which can be regarded as social-desirability effects. These differences did not affect the entire scales but, rather, single subscales. Conclusion: These results challenge the informative value of these questionnaires when used for Medical School application procedures. Future studies may investigate the extent to which the differences influence the actual selection of applicants and what implications can be drawn from them for the use of psychometric questionnaires as part of study-place allocation procedures at Medical Schools.

  17. Petroleomics by electrospray ionization FT-ICR mass spectrometry coupled to partial least squares with variable selection methods: prediction of the total acid number of crude oils.

    Science.gov (United States)

    Terra, Luciana A; Filgueiras, Paulo R; Tose, Lílian V; Romão, Wanderson; de Souza, Douglas D; de Castro, Eustáquio V R; de Oliveira, Mirela S L; Dias, Júlio C M; Poppi, Ronei J

    2014-10-07

    Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to a Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra present a resolving power of ca. 500,000 and a mass accuracy better than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems to be more appropriate for selecting important variables, reducing the dimension of the variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables with their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
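
    As a hedged illustration of one of the three selection criteria named above, the sketch below computes Variable Importance in Projection (VIP) scores from a scikit-learn PLS model on synthetic stand-ins for the FT-ICR MS matrix and the TAN values; it does not reproduce the iPLS or UVE procedures or the paper's data.

```python
# VIP scores from a PLS model (synthetic data): VIP > 1 is the usual rule of
# thumb for flagging informative variables.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_vars = 60, 5700                    # roughly the dimensionality quoted above
X = rng.standard_normal((n_samples, n_vars))    # placeholder for ESI(-)-FT-ICR MS intensities
y = rng.uniform(0.06, 3.61, n_samples)          # placeholder TAN values (mg KOH/g)

pls = PLSRegression(n_components=5).fit(X, y)

def vip_scores(pls_model):
    """VIP_j = sqrt(p * sum_a s_a (w_ja/||w_a||)^2 / sum_a s_a), with s_a the
    y-variance explained by latent component a."""
    W = pls_model.x_weights_                    # (p, A)
    T = pls_model.x_scores_                     # (n, A)
    Q = pls_model.y_loadings_                   # (1, A) for a single response
    p, _ = W.shape
    s = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)
    w_norm = W / np.linalg.norm(W, axis=0, keepdims=True)
    return np.sqrt(p * (w_norm ** 2 @ s) / s.sum())

vip = vip_scores(pls)
print(f"{np.sum(vip > 1.0)} of {n_vars} variables pass VIP > 1")
```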

  18. Demographic Variables and Selective, Sustained Attention and Planning through Cognitive Tasks among Healthy Adults

    OpenAIRE

    Afsaneh Zarghi; Zali; A; Tehranidost; M; Mohammad Reza Zarindast; Ashrafi; F; Doroodgar; Khodadadi

    2011-01-01

    Introduction: Cognitive tasks are considered to be applicable and appropriate in assessing cognitive domains. The purpose of our study is to determine whether relationships exist between age, sex and education and selective, sustained attention and planning abilities, assessed by means of computerized cognitive tasks among healthy adults. Methods: A cross-sectional study was implemented over 6 months, from June to November 2010, on 84 healthy adults (42 male and 42 female). The whole part...

  19. The Use of Variable Q1 Isolation Windows Improves Selectivity in LC-SWATH-MS Acquisition.

    Science.gov (United States)

    Zhang, Ying; Bilbao, Aivett; Bruderer, Tobias; Luban, Jeremy; Strambio-De-Castillia, Caterina; Lisacek, Frédérique; Hopfgartner, Gérard; Varesio, Emmanuel

    2015-10-02

    As tryptic peptides and metabolites are not equally distributed along the mass range, the probability of cross fragment ion interference is higher in certain windows when fixed Q1 SWATH windows are applied. We evaluated the benefits of utilizing variable Q1 SWATH windows with regard to selectivity improvement. Variable windows based on equalizing the distribution of either the precursor ion population (PIP) or the total ion current (TIC) within each window were generated by an in-house software, swathTUNER. These two variable Q1 SWATH window strategies outperformed, with respect to quantification and identification, the basic approach using a fixed window width (FIX) for proteomic profiling of human monocyte-derived dendritic cells (MDDCs). Thus, 13.8 and 8.4% additional peptide precursors, which resulted in 13.1 and 10.0% more proteins, were confidently identified by SWATH using the strategy PIP and TIC, respectively, in the MDDC proteomic sample. On the basis of the spectral library purity score, some improvement afforded by variable Q1 windows was also observed, albeit to a lesser extent, in the metabolomic profiling of human urine. We show that the novel concept of "scheduled SWATH" proposed here, which incorporates (i) variable isolation windows and (ii) precursor retention time segmentation, further improves both peptide and metabolite identifications.
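
    The core of the precursor-ion-population (PIP) strategy described above is to make each Q1 window cover about the same number of precursors. The sketch below reduces that idea to a quantile calculation on a synthetic precursor m/z distribution; it is not the swathTUNER implementation, and the acquisition range and window count are assumptions.

```python
# Variable Q1 window edges placed at quantiles of the precursor m/z
# distribution, so that each SWATH window holds roughly equal precursor counts.
import numpy as np

rng = np.random.default_rng(2)
# synthetic precursor m/z values; tryptic peptides cluster at lower m/z
precursor_mz = np.clip(np.concatenate([rng.normal(550, 80, 8000),
                                        rng.normal(900, 120, 2000)]), 400, 1200)

n_windows = 32                                       # assumed window count
edges = np.quantile(precursor_mz, np.linspace(0.0, 1.0, n_windows + 1))
edges[0], edges[-1] = 400.0, 1200.0                  # pin to the acquisition range

widths = np.diff(edges)
counts, _ = np.histogram(precursor_mz, bins=edges)
print("window widths (m/z): min %.1f, max %.1f" % (widths.min(), widths.max()))
print("precursors per window: min %d, max %d" % (counts.min(), counts.max()))
```

    Narrow windows in the crowded m/z regions reduce the chance of co-fragmenting interfering precursors, which is the selectivity gain the record reports.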

  20. Boards and the Selection Procedures Post Gender Quotas

    DEFF Research Database (Denmark)

    Arna Arnardóttir, Auður; Sigurjonsson, Olaf; Terjesen, Siri

    Purpose: The director selection process can greatly affect a board's behavior and effectiveness and, ultimately, the firm's performance and outcomes. Director selection practices are hence an important yet under-researched topic, especially practices applied in the wake of gender quota legislation...... The purpose of this paper is to contribute to the extant literature by gaining a greater understanding of how new female board members are recruited and selected when demand for one gender is high. Design/methodology/approach: A mixed research methodology was applied. A questionnaire (N=260) and in-depth interviews (N=20) were conducted with Icelandic non-executive board directors to identify the selection criteria that are deemed most important when selecting new female director candidates taking seats on boards in the wake of gender quota legislation, and to compare those practices with previous selection...

  1. Extended reviewing or the role of potential siting cantons in the ongoing Swiss site selection procedure ('Sectoral Plan')

    International Nuclear Information System (INIS)

    Flueeler, Thomas

    2014-01-01

    The disposition of nuclear waste in Switzerland has a long-standing and sinuous history reflecting its complex socio-technical nature (Flueeler, 2006). Upon the twofold failure to site a repository for low- and intermediate-level radioactive waste at Wellenberg during the 1990s and 2000s, it was recognised that the respective site selections had not been fully transparent. The Swiss government, the Federal Council, accepted the lesson and, after an extensive nationwide consultation, established a new site selection process 'from scratch': a systematic, stepwise, traceable, fair and binding procedure with a safety-first approach, yet extensively participatory. The so-called Sectoral Plan for Deep Geological Repositories guarantees the inclusion of the affected and concerned cantons and communities, as well as the relevant authorities in neighbouring countries, from an early stage (Swiss Nuclear Energy Act, 2003; BFE, 2008). This contribution shares experience and insights into the ongoing procedure from a cantonal point of view, that is, an intermediate position between national needs and regional concerns, with technical regulatory expertise situated between highly specialised experts and involved publics. (authors)

  2. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    International Nuclear Information System (INIS)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves

    2017-01-01

    In this study, optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2^k experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted in the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and better irradiation conditions. (author)
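
    For readers unfamiliar with 2^k designs, the sketch below shows, on synthetic responses, how main and two-factor interaction effects are estimated from coded -1/+1 factor levels. The three factors named here (decay time, counting time, sample-detector distance) are taken from the variables listed above purely for illustration; the responses are not the study's INAA measurements.

```python
# Generic 2^3 full factorial sketch: coded levels and effect estimates.
import itertools
import numpy as np

factors = ["decay_time", "counting_time", "detector_distance"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 runs

rng = np.random.default_rng(3)
# synthetic mass-fraction response with a real effect of decay_time only
response = 10.0 + 0.8 * design[:, 0] + rng.normal(0, 0.1, len(design))

def effect(cols):
    """Mean response at +1 minus mean at -1 for the product of coded columns."""
    contrast = np.prod(design[:, cols], axis=1)
    return response[contrast == 1].mean() - response[contrast == -1].mean()

for i, name in enumerate(factors):
    print(f"main effect of {name}: {effect([i]):+.3f}")
for i, j in itertools.combinations(range(3), 2):
    print(f"interaction {factors[i]} x {factors[j]}: {effect([i, j]):+.3f}")
```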

  3. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves, E-mail: uandapaula@gmail.com, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    In this study, optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2^k experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half-life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted in the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effects, interaction effects and better irradiation conditions. (author)

  4. Cost Accounting as a Tool for Increasing Cost Transparency in Selective Hepatic Transarterial Chemoembolization.

    Science.gov (United States)

    Ahmed, Osman; Patel, Mikin; Ward, Thomas; Sze, Daniel Y; Telischak, Kristen; Kothary, Nishita; Hofmann, Lawrence V

    2015-12-01

    To increase cost transparency and uncover potential areas for savings in patients receiving selective transarterial chemoembolization at a tertiary care academic center. The hospital cost accounting system charge master sheet for direct and total costs associated with selective transarterial chemoembolization in fiscal years 2013 and 2014 was queried for each of the four highest volume interventional radiologists at a single institution. There were 517 cases (range, 83-150 per physician) performed; direct costs incurred relating to care before, during, and after the procedure with respect to labor, supply, and equipment fees were calculated. A median of 48 activity codes were charged per selective transarterial chemoembolization from five cost centers, represented by the angiography suite, units for care before and after the procedure, pharmacy, and observation floors. The average direct cost of selective transarterial chemoembolization did not significantly differ among operators at $9,126.94, $8,768.77, $9,027.33, and $8,909.75 (P = .31). Intraprocedural costs accounted for 82.8% of total direct costs and provided the greatest degree of cost variability ($7,268.47-$7,691.27). The differences in intraprocedural expense among providers were not statistically significant (P = .09), even when separated into more specific procedure-related labor and supply costs. Cost accounting systems could effectively be interrogated as a method for calculating direct costs associated with selective transarterial chemoembolization. The greatest source of expenditure and variability in cost among providers was shown to be intraprocedural labor and supplies, although the effect did not appear to be operator dependent. Copyright © 2015 SIR. Published by Elsevier Inc. All rights reserved.

  5. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...

  6. Generalizability of a composite student selection procedure at a university-based chiropractic program

    DEFF Research Database (Denmark)

    O'Neill, Lotte Dyhrberg; Korsholm, Lars; Wallstedt, Birgitta

    2009-01-01

    PURPOSE: Non-cognitive admission criteria are typically used in chiropractic student selection to supplement grades. The reliability of non-cognitive student admission criteria in chiropractic education has not previously been examined. In addition, very few studies have examined the overall test...... test, and an admission interview. METHODS: Data from 105 Chiropractic applicants from the 2007 admission at the University of Southern Denmark were available for analysis. Each admission parameter was double scored using two random, blinded, and independent raters. Variance components for applicant, rater and residual effects were estimated for a mixed model with the restricted maximum likelihood method. The reliability of obtained applicant ranks (generalizability coefficients) was calculated for the individual admission criteria and for the composite admission procedure. RESULTS: Very good......

  7. Selection of blood sampling times for determination of 51Cr-EDTA clearance in a screeening procedure

    International Nuclear Information System (INIS)

    Gullquist, R.; Askergren, A.; Brandt, R.; Silk, B.; Strandell, T.; Huddinge University Hospital

    1983-01-01

    In a group of 44 construction workers, various blood sampling protocols were compared with regard to the variability of the 51Cr-EDTA clearance on repeated determinations. The different blood sampling protocols were also compared with a reference method using Simpson's formula for the area calculation. A double-slope method lasting two and a half hours was finally chosen and suggested as a screening procedure in an industrial environment, with blood sampling at 5, 15, 90, 120, 135 and 150 minutes after injection and with the patient resting in a semirecumbent position. (orig.)

  8. Site selection under the underground geologic store plan. Procedures of selecting underground geologic stores as disputed by society, science, and politics. Site selection rules; Mit dem Sachplan Geologische Tiefenlager auf Standortsuche. Auswahlverfahren fuer geologische Tiefenlager im Spannungsfeld von Gesellschaft, Wissenschaft und Politik, Regeln fuer die Standortsuche

    Energy Technology Data Exchange (ETDEWEB)

    Aebersold, M. [Bundesamt fuer Energie BFE, Sektion Entsorgung Radioaktive Abfaelle, Bern (Switzerland)

    2008-10-15

    The new Nuclear Power Act and the Nuclear Power Ordinance of 2005 are used in Switzerland to select a site of an underground geologic store for radioactive waste in a substantive planning procedure. The ''Underground Geologic Store Substantive Plan'' is to ensure the possibility to build underground geologic stores in an independent, transparent and fair procedure. The Federal Office for Energy (BFE) is the agency responsible for this procedure. The ''Underground Geologic Store'' Substantive Plan comprises these principles: - The long term protection of people and the environment enjoys priority. Aspects of regional planning, economics and society are of secondary importance. - Site selection is based on the waste volumes arising from the five nuclear power plants currently existing in Switzerland. The Substantive Plan is no precedent for or against future nuclear power plants. - A transparent and fair procedure is an indispensable prerequisite for achieving the objectives of a Substantive Plan, i.e., finding accepted sites for underground geologic stores. The Underground Geologic Stores Substantive Plan is arranged in two parts, a conceptual part defining the rules of the selection process, and an implementation part documenting the selection process step by step and, in the end, naming specific sites of underground geologic stores in Switzerland. The objective is to be able to commission underground geologic stores in 25 or 35 years' time. In principle, 2 sites are envisaged, one for low and intermediate level waste, and one for high level waste. The Swiss Federal Council approved the conceptual part on April 2, 2008. This marks the beginning of the implementation phase and the site selection process proper. (orig.)

  9. Relation between sick leave and selected exposure variables among women semiconductor workers in Malaysia

    Science.gov (United States)

    Chee, H; Rampal, K

    2003-01-01

    Aims: To determine the relation between sick leave and selected exposure variables among women semiconductor workers. Methods: This was a cross sectional survey of production workers from 18 semiconductor factories. Those selected had to be women, direct production operators up to the level of line leader, and Malaysian citizens. Sick leave and exposure to physical and chemical hazards were determined by self reporting. Three sick leave variables were used; number of sick leave days taken in the past year was the variable of interest in logistic regression models where the effects of age, marital status, work task, work schedule, work section, and duration of work in factory and work section were also explored. Results: Marital status was strongly linked to the taking of sick leave. Age, work schedule, and duration of work in the factory were significant confounders only in certain cases. After adjusting for these confounders, chemical and physical exposures, with the exception of poor ventilation and smelling chemicals, showed no significant relation to the taking of sick leave within the past year. Work section was a good predictor for taking sick leave, as wafer polishing workers faced higher odds of taking sick leave for each of the three cut off points of seven days, three days, and not at all, while parts assembly workers also faced significantly higher odds of taking sick leave. Conclusion: In Malaysia, the wafer fabrication factories only carry out a limited portion of the work processes, in particular, wafer polishing and the processes immediately prior to and following it. This study, in showing higher illness rates for workers in wafer polishing compared to semiconductor assembly, has implications for the governmental policy of encouraging the setting up of wafer fabrication plants with the full range of work processes. PMID:12660374

  10. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  11. Energy-efficient relay selection and optimal power allocation for performance-constrained dual-hop variable-gain AF relaying

    KAUST Repository

    Zafar, Ammar

    2013-12-01

    This paper investigates the energy-efficiency enhancement of a variable-gain dual-hop amplify-and-forward (AF) relay network utilizing selective relaying. The objective is to minimize the total consumed power while keeping the end-to-end signal-to-noise-ratio (SNR) above a certain peak value and satisfying the peak power constraints at the source and relay nodes. To achieve this objective, an optimal relay selection and power allocation strategy is derived by solving the power minimization problem. Numerical results show that the derived optimal strategy enhances the energy-efficiency as compared to a benchmark scheme in which both the source and the selected relay transmit at peak power. © 2013 IEEE.

  12. Investigating personal, cognitive and organizational variables as predictors of unsafe behaviors among line workers in an industrial company

    Directory of Open Access Journals (Sweden)

    A. Neissi

    2013-08-01

    Conclusion: The results of this study showed the importance of safety competency, prevention focus, safety rules and procedures, and safety efficiency and consciousness as predictors of unsafe work behaviors. Therefore, it is recommended to rely on these variables in safety training courses and in selecting people for high-risk environments.

  13. Variable mechanical ventilation.

    Science.gov (United States)

    Fontela, Paula Caitano; Prestes, Renata Bernardy; Forgiarini, Luiz Alberto; Friedman, Gilberto

    2017-01-01

    To review the literature on the use of variable mechanical ventilation and the main outcomes of this technique. Search, selection, and analysis of all original articles on variable ventilation, without restriction on the period of publication and language, available in the electronic databases LILACS, MEDLINE®, and PubMed, by searching the terms "variable ventilation" OR "noisy ventilation" OR "biologically variable ventilation". A total of 36 studies were selected. Of these, 24 were original studies, including 21 experimental studies and three clinical studies. Several experimental studies reported the beneficial effects of distinct variable ventilation strategies on lung function using different models of lung injury and healthy lungs. Variable ventilation seems to be a viable strategy for improving gas exchange and respiratory mechanics and preventing lung injury associated with mechanical ventilation. However, further clinical studies are necessary to assess the potential of variable ventilation strategies for the clinical improvement of patients undergoing mechanical ventilation.

  14. Sex-specific selection for MHC variability in Alpine chamois

    Directory of Open Access Journals (Sweden)

    Schaschl Helmut

    2012-02-01

    Full Text Available Abstract Background In mammals, males typically have shorter lives than females. This difference is thought to be due to behavioural traits which enhance competitive abilities, and hence male reproductive success, but impair survival. Furthermore, in many species males usually show higher parasite burden than females. Consequently, the intensity of selection for genetic factors which reduce susceptibility to pathogens may differ between sexes. High variability at the major histocompatibility complex (MHC) genes is believed to be advantageous for detecting and combating the range of infectious agents present in the environment. Increased heterozygosity at these immune genes is expected to be important for individual longevity. However, whether males in natural populations benefit more from MHC heterozygosity than females has rarely been investigated. We investigated this question in a long-term study of free-living Alpine chamois (Rupicapra rupicapra), a polygynous mountain ungulate. Results Here we show that male chamois survive significantly (P = 0.022) longer if heterozygous at the MHC class II DRB locus, whereas females do not. Improved survival of males was not a result of heterozygote advantage per se, as background heterozygosity (estimated across twelve microsatellite loci) did not change significantly with age. Furthermore, reproductively active males depleted their body fat reserves earlier than females, leading to significantly impaired survival rates in this sex (P ...). Conclusions Increased MHC class II DRB heterozygosity with age in males suggests that MHC-heterozygous males survive longer than homozygotes. Reproductively active males appear to be less likely to survive than females, most likely because of the energetic challenge of the winter rut, accompanied by earlier depletion of their body fat stores, and a generally higher parasite burden. This scenario renders the MHC-mediated immune response more important for males than for females.

  15. The Selection of Procedures in One-stage Urethroplasty for Treatment of Coexisting Urethral Strictures in Anterior and Posterior Urethra.

    Science.gov (United States)

    Lv, XiangGuo; Xu, Yue-Min; Xie, Hong; Feng, Chao; Zhang, Jiong

    2016-07-01

    To explore selection of the procedures in one-stage urethroplasty for treatment of coexisting urethral strictures in the anterior and posterior urethra. Between 2008 and 2014, a total of 27 patients with strictures simultaneously present in the anterior and posterior urethra were treated in our hospital. Two types of procedures were selected for treatment of the anterior urethral strictures. A penile skin flap and the lingual mucosa were used for augmented urethroplasty in 20 and 7 cases, respectively. Three types of procedures, namely non-transecting end-to-end urethral anastomosis (n = 3), traditional end-to-end urethral anastomosis (n = 17), and other graft substitution urethroplasty, including pedicle scrotal skin urethroplasty (n = 2) and lingual mucosal graft urethroplasty (n = 5), were utilized in the treatment of posterior urethral strictures. The patients were followed up for a mean of 30 months, with an overall success rate of 88.9%. The majority of the patients exhibited wide patent urethras on retrograde urethrography and the patients' urinary peak flow ranged from 14.2 to 37.9 ml/s. Complications developed in 3 patients (11.1%). Of the 17 patients who underwent traditional urethral end-to-end anastomosis, urethral strictures occurred in 2 patients at 4 and 6 months after the operation. These patients achieved a satisfactory voiding function after salvage pedicle scrotal skin urethroplasty. A urethral pseudodiverticulum was observed in another patient 9 months after pedicle penile flap urethroplasty; after a salvage procedure, he regained excellent voiding function. Synchronous anterior and posterior strictures can be successfully reconstructed with a combination of substitution and anastomotic urethroplasty techniques. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Considerations for the selection of an applicable energy efficiency test procedure for electric motors in Malaysia: Lessons for other developing countries

    International Nuclear Information System (INIS)

    Yanti, P.A.A.; Mahlia, T.M.I.

    2009-01-01

    Electric motors are a major energy-consuming appliance in the industrial sector. According to a survey, electric motors account for more than 70% of the total growth from 1991 to 2004 in electricity consumption in this sector in Malaysia. To reduce electricity consumption, Malaysia should consider resetting the minimum energy efficiency standards for electric motors sometime in the coming year. The first step towards adopting energy efficiency standards is the creation of a procedure for testing and rating equipment. An energy test procedure is the technical foundation for all energy efficiency standards, energy labels and other related programs. The test conditions in the test procedure must represent the conditions of the country. This paper presents the process for the selection of an energy test procedure for electric motors in Malaysia based on the country's conditions and requirements. The adoption of test procedures for electric motors internationally by several countries is also discussed in this paper. Even though the paper only discusses the test procedure for electric motors in Malaysia, the methods can be directly applied in other countries without major modifications.

  17. IRAS variables as galactic structure tracers - Classification of the bright variables

    Science.gov (United States)

    Allen, L. E.; Kleinmann, S. G.; Weinberg, M. D.

    1993-01-01

    The characteristics of the 'bright infrared variables' (BIRVs), a sample consisting of the 300 brightest stars in the IRAS Point Source Catalog with an IRAS variability index VAR of 98 or greater, are investigated with the purpose of establishing which of the IRAS variables are AGB stars (e.g., oxygen-rich Miras and carbon stars, as was assumed by Weinberg (1992)). Results of the analysis of optical, infrared, and microwave spectroscopy of these stars indicate that, out of 88 stars in the BIRV sample identified with cataloged variables, 86 can be classified as Miras. Results of a similar analysis performed for a color-selected sample of stars, using the color limits employed by Habing (1988) to select AGB stars, showed that, of the 52 percent of stars that could be classified, 38 percent are non-AGB stars, including H II regions, planetary nebulae, supergiants, and young stellar objects, indicating that studies using color-selected samples are subject to misinterpretation.

  18. Repeat what after whom? Exploring variable selectivity in a cross-dialectal shadowing task.

    Directory of Open Access Journals (Sweden)

    Abby eWalker

    2015-05-01

    Full Text Available Twenty women from Christchurch, New Zealand, and sixteen from Columbus, Ohio (dialect region: U.S. Midland) participated in a bimodal lexical naming task where they repeated monosyllabic words after four speakers from four regional dialects: New Zealand, Australia, U.S. Inland North and U.S. Midland. The resulting utterances were acoustically analyzed and presented to listeners on Amazon Mechanical Turk in an AXB task. Convergence is observed, but differs depending on the dialect of the speaker, the dialect of the model, the particular word class being shadowed, and the order in which dialects are presented to participants. We argue that these patterns are generally consistent with findings that convergence is promoted by a large phonetic distance between shadower and model (Babel, 2010; contra Kim, Horton & Bradlow, 2011) and by greater existing variability in a vowel class (Babel, 2012). The results also suggest that more comparisons of accommodation towards different dialects are warranted, and that the investigation of the socio-indexical meaning of specific linguistic forms in context is a promising avenue for understanding variable selectivity in convergence.

  19. Chaos synchronization using single variable feedback based on backstepping method

    International Nuclear Information System (INIS)

    Zhang Jian; Li Chunguang; Zhang Hongbin; Yu Juebang

    2004-01-01

    In recent years, the backstepping method has been developed in the field of nonlinear control for applications such as controller design, observer design and output regulation. In this paper, an effective backstepping design is applied to chaos synchronization. This method has several advantages for synchronizing chaotic systems: (a) the synchronization error is exponentially convergent; (b) only the information of one variable of the master system is needed; (c) it presents a systematic procedure for selecting a proper controller. Numerical simulations for Chua's circuit and the Roessler system demonstrate that this method is very effective.
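
    The sketch below illustrates the abstract's central point, that one transmitted variable can suffice for synchronization, but it does so with the classic Pecora-Carroll drive-response scheme on the Roessler system rather than with the paper's backstepping controller; the parameter values are the standard a = b = 0.2, c = 5.7, and the error decay shown is typical for this setting, not guaranteed in general.

```python
# Master-slave synchronization using only the transmitted y(t) of the drive
# Roessler system (Pecora-Carroll style), as a stand-in illustration.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.2, 0.2, 5.7

def coupled(t, s):
    x, y, z, xr, zr = s
    dx, dy, dz = -y - z, x + a * y, b + z * (x - c)   # drive (master) system
    dxr = -y - zr                                     # response sees only y(t)
    dzr = b + zr * (xr - c)
    return [dx, dy, dz, dxr, dzr]

s0 = [1.0, 1.0, 1.0, -4.0, 0.5]                       # response starts far away
sol = solve_ivp(coupled, (0.0, 300.0), s0, max_step=0.01)

err = np.abs(sol.y[3] - sol.y[0])                     # |x_response - x_drive|
print(f"initial |x error| = {err[0]:.2f}, final |x error| = {err[-1]:.2e}")
```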

  20. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework

    OpenAIRE

    Hofner, Benjamin; Mayr, Andreas; Schmid, Matthias

    2014-01-01

    Generalized additive models for location, scale and shape are a flexible class of regression models that allow multiple parameters of a distribution function, such as the mean and the standard deviation, to be modelled simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we...

  1. Variable Lifting Index (VLI): A New Method for Evaluating Variable Lifting Tasks.

    Science.gov (United States)

    Waters, Thomas; Occhipinti, Enrico; Colombini, Daniela; Alvarez-Casado, Enrique; Fox, Robert

    2016-08-01

    We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). There are many jobs that contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form to analyze variable lifting tasks. In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves the sampling of lifting tasks performed by a worker over a shift and the calculation of the Frequency Independent Lift Index (FILI) for each sampled lift and the aggregation of the FILI values into six categories. The Composite Lift Index (CLI) equation is used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, which also use the CLI equation to calculate the VLI. The two procedures will allow practitioners to systematically employ the VLI method to a variety of work situations where highly variable lifting tasks are performed. The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. © 2015, Human Factors and Ergonomics Society.
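
    As a hedged illustration of one building block of the VLI approach, the sketch below computes the Frequency Independent Lift Index (FILI) for a sample of lifts with the metric form of the RNLE multipliers and bins the values into six categories; the lift data and category boundaries are placeholders, and the CLI/VLI aggregation equation itself is not reproduced.

```python
# FILI for sampled lifts (metric RNLE, frequency multiplier set to 1) and
# binning into six categories; synthetic lift data.
import numpy as np

def firwl(h_cm, v_cm, d_cm, a_deg, cm=1.0):
    """Frequency-independent recommended weight limit (kg), metric RNLE."""
    hm = 25.0 / max(h_cm, 25.0)              # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0)      # vertical multiplier
    dm = 0.82 + 4.5 / max(d_cm, 25.0)        # distance multiplier
    am = 1.0 - 0.0032 * a_deg                # asymmetry multiplier
    return 23.0 * hm * vm * dm * am * cm

rng = np.random.default_rng(4)
n_lifts = 200                                 # lifts sampled over a shift
loads = rng.uniform(3, 20, n_lifts)           # kg
H = rng.uniform(25, 60, n_lifts)              # horizontal hand distance, cm
V = rng.uniform(10, 150, n_lifts)             # vertical hand height, cm
D = rng.uniform(25, 100, n_lifts)             # vertical travel distance, cm
A = rng.uniform(0, 90, n_lifts)               # asymmetry angle, degrees

fili = loads / np.array([firwl(*args) for args in zip(H, V, D, A)])

edges = [0, 0.5, 1.0, 1.5, 2.0, 2.5, np.inf]  # illustrative category boundaries
counts, _ = np.histogram(fili, bins=edges)
for lo, hi, cnt in zip(edges[:-1], edges[1:], counts):
    print(f"FILI in [{lo}, {hi}): {cnt} lifts")
```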

  2. Selective dopamine D3 receptor antagonism by SB-277011A attenuates cocaine reinforcement as assessed by progressive-ratio and variable-cost–variable-payoff fixed-ratio cocaine self-administration in rats

    OpenAIRE

    Xi, Zheng-Xiong; Gilbert, Jeremy G.; Pak, Arlene C.; Ashby, Charles R.; Heidbreder, Christian A.; Gardner, Eliot L.

    2005-01-01

    In rats, acute administration of SB-277011A, a highly selective dopamine (DA) D3 receptor antagonist, blocks cocaine-enhanced brain stimulation reward, cocaine-seeking behaviour and reinstatement of cocaine-seeking behaviour. Here, we investigated whether SB-277011A attenuates cocaine reinforcement as assessed by cocaine self-administration under variable-cost–variable-payoff fixed-ratio (FR) and progressive-ratio (PR) reinforcement schedules. Acute i.p. administration of SB-277011A (3–24 mg/...

  3. A vertical-energy-thresholding procedure for data reduction with multiple complex curves.

    Science.gov (United States)

    Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi

    2006-10-01

    Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
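
    The following sketch illustrates the general idea the record describes (trading reconstruction error against data-reduction efficiency by keeping only high-energy wavelet coefficients) using PyWavelets on a synthetic signal; it is a generic energy-thresholding example, not the VET procedure itself.

```python
# Keep the largest-energy wavelet coefficients until 99% of the energy is
# retained, then reconstruct and report the reduction/error trade-off.
import numpy as np
import pywt

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
signal = np.sin(40 * np.pi * t) * np.exp(-3 * t) + 0.05 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(signal, "db4", level=5)
flat, slices = pywt.coeffs_to_array(coeffs)

order = np.argsort(flat ** 2)[::-1]                       # largest energy first
cum_energy = np.cumsum(flat[order] ** 2) / np.sum(flat ** 2)
n_keep = int(np.searchsorted(cum_energy, 0.99)) + 1

kept = np.zeros_like(flat)
kept[order[:n_keep]] = flat[order[:n_keep]]
recon = pywt.waverec(pywt.array_to_coeffs(kept, slices, output_format="wavedec"), "db4")

rel_err = np.linalg.norm(recon[: signal.size] - signal) / np.linalg.norm(signal)
print(f"kept {n_keep} of {flat.size} coefficients, relative error {rel_err:.3f}")
```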

  4. Independent SU(2)-loop variables

    International Nuclear Information System (INIS)

    Loll, R.

    1991-04-01

    We give a reduction procedure for SU(2)-trace variables and introduce a complete set of independent, gauge-invariant and almost local loop variables for the configuration space of SU(2)-lattice gauge theory in 2+1 dimensions. (orig.)

  5. Bias in the Listeria monocytogenes enrichment procedure: Lineage 2 strains outcompete lineage 1 strains in University of Vermont selective enrichments

    DEFF Research Database (Denmark)

    Bruhn, Jesper Bartholin; Vogel, Birte Fonnesbech; Gram, Lone

    2005-01-01

    compounds in UVM I and II influenced this bias. The results of the present study demonstrate that the selective procedures used for isolation of L. monocytogenes may not allow a true representation of the types present in foods. Our results could have a significant impact on epidemiological studies...

  6. Ultrahigh Dimensional Variable Selection for Interpolation of Point Referenced Spatial Data: A Digital Soil Mapping Case Study

    Science.gov (United States)

    Lamb, David W.; Mengersen, Kerrie

    2016-01-01

    Modern soil mapping is characterised by the need to interpolate point-referenced (geostatistical) observations and the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized Multiple Linear Regression models. This analysis demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment. PMID:27603135
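
    The covariate-screening step the record describes can be reduced to a few lines with scikit-learn's LAR-based LASSO; the sketch below uses synthetic data with the sample sizes quoted in the abstract and does not reproduce the paper's soil-carbon analysis or the subsequent geostatistical interpolation.

```python
# LASSO via Least Angle Regression: pick a handful of useful covariates out of
# hundreds of candidates before any spatial modelling (synthetic data).
import numpy as np
from sklearn.linear_model import LassoLarsCV

rng = np.random.default_rng(6)
n_obs, n_cov = 60, 800                       # observations and candidate covariates
X = rng.standard_normal((n_obs, n_cov))
beta = np.zeros(n_cov)
beta[[3, 42, 317]] = [1.5, -2.0, 1.0]        # only three covariates actually matter
y = X @ beta + 0.3 * rng.standard_normal(n_obs)   # stand-in for soil carbon

model = LassoLarsCV(cv=5).fit(X, y)
print("selected covariate indices:", np.flatnonzero(model.coef_))
```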

  7. Managers’ Perception on Budget Participation, Procedural Justice and Managerial Performance

    Directory of Open Access Journals (Sweden)

    Sady Mazzioni

    2014-08-01

    Full Text Available The purpose of this paper is to analyze the influence of budget participation on the relationship between procedural justice and managerial performance in organizations in Santa Catarina. To achieve this objective, a descriptive quantitative study was carried out, based on a questionnaire with 5- and 7-point Likert scales answered by 44 managers. For data treatment, the statistical package SmartPLS was used to determine, by regression, the contribution of each variable (participation and procedural justice) to the dependent variable (performance). Descriptive statistics point to a greater awareness of the possibility of participation in the budgeting process and a weaker perception of superiors' procedural justice, leaving the assessment of managerial performance in an intermediate condition. Initially, it was found that the correlations between the independent (procedural justice) and intervening (budget participation) variables, and between the intervening and dependent (managerial performance) variables, are statistically significant. However, the correlation between the independent and dependent variables was not statistically significant. The results of structural equation modelling (path analysis) indicated that one of the assumptions of the indirect effect was not confirmed, namely that the relationship between the independent and dependent variables should diminish after being controlled for the intervening variable. In the present study, the reverse situation was observed, starting from a non-significant correlation and reaching a significant negative correlation after the effect of the intervening variable.

  8. Birth order and selected work-related personality variables.

    Science.gov (United States)

    Phillips, A S; Bedeian, A G; Mossholder, K W; Touliatos, J

    1988-12-01

    A possible link between birth order and various individual characteristics (e.g., intelligence, potential eminence, need for achievement, sociability) has been suggested by personality theorists such as Adler for over a century. The present study examines whether birth order is associated with selected personality variables that may be related to various work outcomes. 3 of 7 hypotheses were supported and the effect sizes for these were small. Firstborns scored significantly higher than later borns on measures of dominance, good impression, and achievement via conformity. No differences between firstborns and later borns were found in managerial potential, work orientation, achievement via independence, and sociability. The study's sample consisted of 835 public, government, and industrial accountants responding to a national US survey of accounting professionals. The nature of the sample may have been partially responsible for the results obtained. Its homogeneity may have caused any birth order effects to wash out. It can be argued that successful membership in the accountancy profession requires internalization of a set of prescribed rules and standards. It may be that accountants as a group are locked into a behavioral framework. Any differentiation would result from spurious interpersonal differences, not from predictable birth-order related characteristics. A final interpretation is that birth order effects are nonexistent or statistical artifacts. Given the present data and particularistic sample, however, the authors have insufficient information from which to draw such a conclusion.

  9. An adaptive technique for multiscale approximate entropy (MAEbin) threshold (r) selection: application to heart rate variability (HRV) and systolic blood pressure variability (SBPV) under postural stress.

    Science.gov (United States)

    Singh, Amritpal; Saini, Barjinder Singh; Singh, Dilbag

    2016-06-01

    Multiscale approximate entropy (MAE) is used to quantify the complexity of a time series as a function of time scale τ. Selection of the approximate entropy (ApEn) tolerance threshold 'r' is based on either (1) arbitrary selection in the recommended range of 0.1-0.25 times the standard deviation of the time series, (2) finding the maximum ApEn (ApEnmax), i.e., the point where self-matches start to prevail over other matches, and choosing the corresponding 'r' (rmax) as the threshold, or (3) computing rchon by empirically finding the relation between rmax, the SD1/SD2 ratio and N using curve fitting, where SD1 and SD2 are the short-term and long-term variability of a time series, respectively. None of these methods is a gold standard for the selection of 'r'. In our previous study [1], an adaptive procedure for the selection of 'r' was proposed for approximate entropy (ApEn). In this paper, this is extended to multiple time scales using MAEbin and multiscale cross-MAEbin (XMAEbin). We applied this to simulations, i.e. 50 realizations (n = 50) of random number series, fractional Brownian motion (fBm) and MIX(P) [1] series of data length N = 300, and to short-term recordings of HRV and SBPV performed under postural stress from supine to standing. MAEbin and XMAEbin analysis was performed on laboratory-recorded data of 50 healthy young subjects experiencing postural stress from supine to upright. The study showed that (i) ApEnbin of HRV is higher than that of SBPV in the supine position but lower than that of SBPV in the upright position; (ii) ApEnbin of HRV decreases from supine, i.e. 1.7324 ± 0.112 (mean ± SD), to upright, 1.4916 ± 0.108, due to vagal inhibition; (iii) ApEnbin of SBPV increases from supine, i.e. 1.5535 ± 0.098, to upright, i.e. 1.6241 ± 0.101, due to sympathetic activation; (iv) individual and cross complexities of RRi and systolic blood pressure (SBP) series depend on the time scale under consideration; (v) XMAEbin calculated using ApEnmax is correlated with cross-MAE calculated using ApEn (0.1-0.26) in steps of 0
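
    The second of the three threshold-selection strategies listed above (scanning r and taking the value where ApEn peaks) can be illustrated compactly; the sketch below uses a plain single-scale ApEn on synthetic RR-interval-like data and is not the MAEbin/XMAEbin implementation of the paper.

```python
# Approximate entropy with a scan over the tolerance r to locate r_max,
# the threshold at which ApEn is maximal (synthetic data, single scale).
import numpy as np

def apen(x, m, r):
    """ApEn of the 1-D series x with embedding dimension m and tolerance r."""
    x = np.asarray(x, dtype=float)

    def phi(dim):
        n_templates = len(x) - dim + 1
        templates = np.array([x[i:i + dim] for i in range(n_templates)])
        # Chebyshev distance between all template pairs (self-matches included)
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.mean(np.log(np.mean(dist <= r, axis=1)))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(7)
rr = 0.8 + 0.05 * rng.standard_normal(300)        # synthetic RR intervals, N = 300

r_grid = np.linspace(0.05, 1.0, 20) * rr.std()
apen_values = [apen(rr, m=2, r=r) for r in r_grid]
r_max = r_grid[int(np.argmax(apen_values))]
print(f"r_max = {r_max:.4f} s (ApEn = {max(apen_values):.3f})")
```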

  10. Variability in dose estimates associated with the food-chain transport and ingestion of selected radionuclides

    International Nuclear Information System (INIS)

    Hoffman, F.O.; Gardner, R.H.; Eckerman, K.F.

    1982-06-01

    Dose predictions for the ingestion of 90Sr and 137Cs, using aquatic and terrestrial food chain transport models similar to those in the Nuclear Regulatory Commission's Regulatory Guide 1.109, are evaluated through estimating the variability of model parameters and determining the effect of this variability on model output. The variability in the predicted dose equivalent is determined using analytical and numerical procedures. In addition, a detailed discussion is included on 90Sr dosimetry. The overall estimates of uncertainty are most relevant to conditions where site-specific data are unavailable and when model structure and parameter estimates are unbiased. Based on the comparisons performed in this report, it is concluded that the use of the generic default parameters in Regulatory Guide 1.109 will usually produce conservative dose estimates that exceed the 90th percentile of the predicted distribution of dose equivalents. An exception is the meat pathway for 137Cs, in which use of generic default values results in a dose estimate at the 24th percentile. Among the terrestrial pathways of exposure, the non-leafy vegetable pathway is the most important for 90Sr. For 90Sr, the parameters for soil retention, soil-to-plant transfer, and internal dosimetry contribute most significantly to the variability in the predicted dose for the combined exposure to all terrestrial pathways. For 137Cs, the meat transfer coefficient, the mass interception factor for pasture forage, and the ingestion dose factor are the most important parameters. The freshwater finfish bioaccumulation factor is the most important parameter for the dose prediction of 90Sr and 137Cs transported over the water-fish-man pathway.

  11. Statistical Dependence of Pipe Breaks on Explanatory Variables

    Directory of Open Access Journals (Sweden)

    Patricia Gómez-Martínez

    2017-02-01

    Full Text Available Aging infrastructure is the main challenge currently faced by water suppliers. Estimation of asset lifetimes requires reliable criteria to plan asset repair and renewal strategies. For this purpose, pipe break prediction is one of the most important inputs. This paper analyzes the statistical dependence of pipe breaks on explanatory variables, determining their optimal combination and quantifying their influence on failure prediction accuracy. A large set of registered data from the Madrid water supply network, managed by Canal de Isabel II, has been filtered, classified and studied. Several statistical Bayesian models have been built and validated from the available information with a technique that combines reference periods of time as well as geographical location. Statistical models of increasing complexity are built from zero up to five explanatory variables following two approaches: a set of independent variables or a combination of two joint variables plus an additional number of independent variables. With the aim of finding the variable combination that provides the most accurate prediction, models are compared following an objective validation procedure based on the model skill to predict the number of pipe breaks in a large set of geographical locations. As expected, model performance improves as the number of explanatory variables increases. However, the rate of improvement is not constant. Performance metrics improve significantly up to three variables, but the tendency is softened for higher order models, especially in trunk mains where performance is reduced. Slight differences are found between trunk mains and distribution lines when selecting the most influential variables and models.

  12. The effect of aquatic plyometric training with and without resistance on selected physical fitness variables among volleyball players

    Directory of Open Access Journals (Sweden)

    K. KAMALAKKANNAN

    2011-06-01

    Full Text Available The purpose of this study is to analyze the effect of aquatic plyometric training with and without the use of weights on selected physical fitness variables among volleyball players. To achieve the purpose of this study, 36 physically active undergraduate volleyball players between 18 and 20 years of age volunteered as participants. The participants were randomly categorized into three groups of 12 each: a control group (CG), an aquatic plyometric training with weight group (APTWG), and an aquatic plyometric training without weight group (APTWOG). The subjects of the control group were not exposed to any training. Both experimental groups underwent their respective experimental treatment for 12 weeks, 3 days per week and a single session on each day. Speed, endurance, and explosive power were measured as the dependent variables for this study. 36 days of experimental treatment was conducted for all the groups and pre- and post-test data were collected. The collected data were analyzed using an analysis of covariance (ANCOVA) followed by Scheffé's post hoc test. The results revealed significant differences between groups on all the selected dependent variables. This study demonstrated that aquatic plyometric training can be an effective means for improving speed, endurance, and explosive power in volleyball players.

  13. FCERI AND HISTAMINE METABOLISM GENE VARIABILITY IN SELECTIVE RESPONDERS TO NSAIDS

    Directory of Open Access Journals (Sweden)

    Gemma Amo

    2016-09-01

    Full Text Available The high-affinity IgE receptor (FcεRI) is a heterotetramer of three subunits: FcεRIα, FcεRIβ and FcεRIγ (αβγ2) encoded by three genes designated as FCER1A, FCER1B (MS4A2) and FCER1G, respectively. Recent evidence points to FCERI gene variability as a relevant factor in the risk of developing allergic diseases. Because FcεRI plays a key role in the events downstream of the triggering factors in the immunological response, we hypothesized that FCERI gene variants might be related to the risk of, or to the clinical response to, selective (IgE-mediated) non-steroidal anti-inflammatory drug (NSAID) hypersensitivity. From a cohort of 314 patients suffering from selective hypersensitivity to metamizole, ibuprofen, diclofenac, paracetamol, acetylsalicylic acid (ASA), propifenazone, naproxen, ketoprofen, dexketoprofen, etofenamate, aceclofenac, etoricoxib, dexibuprofen, indomethacin, oxyphenylbutazone or piroxicam, and 585 unrelated healthy controls that tolerated these NSAIDs, we analyzed the putative effects of the FCERI SNPs FCER1A rs2494262, rs2427837 and rs2251746; FCER1B rs1441586, rs569108 and rs512555; FCER1G rs11587213, rs2070901 and rs11421. Furthermore, in order to identify additional genetic markers which might be associated with the risk of developing selective NSAID hypersensitivity, or which may modify the putative association of FCERI gene variations with risk, we analyzed polymorphisms known to affect histamine synthesis or metabolism, such as rs17740607, rs2073440, rs1801105, rs2052129, rs10156191, rs1049742 and rs1049793 in the HDC, HNMT and DAO genes. No major genetic associations with risk or with clinical presentation, and no gene-gene interactions or gene-phenotype interactions (including age, gender, IgE concentration, antecedents of atopy, culprit drug or clinical presentation) were identified in patients. However, logistic regression analyses indicated that the presence of antecedents of atopy and the DAO SNP rs2052129 (GG

  14. A New Spectral Shape-Based Record Selection Approach Using Np and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Edén Bojórquez

    2013-01-01

    Full Text Available With the aim of improving code-based real record selection criteria, an approach inspired by a proxy parameter of spectral shape, named Np, is analyzed. The procedure is based on several objectives aimed at minimizing the record-to-record variability of the ground motions selected for seismic structural assessment. In order to select the best set of ground motion records to be used as input for nonlinear dynamic analysis, an optimization approach is applied using genetic algorithms focused on finding the set of records most compatible with a target spectrum and target Np values. The results of the new Np-based approach suggest that the real accelerograms obtained with this procedure reduce the scatter of the response spectra compared with the traditional approach; furthermore, the mean spectrum of the set of records is very similar to the target seismic design spectrum over the period range of interest, and at the same time, similar Np values are obtained for the selected records and the target spectrum.

  15. Automatic creation of LabVIEW network shared variables

    International Nuclear Information System (INIS)

    Kluge, T.; Schroeder, H.

    2012-01-01

    We are in the process of preparing the LabVIEW controlled system components of our Solid State Direct Drive experiments for the integration into a Supervisory Control And Data Acquisition (SCADA) or distributed control system. The predetermined route to this is the generation of LabVIEW network shared variables that can easily be exported by LabVIEW to the SCADA system using OLE for Process Control (OPC) or other means. Many repetitive tasks are associated with the creation of the shared variables and the required code. We are introducing an efficient and inexpensive procedure that automatically creates shared variable libraries and sets default values for the shared variables. Furthermore, LabVIEW controls are created that are used for managing the connection to the shared variable inside the LabVIEW code operating on the shared variables. The procedure takes as input an XML spreadsheet defining the required input. The procedure utilizes XSLT and LabVIEW scripting. In a later stage of the project the code generation can be expanded to also create code and configuration files that will become necessary in order to access the shared variables from the SCADA system of choice. (authors)
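
    Purely as an illustration of the input side of such a procedure (the real tool uses XSLT and LabVIEW scripting, which are not reproduced here), the sketch below parses a hypothetical XML variable definition and lists the shared variables with their default values.

```python
# Illustrative only: parse a hypothetical XML variable definition and list the
# shared variables with their default values. The real procedure uses XSLT and
# LabVIEW scripting to create the actual shared-variable library.
import xml.etree.ElementTree as ET

XML_INPUT = """
<library name="SSDD_Control">
  <variable name="TargetCurrent" type="DBL" default="0.0"/>
  <variable name="PulseEnable"   type="BOOL" default="false"/>
  <variable name="Status"        type="STRING" default="idle"/>
</library>
"""

root = ET.fromstring(XML_INPUT)
print(f"Shared variable library: {root.get('name')}")
for var in root.findall("variable"):
    print(f"  {var.get('name'):15s} type={var.get('type'):7s} default={var.get('default')}")
```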

  16. Joint High-Dimensional Bayesian Variable and Covariance Selection with an Application to eQTL Analysis

    KAUST Repository

    Bhadra, Anindya

    2013-04-22

    We describe a Bayesian technique to (a) perform a sparse joint selection of significant predictor variables and significant inverse covariance matrix elements of the response variables in a high-dimensional linear Gaussian sparse seemingly unrelated regression (SSUR) setting and (b) perform an association analysis between the high-dimensional sets of predictors and responses in such a setting. To search the high-dimensional model space, where both the number of predictors and the number of possibly correlated responses can be larger than the sample size, we demonstrate that a marginalization-based collapsed Gibbs sampler, in combination with spike and slab type of priors, offers a computationally feasible and efficient solution. As an example, we apply our method to an expression quantitative trait loci (eQTL) analysis on publicly available single nucleotide polymorphism (SNP) and gene expression data for humans where the primary interest lies in finding the significant associations between the sets of SNPs and possibly correlated genetic transcripts. Our method also allows for inference on the sparse interaction network of the transcripts (response variables) after accounting for the effect of the SNPs (predictor variables). We exploit properties of Gaussian graphical models to make statements concerning conditional independence of the responses. Our method compares favorably to existing Bayesian approaches developed for this purpose. © 2013, The International Biometric Society.
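
    A full SSUR collapsed Gibbs sampler is beyond a short sketch, but the following much-simplified stochastic-search variable selection (a continuous spike-and-slab Gibbs sampler for a single response with fixed noise variance, on synthetic data) illustrates the spike-and-slab idea the abstract refers to; it is not the authors' method.

```python
# A much-simplified stochastic-search variable-selection (SSVS) sketch for a
# single response with a continuous spike-and-slab prior and fixed noise
# variance -- not the paper's collapsed Gibbs sampler for the SSUR model with
# joint covariance selection. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, sigma2 = 200, 10, 1.0
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

tau0, tau1, pi = 0.01, 1.0, 0.2          # spike sd, slab sd, prior inclusion prob
gamma = np.ones(p, dtype=int)
incl = np.zeros(p)
XtX, Xty = X.T @ X, X.T @ y

for it in range(2000):
    # beta | gamma, y  ~  N(mu, Sigma)
    D_inv = np.diag(1.0 / np.where(gamma == 1, tau1**2, tau0**2))
    Sigma = np.linalg.inv(XtX / sigma2 + D_inv)
    mu = Sigma @ Xty / sigma2
    beta = rng.multivariate_normal(mu, Sigma)
    # gamma_j | beta_j  ~  Bernoulli(slab weight / (slab + spike weight))
    p1 = pi * stats.norm.pdf(beta, 0.0, tau1)
    p0 = (1 - pi) * stats.norm.pdf(beta, 0.0, tau0)
    gamma = rng.binomial(1, p1 / (p1 + p0))
    if it >= 500:                          # discard burn-in
        incl += gamma

print("posterior inclusion probabilities:", np.round(incl / 1500, 2))
```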

  17. A correctness proof of sorting by means of formal procedures

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1987-01-01

    We consider a recursive sorting algorithm in which, in each invocation, a new variable and a new procedure (using the variable globally) are defined and the procedure is passed to recursive calls. This algorithm is proved correct with Hoare-style pre- and postassertions. We also discuss the same

  18. 47 CFR 1.1604 - Post-selection hearings.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...

  19. Motivation of medical students: selection by motivation or motivation by selection.

    Science.gov (United States)

    Wouters, Anouk; Croiset, Gerda; Galindo-Garre, Francisca; Kusurkar, Rashmi A

    2016-01-29

    Medical schools try to implement selection procedures that will allow them to select the most motivated students for their programs. Though there is a general feeling that selection stimulates student motivation, conclusive evidence for this is lacking. The current study aims to use the perspective of Self-determination Theory (SDT) of motivation as a lens to examine how medical students' motivation differs in relation to different selection procedures. The hypotheses were that 1) selected students report higher strength and autonomous motivation than non-selected students, and 2) recently selected students report higher strength and autonomous motivation than non-selected students and students who were selected longer ago. First- (Y1) and fourth-year (Y4) medical students in the six-year regular programme and first-year students in the four-year graduate entry programme (GE) completed questionnaires measuring motivation strength and type (autonomous-AM, controlled-CM). Scores were compared between students admitted based on selection, lottery or top pre-university GPA (top GPA) using ANCOVAs. Selected students' answers on open-ended questions were analysed using inductive thematic analysis to identify reasons for changes in motivation. The response rate was 61.4 % (n = 357). Selected students (Y1, Y4 and GE) reported a significantly higher strength of motivation than non-selected students (Y1 and Y4 lottery and top GPA) (p motivation as they felt autonomous, competent and that they belonged to a special group. These reported reasons are in alignment with the basic psychological needs described by Self-Determination Theory as important in enhancing autonomous motivation. A comprehensive selection procedure, compared to less demanding admission procedures, does not seem to yield a student population which stands out in terms of autonomous motivation. The current findings indicate that selection might temporarily enhance students' motivation. The mechanism

  20. Simultaneous grouping and ranking with combination of SOM and TOPSIS for selection of preferable analytical procedure for furan determination in food.

    Science.gov (United States)

    Jędrkiewicz, Renata; Tsakovski, Stefan; Lavenu, Aurore; Namieśnik, Jacek; Tobiszewski, Marek

    2018-02-01

    A novel methodology for grouping and ranking with application of self-organizing maps and multicriteria decision analysis is presented. The dataset consists of 22 objects that are analytical procedures applied to furan determination in food samples. They are described by 10 variables referring to their analytical performance and to environmental and economic aspects. Multivariate statistical analysis makes it possible to limit the amount of input data for the ranking analysis. Assessment results show that the most beneficial procedures are based on microextraction techniques with GC-MS final determination. It is shown how the information obtained from the two tools complements each other. The applicability of the combination of grouping and ranking is also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
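
    The multicriteria ranking step can be illustrated with a minimal TOPSIS computation; the decision matrix, weights and criterion directions below are invented, and the SOM grouping step of the paper is omitted.

```python
# Minimal TOPSIS sketch on a synthetic decision matrix (procedures x criteria).
# The paper's SOM grouping step and its real 22 x 10 data set are not reproduced.
import numpy as np

# rows: candidate procedures, columns: criteria (e.g. LOD, recovery, solvent use)
M = np.array([[0.5, 95.0, 10.0],
              [0.2, 90.0, 25.0],
              [0.8, 99.0,  5.0],
              [0.4, 92.0, 15.0]])
weights = np.array([0.4, 0.3, 0.3])
benefit = np.array([False, True, False])   # True = higher is better

R = M / np.linalg.norm(M, axis=0)          # vector normalisation
V = R * weights                            # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti,  axis=1)
closeness = d_neg / (d_pos + d_neg)        # 1 = best, 0 = worst

for rank, i in enumerate(np.argsort(-closeness), start=1):
    print(f"rank {rank}: procedure {i}  closeness = {closeness[i]:.3f}")
```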

  1. Downscaling reanalysis data to high-resolution variables above a glacier surface (Cordillera Blanca, Peru)

    Science.gov (United States)

    Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg

    2010-05-01

    Recently initiated observation networks in the Cordillera Blanca provide temporally high-resolution, yet short-term atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly NCEP/NCAR reanalysis data to the local target variables, measured at the tropical glacier Artesonraju (Northern Cordillera Blanca). The approach is particular in the context of ESD for two reasons. First, the observational time series for model calibration are short (only about two years). Second, unlike most ESD studies in climate research, we focus on variables at a high temporal resolution (i.e., six-hourly values). Our target variables are two important drivers in the surface energy balance of tropical glaciers: air temperature and specific humidity. The selection of predictor fields from the reanalysis data is based on regression analyses and climatological considerations. The ESD modelling procedure includes combined empirical orthogonal function and multiple regression analyses. Principal component screening is based on cross-validation using the Akaike Information Criterion as the model selection criterion. Double cross-validation is applied for model evaluation. Potential autocorrelation in the time series is considered by defining the block length in the resampling procedure. Apart from the selection of predictor fields, the modelling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice by using both single- and mixed-field predictors of the variables air temperature (1000 hPa), specific humidity (1000 hPa), and zonal wind speed (500 hPa). The chosen downscaling domain ranges from 80 to 50 degrees west and from 0 to 20 degrees south. Statistical transfer functions are derived individually for different months and times of day (month/hour-models). The forecast skill of the month/hour-models largely depends on
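
    A toy version of the combined EOF (principal component) and multiple-regression step with AIC-based component screening might look as follows; the data are synthetic, and the month/hour stratification and double cross-validation of the paper are not reproduced.

```python
# Toy version of the EOF + multiple-regression step: regress a local target
# variable on leading principal components of a predictor field and pick the
# number of components by AIC. Synthetic data only.
import numpy as np

rng = np.random.default_rng(2)
n, ngrid = 500, 40                          # 6-hourly samples x grid points
field = rng.normal(size=(n, ngrid))         # stand-in for a reanalysis predictor field
target = field[:, :3] @ np.array([0.8, -0.5, 0.3]) + 0.4 * rng.normal(size=n)

# EOF analysis = PCA of the (centred) predictor field
Fc = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
pcs = U * s                                 # principal-component time series

def aic(y, yhat, k):
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k      # Gaussian AIC up to a constant

best = None
for k in range(1, 11):                      # screen the first 10 PCs
    Xk = np.column_stack([np.ones(n), pcs[:, :k]])
    coef, *_ = np.linalg.lstsq(Xk, target, rcond=None)
    score = aic(target, Xk @ coef, k + 1)
    if best is None or score < best[1]:
        best = (k, score)
print(f"AIC selects {best[0]} principal components")
```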

  2. Angular scanning and variable wavelength surface plasmon resonance allowing free sensor surface selection for optimum material- and bio-sensing

    NARCIS (Netherlands)

    Lakayan, Dina; Tuppurainen, Jussipekka; Albers, Martin; van Lint, Matthijs J.; van Iperen, Dick J.; Weda, Jelmer J.A.; Kuncova-Kallio, Johana; Somsen, Govert W.; Kool, Jeroen

    2018-01-01

    A variable-wavelength Kretschmann configuration surface plasmon resonance (SPR) apparatus with angle scanning is presented. The setup provides the possibility of selecting the optimum wavelength with respect to the properties of the metal layer of the sensorchip, sample matrix, and biomolecular

  3. Exploration and safety evaluations of salt formations and site selection procedures; Erkundung und Sicherheitsbewertung von Salzformationen und Standortauswahlverfahren

    Energy Technology Data Exchange (ETDEWEB)

    Krapf, Eva Barbara

    2016-12-12

    In 2011 the final decision to withdraw from the nuclear energy program was taken in the Federal Republic of Germany. The majority of the radioactive waste produced originates from the operation as well as the decommissioning and dismantling of nuclear facilities. The long-term containment of especially heat-generating and high-level waste in an underground disposal facility is pursued. The Site Selection Act (StandAG), passed in 2013, defined further procedural steps as well as responsibilities and the form of public participation during the site selection. In this context the newly founded Commission on the Storage of Highly Radioactive Waste was assigned the task of giving relevant recommendations based on its investigation of specific aspects and fundamental questions. The objective of this procedure is the selection of the site that can provide the best possible safety for humans and the environment during the defined period of one million years. The Commission's final report was published in July 2016. In this thesis a possible approach for exploring sites in connection with safety investigations is recommended. The site selection procedure described in the StandAG represents the basis for the considerations. Geoscientific exclusion criteria, minimum requirements as well as weighing criteria can be developed with regard to the relevant geoscientific and climatic changes during the defined period of one million years. In contrast to the recommendations made by the Commission on the Storage of Highly Radioactive Waste, no previously existing report has been revised and adapted. Rather, all issues relevant to the long-term containment of radioactive waste in a disposal facility have been newly developed. The considerations relate to salt domes as host rock. Furthermore, according to the StandAG, preliminary safety investigations are required in every step of the site selection. The recommendations made in this thesis concerning content and feasibility of

  4. An experiment on selecting most informative variables in socio-economic data

    Directory of Open Access Journals (Sweden)

    L. Jenkins

    2014-01-01

    Full Text Available In many studies where data are collected on several variables, there is a motivation to find if fewer variables would provide almost as much information. Variance of a variable about its mean is the common statistical measure of information content, and that is used here. We are interested in whether the variability in one variable is sufficiently correlated with that in one or more of the other variables that the first variable is redundant. We wish to find one or more ‘principal variables’ that sufficiently reflect the information content in all the original variables. The paper explains the method of principal variables and reports experiments using the technique to see if just a few variables are sufficient to reflect the information in 11 socioeconomic variables on 130 countries from a World Bank (WB database. While the method of principal variables is highly successful in a statistical sense, the WB data varies greatly from year to year, demonstrating that fewer variables would be inadequate for this data.
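
    A simple greedy stand-in for the principal-variables idea (not necessarily the exact algorithm used in the paper) is sketched below: at each step the variable is added that best explains the total variance of all the standardized variables.

```python
# Greedy stand-in for the "principal variables" idea: repeatedly add the
# variable that best explains the total variance of all variables via linear
# regression on the selected set. Synthetic data; this is not the exact
# algorithm used in the paper.
import numpy as np

rng = np.random.default_rng(3)
n, p = 130, 11                               # e.g. 130 countries x 11 indicators
latent = rng.normal(size=(n, 3))
data = latent @ rng.normal(size=(3, p)) + 0.3 * rng.normal(size=(n, p))
Z = (data - data.mean(axis=0)) / data.std(axis=0)

selected, remaining = [], list(range(p))
for _ in range(3):                           # pick three principal variables
    best = None
    for j in remaining:
        S = Z[:, selected + [j]]
        # regress every column of Z on the candidate set, measure explained variance
        beta, *_ = np.linalg.lstsq(S, Z, rcond=None)
        explained = 1.0 - np.sum((Z - S @ beta) ** 2) / np.sum(Z ** 2)
        if best is None or explained > best[1]:
            best = (j, explained)
    selected.append(best[0])
    remaining.remove(best[0])
    print(f"selected variable {best[0]}, cumulative explained fraction = {best[1]:.3f}")
```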

  5. Materials selection in mechanical design

    International Nuclear Information System (INIS)

    Ashby, M.F.; Cebon, D.

    1993-01-01

    A novel materials-selection procedure has been developed and implemented in software. The procedure makes use of Materials Selection Charts: a new way of displaying material property data; and performance indices: combinations of material properties which govern performance. Optimisation methods are employed for simultaneous selection of both material and shape. (orig.)

  6. Materials selection in mechanical design

    OpenAIRE

    Ashby , M.; Cebon , D.

    1993-01-01

    A novel materials-selection procedure has been developed and implemented in software. The procedure makes use of Materials Selection Charts: a new way of displaying material property data; and performance indices: combinations of material properties which govern performance. Optimisation methods are employed for simultaneous selection of both material and shape.

  7. Realism of procedural task trainers in a pediatric emergency medicine procedures course

    Directory of Open Access Journals (Sweden)

    Allan Shefrin

    2015-04-01

    Conclusions: Task training models utilized in our course received variable realism ratings. When deciding what type of task trainer to use, future courses should carefully consider the desired aspect of realism and how it aligns with the procedural skill, balanced against cost considerations.

  8. Radiographic implications of procedures involving cardiac implantable electronic devices (CIEDs – Selected aspects

    Directory of Open Access Journals (Sweden)

    Roman Steckiewicz

    2017-06-01

    Full Text Available Background: Some cardiac implantable electronic device (CIED) implantation procedures require the use of X-rays, which is reflected by such parameters as total fluoroscopy time (TFT) and dose-area product (DAP) – defined as the absorbed dose multiplied by the area irradiated. Material and Methods: This retrospective study evaluated 522 CIED implantation procedures (424 de novo and 98 device upgrade and new lead placement procedures) in 176 women and 346 men (mean age 75±11 years) over the period 2012–2015. The recorded procedure-related parameters TFT and DAP were evaluated in the subgroups specified below. The group of 424 de novo procedures included 203 pacemaker (PM) and 171 implantable cardioverter-defibrillator (ICD) implantation procedures, separately stratified by single-chamber and dual-chamber systems. Another subgroup of de novo procedures involved 50 cardiac resynchronization therapy (CRT) devices. The evaluated parameters in the group of 98 upgrade procedures were compared between 2 subgroups: CRT only and combined PM and ICD implantation procedures. Results: We observed differences in TFT and DAP values between procedure types, with PM-related procedures showing the lowest, ICD – intermediate (with values for single-chamber considerably lower than those for dual-chamber systems) and CRT implantation procedures – highest X-ray exposure. Upgrades to CRT were associated with 4 times higher TFT and DAP values in comparison to those during other upgrade procedures. Cardiac resynchronization therapy de novo implantation procedures and upgrades to CRT showed similar mean values of these evaluated parameters. Conclusions: Total fluoroscopy time and DAP values correlated progressively with CIED implantation procedure complexity, with CRT-related procedures showing the highest values of both parameters. Med Pr 2017;68(3):363–374

  9. Variable Selection for Nonparametric Gaussian Process Priors: Models and Computational Strategies.

    Science.gov (United States)

    Savitsky, Terrance; Vannucci, Marina; Sha, Naijun

    2011-02-01

    This paper presents a unified treatment of Gaussian process models that extends to data from the exponential dispersion family and to survival data. Our specific interest is in the analysis of data sets with predictors that have an a priori unknown form of possibly nonlinear associations to the response. The modeling approach we describe incorporates Gaussian processes in a generalized linear model framework to obtain a class of nonparametric regression models where the covariance matrix depends on the predictors. We consider, in particular, continuous, categorical and count responses. We also look into models that account for survival outcomes. We explore alternative covariance formulations for the Gaussian process prior and demonstrate the flexibility of the construction. Next, we focus on the important problem of selecting variables from the set of possible predictors and describe a general framework that employs mixture priors. We compare alternative MCMC strategies for posterior inference and achieve a computationally efficient and practical approach. We demonstrate performances on simulated and benchmark data sets.

  10. Required number of records for ASCE/SEI 7 ground-motion scaling procedure

    Science.gov (United States)

    Reyes, Juan C.; Kalkan, Erol

    2011-01-01

    -motions, it is demonstrated that the ASCE/SEI 7 scaling procedure is overly conservative if fewer than seven ground-motions are employed. Utilizing seven or more randomly selected records provides a more accurate estimate of the EDPs accompanied by reduced record-to-record variability of the responses. Consistency in accuracy and efficiency is achieved only if records are selected on the basis of their spectral shape and A(Tn).

  11. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework

    Directory of Open Access Journals (Sweden)

    Benjamin Hofner

    2016-10-01

    Full Text Available Generalized additive models for location, scale and shape are a flexible class of regression models that allow modeling multiple parameters of a distribution function, such as the mean and the standard deviation, simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we use a data set on stunted growth in India. In addition to the specification and application of the model itself, we present a variety of convenience functions, including methods for tuning parameter selection, prediction and visualization of results. The package gamboostLSS is available from the Comprehensive R Archive Network (CRAN at https://CRAN.R-project.org/package=gamboostLSS.

  12. A comparison of procedures to select important variables for describing datasets

    Czech Academy of Sciences Publication Activity Database

    Andrade, J. M.; Holík, M.; Halámek, Josef

    2004-01-01

    Roč. 63, č. 4 (2004), s. 865-872 ISSN 0039-9140 R&D Projects: GA ČR GA102/02/0553 Keywords : procrustes rotation * robustness * multicollinearity Subject RIV: BD - Theory of Information Impact factor: 2.532, year: 2004

  13. Procedural violation in the licensing procedure and possible legal consequences; Verfahrensmaengel im Konzessionierungsverfahren und etwaige Rechtsfolgen

    Energy Technology Data Exchange (ETDEWEB)

    Meyer-Hetling, Astrid; Probst, Matthias Ernst; Wolkenhauer, Soeren [Kanzlei Becker Buettner Held (BBH), Berlin (Germany)

    2012-07-15

    Under paragraph 46 sect. 2 to 4 EnWG (Energy Economy Law), communities are required to provide a publication and competition procedure ('licensing procedure') for the new award of easement agreements for the establishment of local power supply systems and natural gas supply systems. The specific design of the selection process is regulated by law only rudimentarily. Nevertheless, old concessionaires increasingly refuse the statutory grid transfer to the new concessionaires, relying on supposed errors in the selection process. The unclear legal situation and the inconsistent, sometimes unreasonably strict decisions of the courts and of the antitrust and regulatory authorities have resulted in considerable legal uncertainty among communities and grid operators. Until the legislature establishes the necessary legal clarity, the competent courts and authorities are called upon to act with moderation when examining licensing procedures.

  14. Ultrasound in the diagnosis and treatment of developmental dysplasia of the hip. Evaluation of a selective screening procedure

    DEFF Research Database (Denmark)

    Strandberg, C.; Konradsen, L.A.; Ellitsgaard, N.

    2008-01-01

    INTRODUCTION: With the intention of reducing the treatment frequency of Developmental Dysplasia of the Hip (DDH), two hospitals in Copenhagen implemented a screening and treatment procedure based on selective referral to ultrasonography of the hip (US). This paper describes and evaluates...... 0.03%. No relationship was seen between morphological parameters at the first US and the outcome of hips classified as minor dysplastic or not fully developed (NFD). A statistically significant relationship was seen between the degree of dysplasia and the time until US normalization of the hips (p......= 0.02). There was no relapse of dysplasia after treatment. The median duration of treatment was six, eight and nine weeks for mild, moderate and severe dysplasia respectively. CONCLUSION: The procedure resulted in a low rate of treatment and a small number of late diagnosed cases. Prediction...

  15. New non-cognitive procedures for medical applicant selection: a qualitative analysis in one school.

    Science.gov (United States)

    Katz, Sara; Vinker, Shlomo

    2014-11-07

    Recent data have called into question the reliability and predictive validity of standard admission procedures to medical schools. Eliciting non-cognitive attributes of medical school applicants using qualitative tools and methods has thus become a major challenge. 299 applicants aged 18-25 formed the research group. A set of six research tools was developed in addition to the two existing ones. These included: a portfolio task, an intuitive task, a cognitive task, a personal task, an open self-efficacy questionnaire and field-notes. The criteria-based methodology design used constant comparative analysis and grounded theory techniques to produce a personal attributes profile per participant, scored on a 5-point scale holistic rubric. Qualitative validity of data gathering was checked by comparing the profiles elicited from the existing interview against the profiles elicited from the other tools, and by comparing two profiles of each of the applicants who handed in two portfolio tasks. Qualitative validity of data analysis was checked by comparing researcher results with those of an external rater (n = 10). Differences between aggregated profile groups were checked by the Npar Wilcoxon Signed Ranks Test and by the Spearman Rank Order Correlation Test. All subjects gave written informed consent to their participation. Privacy was protected by using code numbers. A concept map of 12 personal attributes emerged, the core constructs of which were motivation, sociability and cognition. A personal profile was elicited. Inter-rater agreement was 83.3%. Differences between groups by aggregated profiles were found significant (p < .05, p < .01, p < .001). A random sample of sixth year students (n = 12) underwent the same admission procedure as the research group. Rank order was different, and arrogance was a new construct elicited in the sixth year group. This study suggests a broadening of the methodology for selecting medical school applicants. This methodology

  16. Improving observational study estimates of treatment effects using joint modeling of selection effects and outcomes: the case of AAA repair.

    Science.gov (United States)

    O'Malley, A James; Cotterill, Philip; Schermerhorn, Marc L; Landon, Bruce E

    2011-12-01

    When 2 treatment approaches are available, there are likely to be unmeasured confounders that influence choice of procedure, which complicates estimation of the causal effect of treatment on outcomes using observational data. To estimate the effect of endovascular (endo) versus open surgical (open) repair, including possible modification by institutional volume, on survival after treatment for abdominal aortic aneurysm, accounting for observed and unobserved confounding variables. Observational study of data from the Medicare program using a joint model of treatment selection and survival given treatment to estimate the effects of type of surgery and institutional volume on survival. We studied 61,414 eligible repairs of intact abdominal aortic aneurysms during 2001 to 2004. The outcome, perioperative death, is defined as in-hospital death or death within 30 days of operation. The key predictors are use of endo, transformed endo and open volume, and endo-volume interactions. There is strong evidence of nonrandom selection of treatment with potential confounding variables including institutional volume and procedure date, variables not typically adjusted for in clinical trials. The best fitting model included heterogeneous transformations of endo volume for endo cases and open volume for open cases as predictors. Consistent with our hypothesis, accounting for unmeasured selection reduced the mortality benefit of endo. The effect of endo versus open surgery varies nonlinearly with endo and open volume. Accounting for institutional experience and unmeasured selection enables better decision-making by physicians making treatment referrals, investigators evaluating treatments, and policy makers.

  17. 47 CFR 1.1602 - Designation for random selection.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...

  18. 47 CFR 1.1603 - Conduct of random selection.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...

  19. Ion-selective electrode reviews

    CERN Document Server

    Thomas, J D R

    1982-01-01

    Ion-Selective Electrode Reviews, Volume 3, provides a review of articles on ion-selective electrodes (ISEs). The volume begins with an article on methods based on titration procedures for surfactant analysis, which have been developed for discrete batch operation and for continuous AutoAnalyser use. Separate chapters deal with detection limits of ion-selective electrodes; the possibility of using inorganic ion-exchange materials as ion-sensors; and the effect of solvent on potentials of cells with ion-selective electrodes. Also included is a chapter on advances in calibration procedures, the d

  20. Pan endoscopic approach "hysterolaparoscopy" as an initial procedure in selected infertile women.

    Science.gov (United States)

    Vaid, Keya; Mehra, Sheila; Verma, Mita; Jain, Sandhya; Sharma, Abha; Bhaskaran, Sruti

    2014-02-01

    normal uterine cavity. When these 112 women (58.03%) with a normal HSG report were further subjected to hysterolaparoscopy, only 35/193 (18.13%) of them actually had normal tubes and uterus; the remaining 77 women (39.89%) benefited from the one-step procedure of hysterolaparoscopic evaluation and intervention, with further treatment given. The hysterolaparoscopic (pan-endoscopic) approach is better than HSG and should be encouraged as the first and final procedure in selected infertile women.

  1. VARIABILITY OF AMYLOSE AND AMYLOPECTIN IN WINTER WHEAT AND SELECTION FOR SPECIAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Nikolina Weg Krstičević

    2015-06-01

    Full Text Available The aim of this study was to investigate the variability of amylose and amylopectin in 24 Croatian and six foreign winter wheat varieties and to detect the potential of these varieties for special purposes. Starch composition analysis was based on the separation of amylose and amylopectin and the determination of their amounts and ratios. Analysis of the amounts of amylose and amylopectin revealed statistically highly significant differences between the varieties. The tested varieties are mostly bread wheats of different quality with the usual content of amylose and amylopectin. Among them, some varieties with high amylopectin and low amylose content, and one variety with high amylose content, were identified. These have potential for future breeding programs and selection for special purposes.

  2. Evaluation of selection procedures of an international school | O ...

    African Journals Online (AJOL)

    Consequently the current admission procedures used by a southern African international school were ... The Culture-Fair Intelligence Test (Scale 2 Form A) appeared to have more predictive value than the MAT-SF for academic achievement.

  3. 5 CFR 720.206 - Selection guidelines.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Selection guidelines. 720.206 Section 720... guidelines. This subpart sets forth requirements for a recruitment program, not a selection program... procedures and criteria must be consistent with the Uniform Guidelines on Employee Selection Procedures (43...

  4. Habitat Heterogeneity Variably Influences Habitat Selection by Wild Herbivores in a Semi-Arid Tropical Savanna Ecosystem.

    Directory of Open Access Journals (Sweden)

    Victor K Muposhi

    Full Text Available An understanding of the habitat selection patterns by wild herbivores is critical for adaptive management, particularly towards ecosystem management and wildlife conservation in semi-arid savanna ecosystems. We tested the following predictions: (i) surface water availability, habitat quality and human presence have a strong influence on the spatial distribution of wild herbivores in the dry season, (ii) habitat suitability for large herbivores would be higher compared to medium-sized herbivores in the dry season, and (iii) the spatial extent of suitable habitats for wild herbivores will be different between years, i.e., 2006 and 2010, in Matetsi Safari Area, Zimbabwe. MaxEnt modeling was done to determine the habitat suitability of large herbivores and medium-sized herbivores. MaxEnt modeling of habitat suitability for large herbivores using the environmental variables was successful for the selected species in 2006 and 2010, except for elephant (Loxodonta africana) for the year 2010. Overall, large herbivores' probability of occurrence was mostly influenced by distance from rivers. Distance from roads influenced much of the variability in the probability of occurrence of medium-sized herbivores. The overall predicted area for large and medium-sized herbivores was not different. Large herbivores may not necessarily utilize larger habitat patches over medium-sized herbivores due to the habitat homogenizing effect of water provisioning. The effect of surface water availability, proximity to riverine ecosystems and roads on habitat suitability of large and medium-sized herbivores in the dry season was highly variable and could thus change from one year to another. We recommend adaptive management initiatives aimed at ensuring dynamic water supply in protected areas through the temporal closure and/or opening of water points to promote heterogeneity of wildlife habitats.

  5. A rational procedure for the selection of appropriate procurement ...

    African Journals Online (AJOL)

    Construction work is procured via a number of systems, such as being Open and ... routes for construction projects as a means to enhance the quality of management ... systems to decision-making procedures in the construction industry.

  6. Information Overload in Multi-Stage Selection Procedures

    NARCIS (Netherlands)

    S.S. Ficco (Stefano); V.A. Karamychev (Vladimir)

    2004-01-01

    textabstractThe paper studies information processing imperfections in a fully rational decision-making network. It is shown that imperfect information transmission and imperfect information acquisition in a multi-stage selection game yield information overload. The paper analyses the mechanisms

  7. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Directory of Open Access Journals (Sweden)

    C. Fernandez-Lozano

    2013-01-01

    Full Text Available Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
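
    A generic sketch of this GA-plus-SVM idea (binary variable masks evolved with cross-validated SVM accuracy as the fitness function) is shown below on synthetic data; the population size, operators and parameters are illustrative assumptions, not the authors' settings.

```python
# Sketch of GA-based variable selection with an SVM fitness function: binary
# masks are evolved, fitness = cross-validated SVM accuracy on the selected
# variables. Synthetic data, tiny population; not the authors' exact settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           n_redundant=2, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))          # initial population of masks
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(-scores)[:10]]               # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05              # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected variables:", np.flatnonzero(best))
```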

  8. Expectancy bias in a selective conditioning procedure: trait anxiety increases the threat value of a blocked stimulus.

    Science.gov (United States)

    Boddez, Yannick; Vervliet, Bram; Baeyens, Frank; Lauwers, Stephanie; Hermans, Dirk; Beckers, Tom

    2012-06-01

    In a blocking procedure, a single conditioned stimulus (CS) is paired with an unconditioned stimulus (US), such as electric shock, in the first stage. During the subsequent stage, the CS is presented together with a second CS and this compound is followed by the same US. Fear conditioning studies in non-human animals have demonstrated that fear responding to the added second CS typically remains low, despite its being paired with the US. Accordingly, the blocking procedure is well suited as a laboratory model for studying (deficits in) selective threat appraisal. The present study tested the relation between trait anxiety and blocking in human aversive conditioning. Healthy participants filled in a trait anxiety questionnaire and underwent blocking treatment in the human aversive conditioning paradigm. Threat appraisal was measured through shock expectancy ratings and skin conductance. As hypothesized, trait anxiety was positively associated with shock expectancy ratings to the blocked stimulus. In skin conductance responding, no significant effects of stimulus type could be detected during blocking training or testing. The current study does not allow strong claims to be made regarding the theoretical process underlying the expectancy bias we observed. The observed shock expectancy bias might be one of the mechanisms leading to non-specific fear in individuals at risk for developing anxiety disorders. A deficit in blocking, or a deficit in selective threat appraisal at the more general level, indeed results in fear becoming non-specific and disconnected from the most likely causes or predictors of danger. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Neuronal Intra-Individual Variability Masks Response Selection Differences between ADHD Subtypes—A Need to Change Perspectives

    Directory of Open Access Journals (Sweden)

    Annet Bluschke

    2017-06-01

    Full Text Available Due to the high intra-individual variability in attention deficit/hyperactivity disorder (ADHD), there may be considerable bias in knowledge about altered neurophysiological processes underlying executive dysfunctions in patients with different ADHD subtypes. When aiming to establish dimensional cognitive-neurophysiological constructs representing symptoms of ADHD as suggested by the initiative for Research Domain Criteria, it is crucial to consider such processes independent of variability. We examined patients with the predominantly inattentive subtype (attention deficit disorder, ADD) and the combined subtype of ADHD (ADHD-C) in a flanker task measuring conflict control. Groups were matched for task performance. Besides using classic event-related potential (ERP) techniques and source localization, neurophysiological data was also analyzed using residue iteration decomposition (RIDE) to statistically account for intra-individual variability and S-LORETA to estimate the sources of the activations. The analysis of classic ERPs related to conflict monitoring revealed no differences between patients with ADD and ADHD-C. When individual variability was accounted for, clear differences became apparent in the RIDE C-cluster (analog to the P3 ERP component). While patients with ADD distinguished between compatible and incompatible flanker trials early on, patients with ADHD-C seemed to employ more cognitive resources overall. These differences are reflected in inferior parietal areas. The study demonstrates differences in neuronal mechanisms related to response selection processes between ADD and ADHD-C which, according to source localization, arise from the inferior parietal cortex. Importantly, these differences could only be detected when accounting for intra-individual variability. The results imply that it is very likely that differences in neurophysiological processes between ADHD subtypes are underestimated and have not been recognized because intra

  10. Robotic vascular resections during Whipple procedure

    OpenAIRE

    Allan, Bassan J.; Novak, Stephanie M.; Hogg, Melissa E.; Zeh, Herbert J.

    2018-01-01

    Indications for resection of pancreatic cancers have evolved to include selected patients with involvement of peri-pancreatic vascular structures. Open Whipple procedures have been the standard approach for patients requiring reconstruction of the portal vein (PV) or superior mesenteric vein (SMV). Recently, high-volume centers are performing minimally invasive Whipple procedures with portovenous resections. Our institution has performed seventy robotic Whipple procedures with concomitant vas...

  11. A simplified procedure of linear regression in a preliminary analysis

    Directory of Open Access Journals (Sweden)

    Silvia Facchinetti

    2013-05-01

    Full Text Available The analysis of a large statistical data-set can be guided by the study of a variable of particular interest Y – the regressand – and an explanatory variable X, chosen among the remaining variables, observed jointly. The study gives a simplified procedure to obtain the functional link between the variables, y = y(x), by partitioning the data-set into m subsets in which the observations are synthesized by location indices (mean or median) of X and Y. Polynomial models for y(x) of order r are considered to verify the characteristics of the given procedure; in particular we assume r = 1 and 2. The distributions of the parameter estimators are obtained by simulation when the fitting is done for m = r + 1. Comparisons of the results, in terms of distribution and efficiency, are made with the results obtained by ordinary least squares methods. The study also gives some considerations on the consistency of the estimated parameters obtained by the given procedure.
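
    For the straight-line case (r = 1), the procedure reduces to passing a line through the mean points of m = 2 subsets; a minimal sketch on synthetic data, compared with ordinary least squares, is given below.

```python
# Compact illustration of the described procedure for a straight line (r = 1):
# sort by X, split into m = r + 1 = 2 groups, summarise each group by its mean
# point, and pass the line through the two mean points. Compared with OLS.
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 500)
y = 2.0 + 1.5 * x + rng.normal(scale=2.0, size=500)

order = np.argsort(x)
groups = np.array_split(order, 2)                 # m = r + 1 = 2 subsets
mx = [x[g].mean() for g in groups]
my = [y[g].mean() for g in groups]
slope = (my[1] - my[0]) / (mx[1] - mx[0])
intercept = my[0] - slope * mx[0]

b1, b0 = np.polyfit(x, y, 1)                      # ordinary least squares
print(f"group-mean fit: y = {intercept:.2f} + {slope:.2f} x")
print(f"OLS fit:        y = {b0:.2f} + {b1:.2f} x")
```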

  12. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Full Text Available Sustainable science and technology development is also conditioned by the continuous development of the means of production, which have a key role in the structure of each production system. The mechanical nature of the means of production is complemented by control and electronic devices in the context of intelligent industry. The selection of production machines for a technological process or project has so far been resolved in practice, often only intuitively. With increasing intelligence, the number of variable parameters that have to be considered when choosing a production device is also increasing. During the selection it is necessary to use computing techniques and decision-making methods, ranging from heuristic methods to more precise methodological procedures. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  13. A flow system for generation of concentration perturbation in two-dimensional correlation near-infrared spectroscopy: application to variable selection in multivariate calibration.

    Science.gov (United States)

    Pereira, Claudete Fernandes; Pasquini, Celio

    2010-05-01

    A flow system is proposed to produce a concentration perturbation in liquid samples, aiming at the generation of two-dimensional correlation near-infrared spectra. The system presents advantages in relation to batch systems employed for the same purpose: the experiments are accomplished in a closed system; application of perturbation is rapid and easy; and the experiments can be carried out with micro-scale volumes. The perturbation system has been evaluated in the investigation and selection of relevant variables for multivariate calibration models for the determination of quality parameters of gasoline, including ethanol content, MON (motor octane number), and RON (research octane number). The main advantage of this variable selection approach is the direct association between spectral features and chemical composition, allowing easy interpretation of the regression models.

  14. Assessment of acute pesticide toxicity with selected biochemical variables in suicide attempting subjects

    International Nuclear Information System (INIS)

    Soomro, A.M.; Seehar, G.M.; Bhanger, M.I.

    2003-01-01

    Pesticide-induced changes were assessed in thirty-two subjects of attempted suicide cases. Among all, farmers and their families were recorded as attempting suicide most frequently. The values obtained for seven biochemical variables in the hospitalized subjects (average age 29 years) were compared to the same number of age-matched normal volunteers. The results revealed major differences in the mean values of the selected parameters. The calculated mean differences for alkaline phosphatase (178.7 mu/l), bilirubin (7.5 mg/dl), GPT (59.2 mu/l) and glucose (38.6 mg/dl) were higher than in the controls, which indicates hepatotoxicity induced by the pesticides in the suicide-attempting individuals. Increases in serum creatinine and urea indicated renal malfunction that could be linked with pesticide-induced nephrotoxicity among them. (author)

  15. Gorleben. Waste management site based on an appropriate selection procedure

    International Nuclear Information System (INIS)

    Tiggemann, Anselm

    2010-01-01

    On February 22, 1977, the Lower Saxony state government decided in favor of Gorleben as a ''preliminary'' site of a ''potential'' facility for managing the back end of the fuel cycle of the nuclear power plants in the Federal Republic of Germany. The Lower Saxony files, closed until recently, now allow both the factual basis and the political background to be reconstructed comprehensively. The first selection procedure, financed by the federal government, for the site of a ''nuclear waste management center,'' which had been conducted by Kernbrennstoff-Wiederaufarbeitungsgesellschaft (KEWA) in 1974, had not considered Gorleben in any detail. As early as in the winter of 1975/76, Gorleben and a number of other potential sites were indicated to KEWA by the Lower Saxony State Ministry of Economics. The new finding is KEWA's conclusion of 1976 that Gorleben surpassed all potential sites examined so far in terms of suitability. As a consequence, Gorleben was regarded as an alternative alongside the 3 sites favored before, i.e. Wahn, Lutterloh, and Lichtenhorst, when the 3 Federal Ministers, Hans Matthoefer (SPD), Werner Maihofer (F.D.P.), and Hans Friderichs (F.D.P.), discussed the nuclear waste management project with Minister President Albrecht (CDU) in November 1976. The Lower Saxony State Cabinet commissioned an interministerial working party (IMAK) to find other potential sites besides Wahn, Lutterloh, Lichtenhorst, and Gorleben. IMAK proposed Gorleben, Lichtenhorst, Mariaglueck, and Wahn for further examination. IMAK recommended to the State Cabinet in another proposal to earmark either Gorleben or Lichtenhorst. (orig.)

  16. Impact of perennial energy crops income variability on the crop selection of risk averse farmers

    International Nuclear Information System (INIS)

    Alexander, Peter; Moran, Dominic

    2013-01-01

    UK Government policy is for the area of perennial energy crops in the UK to expand significantly. For this to be achievable, farmers need to choose these crops in preference to conventional rotations. This paper looks at the potential level and variability of perennial energy crop incomes and their relation to incomes from conventional arable crops. Assuming energy crop prices are correlated with oil prices, the results suggest that incomes from them are not well correlated with conventional arable crop incomes. A farm-scale mathematical programming model is then used to attempt to understand the effect on risk-averse farmers' crop selection. The inclusion of risk reduces the energy crop price required for the selection of these crops. However, yields towards the highest of those predicted in the UK are still required to make them an optimal choice, suggesting only a small area of energy crops within the UK would be expected to be chosen to be grown. This must be regarded as a tentative conclusion, primarily due to the high sensitivity found to crop yields, resulting in the proposal for further work to apply the model using spatially disaggregated data. - Highlights: ► Energy crop and conventional crop incomes suggested as uncorrelated. ► Diversification effect of energy crops investigated for a risk-averse farmer. ► Energy crops indicated as optimal selection only on highest yielding UK sites. ► Large establishment grant rates to substantially alter crop selections.
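
    A toy mean-variance stand-in for such a risk-averse allocation model is sketched below: area shares are chosen to maximise expected gross margin minus a risk penalty on income variance; all figures are invented and the model is far simpler than the paper's programming model.

```python
# Toy mean-variance stand-in for the farm-scale programming model: choose area
# shares that maximise expected gross margin minus a risk-aversion penalty on
# income variance. All figures are invented for illustration.
import numpy as np
from scipy.optimize import minimize

crops = ["wheat", "oilseed rape", "miscanthus"]
mean_margin = np.array([600.0, 550.0, 500.0])     # GBP/ha, hypothetical
cov = np.array([[90000.0, 60000.0,  5000.0],      # income covariance, hypothetical
                [60000.0, 80000.0,  4000.0],
                [ 5000.0,  4000.0, 30000.0]])
risk_aversion = 2e-4

def neg_utility(w):
    return -(w @ mean_margin - risk_aversion * w @ cov @ w)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(neg_utility, x0=np.full(3, 1/3), bounds=[(0, 1)] * 3,
               constraints=cons)
for c, share in zip(crops, res.x):
    print(f"{c:15s}: {share:.2%} of farm area")
```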

  17. Comparison of automatic procedures in the selection of peaks over threshold in flood frequency analysis: A Canadian case study in the context of climate change

    Science.gov (United States)

    Durocher, M.; Mostofi Zadeh, S.; Burn, D. H.; Ashkar, F.

    2017-12-01

    Floods are one of the most costly hazards and frequency analysis of river discharges is an important part of the tools at our disposal to evaluate their inherent risks and to provide an adequate response. In comparison to the common examination of annual streamflow maximums, peaks over threshold (POT) is an interesting alternative that makes better use of the available information by including more than one flood event per year (on average). However, a major challenge is the selection of a satisfactory threshold above which peaks are assumed to respect certain conditions necessary for an adequate estimation of the risk. Additionally, studies have shown that POT is also a valuable approach to investigate the evolution of flood regimes in the context of climate change. Recently, automatic procedures for the selection of the threshold were suggested to guide that important choice, which otherwise rely on graphical tools and expert judgment. Furthermore, having an automatic procedure that is objective allows for quickly repeating the analysis on a large number of samples, which is useful in the context of large databases or for uncertainty analysis based on a resampling approach. This study investigates the impact of considering such procedures in a case study including many sites across Canada. A simulation study is conducted to evaluate the bias and predictive power of the automatic procedures in similar conditions as well as investigating the power of derived nonstationarity tests. The results obtained are also evaluated in the light of expert judgments established in a previous study. Ultimately, this study provides a thorough examination of the considerations that need to be addressed when conducting POT analysis using automatic threshold selection.
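
    One simple automatic threshold rule (not necessarily among those compared in the study) is sketched below: over a grid of candidate thresholds, a Generalized Pareto distribution is fitted to the exceedances and the lowest threshold whose fit passes a Kolmogorov-Smirnov test is kept; the streamflow series is synthetic.

```python
# One simple automatic threshold rule for peaks-over-threshold analysis: fit a
# Generalized Pareto distribution (GPD) to exceedances over a grid of candidate
# thresholds and keep the lowest threshold whose fit passes a KS test.
# Synthetic daily flows; not necessarily a rule compared in the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
flows = rng.gamma(shape=2.0, scale=50.0, size=20 * 365)   # fake daily discharges

for q in np.arange(0.90, 0.996, 0.01):            # candidate threshold quantiles
    u = np.quantile(flows, q)
    exceed = flows[flows > u] - u
    if len(exceed) < 30:                           # too few peaks to fit reliably
        break
    c, loc, scale = stats.genpareto.fit(exceed, floc=0.0)
    pval = stats.kstest(exceed, "genpareto", args=(c, loc, scale)).pvalue
    print(f"threshold quantile {q:.2f} (u = {u:6.1f}): {len(exceed):4d} peaks, "
          f"shape = {c:+.2f}, KS p-value = {pval:.2f}")
    if pval > 0.10:
        print("-> lowest acceptable threshold found")
        break
```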

  18. Robotic vascular resections during Whipple procedure.

    Science.gov (United States)

    Allan, Bassan J; Novak, Stephanie M; Hogg, Melissa E; Zeh, Herbert J

    2018-01-01

    Indications for resection of pancreatic cancers have evolved to include selected patients with involvement of peri-pancreatic vascular structures. Open Whipple procedures have been the standard approach for patients requiring reconstruction of the portal vein (PV) or superior mesenteric vein (SMV). Recently, high-volume centers are performing minimally invasive Whipple procedures with portovenous resections. Our institution has performed seventy robotic Whipple procedures with concomitant vascular resections. This report outlines our technique.

  19. A Method to Select Software Test Cases in Consideration of Past Input Sequence

    International Nuclear Information System (INIS)

    Kim, Hee Eun; Kim, Bo Gyung; Kang, Hyun Gook

    2015-01-01

    In the Korea Nuclear I and C Systems (KNICS) project, the software for the fully digitalized reactor protection system (RPS) was developed under a strict procedure. Even though the behavior of the software is deterministic, the randomness of the input sequence produces probabilistic behavior of the software. A software failure occurs when some inputs to the software occur and interact with the internal state of the digital system to trigger a fault that was introduced into the software during the software lifecycle. In this paper, a method to select a test set for software failure probability estimation is suggested. This test set reflects the past input sequences of the software and covers all possible cases. In this study, the method to select test cases for software failure probability quantification is suggested. To obtain the profile of paired state variables, the relationships of the variables need to be considered. The effect of input from the human operator also has to be considered. As an example, the test set of the PZR-PR-Lo-Trip logic was examined. This method provides a framework for selecting test cases of safety-critical software

  20. Low complexity transmit antenna selection with power balancing in OFDM systems

    KAUST Repository

    Park, Kihong

    2010-10-01

    In this paper, we consider multi-carrier systems with multiple transmit antennas under the power balancing constraint, which is defined as the constraint that the power on each antenna should be limited under a certain level due to the linearity of the power amplifier of the RF chain. Applying transmit antenna selection and fixed-power variable-rate transmission per subcarrier as a function of channel variations, we propose an implementation-friendly antenna selection method which offers a reduced complexity in comparison with the optimal antenna selection scheme. More specifically, in order to solve the subcarrier imbalance across the antennas, we operate a two-step reallocation procedure to minimize the loss of spectral efficiency. We also provide an analytic lower bound on the spectral efficiency for the proposed scheme. From selected numerical results, we show that our suboptimal scheme offers almost the same spectral efficiency as the optimal one. © 2010 IEEE.
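
    A toy illustration of the two-step idea (best-antenna assignment per subcarrier followed by a reallocation that enforces a per-antenna subcarrier cap, used here as a stand-in for the power-balancing constraint) is given below; it is not the paper's algorithm.

```python
# Toy illustration of per-subcarrier transmit antenna selection followed by a
# reallocation step that enforces a per-antenna subcarrier cap (a stand-in for
# the power-balancing constraint). This is not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(7)
n_sub, n_ant = 64, 4
cap = n_sub // n_ant                               # per-antenna subcarrier budget
gain = rng.exponential(size=(n_sub, n_ant))        # |h|^2 per subcarrier/antenna

assign = gain.argmax(axis=1)                       # step 1: best antenna per subcarrier

# step 2: move subcarriers off overloaded antennas, choosing the moves that
# lose the least channel gain, until every antenna respects the cap
loads = np.bincount(assign, minlength=n_ant)
while np.any(loads > cap):
    a = int(np.argmax(loads))                      # most overloaded antenna
    best_move, best_loss = None, np.inf
    for s in np.flatnonzero(assign == a):
        for b in np.flatnonzero(loads < cap):      # antennas with spare capacity
            loss = gain[s, a] - gain[s, b]
            if loss < best_loss:
                best_move, best_loss = (s, b), loss
    s, b = best_move
    assign[s] = b
    loads[a] -= 1
    loads[b] += 1

print("subcarriers per antenna:", loads)
print(f"total selected gain: {gain[np.arange(n_sub), assign].sum():.2f}")
```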

  1. Variable selection based on clustering analysis for improvement of polyphenols prediction in green tea using synchronous fluorescence spectra

    Science.gov (United States)

    Shan, Jiajia; Wang, Xue; Zhou, Hao; Han, Shuqing; Riza, Dimas Firmanda Al; Kondo, Naoshi

    2018-04-01

    Synchronous fluorescence spectra, combined with multivariate analysis, were used to predict the flavonoid content in green tea rapidly and nondestructively. This paper presents a new and efficient spectral interval selection method called clustering-based partial least squares (CL-PLS), which selects informative wavelengths by combining a clustering concept with partial least squares (PLS) methods to improve model performance with synchronous fluorescence spectra. The fluorescence spectra of tea samples were obtained, k-means and Kohonen self-organizing map clustering algorithms were applied to group the full spectra into several clusters, and a sub-PLS regression model was developed on each cluster. Finally, CL-PLS models consisting of gradually selected clusters were built. The correlation coefficient (R) was used to evaluate the effect on the prediction performance of the PLS models. In addition, variable influence on projection partial least squares (VIP-PLS), selectivity ratio partial least squares (SR-PLS), and interval partial least squares (iPLS) models and a full-spectrum PLS model were investigated and the results were compared. The results showed that CL-PLS gave the best result for flavonoid prediction using synchronous fluorescence spectra.
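
    A minimal illustration of the CL-PLS idea on synthetic spectra is sketched below: wavelengths are clustered with k-means, a PLS model is fitted per cluster and clusters are ranked by cross-validated correlation; the SOM variant and the gradual cluster-merging step are omitted.

```python
# Minimal illustration of the CL-PLS idea: cluster the spectral variables
# (wavelengths) with k-means, fit a PLS model on each cluster and rank clusters
# by cross-validated correlation with the target. Synthetic spectra only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(8)
n, n_wave = 120, 200
spectra = rng.normal(size=(n, n_wave)).cumsum(axis=1)     # smooth-ish fake spectra
target = spectra[:, 50:60].mean(axis=1) + 0.5 * rng.normal(size=n)

# cluster wavelengths by the similarity of their (standardised) profiles
profiles = (spectra - spectra.mean(axis=0)) / spectra.std(axis=0)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profiles.T)

for c in range(5):
    cols = np.flatnonzero(labels == c)
    pls = PLSRegression(n_components=min(3, len(cols)))
    pred = cross_val_predict(pls, spectra[:, cols], target, cv=5).ravel()
    r = np.corrcoef(pred, target)[0, 1]
    print(f"cluster {c}: {len(cols):3d} wavelengths, cross-validated R = {r:.2f}")
```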

  2. Unexpected effects of computer presented procedures

    International Nuclear Information System (INIS)

    Blackman, H.S.; Nelson, W.R.

    1988-01-01

    Results from experiments conducted at the Idaho National Engineering Laboratory have been presented regarding the computer presentation of procedural information. The results come from the experimental evaluation of an expert system which presented procedural instructions to be performed by a nuclear power plant operator. Lessons learned and implications from the study are discussed as well as design issues that should be considered to avoid some of the pitfalls in computer presented or selected procedures

  3. The utility experience of implementing the emergency operating procedure tracking system

    International Nuclear Information System (INIS)

    Chang, W.C.; Cheng, J.F.

    1990-01-01

    This report presents the experience of a project sponsored by the Electric Power Research Institute (EPRI) and Taiwan Power Company (TPC), and supported by the Nuclear Software Service (NSS), General Electric Company (GE) and Science Applications International Corporation (SAIC), to implement the Emergency Operating Procedure Tracking System (EOPTS) in the Kuosheng Nuclear Power Station Simulator. Before implementing the EOPTS in the Kuosheng Simulator, the Safety Parameter Display System (SPDS) of the Emergency Response Facility Technical Data System (ERFTDS) had to be simulated and the hardware and software linkage between the simulator and the ERFTDS had to be established. This included installation of a VAX-8200 computer, the Gould-VAX computer hardware linkage, ERFTDS software installation, and selection of simulator source variables and their linkage to the ERFTDS database

  4. ILK statement on the recommendations by the working group on procedures for the selection of repository sites; ILK-Stellungnahme zu den Empfehlungen des Arbeitskreises Auswahlverfahren Endlagerstandorte Internationale (AkEnd)

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2003-11-01

    The Working Group on Procedures for the Selection of Repository Sites (AkEnd) had been appointed by the German Federal Ministry for the Environment (BMU) to develop procedures and criteria for the search for, and selection of, a repository site for all kinds of radioactive waste in deep geologic formations in Germany. ILK in principle welcomes the attempt on the part of AkEnd to develop a systematic procedure. On the other hand, ILK considers the two constraints imposed by BMU inappropriate: AkEnd was not to take into account the two existing sites of Konrad and Gorleben and was instead to work from a so-called white map of Germany. ILK recommends performing a comprehensive safety analysis of Gorleben and defining a selection procedure that includes the facts about Gorleben, and, in addition, commissioning the Konrad repository as soon as possible. The one-repository concept established as a precondition by BMU greatly restricts the selection procedure; there are no technical or scientific reasons for such a concept. ILK recommends planning for separate repositories, which would also correspond to international practice. The geoscientific criteria proposed by AkEnd should be examined and revised. With respect to the proposed site selection procedure, ILK feels that the procedure is unable to define a targeted approach. Great importance must be attributed to public participation. The final site selection must be made under the responsibility of the government or the parliament. (orig.)

  5. Selective attrition and intraindividual variability in response time moderate cognitive change.

    Science.gov (United States)

    Yao, Christie; Stawski, Robert S; Hultsch, David F; MacDonald, Stuart W S

    2016-01-01

    Selection of a developmental time metric is useful for understanding causal processes that underlie aging-related cognitive change and for the identification of potential moderators of cognitive decline. Building on research suggesting that time to attrition is a metric sensitive to non-normative influences of aging (e.g., subclinical health conditions), we examined reason for attrition and intraindividual variability (IIV) in reaction time as predictors of cognitive performance. Three hundred and four community dwelling older adults (64-92 years) completed annual assessments in a longitudinal study. IIV was calculated from baseline performance on reaction time tasks. Multilevel models were fit to examine patterns and predictors of cognitive change. We show that time to attrition was associated with cognitive decline. Greater IIV was associated with declines on executive functioning and episodic memory measures. Attrition due to personal health reasons was also associated with decreased executive functioning compared to that of individuals who remained in the study. These findings suggest that time to attrition is a useful metric for representing cognitive change, and reason for attrition and IIV are predictive of non-normative influences that may underlie instances of cognitive loss in older adults.
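
    A minimal sketch of one common way to score intraindividual variability (IIV) from reaction-time trials, assuming a persons-by-trials array; the study's exact preprocessing of reaction times is not reproduced here.

```python
import numpy as np

def intraindividual_variability(rt_trials):
    """rt_trials: array of shape (n_persons, n_trials) of reaction times (ms).
    Returns one IIV score per person as the intraindividual standard deviation."""
    return np.nanstd(np.asarray(rt_trials, dtype=float), axis=1, ddof=1)

# Two hypothetical participants with similar mean RTs but very different IIV
rts = np.array([[520, 530, 515, 525, 510],
                [420, 640, 480, 600, 460]])
print(intraindividual_variability(rts))
```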

  6. Virtual Reality as a Distraction Intervention to Relieve Pain and Distress During Medical Procedures: A Comprehensive Literature Review.

    Science.gov (United States)

    Indovina, Paola; Barone, Daniela; Gallo, Luigi; Chirico, Andrea; De Pietro, Giuseppe; Antonio, Giordano

    2018-02-26

    This review aims to provide a framework for evaluating the utility of virtual reality (VR) as a distraction intervention to alleviate pain and distress during medical procedures. We firstly describe the theoretical bases underlying the VR analgesic and anxiolytic effects and define the main factors contributing to its efficacy, which largely emerged from studies on healthy volunteers. Then, we provide a comprehensive overview of the clinical trials using VR distraction during different medical procedures, such as burn injury treatments, chemotherapy, surgery, dental treatment, and other diagnostic and therapeutic procedures. A broad literature search was performed using as main terms "virtual reality", "distraction" and "pain". No date limit was applied and all the retrieved studies on immersive VR distraction during medical procedures were selected. VR has proven to be effective in reducing procedural pain, as almost invariably observed even in patients subjected to extremely painful procedures, such as patients with burn injuries undergoing wound care and physical therapy. Moreover, VR seemed to decrease cancer-related symptoms in different settings, including during chemotherapy. Only mild and infrequent side effects were observed. Despite these promising results, future long-term randomized controlled trials with larger sample sizes and evaluating not only self-report measures but also physiological variables are needed. Further studies are also required both to establish predictive factors to select patients who can benefit from VR distraction and to design hardware/software systems tailored to the specific needs of different patients and able to provide the greatest distraction at the lowest cost.

  7. 28 CFR 104.31 - Procedure for claims evaluation.

    Science.gov (United States)

    2010-07-01

    ... COMPENSATION FUND OF 2001 Claim Intake, Assistance, and Review Procedures § 104.31 Procedure for claims..., described herein as “Track A” and “Track B,” selected by the claimant on the Personal Injury Compensation Form or Death Compensation Form. (1) Procedure for Track A. The Claims Evaluator shall determine...

  8. Unexpected effects of computer presented procedures

    International Nuclear Information System (INIS)

    Blackman, H.S.; Nelson, W.R.

    1988-01-01

    Results from experiments conducted at the Idaho National Engineering Laboratory will be presented regarding the computer presentation of procedural information. The results come from the experimental evaluation of an expert system which presented procedural instructions to be performed by a nuclear power plant operator. Lessons learned and implications from the study will be discussed as well as design issues that should be considered to avoid some of the pitfalls in computer presented or selected procedures. 1 ref., 1 fig

  9. Choosing a Surgeon: An Exploratory Study of Factors Influencing Selection of a Gender Affirmation Surgeon.

    Science.gov (United States)

    Ettner, Randi; Ettner, Frederic; White, Tonya

    2016-01-01

    Purpose: Selecting a healthcare provider is often a complicated process. Many factors appear to govern the decision as to how to select the provider in the patient-provider relationship. While the possibility of changing primary care physicians or specialists exists, decisions regarding surgeons are immutable once surgery has been performed. This study is an attempt to assess the importance attached to various factors involved in selecting a surgeon to perform gender affirmation surgery (GAS). It was hypothesized that owing to the intimate nature of the surgery, the expense typically involved, the emotional meaning attached to the surgery, and other variables, decisions regarding choice of surgeon for this procedure would involve factors other than those that inform more typical healthcare provider selection or surgeon selection for other plastic/reconstructive procedures. Methods: Questionnaires were distributed to individuals who had undergone GAS and individuals who had undergone elective plastic surgery to assess decision-making. Results: The results generally confirm previous findings regarding how patients select providers. Conclusion: Choosing a surgeon to perform gender-affirming surgery is a challenging process, but patients are quite rational in their decision-making. Unlike prior studies, we did not find a preference for gender-concordant surgeons, even though the surgery involves the genital area. Providing strategies and resources for surgical selection can improve patient satisfaction.

  10. A new procedure for implementing a geological disposal

    International Nuclear Information System (INIS)

    Anon.

    2014-01-01

    The British government has launched a new procedure for selecting and implementing a geological disposal facility. The procedure is based on long-term cooperation with municipalities that wish to host the facility. In a preliminary two-year step, a national geological survey will be performed to determine which regions are suitable to host a geological disposal facility. Discussions will then begin between volunteer municipalities and the enterprise in charge of developing the project. Municipalities will receive an investment of up to 1 million pounds a year in the first years of the selection procedure and then 2.5 million pounds a year when discussions become more formal. The British authorities consider that the site selection procedure may last up to 20 years. A previous attempt to find a site failed in 2013, when two regions that had been interested in the project since 2008 were finally rebuffed by the regional council, which opposed the project. Scotland and Wales have their own strategies for the management of radioactive waste. (A.C.)

  11. Mastery Learning and the Decreasing Variability Hypothesis.

    Science.gov (United States)

    Livingston, Jennifer A.; Gentile, J. Ronald

    1996-01-01

    This report results from studies that tested two variations of Bloom's decreasing variability hypothesis using performance on successive units of achievement in four graduate classrooms that used mastery learning procedures. Data do not support the decreasing variability hypothesis; rather, they show no change over time. (SM)

  12. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is part of the generalized additive models for location, scale, and shape (GAMLSS) framework. Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with the generalized Akaike information criterion (3) was the most efficient model selection procedure (i.e., it required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.
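
    The GAMLSS/Box-Cox Power Exponential machinery itself is not sketched here; as a generic, hypothetical illustration of one ingredient of the compared procedures (choosing among candidate models with an information criterion), the snippet below selects a polynomial degree for a norming curve by BIC.

```python
# Hypothetical illustration only: pick among candidate mean-structure models by
# BIC. It stands in for the model-fit criteria discussed above; it is not the
# Box-Cox Power Exponential / GAMLSS fitting procedure itself.
import numpy as np
import statsmodels.api as sm

def select_degree_by_bic(age, score, max_degree=4):
    best = None
    for d in range(1, max_degree + 1):
        # Polynomial terms age, age^2, ..., age^d plus an intercept
        X = sm.add_constant(np.vander(np.asarray(age, float), d + 1, increasing=True)[:, 1:])
        fit = sm.OLS(np.asarray(score, float), X).fit()
        if best is None or fit.bic < best[1]:
            best = (d, fit.bic)
    return best   # (selected polynomial degree, its BIC)
```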

  13. Whipple procedure: patient selection and special considerations

    Directory of Open Access Journals (Sweden)

    Tan-Tam C

    2016-07-01

    Full Text Available Clara Tan-Tam,1 Maja Segedi,2 Stephen W Chung2 1Department of Surgery, Bassett Healthcare, Columbia University, Cooperstown, New York, NY, USA; 2Department of Hepatobiliary and Pancreatic Surgery and Liver Transplant, Vancouver General Hospital, University of British Columbia, Vancouver, BC, Canada Abstract: At the inception of pancreatic surgery by Dr Whipple in the 1930s, the mortality and morbidity risk was more than 20%. With further understanding of disease processes and improvements in pancreas resection techniques, the mortality risk has decreased to less than 5%. Age and chronic illnesses are no longer a contraindication to surgical treatment. Life expectancy and quality of life at a later age have improved, making older patients more likely to receive pancreatic surgery, thereby also putting emphasis on operative patient selection to minimize complications. This review summarizes the benign and malignant illnesses that are treated with pancreas operations, and innovations and improvements in pancreatic surgery and perioperative care, and describes the careful selection process for patients who would benefit from an operation. These indications are not reserved only to the Whipple operation, but apply to pancreatectomies as well. Keywords: pancreaticoduodenectomy, mortality, morbidity, cancer, trauma, pancreatitis

  14. Analysis of spatio-temporal variability of C-factor derived from remote sensing data

    Science.gov (United States)

    Pechanec, Vilem; Benc, Antonin; Purkyt, Jan; Cudlin, Pavel

    2016-04-01

    In some risk areas, water erosion strongly affects agriculture and can threaten inhabitants. In our country, a combination of the USLE and RUSLE models has been used for water erosion assessment (Krása et al., 2013). The role of vegetation cover is characterized by the vegetation protection factor, the so-called C-factor. The value of the C-factor is given by the ratio of soil wash-off on a plot with arable crops to that on a standard plot kept as fallow and regularly tilled after each rain (Janeček et al., 2012). When crop structure and crop rotation cannot be identified, determination of the C-factor over large areas is a problem; in such cases the C-factor is determined only from the average crop representation. New technologies open possibilities for accelerating and refining the approach. The present-day approach to C-factor determination is based on the analysis of multispectral image data. The red and infrared bands are extracted and used to compute a series of vegetation indices (NDVI, TSAVI). The values acquired for fractional time sections (during the vegetation period) are averaged. At the same time, vegetation index values for forest and cleared areas are determined, and regression coefficients are computed. The final calculation uses regression equations expressing the relation between NDVI values and the C-factor (De Jong, 1994; Van der Knijff, 1999; Karaburun, 2010). An up-to-date land use layer is used to determine erosion-threatened areas based on the selection of individual landscape segments in erosion-susceptible land use categories. Using Landsat 7 data, the C-factor has been determined for the whole area of the Czech Republic for every month of 2014. In a small model watershed, the C-factor has also been determined by the conventional (tabular) procedure. The analysis was focused on: i) variability assessment of C-factor values while using the conventional
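
    A minimal sketch of the NDVI-to-C-factor step described above, assuming red and near-infrared reflectance arrays; the exponential regression form and the coefficients a and b follow one commonly cited formulation (e.g., Van der Knijff), but the default values shown are placeholders that must be calibrated for the study area, not the values fitted in the cited work.

```python
import numpy as np

def ndvi(red, nir):
    red, nir = np.asarray(red, dtype=float), np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-12)

def c_factor_from_ndvi(ndvi_values, a=2.0, b=1.0):
    """Exponential-type regression C = exp(-a * NDVI / (b - NDVI)), one form used
    in the literature cited above; a and b are calibration coefficients (the
    defaults are placeholders)."""
    return np.exp(-a * ndvi_values / (b - ndvi_values))

# Example with two pixels; monthly NDVI composites would normally be averaged first.
print(c_factor_from_ndvi(ndvi(red=[0.08, 0.12], nir=[0.45, 0.30])))
```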

  15. A computational procedure for finding multiple solutions of convective heat transfer equations

    International Nuclear Information System (INIS)

    Mishra, S; DebRoy, T

    2005-01-01

    In recent years numerical solutions of the convective heat transfer equations have provided significant insight into the complex materials processing operations. However, these computational methods suffer from two major shortcomings. First, these procedures are designed to calculate temperature fields and cooling rates as output and the unidirectional structure of these solutions preclude specification of these variables as input even when their desired values are known. Second, and more important, these procedures cannot determine multiple pathways or multiple sets of input variables to achieve a particular output from the convective heat transfer equations. Here we propose a new method that overcomes the aforementioned shortcomings of the commonly used solutions of the convective heat transfer equations. The procedure combines the conventional numerical solution methods with a real number based genetic algorithm (GA) to achieve bi-directionality, i.e. the ability to calculate the required input variables to achieve a specific output such as temperature field or cooling rate. More important, the ability of the GA to find a population of solutions enables this procedure to search for and find multiple sets of input variables, all of which can lead to the desired specific output. The proposed computational procedure has been applied to convective heat transfer in a liquid layer locally heated on its free surface by an electric arc, where various sets of input variables are computed to achieve a specific fusion zone geometry defined by an equilibrium temperature. Good agreement is achieved between the model predictions and the independent experimental results, indicating significant promise for the application of this procedure in finding multiple solutions of convective heat transfer equations
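
    A minimal sketch of the bi-directional idea described above, assuming an external forward_model(inputs) (e.g., a numerical heat-transfer solver) that returns the output quantity of interest; a real-coded genetic algorithm searches for input sets whose predicted output matches a target value, and the final population may contain several distinct near-optimal solutions. The operator choices (tournament selection, blend crossover) are illustrative, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_coded_ga(forward_model, target, bounds, pop_size=40, generations=100):
    """Search for input vectors whose forward-model output matches `target`.
    bounds: list of (low, high) per input variable. Returns the final population,
    which may contain several distinct near-optimal input sets (multiple solutions)."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))

    def fitness(x):
        return -abs(forward_model(x) - target)   # smaller output error -> higher fitness

    for _ in range(generations):
        scores = np.array([fitness(x) for x in pop])
        # Binary tournament selection
        idx = [max(rng.choice(pop_size, 2), key=lambda i: scores[i]) for _ in range(pop_size)]
        parents = pop[idx]
        # Blend crossover between paired parents, then Gaussian mutation, clipped to bounds
        alpha = rng.uniform(0, 1, size=pop.shape)
        children = alpha * parents + (1 - alpha) * parents[::-1]
        children += rng.normal(0, 0.02, size=pop.shape) * (hi - lo)
        pop = np.clip(children, lo, hi)
    return pop

# Usage sketch: forward_model would wrap the numerical heat-transfer solver and
# return, e.g., the computed fusion-zone width; `target` is the desired width.
```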

  16. A comparative assessment of alternative waste management procedures for selected reprocessing wastes

    International Nuclear Information System (INIS)

    Hickford, G.E.; Plews, M.J.

    1983-07-01

    This report, which has been prepared by Associated Nuclear Services for the Department of the Environment, presents the results of a study and comparative assessment of management procedures for low and intermediate level solid waste streams arising from current and future fuel reprocessing operations on the Sellafield site. The characteristics and origins of the wastes under study are discussed and a reference waste inventory is presented, based on published information. Waste management strategy in the UK and its implications for waste conditioning, packaging and disposal are discussed. Wastes currently arising which are not suitable for Drigg burial or sea dumping are stored in an untreated form. Work is in hand to provide additional and improved disposal facilities which will accommodate all the waste streams under study. For each waste stream viable procedures are identified for further assessment. The procedures comprise a series of on-site operations-recovery from storage, pre-treatment, treatment, encapsulation, and packaging, prior to storage or disposal of the conditioned waste form. Assessments and comparisons of each procedure for each waste are presented. These address various process, operational, economic, radiological and general safety factors. The results are presented in a series of tables with supporting text. For the majority of wastes direct encapsulation with minimal treatment appears to be a viable procedure. Occupational exposure and general safety are not identified as significant factors governing the choice of procedures. The conditioned wastes meet the general requirements for safe handling during storage and transportation. The less active wastes suitable for disposal by currently available routes meet the appropriate disposal criteria. It is not possible to consider in detail the suitability for disposal of the more active wastes for which disposal facilities are not yet available. (Author)

  17. NUMBER OF SUCCESSIVE CYCLES NECESSARY TO ACHIEVE STABILITY OF SELECTED GROUND REACTION FORCE VARIABLES DURING CONTINUOUS JUMPING

    Directory of Open Access Journals (Sweden)

    James M.W. Brownjohn

    2009-12-01

    Full Text Available Because of inherent variability in all human cyclical movements, such as walking, running and jumping, data collected across a single cycle might be atypical and potentially unable to represent an individual's generalized performance. The study described here was designed to determine the number of successive cycles of continuous, repetitive countermovement jumping which a test subject should perform in a single experimental session to achieve stability of the mean of the corresponding continuously measured ground reaction force (GRF) variables. Seven vertical GRF variables (period of jumping cycle, duration of contact phase, peak force amplitude and its timing, average rate of force development, average rate of force relaxation and impulse) were extracted on a cycle-by-cycle basis from vertical jumping force time histories generated by twelve participants who were jumping in response to regular electronic metronome beats in the range 2-2.8 Hz. Stability of the selected GRF variables across successive jumping cycles was examined for three jumping rates (2, 2.4 and 2.8 Hz) using two statistical methods: intra-class correlation (ICC) analysis and the segmental averaging technique (SAT). Results of the ICC analysis indicated that an average of four successive cycles (mean 4.5 ± 2.7 for 2 Hz; 3.9 ± 2.6 for 2.4 Hz; 3.3 ± 2.7 for 2.8 Hz) were necessary to achieve maximum ICC values. Except for jumping period, maximum ICC values took values from 0.592 to 0.991 and all were significantly (p < 0.05) different from zero. Results of the SAT revealed that an average of ten successive cycles (mean 10.5 ± 3.5 for 2 Hz; 9.2 ± 3.8 for 2.4 Hz; 9.0 ± 3.9 for 2.8 Hz) were necessary to achieve stability of the selected parameters using criteria previously reported in the literature. Using 10 reference trials, the SAT required standard deviation criterion values of 0.49, 0.41 and 0.55 for 2 Hz, 2.4 Hz and 2.8 Hz jumping rates, respectively, in order to approximate
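
    A simplified, hedged sketch in the spirit of the segmental averaging idea: it reports the first cycle after which the cumulative mean of a GRF variable stays within a criterion band around the overall mean. It is not the exact SAT or ICC procedure used in the study, and the criterion value is illustrative.

```python
import numpy as np

def cycles_to_stability(values, criterion_sd=0.25):
    """Return the first cycle count after which the cumulative mean of a GRF
    variable stays within `criterion_sd` standard deviations of the overall mean
    for all remaining cycles (a simplified stability criterion)."""
    v = np.asarray(values, dtype=float)
    band = criterion_sd * v.std(ddof=1)
    cum_mean = np.cumsum(v) / np.arange(1, len(v) + 1)
    inside = np.abs(cum_mean - v.mean()) <= band
    for i in range(len(v)):
        if inside[i:].all():
            return i + 1
    return len(v)

# Example: peak-force values (N) from successive jumping cycles of one participant
peaks = np.array([1510, 1620, 1575, 1540, 1560, 1555, 1565, 1550, 1558, 1552])
print(cycles_to_stability(peaks))
```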

  18. Disparities in Aesthetic Procedures Performed by Plastic Surgery Residents.

    Science.gov (United States)

    Silvestre, Jason; Serletti, Joseph M; Chang, Benjamin

    2017-05-01

    Operative experience in aesthetic surgery is an important issue affecting plastic surgery residents. This study addresses the variability of aesthetic surgery experience during plastic surgery residency. National operative case logs of chief residents in independent/combined and integrated plastic surgery residency programs were analyzed (2011-2015). Fold differences between the bottom and top 10th percentiles of residents were calculated for each aesthetic procedure category and training model. The number of residents not achieving case minimums was also calculated. Case logs of 818 plastic surgery residents were analyzed. There was marked variability in craniofacial (range, 6.0-15.0), breast (range, 2.4-5.9), trunk/extremity (range, 3.0-16.0), and miscellaneous (range, 2.7-22.0) procedure categories. In 2015, the bottom 10th percentile of integrated and independent/combined residents did not achieve case minimums for botulinum toxin and dermal fillers. Case minimums were achieved for the other aesthetic procedure categories for all graduating years. Significant variability persists for many aesthetic procedure categories during plastic surgery residency training. Greater efforts may be needed to improve the aesthetic surgery experience of plastic surgery residents. © 2016 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com

  19. 7 CFR 983.152 - Failed lots/rework procedure.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Failed lots/rework procedure. 983.152 Section 983.152..., ARIZONA, AND NEW MEXICO Rules and Regulations § 983.152 Failed lots/rework procedure. (a) Inshell rework procedure for aflatoxin. If inshell rework is selected as a remedy to meet the aflatoxin regulations of this...

  20. Variable selectivity and the role of nutritional quality in food selection by a planktonic rotifer

    International Nuclear Information System (INIS)

    Sierszen, M.E.

    1990-01-01

    To investigate the potential for selective feeding to enhance fitness, I test the hypothesis that an herbivorous zooplankter selects those food items that best support its reproduction. Under this hypothesis, growth and reproduction on selected food items should be higher than on less preferred items. The hypothesis is not supported. In situ selectivity by the rotifer Keratella taurocephala for Cryptomonas relative to Chlamydomonas goes through a seasonal cycle, in apparent response to fluctuating Cryptomonas populations. However, reproduction on a unialgal diet of Cryptomonas is consistently high and similar to that on Chlamydomonas. Oocystis, which also supports reproduction equivalent to that supported by Chlamydomonas, is sometimes rejected by K. taurocephala. In addition, K. taurocephala does not discriminate between Merismopedia and Chlamydomonas even though Merismopedia supports virtually no reproduction by the rotifer. Selection by K. taurocephala does not simply maximize the intake of food items that yield high reproduction. Selectivity is a complex, dynamic process, one function of which may be the exploitation of locally or seasonally abundant foods. (author)

  1. Selection of antibiotics in detection procedure of Escherichia coli O157:H7 in vegetables

    Science.gov (United States)

    Hoang, Hoang A.; Nhung, Nguyen T. T.

    2017-09-01

    Detection of Escherichia coli O157:H7 in ready-to-eat fresh vegetables is important since this bacterium is considered one of the most significant pathogens with respect to public health. However, detecting initially low concentrations of E. coli O157:H7 in such samples can be a major challenge. In this study, the selection of antibiotics that suppress the growth of background bacteria, and thereby enable detection of E. coli O157:H7 in ready-to-eat fresh vegetables, was investigated. First, different combinations of two antibiotics, novobiocin (N) and vancomycin (V), in BHI broth were tested. The three antibiotic combinations were preliminarily examined for their effect on the growth of E. coli O157:H7 and Bacillus spp. in broth based on OD600nm measurements. The combination of both antibiotics was then selected to examine its ability to support detection of E. coli O157:H7 in vegetables. The two antibiotics supported detection of E. coli O157:H7 at a concentration as low as 2 CFU per gram of lettuce. Use of these antibiotics in the detection procedure is simple and cheap and could be applied to other types of ready-to-eat fresh vegetables popular in Vietnam.

  2. A Soft Computing Based Approach Using Modified Selection Strategy for Feature Reduction of Medical Systems

    Directory of Open Access Journals (Sweden)

    Kursat Zuhtuogullari

    2013-01-01

    Full Text Available Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the feature reduction software developed here, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed to reduce the input attributes of systems with a large number of input variables. The software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, premature convergence to local solutions is a further problem that the developed software eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
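
    A minimal sketch of the two genetic-algorithm operators named above, roulette-wheel selection and linear order crossover, for permutation-encoded chromosomes (e.g., orderings of attribute indices); it is an illustrative reconstruction, not the authors' software.

```python
import numpy as np

rng = np.random.default_rng(1)

def roulette_wheel_select(population, fitnesses):
    """Pick one individual with probability proportional to its (positive) fitness."""
    f = np.asarray(fitnesses, dtype=float)
    return population[rng.choice(len(population), p=f / f.sum())]

def linear_order_crossover(parent1, parent2):
    """Linear order crossover (LOX): copy a slice from parent1, then fill the open
    positions left-to-right with the missing genes in the order they appear in parent2."""
    n = len(parent1)
    a, b = sorted(rng.choice(n, size=2, replace=False))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]
    missing = iter(g for g in parent2 if g not in parent1[a:b + 1])
    return [g if g is not None else next(missing) for g in child]

# Example: two orderings of five attribute indices
print(linear_order_crossover([0, 1, 2, 3, 4], [4, 3, 2, 1, 0]))
```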

  3. A soft computing based approach using modified selection strategy for feature reduction of medical systems.

    Science.gov (United States)

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the feature reduction software developed here, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed to reduce the input attributes of systems with a large number of input variables. The software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, premature convergence to local solutions is a further problem that the developed software eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.

  4. Synthesis, Characterization, and Variable-Temperature NMR Studies of Silver(I) Complexes for Selective Nitrene Transfer.

    Science.gov (United States)

    Huang, Minxue; Corbin, Joshua R; Dolan, Nicholas S; Fry, Charles G; Vinokur, Anastasiya I; Guzei, Ilia A; Schomaker, Jennifer M

    2017-06-05

    An array of silver complexes supported by nitrogen-donor ligands catalyzes the transformation of C═C and C-H bonds to valuable C-N bonds via nitrene transfer. The ability to achieve high chemoselectivity and site selectivity in an amination event requires an understanding of both the solid- and solution-state behavior of these catalysts. X-ray structural characterizations were helpful in determining ligand features that promote the formation of monomeric versus dimeric complexes. Variable-temperature 1H and DOSY NMR experiments were especially useful for understanding how the ligand identity influences the nuclearity, coordination number, and fluxional behavior of silver(I) complexes in solution. These insights are valuable for developing improved ligand designs.

  5. Non-additive Effects in Genomic Selection

    Directory of Open Access Journals (Sweden)

    Luis Varona

    2018-03-01

    Full Text Available In the last decade, genomic selection has become a standard in the genetic evaluation of livestock populations. However, most procedures for the implementation of genomic selection only consider the additive effects associated with SNP (Single Nucleotide Polymorphism) markers used to calculate the prediction of the breeding values of candidates for selection. Nevertheless, the availability of estimates of non-additive effects is of interest because: (i) they contribute to an increase in the accuracy of the prediction of breeding values and the genetic response; (ii) they allow the definition of mate allocation procedures between candidates for selection; and (iii) they can be used to enhance non-additive genetic variation through the definition of appropriate crossbreeding or purebred breeding schemes. This study presents a review of methods for the incorporation of non-additive genetic effects into genomic selection procedures and their potential applications in the prediction of future performance, mate allocation, crossbreeding, and purebred selection. The work concludes with a brief outline of some ideas for future lines of research that may help the standard inclusion of non-additive effects in genomic selection.

  6. Variability of indication criteria in knee and hip replacement: an observational study.

    Science.gov (United States)

    Cobos, Raquel; Latorre, Amaia; Aizpuru, Felipe; Guenaga, Jose I; Sarasqueta, Cristina; Escobar, Antonio; García, Lidia; Herrera-Espiñeira, Carmen

    2010-10-26

    Total knee (TKR) and hip (THR) replacement (arthroplasty) are effective surgical procedures that relieve pain, improve patients' quality of life and increase functional capacity. Studies on variations in medical practice usually find the indications for performing these procedures to be highly variable, because surgeons appear to follow different criteria when recommending surgery in patients with different severity levels. We therefore proposed a study to evaluate inter-hospital variability in arthroplasty indication. The pre-surgical condition of the 1603 patients included was compared in terms of personal characteristics, clinical situation and self-perceived health status. Patients were asked to complete two health-related quality of life questionnaires: the generic SF-12 (Short Form) and the specific WOMAC (Western Ontario and McMaster Universities) scale. The type of patient undergoing primary arthroplasty was similar in the 15 different hospitals evaluated. The variability in baseline WOMAC score between hospitals in THR and TKR indication was described by the range, mean and standard deviation (SD), mean and standard deviation weighted by the number of procedures at each hospital, high/low ratio or extremal quotient (EQ5-95), variation coefficient (CV5-95) and weighted variation coefficient (WCV5-95) for the 5-95 percentile range. The variability in subjective and objective signs was evaluated using the median, range and WCV5-95. The appropriateness of the procedures performed was calculated using a specific threshold proposed by Quintana et al for assessing pain and functional capacity. The variability expressed as WCV5-95 was very low, between 0.05 and 0.11, for all three dimensions on the WOMAC scale for both types of procedure in all participating hospitals. The variability in the physical and mental SF-12 components was very low for both types of procedure (0.08 and 0.07 for hip and 0.03 and 0.07 for knee surgery patients). However, a moderate-high variability was detected in
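
    A minimal sketch of a weighted coefficient of variation of the kind reported above, assuming per-hospital mean scores and procedure counts are available; the percentile trimming is a simplified reading of the CV5-95/WCV5-95 statistics, and the example values are hypothetical.

```python
import numpy as np

def weighted_cv(hospital_means, n_procedures, trim=(5, 95)):
    """Weighted coefficient of variation across hospitals, restricted to hospitals
    whose mean scores lie inside the given percentile range (a simplified reading
    of the CV5-95 / WCV5-95 statistics mentioned above)."""
    m = np.asarray(hospital_means, dtype=float)
    w = np.asarray(n_procedures, dtype=float)
    lo, hi = np.percentile(m, trim)
    keep = (m >= lo) & (m <= hi)
    m, w = m[keep], w[keep]
    mean = np.average(m, weights=w)
    sd = np.sqrt(np.average((m - mean) ** 2, weights=w))
    return sd / mean

# Example: baseline WOMAC pain means and procedure counts for five hypothetical hospitals
print(weighted_cv([62.1, 58.4, 60.3, 65.0, 59.8], [120, 85, 150, 60, 110]))
```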

  7. Procedures for selecting and buying district heating equipment. Sofia district heating. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    The aim of this Final Report, prepared for the project `Procedures for Selecting and Buying District Heating Equipment - Sofia District Heating Company`, is to establish an overview of the activities accomplished, the outputs delivered and the general experience gained as a result of the project. The main objective of the project is to enable Sofia District Heating Company to prepare specifications and tender documents, identify possible suppliers, evaluate offers, etc. in connection with the purchase of district heating equipment. This objective has been reached by using rehabilitation of sub-stations as an example requested by Sofia DH. The project was originally planned to be finalized at the end of 1995, but due to extensions of the scope of work it was prolonged until the end of 1997. The following main activities were accomplished: Preparation of a detailed work plan; Collection of background information; Discussion and advice about technical specifications and tender documents for sub-station rehabilitation; Input to terms of reference for a master plan study; Input to technical specification for heat meters; Collection of ideas for topics and examples related to dissemination of information to consumers about matters related to district heating consumption. (EG)

  8. Pathogen-mediated selection for MHC variability in wild zebrafish

    Czech Academy of Sciences Publication Activity Database

    Smith, C.; Ondračková, Markéta; Spence, R.; Adams, S.; Betts, D. S.; Mallon, E.

    2011-01-01

    Roč. 13, č. 6 (2011), s. 589-605 ISSN 1522-0613 Institutional support: RVO:68081766 Keywords : digenean * frequency-dependent selection * heterozygote advantage * major histocompatibility complex * metazoan parasite * pathogen-driven selection Subject RIV: EG - Zoology Impact factor: 1.029, year: 2011

  9. Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables

    International Nuclear Information System (INIS)

    Zwierz, Marcin; Perez-Delgado, Carlos A.; Kok, Pieter

    2010-01-01

    We reveal a close relationship between quantum metrology and the Deutsch-Jozsa algorithm on continuous-variable quantum systems. We develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter-estimation protocol or the Deutsch-Jozsa algorithm. The parameter-estimation part of the procedure attains the Heisenberg limit and is therefore optimal. Due to the use of approximate normalizable continuous-variable eigenstates, the Deutsch-Jozsa algorithm is probabilistic. The procedure estimates a value of an unknown parameter and solves the Deutsch-Jozsa problem without the use of any entanglement.

  10. The 'Whip-Stow' procedure: an innovative modification to the whipple procedure in the management of premalignant and malignant pancreatic head disease.

    Science.gov (United States)

    Jeyarajah, D Rohan; Khithani, Amit; Curtis, David; Galanopoulos, Christos A

    2010-01-01

    Pancreaticoduodenectomy (PD) is the standard of care in the treatment of premalignant and malignant diseases of the head of the pancreas. Variability exists in the anastomosis with the pancreatic remnant. This work describes a safe and easy modification of the pancreatic anastomosis after PD. Ten patients underwent the "Whip-Stow" procedure for the management of the pancreatic remnant. PD combined with a Puestow (lateral pancreaticojejunostomy [LPJ]) was completed using a running single-layer 4-0 Prolene suture following a duct-to-mucosa technique. Historical leak rates for LPJ and pancreaticogastrostomy (PG) are reported to be 13.9 and 15.8 per cent, respectively. Mortality, leak, and postoperative bleeding rates were 0 per cent in all patients. The Whip-Stow was completed without loupes or microscope with a 4-0 single-layer suture, decreasing the time and complexity of the anastomosis. Average time was 12 minutes as compared with the 50 minutes of a 5-0 or 6-0 interrupted, multilayered duct-to-mucosa anastomosis. Benefits included a long-segment LPJ. In this study, the Whip-Stow procedure has proven to be a safe and simple approach to pancreatic anastomosis in selected patients. This new technique provides the benefit of technical ease while obeying the age-old principles of obtaining a wide duct-to-mucosa anastomosis.

  11. Reliability of application of inspection procedures

    Energy Technology Data Exchange (ETDEWEB)

    Murgatroyd, R A

    1988-12-31

    This document deals with the reliability of application of inspection procedures. A method is described for ensuring that fracture-mechanics-based inspection of defects is reliable. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears that it is essential that inspection procedures should be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors should be optimised. (TEC). 3 refs.

  12. Reliability of application of inspection procedures

    International Nuclear Information System (INIS)

    Murgatroyd, R.A.

    1988-01-01

    This document deals with the reliability of application of inspection procedures. A method is described for ensuring that fracture-mechanics-based inspection of defects is reliable. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears that it is essential that inspection procedures should be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors should be optimised. (TEC)

  13. Generalized structural equations improve sexual-selection analyses.

    Directory of Open Access Journals (Sweden)

    Sonia Lombardi

    Full Text Available Sexual selection is an intense evolutionary force, which operates through competition for access to breeding resources. There are many cases where male copulatory success is highly asymmetric, and few males are able to sire most females. Two main hypotheses were proposed to explain this asymmetry: "female choice" and "male dominance". The literature reports contrasting results. This variability may reflect actual differences among studied populations, but it may also be generated by methodological differences and statistical shortcomings in data analysis. A review of the statistical methods used so far in lek studies shows a prevalence of Linear Models (LM) and Generalized Linear Models (GLM), which may be affected by problems in inferring cause-effect relationships, multi-collinearity among explanatory variables, and erroneous handling of non-normal and non-continuous distributions of the response variable. In lek breeding, selective pressure is maximal, because large numbers of males and females congregate in small arenas. We used a dataset on lekking fallow deer (Dama dama) to contrast the methods and procedures employed so far, and we propose a novel approach based on Generalized Structural Equations Models (GSEMs). GSEMs combine the power and flexibility of both SEM and GLM in a unified modeling framework. We showed that LMs fail to identify several important predictors of male copulatory success and yield very imprecise parameter estimates. Minor variations in data transformation yield wide changes in results and the method appears unreliable. GLMs improved the analysis, but GSEMs provided better results, because the use of latent variables decreases the impact of measurement errors. Using GSEMs, we were able to test contrasting hypotheses and calculate both direct and indirect effects, and we reached a high precision of the estimates, which implies a high predictive ability. In synthesis, we recommend the use of GSEMs in studies on

  14. Improved variable reduction in partial least squares modelling by Global-Minimum Error Uninformative-Variable Elimination.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2017-08-22

    The calibration performance of Partial Least Squares regression (PLS) can be improved by eliminating uninformative variables. For PLS, many variable elimination methods have been developed. One is the Uninformative-Variable Elimination for PLS (UVE-PLS). However, the number of variables retained by UVE-PLS is usually still large. In UVE-PLS, variable elimination is repeated as long as the root mean squared error of cross validation (RMSECV) is decreasing. The set of variables at this first local minimum is retained. In this paper, a modification of UVE-PLS is proposed and investigated, in which UVE is repeated until no further reduction in variables is possible, followed by a search for the global RMSECV minimum. The method is called Global-Minimum Error Uninformative-Variable Elimination for PLS, denoted as GME-UVE-PLS or simply GME-UVE. After each iteration, the predictive ability of the PLS model, built with the remaining variable set, is assessed by RMSECV. The variable set with the global RMSECV minimum is then finally selected. The goal is to obtain smaller sets of variables with similar or improved predictability compared with those from the classical UVE-PLS method. The performance of the GME-UVE-PLS method is investigated using four data sets, i.e. a simulated set, NIR and NMR spectra, and a theoretical molecular descriptors set, resulting in twelve profile-response (X-y) calibrations. The selective and predictive performances of the models resulting from GME-UVE-PLS are statistically compared to those from UVE-PLS and 1-step UVE using one-sided paired t-tests. The results demonstrate that variable reduction with the proposed GME-UVE-PLS method usually eliminates significantly more variables than the classical UVE-PLS, while the predictive abilities of the resulting models are better. With GME-UVE-PLS, a lower number of uninformative variables, without a chemical meaning for the response, may be retained than with UVE-PLS. The selectivity of the classical UVE method
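
    A minimal sketch of the global-minimum idea described above: variables are removed iteratively using a simplified reliability measure (the magnitude of cross-validated PLS coefficients relative to their spread, standing in for UVE's added-noise benchmark), RMSECV is recorded at every step, and the variable set at the global RMSECV minimum is returned. It is not the authors' implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

def gme_uve_like(X, y, n_components=3, drop_per_step=1):
    """Iteratively remove the least reliable variables, record RMSECV at every
    step, and return the variable set at the global RMSECV minimum."""
    cols = np.arange(X.shape[1])
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    history = []
    while len(cols) > n_components:
        pls = PLSRegression(n_components=n_components)
        y_hat = cross_val_predict(pls, X[:, cols], y, cv=cv).ravel()
        history.append((np.sqrt(np.mean((y - y_hat) ** 2)), cols.copy()))

        # Reliability = |mean coefficient| / spread of coefficients over CV folds
        coefs = np.array([PLSRegression(n_components=n_components)
                          .fit(X[train][:, cols], y[train]).coef_.ravel()
                          for train, _ in cv.split(X)])
        reliability = np.abs(coefs.mean(axis=0)) / (coefs.std(axis=0) + 1e-12)
        cols = np.delete(cols, np.argsort(reliability)[:drop_per_step])

    best_rmsecv, best_cols = min(history, key=lambda t: t[0])
    return best_cols, best_rmsecv
```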

  15. Multi-objective Optimization of Departure Procedures at Gimpo International Airport

    Science.gov (United States)

    Kim, Junghyun; Lim, Dongwook; Monteiro, Dylan Jonathan; Kirby, Michelle; Mavris, Dimitri

    2018-04-01

    Most aviation communities have increasing concerns about the environmental impacts, which are directly linked to health issues for local residents near the airport. In this study, the environmental impact of different departure procedures using the Aviation Environmental Design Tool (AEDT) was analyzed. First, actual operational data were compiled at Gimpo International Airport (March 20, 2017) from an open source. Two modifications were made in the AEDT to model the operational circumstances better and the preliminary AEDT simulations were performed according to the acquired operational procedures. Simulated noise results showed good agreements with noise measurement data at specific locations. Second, a multi-objective optimization of departure procedures was performed for the Boeing 737-800. Four design variables were selected and AEDT was linked to a variety of advanced design methods. The results showed that takeoff thrust had the greatest influence and it was found that fuel burn and noise had an inverse relationship. Two points representing each fuel burn and noise optimum on the Pareto front were parsed and run in AEDT to compare with the baseline. The results showed that the noise optimum case reduced Sound Exposure Level 80-dB noise exposure area by approximately 5% while the fuel burn optimum case reduced total fuel burn by 1% relative to the baseline for aircraft-level analysis.
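
    A minimal sketch of extracting the non-dominated (Pareto) designs from a set of evaluated departure-procedure candidates, assuming each candidate has already been scored on fuel burn and a noise metric (both to be minimized); the AEDT evaluation itself is external, and the numbers below are hypothetical.

```python
import numpy as np

def pareto_front(objectives):
    """objectives: array of shape (n_designs, 2) with columns (fuel_burn, noise),
    both to be minimized. Returns the indices of non-dominated designs."""
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i in range(len(obj)):
        dominates_i = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        if not dominates_i.any():
            keep.append(i)
    return np.array(keep)

# Hypothetical (total fuel burn [kg], SEL 80-dB exposure area [km^2]) per candidate
designs = np.array([[950, 42.0], [980, 39.5], [960, 43.0], [990, 39.0], [955, 41.5]])
print(pareto_front(designs))   # candidate 2 is dominated by candidate 0
```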

  16. The impact of selected organizational variables and managerial leadership on radiation therapists' organizational commitment

    International Nuclear Information System (INIS)

    Akroyd, Duane; Legg, Jeff; Jackowski, Melissa B.; Adams, Robert D.

    2009-01-01

    The purpose of this study was to examine the impact of selected organizational factors and the leadership behavior of supervisors on radiation therapists' commitment to their organizations. The population for this study consisted of all full-time clinical radiation therapists registered by the American Registry of Radiologic Technologists (ARRT) in the United States. A random sample of 800 radiation therapists was obtained from the ARRT for this study. Questionnaires were mailed to all participants and measured organizational variables, a managerial leadership variable, and three components of organizational commitment (affective, continuance and normative). It was determined that organizational support and the leadership behavior of supervisors each had a significant and positive effect on the normative and affective commitment of radiation therapists, and each of the models predicted over 40% of the variance in radiation therapists' organizational commitment. This study examined radiation therapists' commitment to their organizations and found that affective (emotional attachment to the organization) and normative (feelings of obligation to the organization) commitments were more important than continuance commitment (awareness of the costs of leaving the organization). This study can help radiation oncology administrators and physicians to understand the values their radiation therapy employees hold that are predictive of their commitment to the organization. A crucial result of the study is the importance of the perceived support of the organization and the leadership skills of managers/supervisors for radiation therapists' commitment to the organization.

  17. Variable selection in multiple linear regression: The influence of ...

    African Journals Online (AJOL)

    provide an indication of whether the fit of the selected model improves or ... and calculate M(−i); quantify the influence of case i in terms of a function, f(•), of M and ..... [21] Venter JH & Snyman JLJ, 1997, Linear model selection based on risk ...

  18. A Numerical Procedure for Flow Distribution and Pressure Drops for U and Z Type Configurations Plate Heat Exchangers with Variable Coefficients

    International Nuclear Information System (INIS)

    López, R; Lecuona, A; Ventas, R; Vereda, C

    2012-01-01

    In plate heat exchangers it is important to determine the flow distribution and pressure drops, because they directly affect the performance of the heat exchanger. This work proposes an incompressible, one-dimensional, steady-state, discrete model allowing for variable overall momentum coefficients to determine these magnitudes. The model consists of a modified version of the Bajura and Jones model for dividing and combining flow manifolds. The numerical procedure is based on the finite-difference approximation approach proposed by Datta and Majumdar. A linear overall momentum coefficient distribution is used in the dividing manifold, but the model is not limited to linear distributions. Comparisons are made with experimental, numerical and analytical data, yielding good results.

  19. Psychological Selection of NASA Astronauts for International Space Station Missions

    Science.gov (United States)

    Galarza, Laura

    1999-01-01

    During the upcoming manned International Space Station (ISS) missions, astronauts will encounter the unique conditions of living and working with a multicultural crew in a confined and isolated space environment. The environmental, social, and mission-related challenges of these missions will require crewmembers to emphasize effective teamwork, leadership, group living and self-management to maintain the morale and productivity of the crew. The need for crew members to possess and display skills and behaviors needed for successful adaptability to ISS missions led us to upgrade the tools and procedures we use for astronaut selection. The upgraded tools include personality and biographical data measures. Content and construct-related validation techniques were used to link upgraded selection tools to critical skills needed for ISS missions. The results of these validation efforts showed that various personality and biographical data variables are related to expert and interview ratings of critical ISS skills. Upgraded and planned selection tools better address the critical skills, demands, and working conditions of ISS missions and facilitate the selection of astronauts who will more easily cope and adapt to ISS flights.

  20. The Performance of Variable Annuities

    OpenAIRE

    Michael J. McNamara; Henry R. Oppenheimer

    1991-01-01

    Variable annuities have become increasingly important in retirement plans. This paper provides an examination of the investment performance of variable annuities for the period year-end 1973 to year-end 1988. Returns, risk, and selectivity measures are analyzed for the sample of annuities, for individual variable annuities, and for subsamples of annuities with similar portfolio size and turnover. While the investment returns of variable annuities were greater than inflation over the period, t...

  1. Resolving combinatorial ambiguities in dilepton t t̄ event topologies with constrained M2 variables

    Science.gov (United States)

    Debnath, Dipsikha; Kim, Doojin; Kim, Jeong Han; Kong, Kyoungchul; Matchev, Konstantin T.

    2017-10-01

    We advocate the use of on-shell constrained M2 variables in order to mitigate the combinatorial problem in supersymmetry-like events with two invisible particles at the LHC. We show that in comparison to other approaches in the literature, the constrained M2 variables provide superior ansätze for the unmeasured invisible momenta and therefore can be usefully applied to discriminate combinatorial ambiguities. We illustrate our procedure with the example of dilepton t t̄ events. We critically review the existing methods based on the Cambridge MT2 variable and MAOS reconstruction of invisible momenta, and show that their algorithm can be simplified without loss of sensitivity, due to a perfect correlation between events with complex solutions for the invisible momenta and events exhibiting a kinematic endpoint violation. Then we demonstrate that the efficiency for selecting the correct partition is further improved by utilizing the M2 variables instead. Finally, we also consider the general case when the underlying mass spectrum is unknown, and no kinematic endpoint information is available.

  2. Exploratory Spectroscopy of Magnetic Cataclysmic Variables Candidates and Other Variable Objects

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, A. S.; Palhares, M. S. [IP and D, Universidade do Vale do Paraíba, 12244-000, São José dos Campos, SP (Brazil); Rodrigues, C. V.; Cieslinski, D.; Jablonski, F. J. [Divisão de Astrofísica, Instituto Nacional de Pesquisas Espaciais, 12227-010, São José dos Campos, SP (Brazil); Silva, K. M. G. [Gemini Observatory, Casilla 603, La Serena (Chile); Almeida, L. A. [Instituto de Astronomia, Geofísica e Ciências Atmosféricas, Universidade de São Paulo, 05508-900, São Paulo, SP (Brazil); Rodríguez-Ardila, A., E-mail: alexandre@univap.br [Laboratório Nacional de Astrofísica LNA/MCTI, 37504-364, Itajubá MG (Brazil)

    2017-04-01

    The increasing number of synoptic surveys made by small robotic telescopes, such as the photometric Catalina Real-Time Transient Survey (CRTS), provides a unique opportunity to discover variable sources and improves the statistical samples of such classes of objects. Our goal is the discovery of magnetic Cataclysmic Variables (mCVs). These are rare objects that probe interesting accretion scenarios controlled by the white-dwarf magnetic field. In particular, improved statistics of mCVs would help to address open questions on their formation and evolution. We performed an optical spectroscopy survey to search for signatures of magnetic accretion in 45 variable objects selected mostly from the CRTS. In this sample, we found 32 CVs, 22 being mCV candidates, 13 of which were previously unreported as such. If the proposed classifications are confirmed, it would represent an increase of 4% in the number of known polars and 12% in the number of known IPs. A fraction of our initial sample was classified as extragalactic sources or other types of variable stars by the inspection of the identification spectra. Despite the inherent complexity in identifying a source as an mCV, variability-based selection, followed by spectroscopic snapshot observations, has proved to be an efficient strategy for their discoveries, being a relatively inexpensive approach in terms of telescope time.

  3. Matched-pair analyses of resting and dynamic morphology between Monarc and TVT-O procedures by ultrasound.

    Science.gov (United States)

    Yang, Jenn-Ming; Yang, Shwu-Huey; Huang, Wen-Chen; Tzeng, Chii-Ruey

    2013-07-01

    To determine morphologic differences between Monarc and TVT-O procedures in axial and coronal planes by three- and four-dimensional (3D and 4D) ultrasound. Retrospective chart audits and ultrasound analyses were conducted on 128 women who had undergone either Monarc or TVT-O procedures for urodynamic stress incontinence. Thirty matched pairs of the two successful procedures were randomly selected and compared. Matched variables were age, parity, body mass index, cesarean status, menopausal status, and primary surgeries. Six-month postoperative 3D and 4D ultrasound results obtained at rest, on straining, and during coughing in these 60 women were analyzed. Assessed ultrasound parameters included the axial tape urethral distance (aTUD), axial central urethral echolucent area (aUCEA), axial tape angle (aTA), and coronal tape angle (cTA), all of which were measured at three equidistant points along the tapes. Paired t-tests were used to compare differences in ultrasound parameters between women after the two procedures, and a P value < 0.05 was considered statistically significant. Resting tape angulations were flatter after Monarc than after TVT-O procedures. There were no significant differences in other resting ultrasound parameters between these two procedures. Additionally, after both procedures women had comparable straining and coughing ultrasound manifestations as well as respective dynamic changes. Despite flatter resting tape angulations in women following Monarc procedures, both Monarc and TVT-O tapes had equivalent dynamic patterns and changes assessed by 4D ultrasound. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Resolving the Conflict Between Associative Overdominance and Background Selection

    Science.gov (United States)

    Zhao, Lei; Charlesworth, Brian

    2016-01-01

    In small populations, genetic linkage between a polymorphic neutral locus and loci subject to selection, either against partially recessive mutations or in favor of heterozygotes, may result in an apparent selective advantage to heterozygotes at the neutral locus (associative overdominance) and a retardation of the rate of loss of variability by genetic drift at this locus. In large populations, selection against deleterious mutations has previously been shown to reduce variability at linked neutral loci (background selection). We describe analytical, numerical, and simulation studies that shed light on the conditions under which retardation vs. acceleration of loss of variability occurs at a neutral locus linked to a locus under selection. We consider a finite, randomly mating population initiated from an infinite population in equilibrium at a locus under selection. With mutation and selection, retardation occurs only when S, the product of twice the effective population size and the selection coefficient, is of order 1. With S >> 1, background selection always causes an acceleration of loss of variability. Apparent heterozygote advantage at the neutral locus is, however, always observed when mutations are partially recessive, even if there is an accelerated rate of loss of variability. With heterozygote advantage at the selected locus, loss of variability is nearly always retarded. The results shed light on experiments on the loss of variability at marker loci in laboratory populations and on the results of computer simulations of the effects of multiple selected loci on neutral variability. PMID:27182952

  5. Modelling Seasonal GWR of Daily PM2.5 with Proper Auxiliary Variables for the Yangtze River Delta

    Directory of Open Access Journals (Sweden)

    Man Jiang

    2017-04-01

    Full Text Available Over the past decades, regional haze episodes have frequently occurred in eastern China, especially in the Yangtze River Delta (YRD). Satellite derived Aerosol Optical Depth (AOD) has been used to retrieve the spatial coverage of PM2.5 concentrations. To improve the retrieval accuracy of the daily AOD-PM2.5 model, various auxiliary variables like meteorological or geographical factors have been adopted into the Geographically Weighted Regression (GWR) model. However, these variables are always arbitrarily selected without deep consideration of their potentially varying temporal or spatial contributions in the model performance. In this manuscript, we put forward an automatic procedure to select proper auxiliary variables from meteorological and geographical factors and obtain their optimal combinations to construct four seasonal GWR models. We employ two different schemes to comprehensively test the performance of our proposed GWR models: (1) comparison with other regular GWR models by varying the number of auxiliary variables; and (2) comparison with observed ground-level PM2.5 concentrations. The result shows that our GWR models of “AOD + 3” with three common meteorological variables generally perform better than all the other GWR models involved. Our models also show powerful prediction capabilities in PM2.5 concentrations with only slight overfitting. The determination coefficients R2 of our seasonal models are 0.8259 in spring, 0.7818 in summer, 0.8407 in autumn, and 0.7689 in winter. Also, the seasonal models in summer and autumn behave better than those in spring and winter. The comparison between seasonal and yearly models further validates the specific seasonal pattern of auxiliary variables of the GWR model in the YRD. We also stress the importance of key variables and propose a selection process in the AOD-PM2.5 model. Our work validates the significance of proper auxiliary variables in modelling the AOD-PM2.5 relationships and
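
    For illustration only, the following Python sketch mimics the kind of automatic auxiliary-variable search described above: every combination of candidate variables is scored by cross-validated R2 and the best combination is kept. The record builds seasonal GWR models; ordinary least squares is used here purely as a stand-in for GWR, and the column names (aod, temperature, rh and so on) are hypothetical placeholders rather than the study's actual variables.

      from itertools import combinations

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      def select_auxiliary_variables(df, target="pm25", base=("aod",),
                                     candidates=("temperature", "rh", "wind_speed",
                                                 "blh", "dem", "ndvi")):
          """Return the auxiliary-variable combination with the best cross-validated R^2.

          df is a pandas DataFrame with one row per matched AOD/PM2.5 observation."""
          best_combo, best_score = (), -np.inf
          for k in range(len(candidates) + 1):
              for combo in combinations(candidates, k):
                  score = cross_val_score(LinearRegression(),
                                          df[list(base) + list(combo)].values,
                                          df[target].values,
                                          cv=5, scoring="r2").mean()
                  if score > best_score:
                      best_combo, best_score = combo, score
          return best_combo, best_score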

  6. Specified assurance level sampling procedure

    International Nuclear Information System (INIS)

    Willner, O.

    1980-11-01

    In the nuclear industry design specifications for certain quality characteristics require that the final product be inspected by a sampling plan which can demonstrate product conformance to stated assurance levels. The Specified Assurance Level (SAL) Sampling Procedure has been developed to permit the direct selection of attribute sampling plans which can meet commonly used assurance levels. The SAL procedure contains sampling plans which yield the minimum sample size at stated assurance levels. The SAL procedure also provides sampling plans with acceptance numbers ranging from 0 to 10, thus, making available to the user a wide choice of plans all designed to comply with a stated assurance level
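
    As a rough illustration of how such plans can be derived (a sketch, not the SAL tables themselves), the following Python snippet finds, for each acceptance number c, the smallest sample size n whose probability of accepting a lot at the limiting fraction nonconforming p0 does not exceed one minus the stated assurance level; the 5%/95% figures in the example are invented.

      from scipy.stats import binom

      def min_sample_size(p0, assurance, c):
          """Smallest n such that P(accept | fraction nonconforming = p0) <= 1 - assurance,
          when the lot is accepted on c or fewer nonconforming units in the sample."""
          n = c + 1
          while binom.cdf(c, n, p0) > 1.0 - assurance:
              n += 1
          return n

      # Example: demonstrate with 95% assurance that at most 5% of items are nonconforming.
      for c in range(4):
          print(c, min_sample_size(p0=0.05, assurance=0.95, c=c))

    For c = 0 this reproduces the familiar n = 59 plan; larger acceptance numbers trade a bigger sample for more tolerance of isolated nonconforming units.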

  7. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  8. Pricing of common cosmetic surgery procedures: local economic factors trump supply and demand.

    Science.gov (United States)

    Richardson, Clare; Mattison, Gennaya; Workman, Adrienne; Gupta, Subhas

    2015-02-01

    The pricing of cosmetic surgery procedures has long been thought to coincide with laws of basic economics, including the model of supply and demand. However, the highly variable prices of these procedures indicate that additional economic contributors are probable. The authors sought to reassess the fit of cosmetic surgery costs to the model of supply and demand and to determine the driving forces behind the pricing of cosmetic surgery procedures. Ten plastic surgery practices were randomly selected from each of 15 US cities of various population sizes. Average prices of breast augmentation, mastopexy, abdominoplasty, blepharoplasty, and rhytidectomy in each city were compared with economic and demographic statistics. The average price of cosmetic surgery procedures correlated substantially with population size (r = 0.767), cost-of-living index (r = 0.784), cost to own real estate (r = 0.714), and cost to rent real estate (r = 0.695) across the 15 US cities. Cosmetic surgery pricing also was found to correlate (albeit weakly) with household income (r = 0.436) and per capita income (r = 0.576). Virtually no correlations existed between pricing and the density of plastic surgeons (r = 0.185) or the average age of residents (r = 0.076). Results of this study demonstrate a correlation between costs of cosmetic surgery procedures and local economic factors. Cosmetic surgery pricing cannot be completely explained by the supply-and-demand model because no association was found between procedure cost and the density of plastic surgeons. © 2015 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.

  9. Uncovering Voter Preference Structures Using a Best-Worst Scaling Procedure: Method and Empirical Example in the British General Election of 2010

    DEFF Research Database (Denmark)

    Ormrod, Robert P.; Savigny, Heather

    Best-Worst scaling (BWS) is a method that can provide insights into the preference structures of voters. By asking voters to select the ‘best’ and ‘worst’ option (‘most important’ and ‘least important’ media in our investigation) from a short list of alternatives it is possible to uncover the rel...... the least information. We furthermore investigate group differences using an ANOVA procedure to demonstrate how contextual variables can enrich our empirical investigations using the BWS method....
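
    As a hedged illustration of the counting analysis commonly used with best-worst data (the record does not spell out its exact scoring), each option can be scored by the number of times it was chosen as best minus the number of times it was chosen as worst, scaled by how often it appeared in the choice sets; the media categories and counts below are invented.

      def best_worst_scores(best_counts, worst_counts, appearances):
          """Count-based best-worst score per item: (best - worst) / appearances."""
          return {item: (best_counts.get(item, 0) - worst_counts.get(item, 0)) / appearances[item]
                  for item in appearances}

      appearances = {"television": 400, "newspapers": 400, "radio": 400,
                     "online news": 400, "social media": 400}
      best = {"television": 180, "online news": 95, "newspapers": 70, "social media": 40, "radio": 15}
      worst = {"radio": 150, "social media": 120, "newspapers": 60, "online news": 45, "television": 25}
      print(best_worst_scores(best, worst, appearances))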

  10. Rewarding Leadership and Fair Procedures as Determinants of Self-Esteem

    NARCIS (Netherlands)

    de Cremer, D.A.; van Knippenberg, B.M.; van Knippenberg, D.; Mullenders, D.; Stinglhamber, F.

    2005-01-01

    In the present research, the authors examined the effect of procedural fairness and rewarding leadership style on an important variable for employees: self-esteem. The authors predicted that procedural fairness would positively influence people's reported self-esteem if the leader adopted a style of

  11. Optimization of instrumental neutron activation analysis method by means of 2^k experimental design technique aiming the validation of analytical procedures

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2013-01-01

    In this study optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
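
    A minimal sketch of the effect estimation behind a 2^k (here 2^3) factorial design is given below, with the three factors named after the variables listed above; the coded design matrix and the response values are illustrative only, to show the arithmetic of main and interaction effects.

      import itertools
      import numpy as np

      factors = ["decay_time", "counting_time", "detector_distance"]
      # Full factorial design matrix in coded units (-1 = low level, +1 = high level).
      design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))
      response = np.array([10.2, 10.8, 9.9, 10.5, 10.1, 11.0, 9.8, 10.7])  # invented mass fractions

      effects = {}
      for r in range(1, len(factors) + 1):
          for combo in itertools.combinations(range(len(factors)), r):
              contrast = np.prod(design[:, combo], axis=1)
              name = "*".join(factors[i] for i in combo)
              # Effect = mean response at the +1 level of the contrast minus mean at the -1 level.
              effects[name] = response[contrast == 1].mean() - response[contrast == -1].mean()

      for name, value in effects.items():
          print(f"{name:45s} {value:+.3f}")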

  12. Detecting correlation between allele frequencies and environmental variables as a signature of selection. A fast computational approach for genome-wide studies

    DEFF Research Database (Denmark)

    Guillot, Gilles; Vitalis, Renaud; Rouzic, Arnaud le

    2014-01-01

    to disentangle the potential effect of environmental variables from the confounding effect of population history. For the routine analysis of genome-wide datasets, one also needs fast inference and model selection algorithms. We propose a method based on an explicit spatial model which is an instance of spatial...... for the most common types of genetic markers, obtained either at the individual or at the population level. Analyzing the simulated data produced under a geostatistical model then under an explicit model of selection, we show that the method is efficient. We also re-analyze a dataset relative to nineteen pine...

  13. TMACS test procedure TP003: Graphics. Revision 5

    International Nuclear Information System (INIS)

    Scanlan, P.K.

    1994-01-01

    The TMACS Software Project Test Procedures translate the project's acceptance criteria into test steps. Software releases are certified when the affected Test Procedures are successfully performed and the customers authorize installation of these changes. This Test Procedure addresses the graphics requirements of the TMACS. The features to be tested are the data display graphics and the graphic elements that provide for operator control and selection of displays

  14. TMACS test procedure TP003: Graphics. Revision 6

    International Nuclear Information System (INIS)

    Scanlan, P.K.; Washburn, S.

    1994-01-01

    The TMACS Software Project Test Procedures translate the project's acceptance criteria into test steps. Software releases are certified when the affected Test Procedures are successfully performed and the customers authorize installation of these changes. This Test Procedure addresses the graphics requirements of the TMACS. The features to be tested are the data display graphics and the graphic elements that provide for operator control and selection of displays

  15. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    Science.gov (United States)

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical
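
    To make the parameter choices above concrete, the following is a plain O(N^2) reference implementation of sample entropy (not the study's code): the threshold is expressed as a multiple of the sample standard deviation and the same number of templates is used for lengths m and m + 1; the RR interval series in the example is synthetic.

      import numpy as np

      def sample_entropy(x, m=2, r_factor=0.2):
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std(ddof=1)      # threshold r as a multiple of sigma
          n_templates = len(x) - m          # same template count for both lengths

          def count_matches(dim):
              templates = np.array([x[i:i + dim] for i in range(n_templates)])
              count = 0
              for i in range(n_templates - 1):
                  # Chebyshev distance between template i and all later templates.
                  dist = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
                  count += int(np.sum(dist <= r))
              return count

          a, b = count_matches(m + 1), count_matches(m)
          return -np.log(a / b) if a > 0 and b > 0 else float("inf")

      rr_intervals = np.random.default_rng(0).normal(0.8, 0.05, 500)  # synthetic RR series (s)
      print(sample_entropy(rr_intervals, m=2, r_factor=0.2))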

  16. Genetic variability, partial regression, Co-heritability studies and their implication in selection of high yielding potato gen

    International Nuclear Information System (INIS)

    Iqbal, Z.M.; Khan, S.A.

    2003-01-01

    Partial regression coefficient, genotypic and phenotypic variabilities, heritability, co-heritability and genetic advance were studied in 15 potato varieties of exotic and local origin. Both genotypic and phenotypic coefficients of variation were high for scab and rhizoctonia incidence percentage. A significant partial regression coefficient for emergence percentage indicated its relative importance in tuber yield. High heritability (broad-sense) estimates coupled with high genetic advance for plant height, number of stems per plant and scab percentage revealed a substantial contribution of additive genetic variance to the expression of these traits. Hence, selection based on these characters could play a significant role in their improvement. The dominance and epistatic variance was more important for the expression of yield ha^-1, emergence and rhizoctonia percentage; this phenomenon is mainly due to the cumulative effects of low heritability and low to moderate genetic advance. The high co-heritability coupled with negative genotypic and phenotypic covariance revealed that selection of varieties having low scab and rhizoctonia percentage resulted in higher potato yield. (author)
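
    The quantities discussed above can be estimated from a one-way ANOVA over varieties using standard textbook formulas, as in the sketch below; the mean squares, replication number, trait mean and selection intensity are illustrative values, not the paper's data.

      import math

      def genetic_parameters(ms_genotype, ms_error, replications, trait_mean,
                             selection_intensity=2.06):     # k at 5% selection
          vg = (ms_genotype - ms_error) / replications       # genotypic variance
          vp = vg + ms_error                                 # phenotypic variance
          gcv = math.sqrt(vg) / trait_mean * 100             # genotypic coefficient of variation
          pcv = math.sqrt(vp) / trait_mean * 100             # phenotypic coefficient of variation
          h2 = vg / vp                                       # broad-sense heritability
          ga = selection_intensity * h2 * math.sqrt(vp)      # expected genetic advance
          return {"GCV": gcv, "PCV": pcv, "h2_broad": h2,
                  "GA": ga, "GA_%_of_mean": ga / trait_mean * 100}

      print(genetic_parameters(ms_genotype=95.0, ms_error=20.0, replications=3, trait_mean=60.0))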

  17. Variability of indication criteria in knee and hip replacement: an observational study

    Directory of Open Access Journals (Sweden)

    Sarasqueta Cristina

    2010-10-01

    Full Text Available Abstract Background Total knee (TKR) and hip (THR) replacement (arthroplasty) are effective surgical procedures that relieve pain, improve patients' quality of life and increase functional capacity. Studies on variations in medical practice usually find the indications for performing these procedures to be highly variable, because surgeons appear to follow different criteria when recommending surgery in patients with different severity levels. We therefore proposed a study to evaluate inter-hospital variability in arthroplasty indication. Methods The pre-surgical condition of 1603 patients included was compared by their personal characteristics, clinical situation and self-perceived health status. Patients were asked to complete two health-related quality of life questionnaires: the generic SF-12 (Short Form) and the specific WOMAC (Western Ontario and McMaster Universities) scale. The type of patient undergoing primary arthroplasty was similar in the 15 different hospitals evaluated. The variability in baseline WOMAC score between hospitals in THR and TKR indication was described by range, mean and standard deviation (SD), mean and standard deviation weighted by the number of procedures at each hospital, high/low ratio or extremal quotient (EQ5-95), variation coefficient (CV5-95) and weighted variation coefficient (WCV5-95) for the 5-95 percentile range. The variability in subjective and objective signs was evaluated using median, range and WCV5-95. The appropriateness of the procedures performed was calculated using a specific threshold proposed by Quintana et al for assessing pain and functional capacity. Results The variability expressed as WCV5-95 was very low, between 0.05 and 0.11 for all three dimensions on the WOMAC scale for both types of procedure in all participating hospitals. The variability in the physical and mental SF-12 components was very low for both types of procedure (0.08 and 0.07 for hip and 0.03 and 0.07 for knee surgery patients
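
    The variation statistics named above can be computed along the following lines; exact definitions of EQ5-95 and WCV5-95 differ slightly between small-area-variation studies, so this is one plausible reading, and the hospital-level baseline WOMAC scores and procedure counts are invented.

      import numpy as np

      def variation_statistics(values, weights):
          values, weights = np.asarray(values, float), np.asarray(weights, float)
          lo, hi = np.percentile(values, [5, 95])
          keep = (values >= lo) & (values <= hi)              # restrict to the 5-95 percentile range
          v, w = values[keep], weights[keep]
          eq = v.max() / v.min()                              # extremal quotient (EQ5-95)
          cv = v.std(ddof=1) / v.mean()                       # variation coefficient (CV5-95)
          wmean = np.average(v, weights=w)                    # weighted by number of procedures
          wcv = np.sqrt(np.average((v - wmean) ** 2, weights=w)) / wmean   # WCV5-95
          return eq, cv, wcv

      baseline_womac = [42, 47, 51, 44, 49, 53, 46, 50, 45, 48, 52, 43, 47, 49, 51]   # one value per hospital
      n_procedures = [60, 80, 120, 90, 150, 70, 110, 95, 85, 100, 130, 75, 105, 140, 90]
      print(variation_statistics(baseline_womac, n_procedures))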

  18. Documentation for assessment of modal pushover-based scaling procedure for nonlinear response history analysis of "ordinary standard" bridges

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2010-01-01

    The earthquake engineering profession is increasingly utilizing nonlinear response history analyses (RHA) to evaluate seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground-motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case for the central United States), or when high-intensity records are needed (as is the case for San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure recently was developed to determine scale factors for a small number of records, such that the scaled records provide accurate and efficient estimates of 'true' median structural responses. The adjective 'accurate' refers to the discrepancy between the benchmark responses and those computed from the MPS procedure. The adjective 'efficient' refers to the record-to-record variability of responses. Herein, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing 'ordinary standard' bridges typical of reinforced-concrete bridge construction in California. These bridges are the single-bent overpass, multi span bridge, curved-bridge, and skew-bridge. As compared to benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provided an accurate estimate of the engineering demand parameters (EDPs) accompanied by significantly reduced record-to-record variability of the responses. Thus, the MPS procedure is a useful tool for scaling ground motions as input to nonlinear RHAs of 'ordinary standard' bridges.

  19. 78 FR 47047 - Proposed Policy for Discontinuance of Certain Instrument Approach Procedures

    Science.gov (United States)

    2013-08-02

    ... the cancellation of certain Non-directional Beacon (NDB) and Very High Frequency (VHF) Omnidirectional... approach procedures. The FAA proposes specific criteria to guide the identification and selection of... selection of potential NDB and VOR procedures for cancellation. Once the criteria are established and the...

  20. A Path Analysis of Latino Parental, Teenager and Cultural Variables in Teenagers' Sexual Attitudes, Norms, Self-Efficacy, and Sexual Intentions

    OpenAIRE

    Gaioso, Vanessa Pirani; Villarruel, Antonia Maria; Wilson, Lynda Anne; Azuero, Andres; Childs, Gwendolyn Denice; Davies, Susan Lane

    2015-01-01

    OBJECTIVE: to test a theoretical model based on the Parent-Based Expansion of the Theory of Planned Behavior examining relation between selected parental, teenager and cultural variables and Latino teenagers' intentions to engage in sexual behavior. METHOD: a cross-sectional correlational design based on a secondary data analysis of 130 Latino parent and teenager dyads. RESULTS: regression and path analysis procedures were used to test seven hypotheses and the results demonstrated partial sup...

  1. Pain Management for Gynecologic Procedures in the Office.

    Science.gov (United States)

    Ireland, Luu Doan; Allen, Rebecca H

    2016-02-01

    Satisfactory pain control for women undergoing office gynecologic procedures is critical for both patient comfort and procedure success. Therefore, it is important for clinicians to be aware of the safety and efficacy of different pain control regimens. This article aimed to review the literature regarding pain control regimens for procedures such as endometrial biopsy, intrauterine device insertion, colposcopy and loop electrosurgical excisional procedure, uterine aspiration, and hysteroscopy. A search of published literature using PubMed was conducted using the following keywords: "pain" or "anesthesia." These terms were paired with the following keywords: "intrauterine device" or "IUD," "endometrial biopsy," "uterine aspiration" or "abortion," "colposcopy" or "loop electrosurgical excisional procedure" or "LEEP," "hysteroscopy" or "hysteroscopic sterilization." The search was conducted through July 2015. Articles were hand reviewed and selected by the authors for study quality. Meta-analyses and randomized controlled trials were prioritized. Although local anesthesia is commonly used for gynecologic procedures, a multimodal approach may be more effective including oral medication, a dedicated emotional support person, and visual or auditory distraction. Women who are nulliparous, are postmenopausal, have a history of dysmenorrhea, or suffer from anxiety are more likely to experience greater pain with gynecologic procedures. Evidence for some interventions exists; however, the interpretation of intervention comparisons is limited by the use of different regimens, pain measurement scales, patient populations, and procedure techniques. There are many options for pain management for office gynecologic procedures, and depending on the procedure, different modalities may work best. The importance of patient counseling and selection cannot be overstated.

  2. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    International Nuclear Information System (INIS)

    Louit, D.M.; Pascual, R.; Jardine, A.K.S.

    2009-01-01

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed to analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards discriminating between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach'), when there may be system ageing or reliability growth. An illustrative example based on failure data from a fleet of backhoes is included.
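
    A typical first step in such a framework is a trend test on the chronologically ordered failure times. The sketch below implements the Laplace test for a time-truncated observation window, one common choice rather than the full battery of tests reviewed in the paper: a strongly positive statistic points to deterioration, a strongly negative one to reliability growth, and values near zero are consistent with the renewal assumption. The failure times in the example are invented.

      import math

      def laplace_trend_statistic(failure_times, observation_end):
          """Laplace trend statistic for failure times observed over (0, observation_end]."""
          n = len(failure_times)
          u = (sum(failure_times) / n - observation_end / 2.0) \
              / (observation_end * math.sqrt(1.0 / (12.0 * n)))
          return u   # approximately standard normal when no trend is present

      times = [55, 166, 205, 341, 488, 567, 731, 1308, 2050, 2453]   # operating hours at failure
      print(laplace_trend_statistic(times, observation_end=2500))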

  3. The role of protozoa-driven selection in shaping human genetic variability.

    Science.gov (United States)

    Pozzoli, Uberto; Fumagalli, Matteo; Cagliani, Rachele; Comi, Giacomo P; Bresolin, Nereo; Clerici, Mario; Sironi, Manuela

    2010-03-01

    Protozoa exert a strong selective pressure in humans. The selection signatures left by these pathogens can be exploited to identify genetic modulators of infection susceptibility. We show that protozoa diversity in different geographic locations is a good measure of protozoa-driven selective pressure; protozoa diversity captured selection signatures at known malaria resistance loci and identified several selected single nucleotide polymorphisms in immune and hemolytic anemia genes. A genome-wide search enabled us to identify 5180 variants mapping to 1145 genes that are subjected to protozoa-driven selective pressure. We provide a genome-wide estimate of protozoa-driven selective pressure and identify candidate susceptibility genes for protozoa-borne diseases. Copyright 2010 Elsevier Ltd. All rights reserved.

  4. THE HOST GALAXY PROPERTIES OF VARIABILITY SELECTED AGN IN THE PAN-STARRS1 MEDIUM DEEP SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Heinis, S.; Gezari, S.; Kumar, S. [Department of Astronomy, University of Maryland, College Park, MD (United States); Burgett, W. S.; Flewelling, H.; Huber, M. E.; Kaiser, N.; Wainscoat, R. J.; Waters, C. [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States)

    2016-07-20

    We study the properties of 975 active galactic nuclei (AGNs) selected by variability in the Pan-STARRS1 Medium Deep Survey. Using complementary multi-wavelength data from the ultraviolet to the far-infrared, we use spectral energy distribution fitting to determine the AGN and host properties at z < 1 and compare to a well-matched control sample. We confirm the trend previously observed: that the variability amplitude decreases with AGN luminosity, but we also observe that the slope of this relation steepens with wavelength, resulting in a “redder when brighter” trend at low luminosities. Our results show that AGNs are hosted by more massive hosts than control sample galaxies, while the rest-frame dust-corrected NUV − r color distribution of AGN hosts is similar to that of control galaxies. We find a positive correlation between the AGN luminosity and star formation rate (SFR), independent of redshift. AGN hosts populate the entire range of SFRs within and outside of the Main Sequence of star-forming galaxies. Comparing the distribution of AGN hosts and control galaxies, we show that AGNs are less likely to be hosted by quiescent galaxies and more likely to be hosted by Main Sequence or starburst galaxies.

  5. South African medical schools: Current state of selection criteria and medical students' demographic profile.

    Science.gov (United States)

    van der Merwe, L J; van Zyl, G J; St Clair Gibson, A; Viljoen, M; Iputo, J E; Mammen, M; Chitha, W; Perez, A M; Hartman, N; Fonn, S; Green-Thompson, L; Ayo-Ysuf, O A; Botha, G C; Manning, D; Botha, S J; Hift, R; Retief, P; van Heerden, B B; Volmink, J

    2015-12-16

    Selection of medical students at South African (SA) medical schools must promote equitable and fair access to students from all population groups, while ensuring optimal student throughput and success, and training future healthcare practitioners who will fulfil the needs of the local society. In keeping with international practices, a variety of academic and non-academic measures are used to select applicants for medical training programmes in SA medical schools. To provide an overview of the selection procedures used by all eight medical schools in SA, and the student demographics (race and gender) at these medical schools, and to determine to what extent collective practices are achieving the goals of student diversity and inclusivity. A retrospective, quantitative, descriptive study design was used. All eight medical schools in SA provided information regarding selection criteria, selection procedures, and student demographics (race and gender). Descriptive analysis of data was done by calculating frequencies and percentages of the variables measured. Medical schools in SA make use of academic and non-academic criteria in their selection processes. The latter include indices of socioeconomic disadvantage. Most undergraduate medical students in SA are black (38.7%), followed by white (33.0%), coloured (13.4%) and Indian/Asian (13.6%). The majority of students are female (62.2%). The number of black students is still proportionately lower than in the general population, while other groups are overrepresented. Selection policies for undergraduate medical programmes aimed at redress should be continued and further refined, along with the provision of support to ensure student success.

  6. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational tr...... account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.......This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational...... transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which...

  7. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting.

    Science.gov (United States)

    Suchting, Robert; Gowin, Joshua L; Green, Charles E; Walss-Bass, Consuelo; Lane, Scott D

    2018-01-01

    Rationale : Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives : The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods : The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results : From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R 2 = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R 2 . Conclusions : Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression; specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for
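
    The two-stage approach above (component-wise gradient boosting for variable selection, then backward elimination) is typically run with R's mboost-style tooling; the Python sketch below is only a rough analogue in which scikit-learn's gradient boosting ranks the predictors and a backward-elimination loop then drops them while the cross-validated R^2 holds up. The data matrix and feature names are supplied by the caller, and the thresholds are arbitrary.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      def select_predictors(X, y, feature_names, keep=8, tolerance=0.01):
          # Stage 1: rank predictors by boosting importance and keep the top `keep`.
          booster = GradientBoostingRegressor(random_state=0).fit(X, y)
          selected = list(np.argsort(booster.feature_importances_)[::-1][:keep])

          def cv_r2(cols):
              return cross_val_score(LinearRegression(), X[:, cols], y, cv=5, scoring="r2").mean()

          # Stage 2: backward elimination while the cross-validated R^2 does not drop too much.
          score, improved = cv_r2(selected), True
          while improved and len(selected) > 1:
              improved = False
              for col in list(selected):
                  trial = [c for c in selected if c != col]
                  trial_score = cv_r2(trial)
                  if trial_score >= score - tolerance:
                      selected, score, improved = trial, trial_score, True
                      break
          return [feature_names[i] for i in selected], score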

  8. Assessment of modal-pushover-based scaling procedure for nonlinear response history analysis of ordinary standard bridges

    Science.gov (United States)

    Kalkan, E.; Kwong, N.

    2012-01-01

    The earthquake engineering profession is increasingly utilizing nonlinear response history analyses (RHA) to evaluate seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case in the central United States) or when high-intensity records are needed (as is the case in San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure was recently developed to determine scale factors for a small number of records such that the scaled records provide accurate and efficient estimates of “true” median structural responses. The adjective “accurate” refers to the discrepancy between the benchmark responses and those computed from the MPS procedure. The adjective “efficient” refers to the record-to-record variability of responses. In this paper, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing Ordinary Standard bridges typical of reinforced concrete bridge construction in California. These bridges are the single-bent overpass, multi-span bridge, curved bridge, and skew bridge. As compared with benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provided an accurate estimate of the engineering demand parameters (EDPs) accompanied by significantly reduced record-to-record variability of the EDPs. Thus, it is a useful tool for scaling ground motions as input to nonlinear RHAs of Ordinary Standard bridges.

  9. 48 CFR 570.105-2 - Two-phase design-build selection procedures.

    Science.gov (United States)

    2010-10-01

    ... lease construction projects with options to purchase the real property leased. Use the procedures in.... (iii) The capability and experience of potential contractors. (iv) The suitability of the project for...

  10. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.

  11. Optimization of instrumental neutron activation analysis method by means of 2^k experimental design technique aiming the validation of analytical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: rpetroni@ipen.br, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this study optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied for evaluation of the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)

  12. Selection of procedures for inservice inspections; Auswahl der Verfahren fuer wiederkehrende Pruefungen

    Energy Technology Data Exchange (ETDEWEB)

    Brast, G [Preussische Elektrizitaets-AG (Preussenelektra), Hannover (Germany); Britz, A [Bayernwerk AG, Muenchen (Germany); Maier, H J [Stuttgart Univ. (Germany). Staatliche Materialpruefungsanstalt; Seidenkranz, T [TUEV Energie- und Systemtechnik GmbH, Mannheim (Germany)

    1998-11-01

    At present, the selection of procedures for inservice inspection has to take into account the legal basis, i.e. the existing regulatory codes, and the practical aspects, i.e. experience and information obtained from the general initial inservice inspection or performance data obtained from the latest recurrent inspection. However, regulatory codes are being reviewed to a certain extent in order to permit the integration of technological progress. Depending on the future availability of inspection-task-specific, sensitive and qualified NDE techniques for inservice inspections (`risk based ISI`), the framework of defined inspection intervals, sites, and detection limits will be broken up and altered in response to progress made. This opens up new opportunities for an optimization of inservice inspections for proof of component integrity. (orig./CB) [Translated from the German original] At present, the selection of inspection procedures must be guided by the applicable codes and, since recurrent inspections are concerned, by the baseline inspection or the most recent recurrent inspection. However, the codes are currently being opened up in a way that also takes account of advances in inspection techniques. To the extent that qualified inspection techniques, optimally matched to the inspection task and inspection statement and offering high detection sensitivity at the component, become available in the future for targeted recurrent inspections (as `risk based ISI`), the framework of fixed inspection intervals, inspection locations and fixed recording limits will be broken up and can be made flexible. This creates new possibilities for optimizing recurrent inspections as proof of the integrity of the component. (orig./MM)

  13. Polycyclic Aromatic Hydrocarbons in Electrocautery Smoke during Peritonectomy Procedures

    Directory of Open Access Journals (Sweden)

    Sara Näslund Andréasson

    2012-01-01

    Full Text Available Objective. This study identified and quantified polycyclic aromatic hydrocarbons (PAHs) in electrocautery smoke during 40 peritonectomy procedures and investigated any correlations and/or differences between levels of PAHs and perioperative variables. Methods. PAHs were measured in personal and stationary sampling by 40 mm Millipore cassettes, for adsorption of both gaseous and particle-bound PAHs. Results. All 16 USEPA priority pollutant PAHs were detected during peritonectomy procedures, naphthalene being the most abundant. For the only two PAHs with Swedish occupational exposure limits (OELs), benzo[a]pyrene and naphthalene, limits were never exceeded. Amount of bleeding was the only perioperative variable that correlated with levels of PAHs. Conclusions. Low levels of PAHs were detected in electrocautery smoke during peritonectomy procedures, and an increased amount of bleeding correlated with higher levels of PAHs. For evaluation of long-term health effects, more studies are needed.

  14. Polycyclic Aromatic Hydrocarbons in Electrocautery Smoke during Peritonectomy Procedures

    Science.gov (United States)

    Näslund Andréasson, Sara; Mahteme, Haile; Sahlberg, Bo; Anundi, Helena

    2012-01-01

    Objective. This study identified and quantified polycyclic aromatic hydrocarbons (PAHs) in electrocautery smoke during 40 peritonectomy procedures and investigated any correlations and/or differences between levels of PAHs and perioperative variables. Methods. PAHs were measured in personal and stationary sampling by 40 mm Millipore cassettes, for adsorption of both gaseous and particle-bound PAHs. Results. All 16 USEPA priority pollutant PAHs were detected during peritonectomy procedures, naphthalene being the most abundant. For the only two PAHs with Swedish occupational exposure limits (OELs), benzo[a]pyrene and naphthalene, limits were never exceeded. Amount of bleeding was the only perioperative variable that correlated with levels of PAHs. Conclusions. Low levels of PAHs were detected in electrocautery smoke during peritonectomy procedures, and an increased amount of bleeding correlated with higher levels of PAHs. For evaluation of long-term health effects, more studies are needed. PMID:22685482

  15. Bias in random forest variable importance measures: Illustrations, sources and a solution

    Directory of Open Access Journals (Sweden)

    Hothorn Torsten

    2007-01-01

    Full Text Available Abstract Background Variable importance measures for random forests have been receiving increased attention as a means of variable selection in many classification tasks in bioinformatics and related scientific fields, for instance to select a subset of genetic markers relevant for the prediction of a certain disease. We show that random forest variable importance measures are a sensible means for variable selection in many applications, but are not reliable in situations where potential predictor variables vary in their scale of measurement or their number of categories. This is particularly important in genomics and computational biology, where predictors often include variables of different types, for example when predictors include both sequence data and continuous variables such as folding energy, or when amino acid sequence data show different numbers of categories. Results Simulation studies are presented illustrating that, when random forest variable importance measures are used with data of varying types, the results are misleading because suboptimal predictor variables may be artificially preferred in variable selection. The two mechanisms underlying this deficiency are biased variable selection in the individual classification trees used to build the random forest on one hand, and effects induced by bootstrap sampling with replacement on the other hand. Conclusion We propose to employ an alternative implementation of random forests, that provides unbiased variable selection in the individual classification trees. When this method is applied using subsampling without replacement, the resulting variable importance measures can be used reliably for variable selection even in situations where the potential predictor variables vary in their scale of measurement or their number of categories. The usage of both random forest algorithms and their variable importance measures in the R system for statistical computing is illustrated and
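
    The remedy proposed above is implemented in R (conditional inference forests grown on subsamples drawn without replacement). scikit-learn offers no exact counterpart, so the hedged sketch below instead scores synthetic predictors of differing types by permutation importance on held-out data, which likewise avoids the impurity-based preference for many-category or continuous predictors.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 500
      X = np.column_stack([
          rng.integers(0, 2, n),        # binary predictor (the only informative one)
          rng.integers(0, 20, n),       # many-category predictor, uninformative
          rng.normal(size=n),           # continuous predictor, uninformative
      ])
      y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
      result = permutation_importance(forest, X_te, y_te, n_repeats=30, random_state=0)
      print(result.importances_mean)    # importance should concentrate on the first column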

  16. A procedure for selection on marking in hardwoods

    Science.gov (United States)

    George R., Jr. Trimble; Joseph J. Mendel; Richard A. Kennell

    1974-01-01

    This method of applying individual-tree selection silviculture to hardwood stands combines silvicultural considerations with financial maturity guidelines into a tree-marking system. To develop this system it was necessary to determine rates of return based on 4/4 lumber, for many of the important Appalachian species. Trees were viewed as capital investments that...

  17. 42 CFR 431.814 - Sampling plan and procedures.

    Science.gov (United States)

    2010-10-01

    ... reliability of the reduced sample. (4) The sample selection procedure. Systematic random sampling is... sampling, and yield estimates with the same or better precision than achieved in systematic random sampling... 42 Public Health 4 2010-10-01 2010-10-01 false Sampling plan and procedures. 431.814 Section 431...

  18. Statistical selection : a way of thinking !

    NARCIS (Netherlands)

    Laan, van der P.; Aarts, E.H.L.; Eikelder, ten H.M.M.; Hemerik, C.; Rem, M.

    1995-01-01

    Statistical selection of the best population is discussed in general terms and the principles of statistical selection procedures are presented. Advantages and disadvantages of Subset Selection, one of the main approaches, are indicated. The selection of an almost best population is considered and

  19. Statistical selection : a way of thinking!

    NARCIS (Netherlands)

    Laan, van der P.

    1995-01-01

    Statistical selection of the best population is discussed in general terms and the principles of statistical selection procedures are presented. Advantages and disadvantages of Subset Selection, one of the main approaches, are indicated. The selection of an almost best population is considered and

  20. Soil Cd, Cr, Cu, Ni, Pb and Zn sorption and retention models using SVM: Variable selection and competitive model.

    Science.gov (United States)

    González Costa, J J; Reigosa, M J; Matías, J M; Covelo, E F

    2017-09-01

    The aim of this study was to model the sorption and retention of Cd, Cu, Ni, Pb and Zn in soils. To that extent, the sorption and retention of these metals were studied and the soil characterization was performed separately. Multiple stepwise regression was used to produce multivariate models with linear techniques and with support vector machines, all of which included 15 explanatory variables characterizing soils. When the R-squared values are represented, two different groups are noticed. Cr, Cu and Pb sorption and retention show a higher R-squared; the most explanatory variables being humified organic matter, Al oxides and, in some cases, cation-exchange capacity (CEC). The other group of metals (Cd, Ni and Zn) shows a lower R-squared, and clays are the most explanatory variables, including a percentage of vermiculite and slime. In some cases, quartz, plagioclase or hematite percentages also show some explanatory capacity. Support Vector Machine (SVM) regression shows that the different models are not as regular as in multiple regression in terms of number of variables, the regression for nickel adsorption being the one with the highest number of variables in its optimal model. On the other hand, there are cases where the most explanatory variables are the same for two metals, as it happens with Cd and Cr adsorption. A similar adsorption mechanism is thus postulated. These patterns of the introduction of variables in the model allow us to create explainability sequences. Those which are the most similar to the selectivity sequences obtained by Covelo (2005) are Mn oxides in multiple regression and change capacity in SVM. Among all the variables, the only one that is explanatory for all the metals after applying the maximum parsimony principle is the percentage of sand in the retention process. In the competitive model arising from the aforementioned sequences, the most intense competitiveness for the adsorption and retention of different metals appears between
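
    As a loose Python sketch of coupling stepwise variable selection with a support vector model (the paper's own implementation and 15-variable set are not reproduced), forward sequential selection can be wrapped around a scaled SVR pipeline; the target name and the number of retained variables are placeholders.

      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      def fit_sorption_model(soil_df, target="Cd_sorbed", n_vars=5):
          """soil_df: pandas DataFrame with soil characterization columns plus the target."""
          X, y = soil_df.drop(columns=[target]), soil_df[target]
          svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
          selector = SequentialFeatureSelector(svr, n_features_to_select=n_vars,
                                               direction="forward", cv=5, scoring="r2")
          selector.fit(X, y)
          chosen = list(X.columns[selector.get_support()])
          return chosen, svr.fit(X[chosen], y)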

  1. Trends and variability in the hydrological regime of the Mackenzie River Basin

    Science.gov (United States)

    Abdul Aziz, Omar I.; Burn, Donald H.

    2006-03-01

    Trends and variability in the hydrological regime were analyzed for the Mackenzie River Basin in northern Canada. The procedure utilized the Mann-Kendall non-parametric test to detect trends, the Trend Free Pre-Whitening (TFPW) approach for correcting time-series data for autocorrelation and a bootstrap resampling method to account for the cross-correlation structure of the data. A total of 19 hydrological and six meteorological variables were selected for the study. Analysis was conducted on hydrological data from a network of 54 hydrometric stations and meteorological data from a network of 10 stations. The results indicated that several hydrological variables exhibit a greater number of significant trends than are expected to occur by chance. Noteworthy were strong increasing trends over the winter month flows of December to April as well as in the annual minimum flow and weak decreasing trends in the early summer and late fall flows as well as in the annual mean flow. An earlier onset of the spring freshet is noted over the basin. The results are expected to assist water resources managers and policy makers in making better planning decisions in the Mackenzie River Basin.
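
    For a single station series, the core of the procedure described above can be sketched as follows: estimate Sen's slope, remove it, strip the lag-1 autocorrelation, add the trend back (trend-free pre-whitening) and then apply the Mann-Kendall test. The bootstrap treatment of cross-correlation between stations is not reproduced, ties are ignored, and the annual flow values in the example are invented.

      import numpy as np
      from scipy.stats import norm

      def sens_slope(x):
          n = len(x)
          return np.median([(x[j] - x[i]) / (j - i) for i in range(n) for j in range(i + 1, n)])

      def mann_kendall(x):
          n = len(x)
          s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
          var_s = n * (n - 1) * (2 * n + 5) / 18.0                  # no ties assumed
          z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
          return z, 2 * (1 - norm.cdf(abs(z)))                      # statistic and two-sided p-value

      def tfpw_mann_kendall(x):
          x = np.asarray(x, float)
          t = np.arange(len(x))
          b = sens_slope(x)
          detrended = x - b * t
          r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]     # lag-1 autocorrelation
          whitened = detrended[1:] - r1 * detrended[:-1]
          return mann_kendall(whitened + b * t[1:])                 # add the trend back, then test

      flow = [310, 295, 330, 320, 350, 345, 365, 340, 370, 390, 385, 400]   # invented annual means
      print(tfpw_mann_kendall(flow))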

  2. An optimized procedure for preconcentration, determination and on-line recovery of palladium using highly selective diphenyldiketone-monothiosemicarbazone modified silica gel

    International Nuclear Information System (INIS)

    Sharma, R.K.; Pandey, Amit; Gulati, Shikha; Adholeya, Alok

    2012-01-01

    Highlights: ► Diphenyldiketone-monothiosemicarbazone modified silica gel. ► Highly selective, efficient and reusable chelating resin. ► Solid phase extraction system for on-line separation and preconcentration of Pd(II) ions. ► Application in catalytic converter and spiked tap water samples for on-line recovery of Pd(II) ions. - Abstract: A novel, highly selective, efficient and reusable chelating resin, diphenyldiketone-monothiosemicarbazone modified silica gel, was prepared and applied for the on-line separation and preconcentration of Pd(II) ions in catalytic converter and spiked tap water samples. Several parameters like effect of pH, sample volume, flow rate, type of eluent, and influence of various ionic interferences, etc. were evaluated for effective adsorption of palladium at trace levels. The resin was found to be highly selective for Pd(II) ions in the pH range 4–5 with a very high sorption capacity of 0.73 mmol/g and preconcentration factor of 335. The present environment friendly procedure has also been applied for large-scale extraction by employing the use of newly designed reactor in which on-line separation and preconcentration of Pd can be carried out easily and efficiently in short duration of time.

  3. SENDS criteria from the diversification of MAST procedures. Implementation of preoperative simulation

    International Nuclear Information System (INIS)

    Rieger, B.

    2015-01-01

    Minimal access spinal technologies (MAST) lead to a diversification of surgical procedures, which requires careful selection of the procedure and outcome monitoring. For a rational selection of the procedure simulation, endoscopy, navigation, decompression and stabilization (SENDS) criteria can be derived from the development of the MAST procedures. Preoperative simulation has diagnostic and therapeutic values. The SENDS criteria can be verified indirectly via outcome control. Biomechanically meaningful diagnostic x-rays of the spinal segment to be surgically treated are currently carried out with the patient in inclination and reclination. Software-related preoperative simulation based on these x-ray images facilitates the selection and implementation of the MAST procedure. For preoperative simulation motion shots are needed in inclination, neutral position and reclination and the dimensions can be obtained using an x-ray ball or a computed tomography (CT) scan. The SENDS criteria are useful because established procedures based on these criteria reach a comparable outcome. Preoperative simulation appears to be a useful selection criterion. Preoperatively it is necessary to collate patient and segment information in order to provide each patient with individualized treatment. So far there is no evidence for a better outcome after preoperative simulation but a reduction of surgery time and intraoperative radiation exposure could already be demonstrated. Minimally invasive methods should be preferred if there is a comparable outcome. The establishment of new procedures has to be accompanied by the maintenance of a spine register. Minimally invasive surgical procedures should be individualized for each patient and segment. Mobility X-ray images should be prepared for use with the preoperative simulation as the information content significantly increases with respect to the MAST procedure. (orig.) [de

  4. Interval ridge regression (iRR) as a fast and robust method for quantitative prediction and variable selection applied to edible oil adulteration.

    Science.gov (United States)

    Jović, Ozren; Smrečki, Neven; Popović, Zora

    2016-04-01

    A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is performed on six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS). iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in one out of 9 cases). Hempseed (H) oil, a well-known health-beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that, using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (R(2)>0.99 in all 8 cases). Copyright © 2015 Elsevier B.V. All rights reserved.
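
    The interval idea can be sketched in a few lines: the wavelength axis is cut into contiguous blocks, a ridge model is cross-validated on each block, and the best-scoring blocks are pooled for a final model. The sketch below is a simplified illustration of that workflow, not the authors' exact algorithm; the interval count, scoring rule and pooling step are assumptions, with X a samples-by-wavelengths spectral matrix and y the oil fraction.

    # Minimal interval-ridge-regression-style sketch (simplified; settings are illustrative)
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    def interval_ridge_selection(X, y, n_intervals=20, n_keep=3):
        """Rank contiguous wavelength intervals by cross-validated ridge RMSE."""
        blocks = np.array_split(np.arange(X.shape[1]), n_intervals)
        scores = []
        for cols in blocks:
            model = RidgeCV(alphas=np.logspace(-4, 4, 17))
            # negative MSE from the scorer -> RMSE used to rank the interval
            mse = -cross_val_score(model, X[:, cols], y,
                                   scoring="neg_mean_squared_error", cv=5).mean()
            scores.append(np.sqrt(mse))
        best = np.argsort(scores)[:n_keep]                 # keep the best-scoring intervals
        selected = np.concatenate([blocks[i] for i in best])
        final_model = RidgeCV(alphas=np.logspace(-4, 4, 17)).fit(X[:, selected], y)
        return selected, final_model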

  5. Selecting candidate predictor variables for the modelling of post ...

    African Journals Online (AJOL)

    Objectives: The objective of this project was to determine the variables most likely to be associated with post- .... (as defined subjectively by the research team) in global .... ed on their lack of knowledge of wealth scoring tools. ... HIV serology.

  6. 32 CFR 1697.8 - Procedures for salary offset.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Procedures for salary offset. 1697.8 Section 1697.8 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE SYSTEM SALARY OFFSET § 1697.8 Procedures for salary offset. (a) Deductions to liquidate an employee's debt will be by...

  7. More than just tapping: index finger-tapping measures procedural learning in schizophrenia.

    Science.gov (United States)

    Da Silva, Felipe N; Irani, Farzin; Richard, Jan; Brensinger, Colleen M; Bilker, Warren B; Gur, Raquel E; Gur, Ruben C

    2012-05-01

    Finger-tapping has been widely studied using behavioral and neuroimaging paradigms. Evidence supports the use of finger-tapping as an endophenotype in schizophrenia, but its relationship with motor procedural learning remains unexplored. To our knowledge, this study presents the first use of index finger-tapping to study procedural learning in individuals with schizophrenia or schizoaffective disorder (SCZ/SZA) as compared to healthy controls. A computerized index finger-tapping test was administered to 1169 SCZ/SZA patients (62% male, 88% right-handed), and 689 healthy controls (40% male, 93% right-handed). Number of taps per trial and learning slopes across trials for the dominant and non-dominant hands were examined for motor speed and procedural learning, respectively. Both healthy controls and SCZ/SZA patients demonstrated procedural learning for their dominant hand but not for their non-dominant hand. In addition, patients showed a greater capacity for procedural learning even though they demonstrated more variability in procedural learning compared to healthy controls. Left-handers of both groups performed better than right-handers and had less variability in mean number of taps between non-dominant and dominant hands. Males also had less variability in mean tap count between dominant and non-dominant hands than females. As expected, patients had a lower mean number of taps than healthy controls, males outperformed females and dominant-hand trials had more mean taps than non-dominant hand trials in both groups. The index finger-tapping test can measure both motor speed and procedural learning, and motor procedural learning may be intact in SCZ/SZA patients. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Rewarding leadership and fair procedures as determinants of self-esteem

    OpenAIRE

    De Cremer, D.; Knippenberg, D.; Knippenberg, B.; Mullenders, D.; Stinglhamber, F.

    2005-01-01

    In the present research, the authors examined the effect of procedural fairness and rewarding leadership style on an important variable for employees: self-esteem. The authors predicted that procedural fairness would positively influence people's reported self-esteem if the leader adopted a style of rewarding behavior for a job well done. Results from a scenario experiment, a laboratory experiment, and an organizational survey indeed show that procedural fairness and rewarding leadership style interacted to influence followers' self-esteem, such that the positive relationship between procedural fairness and self-esteem was more pronounced when the leadership style was high in rewarding behavior.

  9. Evaluation of standardized and applied variables in predicting treatment outcomes of polytrauma patients.

    Science.gov (United States)

    Aksamija, Goran; Mulabdic, Adi; Rasic, Ismar; Muhovic, Samir; Gavric, Igor

    2011-01-01

    Polytrauma is defined as an injury affecting at least two different organ systems or body regions, with at least one of the injuries being life-threatening. Given the multilevel model of care for polytrauma patients within KCUS, weaknesses in the management of this category of patients are inevitable. The aims were to determine the dynamics of existing procedures in the treatment of polytrauma patients on admission to KCUS and, based on statistical analysis of the applied variables, to define the factors that influence the final outcome of treatment and their mutual relationships, with a view to eliminating the flaws in the approach to the problem. The study was based on 263 polytrauma patients. Parametric and non-parametric statistical methods were used. Basic statistics were calculated and, building on these parameters, multicorrelation analysis, image analysis, discriminant analysis and multifactorial analysis were applied to achieve the research objectives. From the universe of variables, a sample of n = 25 variables was selected; the first two are modular, while the remaining 23 belong to the common measurement space and are defined in this paper as the system of variables describing methods, procedures and assessments of polytrauma patients. After the multicorrelation analysis, and since the image analysis gave reliable measurement results, the analysis of eigenvalues was carried out, that is, the factors were defined in order to obtain information about how the existing model solves the problem and how it correlates with treatment outcome. The study singled out the essential factors that determine the current organizational model of care and that may affect the treatment and outcome of polytrauma patients. The analysis showed the maximum correlative relationships between these practices and contributed to the development of guidelines defined by the isolated factors.

  10. Physical attraction to reliable, low variability nervous systems: Reaction time variability predicts attractiveness.

    Science.gov (United States)

    Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard

    2017-01-01

    The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Modeling soil organic matter (SOM) from satellite data using VISNIR-SWIR spectroscopy and PLS regression with step-down variable selection algorithm: case study of Campos Amazonicos National Park savanna enclave, Brazil

    Science.gov (United States)

    Rosero-Vlasova, O.; Borini Alves, D.; Vlassova, L.; Perez-Cabello, F.; Montorio Lloveria, R.

    2017-10-01

    Deforestation in the Amazon basin, due among other factors to frequent wildfires, demands continuous post-fire monitoring of soil and vegetation. Thus, the study posed two objectives: (1) evaluate the capacity of Visible - Near InfraRed - ShortWave InfraRed (VIS-NIR-SWIR) spectroscopy to estimate soil organic matter (SOM) in fire-affected soils, and (2) assess the feasibility of SOM mapping from satellite images. For this purpose, 30 soil samples (surface layer) were collected in 2016 in areas of grass and riparian vegetation of Campos Amazonicos National Park, Brazil, repeatedly affected by wildfires. Standard laboratory procedures were applied to determine SOM. Reflectance spectra of the soils were obtained under controlled laboratory conditions using a FieldSpec4 spectroradiometer (spectral range 350-2500 nm). Measured spectra were resampled to simulate reflectances for Landsat-8, Sentinel-2 and EnMAP spectral bands, used as predictors in SOM models developed using Partial Least Squares regression and a step-down variable selection algorithm (PLSR-SD). The best fit was achieved with models based on reflectances simulated for EnMAP bands (R2=0.93; R2cv=0.82 and NMSE=0.07; NMSEcv=0.19). The model uses only 8 out of 244 predictors (bands) chosen by the step-down variable selection algorithm. The least reliable estimates (R2=0.55 and R2cv=0.40 and NMSE=0.43; NMSEcv=0.60) resulted from the Landsat model, while the Sentinel-2 model showed R2=0.68 and R2cv=0.63; NMSE=0.31 and NMSEcv=0.38. The results confirm the high potential of VIS-NIR-SWIR spectroscopy for SOM estimation. Application of the step-down algorithm produces sparser and better-fitting models. Finally, SOM can be estimated with acceptable accuracy (NMSE 0.35) from EnMAP and Sentinel-2 data, enabling mapping and analysis of the impacts of repeated wildfires on soils in the study area.
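
    As an illustration of the step-down idea, the sketch below couples partial least squares regression with a backward-elimination loop that drops one spectral band at a time as long as the cross-validated RMSE keeps improving. It is a simplified stand-in for the PLSR-SD algorithm used in the study; the component count, CV scheme and stopping rule are assumptions.

    # Simplified PLS regression with step-down (backward-elimination) band selection
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def cv_rmse(X, y, n_components=3):
        n_comp = min(n_components, X.shape[1])
        mse = -cross_val_score(PLSRegression(n_components=n_comp), X, y,
                               scoring="neg_mean_squared_error", cv=5).mean()
        return np.sqrt(mse)

    def step_down_pls(X, y):
        """Drop one band at a time while cross-validated RMSE keeps improving."""
        kept = list(range(X.shape[1]))
        best = cv_rmse(X[:, kept], y)
        improved = True
        while improved and len(kept) > 1:
            improved = False
            for j in list(kept):
                trial = [k for k in kept if k != j]
                rmse = cv_rmse(X[:, trial], y)
                if rmse < best:            # removing band j helps, so drop it
                    best, kept, improved = rmse, trial, True
                    break
        return kept, best

    # kept, rmse = step_down_pls(simulated_band_reflectances, som_values)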

  12. A general digital computer procedure for synthesizing linear automatic control systems

    International Nuclear Information System (INIS)

    Cummins, J.D.

    1961-10-01

    The fundamental concepts required for synthesizing a linear automatic control system are considered. A generalized procedure for synthesizing automatic control systems is demonstrated. This procedure has been programmed for the Ferranti Mercury and the IBM 7090 computers. Details of the programmes are given. The procedure uses the linearized set of equations which describe the plant to be controlled as the starting point. Subsequent computations determine the transfer functions between any desired variables. The programmes also compute the root and phase loci for any linear (and some non-linear) configurations in the complex plane, the open loop and closed loop frequency responses of a system, the residues of a function of the complex variable 's' and the time response corresponding to these residues. With these general programmes available the design of 'one point' automatic control systems becomes a routine scientific procedure. Also dynamic assessments of plant may be carried out. Certain classes of multipoint automatic control problems may also be solved with these procedures. Autonomous systems, invariant systems and orthogonal systems may also be studied. (author)
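
    The core computation the report describes, going from a linearised plant model to the transfer function and time response between chosen variables, can be reproduced today with standard tools. The sketch below uses SciPy on an illustrative two-state plant; the matrices are placeholders, not values taken from the report.

    # Transfer function and step response from a linearised state-space plant model
    import numpy as np
    from scipy import signal

    A = np.array([[-1.0, 0.5],
                  [0.0, -2.0]])           # linearised plant dynamics (illustrative)
    B = np.array([[0.0], [1.0]])          # single control input
    C = np.array([[1.0, 0.0]])            # controlled (output) variable
    D = np.array([[0.0]])

    num, den = signal.ss2tf(A, B, C, D)   # transfer function between input and output
    sys = signal.TransferFunction(num[0], den)
    t, y_step = signal.step(sys)          # time response to a unit step demand
    print("numerator:", num[0], "denominator:", den)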

  13. Usefulness of a PARAFAC decomposition in the fiber selection procedure to determine chlorophenols by means SPME-GC-MS.

    Science.gov (United States)

    Morales, Rocío; Cruz Ortiz, M; Sarabia, Luis A

    2012-05-01

    In this work, a procedure based on solid-phase microextraction (SPME) and gas chromatography coupled with mass spectrometry is proposed to determine chlorophenols in water without derivatization. The following chlorophenols are studied: 2,4-dichlorophenol; 2,4,6-trichlorophenol; 2,3,4,6-tetrachlorophenol and pentachlorophenol. Three kinds of SPME fibers, polyacrylate, polydimethylsiloxane, and polydimethylsiloxane/divinylbenzene, are compared to identify the most suitable one for the extraction process on the basis of two criteria: (a) selecting the equilibrium time by studying the kinetics of the extraction, and (b) obtaining the best values of the figures of merit. In both cases, a three-way PARAllel FACtor analysis (PARAFAC) decomposition is used. For the first step, the three-way experimental data are arranged as follows: if I extraction times are considered, the tensor of data, X, of dimensions I × J × K is generated by concatenating the I matrices formed by the abundances of the J m/z ions recorded in K elution times around the retention time for each chlorophenol. The second-order property of PARAFAC (or PARAFAC2) assures the unequivocal identification of each chlorophenol; as a consequence, the loadings in the first mode estimated by the PARAFAC decomposition are the kinetic profile. For the second step, a calibration based on a PARAFAC decomposition is used for each fiber. The best figures of merit were obtained with the PDMS/DVB fiber. The values of the decision limit, CCα, achieved are between 0.29 and 0.67 μg L(-1) for the four chlorophenols. The accuracy (trueness and precision) of the procedure was assessed. This procedure has been applied to river water samples.
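
    A minimal sketch of the three-way decomposition step is given below, assuming the I × J × K data tensor (extraction times × m/z ions × elution times) has already been assembled as described; tensorly's PARAFAC routine is used as one possible implementation, and the tensor here is a random placeholder.

    # PARAFAC decomposition of an SPME-GC-MS data tensor (placeholder data)
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # X: abundances of J m/z ions over K elution times, stacked for I extraction times
    X = np.random.rand(8, 40, 25)                      # placeholder; real data goes here
    weights, factors = parafac(tl.tensor(X), rank=1)   # one component per analyte profile

    time_profile, spectral_profile, elution_profile = factors
    # The first-mode loadings (time_profile) estimate the extraction kinetic profile,
    # which is what the fiber-selection step inspects to choose the equilibrium time.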

  14. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

    Full Text Available Time series forecasting using machine learning algorithms has gained popularity recently. Random forest is a machine learning algorithm implemented in time series forecasting; however, most of its forecasting properties have remained unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series, with the aim of suggesting an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed of 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of RF is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect of achieving higher predictive accuracy.
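
    The forecasting setup can be sketched directly: each observation is predicted from its few most recent lagged values, and a random forest is fit to that lagged design matrix. The series, lag count and forest settings below are illustrative assumptions, not the study's configuration.

    # One-step-ahead forecasting with a random forest on lagged predictor variables
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def make_lagged(series, n_lags=3):
        """Build a design matrix of the n_lags most recent values for each step."""
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        y = series[n_lags:]
        return X, y

    series = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.randn(200)  # toy series
    X, y = make_lagged(series, n_lags=3)
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    next_value = rf.predict(series[-3:].reshape(1, -1))   # one-step-ahead forecast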

  15. 5 CFR 330.1204 - Selection.

    Science.gov (United States)

    2010-01-01

    ... Employees § 330.1204 Selection. (a) If two or more individuals apply for a vacancy and the hiring agency... agency (under appropriate selection procedures, then: (3) Current or former Federal employees displaced... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Selection. 330.1204 Section 330.1204...

  16. Program Baseline Change Control Procedure

    International Nuclear Information System (INIS)

    1993-02-01

    This procedure establishes the responsibilities and process for approving initial issues of and changes to the technical, cost, and schedule baselines, and selected management documents developed by the Office of Civilian Radioactive Waste Management (OCRWM) for the Civilian Radioactive Waste Management System. This procedure implements the OCRWM Baseline Management Plan and DOE Order 4700.1, Chg 1. It streamlines the change control process to enhance integration, accountability, and traceability of Level 0 and Level 1 decisions through standardized Baseline Change Proposal (BCP) forms to be used by the Level 0, 1, 2, and 3 Baseline Change Control Boards (BCCBs) and to be tracked in the OCRWM-wide Configuration Information System (CIS) Database. This procedure applies to all technical, cost, and schedule baselines controlled by the Energy System Acquisition Advisory Board (ESAAB) BCCB (Level 0) and the OCRWM Program Baseline Control Board (PBCCB) (Level 1). All baseline BCPs initiated by Level 2 or lower BCCBs, which require approval from ESAAB or PBCCB, shall be processed in accordance with this procedure. This procedure also applies to all Program-level management documents controlled by the OCRWM PBCCB.

  17. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for model selection to represent the failure process for a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed to analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework discriminates between the use of statistical distributions to represent the time to failure ('renewal approach') and the use of stochastic point processes ('repairable systems approach'), when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
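
    A central step of such a framework is testing the failure history for trend before committing to a renewal or repairable-systems model. The sketch below implements the classical Laplace trend test as one such test; the failure times are hypothetical, not the backhoe data from the paper.

    # Laplace trend test on cumulative failure times (time-truncated observation)
    import numpy as np
    from scipy import stats

    def laplace_trend_test(failure_times, observation_end):
        """Return the Laplace statistic and a two-sided p-value under the HPP null."""
        t = np.asarray(failure_times, dtype=float)
        n = len(t)
        u = (t.mean() - observation_end / 2.0) / (observation_end * np.sqrt(1.0 / (12.0 * n)))
        p_value = 2.0 * (1.0 - stats.norm.cdf(abs(u)))
        return u, p_value

    # Cumulative operating hours at which a component failed (hypothetical values):
    failures = [220.0, 540.0, 810.0, 1010.0, 1170.0, 1290.0, 1370.0]
    u, p = laplace_trend_test(failures, observation_end=1500.0)
    # u > 0 suggests deterioration (failures cluster late); u < 0 suggests improvement;
    # a non-significant u supports modelling i.i.d. times between failures (renewal).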

  18. Machine learning search for variable stars

    Science.gov (United States)

    Pashchenko, Ilya N.; Sokolovsky, Kirill V.; Gavras, Panagiotis

    2018-04-01

    Photometric variability detection is often considered as a hypothesis testing problem: an object is variable if the null hypothesis that its brightness is constant can be ruled out given the measurements and their uncertainties. The practical applicability of this approach is limited by uncorrected systematic errors. We propose a new variability detection technique sensitive to a wide range of variability types while being robust to outliers and underestimated measurement uncertainties. We consider variability detection as a classification problem that can be approached with machine learning. Logistic Regression (LR), Support Vector Machines (SVM), k Nearest Neighbours (kNN), Neural Nets (NN), Random Forests (RF), and Stochastic Gradient Boosting classifier (SGB) are applied to 18 features (variability indices) quantifying scatter and/or correlation between points in a light curve. We use a subset of Optical Gravitational Lensing Experiment phase two (OGLE-II) Large Magellanic Cloud (LMC) photometry (30 265 light curves) that was searched for variability using traditional methods (168 known variable objects) as the training set and then apply the NN to a new test set of 31 798 OGLE-II LMC light curves. Among 205 candidates selected in the test set, 178 are real variables, while 13 low-amplitude variables are new discoveries. The machine learning classifiers considered are found to be more efficient (select more variables and fewer false candidates) compared to traditional techniques using individual variability indices or their linear combination. The NN, SGB, SVM, and RF show a higher efficiency compared to LR and kNN.
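
    The classification framing can be sketched with off-the-shelf tools: each light curve is summarised by variability indices, known variables provide the positive labels, and several classifiers are compared by cross-validated performance. The feature matrix below is a random placeholder standing in for the 18 OGLE-II indices; classifier settings are illustrative.

    # Comparing classifiers on per-light-curve variability indices (placeholder data)
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X = np.random.rand(500, 18)                      # 18 variability indices per light curve
    y = (np.random.rand(500) < 0.05).astype(int)     # 1 = known variable object

    classifiers = {
        "LR": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "NN": MLPClassifier(max_iter=2000),
        "RF": RandomForestClassifier(n_estimators=300),
        "SGB": GradientBoostingClassifier(),
    }
    for name, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(), clf)
        auc = cross_val_score(pipe, X, y, scoring="roc_auc", cv=5).mean()
        print(f"{name}: cross-validated AUC = {auc:.3f}")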

  19. The quasar luminosity function from a variability-selected sample

    Science.gov (United States)

    Hawkins, M. R. S.; Veron, P.

    1993-01-01

    A sample of quasars is selected from a 10-yr sequence of 30 UK Schmidt plates. Luminosity functions are derived in several redshift intervals, which in each case show a featureless power-law rise towards low luminosities. There is no sign of the 'break' found in the recent UVX sample of Boyle et al. It is suggested that reasons for the disagreement are connected with biases in the selection of the UVX sample. The question of the nature of quasar evolution appears to be still unresolved.

  20. Internal variables in thermoelasticity

    CERN Document Server

    Berezovski, Arkadi

    2017-01-01

    This book describes an effective method for modeling advanced materials like polymers, composite materials and biomaterials, which are, as a rule, inhomogeneous. The thermoelastic theory with internal variables presented here provides a general framework for predicting a material’s reaction to external loading. The basic physical principles provide the primary theoretical information, including the evolution equations of the internal variables. The cornerstones of this framework are the material representation of continuum mechanics, a weak nonlocality, a non-zero extra entropy flux, and a consecutive employment of the dissipation inequality. Examples of thermoelastic phenomena are provided, accompanied by detailed procedures demonstrating how to simulate them.

  1. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    Directory of Open Access Journals (Sweden)

    Ryan P Franckowiak

    Full Text Available In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
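
    For readers unfamiliar with MRM, the sketch below shows the basic construction the simulations evaluate: symmetric distance matrices are unfolded into vectors of pairwise distances, an ordinary least-squares fit is made, and a Gaussian-likelihood AIC is computed on the unfolded data. Note that the nominal sample size becomes the number of pairs, n(n-1)/2, which is exactly the inflation discussed above. Function names and the AIC form (up to an additive constant) are illustrative.

    # Multiple regression on distance matrices with a naive AIC on the unfolded pairs
    import numpy as np
    from itertools import combinations

    def unfold(D):
        """Lower-triangle (pairwise) entries of a symmetric distance matrix."""
        idx = np.array(list(combinations(range(D.shape[0]), 2)))
        return D[idx[:, 0], idx[:, 1]]

    def mrm_aic(genetic_D, predictor_Ds):
        y = unfold(genetic_D)
        X = np.column_stack([np.ones_like(y)] + [unfold(D) for D in predictor_Ds])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        n, k = len(y), X.shape[1]
        sigma2 = resid @ resid / n
        aic = n * np.log(sigma2) + 2 * (k + 1)   # Gaussian form, constants dropped
        return beta, aic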

  2. The site selection process

    International Nuclear Information System (INIS)

    Kittel, J.H.

    1989-01-01

    One of the most arduous tasks associated with the management of radioactive wastes is the siting of new disposal facilities. Experience has shown that the performance of the disposal facility during and after disposal operations is critically dependent on the characteristics of the site itself. The site selection process consists of defining needs and objectives, identifying geographic regions of interest, screening and selecting candidate sites, collecting data on the candidate sites, and finally selecting the preferred site. Before the site selection procedures can be implemented, however, a formal legal system must be in place that defines broad objectives and, most importantly, clearly establishes responsibilities and accompanying authorities for the decision-making steps in the procedure. Site selection authorities should make every effort to develop trust and credibility with the public, local officials, and the news media. The responsibilities of supporting agencies must also be spelled out. Finally, a stable funding arrangement must be established so that activities such as data collection can proceed without interruption. Several examples, both international and within the US, are given

  3. Anesthesia for radiologic procedures

    International Nuclear Information System (INIS)

    Forestner, J.E.

    1987-01-01

    Anesthetic techniques for neurodiagnostic studies and radiation therapy have been recently reviewed, but anesthetic involvement in thoracic and abdominal radiology has received little attention. Patient reactions to radiologic contrast media may be of concern to the anesthesiologist, who is often responsible for injecting these agents during diagnostic procedures, and thus are included in this discussion. Finally, the difficulties of administering anesthesia for magnetic resonance imaging (MRI) scans are outlined, in an effort to help anesthesiologists anticipate problems with this new technologic development. Although there are very few indications for the use of general anesthesia for diagnostic radiologic studies in adults, most procedures performed on children, the mentally retarded, or the combative adult require either heavy sedation or general anesthesia. In selecting an anesthetic technique for a specific procedure, both the patient's disease process and the requirements of the radiologist must be carefully balanced.

  4. The Criteria and Variables Affecting the Selection of Quality Book Ideally Suited for Translation: The Perspectives of King Saud University Staff

    Directory of Open Access Journals (Sweden)

    Abdulaziz Abdulrahman Abanomey

    2015-04-01

    Full Text Available This study investigated the ideal definition of a QB, that is, a Quality Book (one ideally suited for translation), and the variables affecting its selection criteria among 136 members of the King Saud University (KSU) academic staff. A workshop was held to elicit the ideal definition of a QB to answer the first question, and a 19-item electronic questionnaire with four domains was designed to help collect the data necessary to answer the other two questions of the study. The results revealed that all four domains scored low; "Authorship and Publication" came the highest with a mean score of 2.28 and "Titling and Contents" came the lowest with a mean score of 1.76. A 5-way ANOVA (without interaction) was applied in accordance with the variables of the study at α ≤ 0.05 among the mean scores. The analysis revealed significance for the variables of gender, having translated a book or more before, and having participated in a conference devoted to translation, whereas the variables of qualification and revising a translated book did not reveal any statistical significance. Key words: Quality Book, KSU, Authorship, Translation, Titling

  5. Recommended safety procedures for the selection and use of demonstration-type gas discharge devices in schools

    International Nuclear Information System (INIS)

    1979-01-01

    A 1972 survey of 30 Ottawa secondary schools revealed a total of 347 actual or potential X-ray sources available in these schools. More than half of these sources were gas discharge tubes. Some gas discharge tubes, in particular the cold cathode type, can emit X-rays at significantly high levels. Unless such tubes are used carefully, and with regard for good radiation safety practices, they can result in exposures to students that are in excess of the maximum levels recommended by the International Commission on Radiological Protection. Several cases of the recommended dose being exceeded were found in the classes surveyed. This document has been prepared to assist science teachers and others using demonstration-type gas discharge devices to select and use such devices so as to present negligible risk to themselves and students. Useful information on safety procedures to be followed when performing demonstrations or experiments is included. (J.T.A.)

  6. The Kjeldahl method as a primary reference procedure for total protein in certified reference materials used in clinical chemistry. II. Selection of direct Kjeldahl analysis and its preliminary performance parameters.

    Science.gov (United States)

    Vinklárková, Bára; Chromý, Vratislav; Šprongl, Luděk; Bittová, Miroslava; Rikanová, Milena; Ohnútková, Ivana; Žaludová, Lenka

    2015-01-01

    To select a Kjeldahl procedure suitable for the determination of total protein in reference materials used in laboratory medicine, we reviewed in our previous article Kjeldahl methods adopted by clinical chemistry and found an indirect two-step analysis by total Kjeldahl nitrogen corrected for its nonprotein nitrogen and a direct analysis made on isolated protein precipitates. In this article, we compare both procedures on various reference materials. An indirect Kjeldahl method gave falsely lower results than a direct analysis. Preliminary performance parameters qualify the direct Kjeldahl analysis as a suitable primary reference procedure for the certification of total protein in reference laboratories.

  7. Conference report: 2012 Repository Symposium. Final storage in Germany. New start - ways and consequences of the site selection procedure

    International Nuclear Information System (INIS)

    Kettler, John

    2012-01-01

    The Aachen Institute for Nuclear Training invited participants to the 3-day '2012 Repository Symposium - Final Storage in Germany' held in Bonn. The subtitle of the event, 'New Start - Ways and Consequences of the Site Selection Procedure,' expressed the organizers' summary that the Repository Finding Act currently under discussion did not give rise to any expectation of a repository for high-level radioactive waste before 2080. The symposium was attended by more than 120 persons from Germany and abroad. They discussed the basic elements of the site selection procedure and its consequences on the basis of the draft so far known to the public. While extensive public participation is envisaged for the stage of finding a repository, this does not apply to the draft legislation in the same way. The legal determinations are negotiated in a small circle by the political parties and the state governments. Michael Sailer (Oeko-Institut e.V.) holds that agreement on a repository finding act is urgent. Prof. Dr. Bruno Thomauske (RWTH Aachen) arrives at the conclusion mentioned above, that no repository for high-level radioactive waste can start operation before 2080 on the basis of the Repository Finding Act. Dr. Bettina Keienburg, attorney at law, in her paper drew attention to the points of dispute in the draft legislation with regard to changes in competency of public authorities. The draft law indicated a clear shift of competency for finding a repository from the Federal Office for Radiation Protection to a federal agency yet to be set up. Prof. Dr. Christoph Moench outlined the deficiencies of the draft legislation in matters of refinancing and the polluter-pays principle. Among the tentative solutions discussed it was above all the Swedish model which was acclaimed most widely. (orig.)

  8. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Percutaneous closure of patent foramen ovale and atrial septal defect in adults: the impact of clinical variables and hospital procedure volume on in-hospital adverse events.

    Science.gov (United States)

    Opotowsky, Alexander R; Landzberg, Michael J; Kimmel, Stephen E; Webb, Gary D

    2009-05-01

    Percutaneous closure of patent foramen ovale/atrial septal defect (PFO/ASD) is an increasingly common procedure perceived as having minimal risk. There are no population-based estimates of in-hospital adverse event rates of percutaneous PFO/ASD closure. We used nationally representative data from the 2001-2005 Nationwide Inpatient Sample to identify patients ≥20 years old admitted to an acute care hospital with an International Classification of Diseases, Ninth Revision code designating percutaneous PFO/ASD closure on the first or second hospital day. Variables analyzed included age, sex, number of comorbidities, year, same-day use of intracardiac or other echocardiography, same-day left heart catheterization, hospital size and teaching status, PFO/ASD procedural volume, and coronary intervention volume. Outcomes of interest included length of stay, charges, and adverse events. The study included 2,555 (weighted to United States population: 12,544 ± 1,987) PFO/ASD closure procedures. Mean age was 52.0 ± 0.4 years, and 57.3% ± 1.0% were women. Annual hospital volume averaged 40.8 ± 7.7 procedures (range, 1-114). Overall, 8.2 ± 0.8% of admissions involved an adverse event. Older patients and those with comorbidities were more likely to sustain adverse events. Use of intracardiac echocardiography was associated with fewer adverse events. The risk of adverse events was inversely proportional to annual hospital volume (odds ratio [OR] 0.91, 95% confidence interval [CI] 0.86-0.96, per 10 procedures), even after limiting the analysis to hospitals performing ≥10 procedures annually (OR 0.91, 95% CI 0.85-0.98). Adverse events were more frequent at hospitals in the lowest volume quintile as compared with the highest volume quintile (13.3% vs 5.4%, OR 2.42, 95% CI 1.55-3.78). The risk of adverse events of percutaneous PFO/ASD closure is inversely correlated with hospital volume. This relationship applies even to hospitals meeting the current guidelines

  10. Variable importance and prediction methods for longitudinal problems with missing variables.

    Directory of Open Access Journals (Sweden)

    Iván Díaz

    Full Text Available We present prediction and variable importance (VIM) methods for longitudinal data sets containing continuous and binary exposures subject to missingness. We demonstrate the use of these methods for prognosis of medical outcomes of severe trauma patients, a field in which current medical practice involves rules of thumb and scoring methods that only use a few variables and ignore the dynamic and high-dimensional nature of trauma recovery. Well-principled prediction and VIM methods can provide a tool to make care decisions informed by the patient's high-dimensional physiological and clinical history. Our VIM parameters are analogous to slope coefficients in adjusted regressions, but are not dependent on a specific statistical model, nor do they require a certain functional form of the prediction regression to be estimated. In addition, they can be causally interpreted under causal and statistical assumptions as the expected outcome under time-specific clinical interventions, related to changes in the mean of the outcome if each individual experiences a specified change in the variable (keeping other variables in the model fixed). Better yet, the targeted MLE used is doubly robust and locally efficient. Because the proposed VIM does not constrain the prediction model fit, we use a very flexible ensemble learner (the SuperLearner), which returns a linear combination of a list of user-given algorithms. Not only is such a prediction algorithm intuitively appealing, it has theoretical justification as being asymptotically equivalent to the oracle selector. The results of the analysis show effects whose size and significance would not have been found using a parametric approach (such as stepwise regression or LASSO). In addition, the procedure is even more compelling as the predictor on which it is based showed significant improvements in cross-validated fit, for instance area under the curve (AUC) for a receiver operating characteristic (ROC) curve. Thus, given that (1) our VIM
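
    A rough, off-the-shelf analogue of the two ingredients, a SuperLearner-style ensemble predictor and a variable importance measure, can be sketched with scikit-learn as below. Permutation importance is used here only as a simple stand-in; it is not the doubly robust targeted-MLE parameter described in the abstract, and the base learners and settings are assumptions.

    # Stacked ensemble prediction with a simple permutation-based variable importance
    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.inspection import permutation_importance

    def fit_ensemble_with_vim(X, y):
        """X: predictor matrix (e.g., physiology over time); y: binary outcome."""
        ensemble = StackingClassifier(
            estimators=[("lr", LogisticRegression(max_iter=1000)),
                        ("rf", RandomForestClassifier(n_estimators=300))],
            final_estimator=LogisticRegression(max_iter=1000), cv=5)
        auc = cross_val_score(ensemble, X, y, scoring="roc_auc", cv=5).mean()
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        ensemble.fit(X_tr, y_tr)
        vim = permutation_importance(ensemble, X_te, y_te, scoring="roc_auc",
                                     n_repeats=10, random_state=0)
        return auc, vim.importances_mean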

  11. A hybridised variable neighbourhood tabu search heuristic to increase security in a utility network

    International Nuclear Information System (INIS)

    Janssens, Jochen; Talarico, Luca; Sörensen, Kenneth

    2016-01-01

    We propose a decision model aimed at increasing security in a utility network (e.g., electricity, gas, water or communication network). The network is modelled as a graph, the edges of which are unreliable. We assume that all edges (e.g., pipes, cables) have a certain, not necessarily equal, probability of failure, which can be reduced by selecting edge-specific security strategies. We develop a mathematical programming model and a metaheuristic approach that uses a greedy randomised adaptive search procedure to find an initial solution and tabu search hybridised with iterated local search and a variable neighbourhood descent heuristic to improve this solution. The main goal is to reduce the risk of service failure between an origin and a destination node by selecting the right combination of security measures for each network edge given a limited security budget. - Highlights: • A decision model aimed at increasing security in a utility network is proposed. • The goal is to reduce the risk of service failure given a limited security budget. • An exact approach and a variable neighbourhood tabu search heuristic are developed. • A generator for realistic networks is built and used to test the solution methods. • The hybridised heuristic reduces the total risk on average by 32%.
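
    On a toy instance the underlying optimisation problem is easy to state: each edge of an origin-destination path receives one security option (a cost and a reduced failure probability), subject to a budget, and the objective is the probability that at least one edge fails. The sketch below enumerates a small instance exhaustively; the paper's GRASP/tabu-search/VND hybrid is what makes the same search tractable on realistic networks. All numbers and the single-path simplification are illustrative assumptions.

    # Toy edge-security selection on a single origin-destination path
    from itertools import product

    # options per edge: (cost, failure probability after applying the measure)
    edges = [
        [(0, 0.10), (2, 0.04), (5, 0.01)],
        [(0, 0.08), (3, 0.02)],
        [(0, 0.15), (1, 0.09), (4, 0.03)],
    ]
    BUDGET = 7

    def path_risk(choice):
        """Probability that at least one edge on the path fails."""
        ok = 1.0
        for options, opt in zip(edges, choice):
            ok *= 1.0 - options[opt][1]
        return 1.0 - ok

    def total_cost(choice):
        return sum(options[opt][0] for options, opt in zip(edges, choice))

    best, best_risk = None, float("inf")
    for choice in product(*[range(len(options)) for options in edges]):  # exhaustive (toy only)
        if total_cost(choice) <= BUDGET and path_risk(choice) < best_risk:
            best, best_risk = choice, path_risk(choice)
    print(best, round(best_risk, 4))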

  12. Information-theoretical feature selection using data obtained by Scanning Electron Microscopy coupled with and Energy Dispersive X-ray spectrometer for the classification of glass traces

    International Nuclear Information System (INIS)

    Ramos, Daniel; Zadora, Grzegorz

    2011-01-01

    Highlights: → A selection of the best features for multivariate forensic glass classification using SEM-EDX was performed. → The feature selection process was carried out by means of an exhaustive search, with an Empirical Cross-Entropy objective function. → Results show remarkable accuracy of the best variables selected following the proposed procedure for the task of classifying glass fragments into windows or containers. - Abstract: In this work, a selection of the best features for multivariate forensic glass classification using Scanning Electron Microscopy coupled with an Energy Dispersive X-ray spectrometer (SEM-EDX) has been performed. This has been motivated by the fact that the databases available for forensic glass classification are sparse nowadays, and the acquisition of SEM-EDX data is both costly and time-consuming for forensic laboratories. The database used for this work consists of 278 glass objects for which 7 variables, based on their elemental compositions obtained with SEM-EDX, are available. Two categories are considered for the classification task, namely containers and car/building windows, both of them typical in forensic casework. A multivariate model is proposed for the computation of the likelihood ratios. The feature selection process is carried out by means of an exhaustive search, with an Empirical Cross-Entropy (ECE) objective function. The ECE metric takes into account not only the discriminating power of the model in use, but also its calibration, which indicates whether or not the likelihood ratios are interpretable in a probabilistic way. Thus, the proposed model is applied to all the 63 possible univariate, bivariate and trivariate combinations taken from the 7 variables in the database, and its performance is ranked by its ECE. Results show remarkable accuracy of the best variables selected following the proposed procedure for the task of classifying glass fragments into windows (from cars or buildings) or containers
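
    The exhaustive search itself is small: with 7 variables there are exactly 63 subsets of size one to three. The sketch below enumerates them and ranks each subset by a cross-validated objective; cross-validated log-loss of a simple two-class model is used here only as a stand-in for the Empirical Cross-Entropy of calibrated likelihood ratios used in the paper.

    # Exhaustive ranking of 1-, 2- and 3-variable subsets of the 7 SEM-EDX features
    from itertools import combinations
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def rank_subsets(X, y, max_size=3):
        """X: (n_fragments, 7) composition features; y: 0 = container, 1 = window."""
        results = []
        for size in range(1, max_size + 1):
            for cols in combinations(range(X.shape[1]), size):
                loss = -cross_val_score(LinearDiscriminantAnalysis(), X[:, cols], y,
                                        scoring="neg_log_loss", cv=5).mean()
                results.append((loss, cols))
        return sorted(results)        # lowest loss (best-performing subset) first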

  13. Measuring the surgical 'learning curve': methods, variables and competency.

    Science.gov (United States)

    Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2014-03-01

    To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.

  14. Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular we show that the Bridge estimator is oracle efficient. It can correctly distinguish between relevant and irrelevant variables and the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. In the case of more explanatory variables than observations, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub-Gaussianity of the error terms, thereby generalizing the results...

  15. High procedural fairness heightens the effect of outcome favorability on self-evaluations : An attributional analysis

    NARCIS (Netherlands)

    Brockner, J.; Heuer, L.; Magner, N.; Folger, R.; Umphress, E.; Bos, K. van den; Vermunt, Riël; Magner, M.; Siegel, P.

    2003-01-01

    Previous research has shown that outcome favorability and procedural fairness often interact to influence employees' work attitudes and behaviors. Moreover, the form of the interaction effect depends upon the dependent variable. Relative to when procedural fairness is low, high procedural fairness:

  16. Procedure automation: the effect of automated procedure execution on situation awareness and human performance

    International Nuclear Information System (INIS)

    Andresen, Gisle; Svengren, Haakan; Heimdal, Jan O.; Nilsen, Svein; Hulsund, John-Einar; Bisio, Rossella; Debroise, Xavier

    2004-04-01

    As advised by the procedure workshop convened in Halden in 2000, the Halden Project conducted an experiment on the effect of automation of Computerised Procedure Systems (CPS) on situation awareness and human performance. The expected outcome of the study was to provide input for guidance on CPS design, and to support the Halden Project's ongoing research on human reliability analysis. The experiment was performed in HAMMLAB using the HAMBO BWR simulator and the COPMA-III CPS. Eight crews of operators from Forsmark 3 and Oskarshamn 3 participated. Three research questions were investigated: 1) Does procedure automation create Out-Of-The-Loop (OOTL) performance problems? 2) Does procedure automation affect situation awareness? 3) Does procedure automation affect crew performance? The independent variable, 'procedure configuration', had four levels: paper procedures, manual CPS, automation with breaks, and full automation. The results showed that the operators experienced OOTL problems in full automation, but that situation awareness and crew performance (response time) were not affected. One possible explanation for this is that the operators monitored the automated procedure execution conscientiously, something which may have prevented the OOTL problems from having negative effects on situation awareness and crew performance. In a debriefing session, the operators clearly expressed their dislike for the full automation condition, but indicated that automation with breaks could be suitable for some tasks. The main reason why the operators did not like the full automation was that they did not feel in control. A qualitative analysis addressing factors contributing to response time delays revealed that OOTL problems did not seem to cause delays, but that some delays could be explained by the operators having problems with the freeze function of the CPS. Other factors, such as teamwork and operator tendencies, were also of importance. Several design implications were drawn

  17. The impact of selected organizational variables and managerial leadership on radiation therapists' organizational commitment

    Energy Technology Data Exchange (ETDEWEB)

    Akroyd, Duane [Department of Adult and Community College Education, College of Education, Campus Box 7801, North Carolina State University, Raleigh, NC 27695 (United States)], E-mail: duane_akroyd@ncsu.edu; Legg, Jeff [Department of Radiologic Sciences, Virginia Commonwealth University, Richmond, VA 23284 (United States); Jackowski, Melissa B. [Division of Radiologic Sciences, University of North Carolina School of Medicine 27599 (United States); Adams, Robert D. [Department of Radiation Oncology, University of North Carolina School of Medicine 27599 (United States)

    2009-05-15

    The purpose of this study was to examine the impact of selected organizational factors and the leadership behavior of supervisors on radiation therapists' commitment to their organizations. The population for this study consists of all full-time clinical radiation therapists registered by the American Registry of Radiologic Technologists (ARRT) in the United States. A random sample of 800 radiation therapists was obtained from the ARRT for this study. Questionnaires were mailed to all participants and measured organizational variables, the managerial leadership variable, and three components of organizational commitment (affective, continuance and normative). It was determined that organizational support and the leadership behavior of supervisors each had a significant and positive effect on the normative and affective commitment of radiation therapists, and each of the models predicted over 40% of the variance in radiation therapists' organizational commitment. This study examined radiation therapists' commitment to their organizations and found that affective (emotional attachment to the organization) and normative (feelings of obligation to the organization) commitments were more important than continuance commitment (awareness of the costs of leaving the organization). This study can help radiation oncology administrators and physicians to understand the values their radiation therapy employees hold that are predictive of their commitment to the organization. A crucial result of the study is the importance of the perceived support of the organization and the leadership skills of managers/supervisors on radiation therapists' commitment to the organization.

  18. Genetic and Psychosocial Predictors of Aggression: Variable Selection and Model Building With Component-Wise Gradient Boosting

    Directory of Open Access Journals (Sweden)

    Robert Suchting

    2018-05-01

    Full Text Available Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R2 = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R2. Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression; specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for
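
    Component-wise gradient boosting performs variable selection implicitly: at every step only the single predictor that best fits the current residuals is updated, so predictors that are never chosen stay out of the model. The sketch below implements this idea for a continuous outcome with univariate linear base learners; the step count and learning rate are illustrative, and the study's backward-elimination stage would then operate on the selected variables.

    # Component-wise L2 gradient boosting with univariate linear base learners
    import numpy as np

    def componentwise_boost(X, y, n_steps=200, nu=0.1):
        """X: predictor matrix; y: continuous outcome (e.g., BPAQ total score)."""
        X = (X - X.mean(0)) / X.std(0)           # standardise predictors
        resid = y - y.mean()
        coefs = np.zeros(X.shape[1])
        for _ in range(n_steps):
            # univariate least-squares fit of each predictor to the current residuals
            betas = X.T @ resid / (X ** 2).sum(0)
            sse = ((resid[:, None] - X * betas) ** 2).sum(0)
            j = np.argmin(sse)                    # best-fitting component this step
            coefs[j] += nu * betas[j]
            resid = resid - nu * betas[j] * X[:, j]
        selected = np.flatnonzero(coefs)          # variables ever chosen by boosting
        return coefs, selected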

  19. New procedure of selected biogenic amines determination in wine samples by HPLC

    Energy Technology Data Exchange (ETDEWEB)

    Piasta, Anna M.; Jastrzębska, Aneta, E-mail: aj@chem.uni.torun.pl; Krzemiński, Marek P.; Muzioł, Tadeusz M.; Szłyk, Edward

    2014-06-27

    Highlights: • We proposed a new procedure for derivatization of biogenic amines. • NMR and XRD analyses confirmed the purity and uniqueness of the derivatives. • Concentrations of biogenic amines in wine samples were analyzed by RP-HPLC. • Sample contamination and derivatization reaction interferences were minimized. - Abstract: A new procedure for the determination of biogenic amines (BA): histamine, phenethylamine, tyramine and tryptamine, based on the derivatization reaction with 2-chloro-1,3-dinitro-5-(trifluoromethyl)-benzene (CNBF), is proposed. The amine derivatives with CNBF were isolated and characterized by X-ray crystallography and ¹H, ¹³C, ¹⁹F NMR spectroscopy in solution. The novelty of the procedure is based on the pure and well-characterized products of the amine derivatization reaction. The method was applied for the simultaneous analysis of the above mentioned biogenic amines in wine samples by reversed-phase high performance liquid chromatography. The procedure revealed correlation coefficients (R²) between 0.9997 and 0.9999, and linear ranges of 0.10–9.00 mg L⁻¹ (histamine), 0.10–9.36 mg L⁻¹ (tyramine), 0.09–8.64 mg L⁻¹ (tryptamine) and 0.10–8.64 mg L⁻¹ (phenethylamine), whereas accuracy was 97%–102% (recovery test). The detection limit of biogenic amines in wine samples was 0.02–0.03 mg L⁻¹, whereas the quantification limit ranged from 0.05 to 0.10 mg L⁻¹. The variation coefficients for the analyzed amines ranged between 0.49% and 3.92%. The obtained BA derivatives enhanced separation of the analytes on the chromatograms due to the inhibition of the hydrolysis reaction and the reduction of by-product formation.

  20. Safety analysis procedures for PHWR

    International Nuclear Information System (INIS)

    Min, Byung Joo; Kim, Hyoung Tae; Yoo, Kun Joong

    2004-03-01

    The methodology of safety analyses for CANDU reactors in Canada, a vendor country, uses a combination of best-estimate physical models and conservative input parameters so as to minimize the uncertainty of the plant behavior predictions. By using conservative input parameters, the results of the safety analyses are assured to meet the regulatory requirements, such as the public dose, the integrity of the fuel and fuel channel, the integrity of the containment and reactor structures, etc. However, there are no comprehensive and systematic procedures for the safety analyses of CANDU reactors in Korea. In this regard, the development of safety analysis procedures for CANDU reactors is being conducted not only to establish the safety analysis system, but also to enhance the quality assurance of the safety assessment. In the first phase of this study, the general procedures of the deterministic safety analyses are developed. The general safety procedures cover the specification of the initial event, selection of the methodology and accident sequences, computer codes, safety analysis procedures, verification of errors and uncertainties, etc. Finally, these general procedures of the safety analyses are applied to the Large Break Loss Of Coolant Accident (LBLOCA) in the Final Safety Analysis Report (FSAR) for Wolsong units 2, 3, and 4.

  1. Rewarding leadership and fair procedures as determinants of self-esteem.

    Science.gov (United States)

    De Cremer, David; van Knippenberg, Barbara; van Knippenberg, Daan; Mullenders, Danny; Stinglhamber, Florence

    2005-01-01

    In the present research, the authors examined the effect of procedural fairness and rewarding leadership style on an important variable for employees: self-esteem. The authors predicted that procedural fairness would positively influence people's reported self-esteem if the leader adopted a style of rewarding behavior for a job well done. Results from a scenario experiment, a laboratory experiment, and an organizational survey indeed show that procedural fairness and rewarding leadership style interacted to influence followers' self-esteem, such that the positive relationship between procedural fairness and self-esteem was more pronounced when the leadership style was high in rewarding behavior. Implications in terms of integrating the leadership and procedural fairness literature are discussed.

  2. 48 CFR 931.205 - Selected costs.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Selected costs. 931.205... REQUIREMENTS CONTRACT COST PRINCIPLES AND PROCEDURES Contracts With Commercial Organizations 931.205 Selected costs. ...

  3. 48 CFR 1231.205 - Selected costs.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Selected costs. 1231.205... REQUIREMENTS CONTRACT COST PRINCIPLES AND PROCEDURES Contracts With Commercial Organizations 1231.205 Selected costs. ...

  4. 48 CFR 31.205 - Selected costs.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Selected costs. 31.205... REQUIREMENTS CONTRACT COST PRINCIPLES AND PROCEDURES Contracts With Commercial Organizations 31.205 Selected costs. ...

  5. 48 CFR 1331.205 - Selected costs.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Selected costs. 1331.205... REQUIREMENTS CONTRACT COST PRINCIPLES AND PROCEDURES Contracts With Commercial Organizations 1331.205 Selected costs. ...

  6. 48 CFR 631.205 - Selected costs.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Selected costs. 631.205... REQUIREMENTS CONTRACT COST PRINCIPLES AND PROCEDURES Contracts with Commercial Organizations 631.205 Selected costs. ...

  7. Selective Heart, Brain and Body Perfusion in Open Aortic Arch Replacement.

    Science.gov (United States)

    Maier, Sven; Kari, Fabian; Rylski, Bartosz; Siepe, Matthias; Benk, Christoph; Beyersdorf, Friedhelm

    2016-09-01

    Open aortic arch replacement is a complex and challenging procedure, especially in post-dissection aneurysms and in redo procedures after previous surgery of the ascending aorta or aortic root. We report our experience with the simultaneous selective perfusion of heart, brain, and remaining body to ensure optimal perfusion and to minimize perfusion-related risks during these procedures. We used a specially configured heart-lung machine with a centrifugal pump as the arterial pump and an additional roller pump for the selective cerebral perfusion. Initial arterial cannulation is achieved via the femoral artery or the right axillary artery. After lower body circulatory arrest and selective antegrade cerebral perfusion for the distal arch anastomosis, we started selective lower body perfusion simultaneously with the selective antegrade cerebral perfusion and heart perfusion. Eighteen patients were successfully treated with this perfusion strategy from October 2012 to November 2015. No complications related to the heart-lung machine or the cannulation occurred during the procedures. Mean cardiopulmonary bypass time was 239 ± 33 minutes; the simultaneous selective perfusion of brain, heart, and remaining body lasted 55 ± 23 minutes. One patient suffered a temporary neurological deficit that resolved completely during the intensive care unit stay. No patient experienced a permanent neurological deficit or end-organ dysfunction. These high-risk procedures require a concept with a special setup of the heart-lung machine. Our perfusion strategy for aortic arch replacement ensures selective perfusion of heart, brain, and lower body during this complex procedure, and we observed excellent outcomes in this small series. This perfusion strategy is also applicable to redo procedures.

  8. CARVEDILOL POPULATION PHARMACOKINETIC ANALYSIS – APPLIED VALIDATION PROCEDURE

    Directory of Open Access Journals (Sweden)

    Aleksandra Catić-Đorđević

    2013-09-01

    Full Text Available Carvedilol is a nonselective beta blocker/alpha-1 blocker used for the treatment of essential hypertension, chronic stable angina, unstable angina and ischemic left ventricular dysfunction. The aim of this study was to describe a carvedilol population pharmacokinetic (PK) analysis as well as the validation of the analytical procedure, which is an important step in this approach. In contemporary clinical practice, population PK analysis is often more important than the standard PK approach in setting up a mathematical model that describes the PK parameters. It also includes variables of particular importance for the drug's pharmacokinetics, such as sex, body mass, dosage, pharmaceutical form, pathophysiological state, associated disease, or the presence of a specific polymorphism in the isoenzyme important for biotransformation of the drug. One of the most frequently used approaches in population PK analysis is nonlinear mixed-effects modeling (NONMEM). The analytical method used in the data collection period is of great importance for the implementation of a population PK analysis of carvedilol in order to obtain reliable data that can be useful in clinical practice. High-performance liquid chromatography (HPLC) analysis of carvedilol is used to confirm the identity of the drug, provide quantitative results, and monitor the efficacy of therapy. Analytical procedures used in other studies could not be fully implemented in our research, so it was necessary to perform certain modifications and validate the method so that the obtained results could be used for the population pharmacokinetic analysis. The validation process is the logical terminal phase of analytical procedure development that ensures the applicability of the procedure itself. The goal of validation is to ensure the consistency of the method and the accuracy of the results, or to confirm the selection of the analytical method for a given sample.
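
    Population PK analysis separates fixed effects (typical clearance and volume) from random between-subject variability. The record does not give the carvedilol model parameters, so the sketch below simulates a generic one-compartment oral model with log-normal between-subject variability on clearance and volume; all parameter values are hypothetical placeholders, not results of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical typical values (not taken from the carvedilol study).
TV_CL, TV_V, KA = 40.0, 120.0, 1.2      # L/h, L, 1/h
OMEGA_CL, OMEGA_V = 0.3, 0.25           # SD of log-normal between-subject variability
DOSE, F = 25.0, 0.25                    # mg, oral bioavailability

t = np.linspace(0.25, 24.0, 96)         # sampling times, h
n_subjects = 50

profiles = []
for _ in range(n_subjects):
    # Individual parameter = typical value * exp(eta), with eta ~ N(0, omega^2).
    cl = TV_CL * np.exp(rng.normal(0.0, OMEGA_CL))
    v = TV_V * np.exp(rng.normal(0.0, OMEGA_V))
    ke = cl / v
    # One-compartment model with first-order absorption (ka != ke assumed).
    c = F * DOSE * KA / (v * (KA - ke)) * (np.exp(-ke * t) - np.exp(-KA * t))
    profiles.append(c)

profiles = np.array(profiles)
print("Mean Cmax across simulated subjects: %.3f mg/L" % profiles.max(axis=1).mean())
```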

  9. Variability of Arthroscopy Case Volume in Orthopaedic Surgery Residency.

    Science.gov (United States)

    Gil, Joseph A; Waryasz, Gregory R; Owens, Brett D; Daniels, Alan H

    2016-05-01

    To examine orthopaedic surgery case logs for arthroscopy case volume during residency training and to evaluate trends in case volume and variability over time. Publicly available Accreditation Council for Graduate Medical Education surgical case logs from 2007 to 2013 for orthopaedic surgery residency were assessed for variability and case volume trends in shoulder, elbow, wrist, hip, knee, and ankle arthroscopy. The national average number of procedures performed in each arthroscopy category reported was directly compared from 2009 to 2013. The 10th and 90th percentile arthroscopy case volume was compared between 2007 and 2013 for shoulder and knee arthroscopy procedures. Subsequently, the difference between the 10th and 90th percentile arthroscopy case volume in each category in 2007 was compared with the difference between the 10th and 90th percentile arthroscopy case volume in each category in 2013. From 2007 to 2013, shoulder arthroscopy procedures performed per resident increased by 43.1% (P = .0001); elbow arthroscopy procedures increased by 28.0% (P = .00612); wrist arthroscopy procedures increased by 8.6% (P = .05); hip arthroscopy procedures, which were first reported in 2012, increased by 588.9%; knee arthroscopy procedures increased by 8.5% (P = .0435); ankle arthroscopy increased by 27.6% (P = .00149). The difference in knee and shoulder arthroscopy volume between residents in the 10th and 90th percentile in 2007 and residents in the 10th and 90th percentile in 2013 was not significant (P > .05). There was a 3.66-fold difference in knee arthroscopy volume between residents in the 10th and 90th percentile in 2007, whereas the difference was 3.36-fold in 2013 (P = .70). There was a 5.86-fold difference in shoulder arthroscopy case volume between residents in the 10th and 90th percentile in 2007, whereas the difference was 4.96-fold in 2013 (P = .29). The volume of arthroscopy cases performed by graduating orthopaedic surgery residents has

  10. Pareto genealogies arising from a Poisson branching evolution model with selection.

    Science.gov (United States)

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α), leading either to a discrete-time Poisson-Dirichlet (α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
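
    The sampling step behind these coalescents is easy to illustrate: draw N i.i.d. Pareto(α) variables and normalize them by their sum to obtain random offspring frequencies. The sketch below (a simplification that omits the β-size-biasing step) shows how heavier tails, i.e. smaller α, concentrate reproductive success in fewer individuals.

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_frequencies(n, alpha, rng):
    """Draw n i.i.d. Pareto(alpha) variables and normalize by their sum."""
    x = 1.0 + rng.pareto(alpha, size=n)   # standard Pareto with minimum value 1
    return x / x.sum()

N = 1000
for alpha in (0.5, 1.5, 2.5):
    freqs = pareto_frequencies(N, alpha, rng)
    # The largest normalized weight indicates how skewed reproduction is:
    # for alpha < 1 a single individual can carry a macroscopic fraction.
    print(f"alpha = {alpha}: largest offspring frequency = {freqs.max():.3f}")
```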

  11. FREY’S PROCEDURE- TO ANALYSE THE OUTCOME OF THIS PROCEDURE IN CHRONIC PANCREATITIS

    Directory of Open Access Journals (Sweden)

    Shilpa Mariappa Casaba

    2017-04-01

    Full Text Available BACKGROUND Chronic Pancreatitis (CP) is a progressive inflammatory disease characterised by debilitating pain and pancreatic insufficiency. There is an enormous personal and socio-economic impact through impairment of quality of life, inability to work and even shortening of life expectancy. Although pancreaticoduodenectomy had been considered the standard surgical procedure for patients with CP, it is not preferred because of its high postoperative complications with exocrine and endocrine insufficiency. This has led to a hybrid procedure described by Frey, which is used in our study for CP. We aim to analyse the short-term and long-term outcomes of Frey's procedure at a tertiary care centre in patients with chronic pancreatitis. MATERIALS AND METHODS A retrospective review of all CP patients who underwent the Frey procedure from January 2007 to January 2016 was carried out. Perioperative variables and short-term (30 days) and long-term (3 years) outcomes were reviewed. Data are frequency (%) or mean. RESULTS A total of 97 patients underwent Frey's procedure; 72 (70.7%) were men and 25 (29.3%) were women. Mean age was 38 years (range 14-66 years). Indications for surgery included intractable pain (n=97, 100%) and obstructive jaundice (n=4, 4.3%). 9 patients (32.6%) were diabetic preoperatively. Concomitant procedures included a biliary drainage procedure, i.e. choledochojejunostomy, for 4 patients (4.3%), splenectomy for 2 patients (2.1%) and cholecystectomy (n=6, 6%). Short-term outcomes included surgical site infection (n=10, 10%), pancreatic leak (n=6, 5.82%), reoperation for bleeding in 2 patients, no mortality (30 days) and diabetic ketoacidosis (n=2, 2%). Pancreatic carcinoma was detected in 3 (2.1%) patients. Long-term outcomes included pain-free status (n=80, 86.9%) at a median follow-up of 3 years. A redo pancreatic procedure was performed in 1 (4.3%) for anastomotic leak. CONCLUSION Frey's procedure is a safe and effective pain palliative option for CP.

  12. Modal-pushover-based ground-motion scaling procedure

    Science.gov (United States)

    Kalkan, Erol; Chopra, Anil K.

    2011-01-01

    Earthquake engineering is increasingly using nonlinear response history analysis (RHA) to demonstrate the performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. This paper presents a modal-pushover-based scaling (MPS) procedure to scale ground motions for use in nonlinear RHA of buildings. In the MPS method, the ground motions are scaled to match, to a specified tolerance, a target value of the inelastic deformation of the first-mode inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-mode pushover analysis. Appropriate for first-mode dominated structures, this approach is extended for structures with significant contributions of higher modes by considering the elastic deformation of second-mode SDF systems in selecting a subset of the scaled ground motions. Based on results presented for three actual buildings (4, 6, and 13 stories), the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.

  13. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    Science.gov (United States)

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
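
    The planning method rests on a standard idea: perturb candidate landmark locations with a localisation-error model, re-estimate the rigid registration for each perturbation, and summarise the resulting target registration error (TRE) by its 90th percentile. The sketch below is a generic Monte Carlo of that idea using a closed-form point-based rigid registration (Kabsch/SVD); it is not the authors' implementation, and the coordinates and error magnitude are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# Hypothetical landmark positions (mm) in the pre-procedure CT frame and a
# hypothetical target whose registration error is of interest.
landmarks_ct = np.array([[10., 40., 30.], [60., 35., 25.], [35., 80., 45.], [50., 60., 70.]])
target_ct = np.array([45., 55., 40.])
sigma_loc = 2.0          # isotropic landmark localisation error, mm
n_trials = 5000

tres = []
for _ in range(n_trials):
    # EUS-side landmarks = true positions corrupted by localisation noise.
    landmarks_eus = landmarks_ct + rng.normal(0.0, sigma_loc, landmarks_ct.shape)
    R, t = rigid_fit(landmarks_eus, landmarks_ct)
    # TRE: how far the registered target lands from its true position.
    tres.append(np.linalg.norm((R @ target_ct + t) - target_ct))

print("90th percentile TRE: %.2f mm" % np.percentile(tres, 90))
```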

  14. Estimation of selected seasonal streamflow statistics representative of 1930-2002 in West Virginia

    Science.gov (United States)

    Wiley, Jeffrey B.; Atkins, John T.

    2010-01-01

    Regional equations and procedures were developed for estimating seasonal 1-day 10-year, 7-day 10-year, and 30-day 5-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the seasonal U.S. Environmental Protection Agency harmonic-mean flows and the 50-percent flow-duration values. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Regional equations were developed by ordinary least squares regression, with statistics from 117 U.S. Geological Survey continuous streamgage stations as dependent variables and basin characteristics as independent variables. Equations were determined for three regions in West Virginia: the North, South-Central, and Eastern Panhandle Regions. Drainage area, average annual precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. The average standard error of estimates for the equations ranged from 12.6 to 299 percent. Procedures developed to estimate the selected seasonal streamflow statistics in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia that have independent variables within the limits of the stations used to develop the regional equations: drainage area from 16.3 to 1,516 square miles in the North Region, from 2.78 to 1,619 square miles in the South-Central Region, and from 8.83 to 3,041 square miles in the Eastern Panhandle Region; average annual precipitation from 42.3 to 61.4 inches in the South-Central Region and from 39.8 to 52.9 inches in the Eastern Panhandle Region; and longitude of the basin centroid from 79.618 to 82.023 decimal degrees in the North Region. All estimates of seasonal streamflow statistics are representative of the period from the 1930 to the 2002 climatic year.
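
    The regional equations described here are ordinary least-squares fits, typically on log-transformed flows, with basin characteristics such as drainage area and average annual precipitation as predictors. The sketch below shows the general form with synthetic station data; the coefficients are illustrative and have nothing to do with the published West Virginia equations.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic station data: drainage area (mi^2), average annual precipitation (in),
# and a seasonal low-flow statistic (cfs) for n gaged basins in one region.
n = 40
area = rng.uniform(10, 1500, n)
precip = rng.uniform(40, 60, n)
low_flow = 0.05 * area**1.1 * (precip / 45.0)**2.0 * rng.lognormal(0.0, 0.3, n)

# Fit log10(Q) = b0 + b1*log10(area) + b2*log10(precip) by ordinary least squares.
X = np.column_stack([np.ones(n), np.log10(area), np.log10(precip)])
y = np.log10(low_flow)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def estimate_low_flow(area_mi2, precip_in):
    """Apply the fitted regional equation to an ungaged basin (within the data limits)."""
    return 10 ** (coef[0] + coef[1] * np.log10(area_mi2) + coef[2] * np.log10(precip_in))

print("Fitted exponents:", np.round(coef[1:], 2),
      "| estimate for 300 mi^2, 50 in:", round(float(estimate_low_flow(300, 50)), 1), "cfs")
```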

  15. Selection of irradiator for potato preservation

    Energy Technology Data Exchange (ETDEWEB)

    Kinsara, A R; Melaibari, A G; Abulfaraj, W H; Mamoon, A M; Kamal, S E [Nuclear Engineering Department, Faculty of Engineering, King Abdulaziz University P.O.Box 9027, Jeddah-21413, (Saudi Arabia)

    1997-12-31

    A formal decision methodology is a sound approach for assisting in the decisions needed to select irradiators for potato preservation. A formal analysis is preferred over an informal, intuitive analysis, which has limitations: it focuses on the substantive issues and provides the basis for a compromise between conflicting objectives. All critical issues in the selection of irradiators for potato preservation can be addressed within the decision analysis framework. Of special significance is the treatment of the uncertainty associated with the consequences of a decision and the preferences of the experts. Decision theory is employed to provide a strategy for implementing the irradiator selection for food preservation in Saudi Arabia. To select a suitable decision methodology for the present case, a detailed survey of available decision methods was conducted. These methods have been developed and applied with varying degrees of success to many diverse areas of interest. Based on this survey, the Analytic Hierarchy Process (AHP) was selected to evaluate the candidate irradiators for potato irradiation: electron accelerators, X-ray irradiators, and gamma irradiators. The purpose was to determine the optimal alternative. A set of factors impacting irradiator selection was developed and defined to provide comprehensive and realistic variables for judging the irradiator alternatives. The factors developed are economic considerations, technical considerations, safety aspects, and compatibility with the local environment. An AHP computer program was developed, using FOXPRO, to automate the tedious computations involved in applying the AHP systematic procedure to the present problem. Based upon the available data, and employing the AHP computer program, the results show the superiority of the {sup 60}Co gamma-ray irradiator over the other irradiators for Saudi Arabia's present circumstances. 2 figs., 7 tabs.
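
    The AHP step described here reduces to forming pairwise-comparison matrices for the decision factors (economics, technical considerations, safety, local compatibility), extracting the principal eigenvector as the weight vector, and checking a consistency ratio. The sketch below shows that core computation with a made-up comparison matrix; it is a generic illustration, not the FOXPRO program mentioned in the record.

```python
import numpy as np

# Hypothetical pairwise comparisons of four criteria on Saaty's 1-9 scale:
# economics, technical, safety, compatibility with the local environment.
A = np.array([
    [1.0, 3.0, 0.5, 2.0],
    [1/3, 1.0, 1/4, 1.0],
    [2.0, 4.0, 1.0, 3.0],
    [0.5, 1.0, 1/3, 1.0],
])

# The principal eigenvector of A gives the criteria weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n)/(n - 1), CR = CI / RI (RI = 0.90 for n = 4).
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.90

print("criteria weights:", np.round(weights, 3), "| consistency ratio: %.3f" % cr)
# A CR below roughly 0.10 is conventionally acceptable; the same weighting is then
# repeated for the alternatives (gamma, e-beam, X-ray) under each criterion.
```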

  16. Dynamic variable selection in SNP genotype autocalling from APEX microarray data

    Directory of Open Access Journals (Sweden)

    Zamar Ruben H

    2006-11-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are DNA sequence variations, occurring when a single nucleotide – adenine (A), thymine (T), cytosine (C) or guanine (G) – is altered. Arguably, SNPs account for more than 90% of human genetic variation. Our laboratory has developed a highly redundant SNP genotyping assay consisting of multiple probes with signals from multiple channels for a single SNP, based on arrayed primer extension (APEX). This mini-sequencing method is a powerful combination of a highly parallel microarray with distinctive Sanger-based dideoxy terminator sequencing chemistry. Using this microarray platform, our current genotype calling system (known as SNP Chart) is capable of calling single SNP genotypes by manual inspection of the APEX data, which is time-consuming and exposed to user subjectivity bias. Results Using a set of 32 Coriell DNA samples plus three negative PCR controls as a training data set, we have developed a fully automated genotyping algorithm based on simple linear discriminant analysis (LDA) using dynamic variable selection. The algorithm combines separate analyses based on the multiple probe sets to give a final posterior probability for each candidate genotype. We have tested our algorithm on a completely independent data set of 270 DNA samples, with validated genotypes, from patients admitted to the intensive care unit (ICU) of St. Paul's Hospital (plus one negative PCR control sample). Our method achieves a concordance rate of 98.9% with a 99.6% call rate for a set of 96 SNPs. By adjusting the threshold value for the final posterior probability of the called genotype, the call rate reduces to 94.9% with a higher concordance rate of 99.6%. We also reversed the two independent data sets in their training and testing roles, achieving a concordance rate up to 99.8%. Conclusion The strength of this APEX chemistry-based platform is its unique redundancy of having multiple probes for a single SNP.
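
    The autocalling algorithm described here trains a linear discriminant classifier on probe intensities and only issues a genotype call when the posterior probability of the best class clears a threshold, trading call rate against concordance. The sketch below reproduces that threshold logic with scikit-learn on synthetic intensity data; it is not the published SNP Chart pipeline, and the feature layout and class centres are invented.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

# Synthetic training data: probe-intensity features for one SNP, three genotype
# classes (AA, AB, BB), redundant probes x channels flattened into a feature vector.
def simulate(n_per_class, n_features=8):
    X, y = [], []
    for label, centre in enumerate((0.2, 0.5, 0.8)):
        X.append(rng.normal(centre, 0.08, size=(n_per_class, n_features)))
        y += [label] * n_per_class
    return np.vstack(X), np.array(y)

X_train, y_train = simulate(40)
X_test, y_test = simulate(100)

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
posterior = clf.predict_proba(X_test)
best = posterior.argmax(axis=1)
confidence = posterior.max(axis=1)

threshold = 0.99                      # raise to trade call rate for concordance
called = confidence >= threshold
call_rate = called.mean()
concordance = (best[called] == y_test[called]).mean()

print(f"call rate = {call_rate:.3f}, concordance among calls = {concordance:.3f}")
```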

  17. HIGH QUALITY ENVIRONMENTAL PRINCIPLES APPLIED TO THE ARCHITECTONIC DESIGN SELECTION PROCEDURE: THE NUTRE LAB CASE

    Directory of Open Access Journals (Sweden)

    Claudia Barroso Krause

    2012-06-01

    Full Text Available The need to produce more sustainable buildings has been influencing design decisions all over the world. It is therefore imperative, in Brazil, to develop strategies and methods to aid decision making during the design process with a focus on high environmental quality. This paper presents a decision support tool based on the principles of sustainable construction developed by the Project, Architecture and Sustainability Research Group (GPAS) of the Federal University of Rio de Janeiro, Brazil. The methodology was developed for the selection of a preliminary design of a laboratory to be built at the Rio Technology Park on the university campus. The support provided by GPAS occurred in three stages: the elaboration of the Reference Guide for the competitors, the development of a methodology to evaluate the proposed solutions (based on environmental performance criteria) and the assistance of the members of the jury in the judging phase. The theoretical framework was based upon the concepts of bioclimatic architecture, the procedures specified by the HQE® certification (Haute Qualité Environnementale) and the method suggested by the ADDENDA® architecture office. The success of this experience points to the possibility of future application in similar cases.

  18. Selective extraction of chromium(VI) using a leaching procedure with sodium carbonate from some plant leaves, soil and sediment samples

    Energy Technology Data Exchange (ETDEWEB)

    Elci, Latif, E-mail: elci@pamukkale.edu.tr [Department of Chemistry, Pamukkale University, 20017 Denizli (Turkey); Divrikli, Umit; Akdogan, Abdullah; Hol, Aysen; Cetin, Ayse [Department of Chemistry, Pamukkale University, 20017 Denizli (Turkey); Soylak, Mustafa [Department of Chemistry, Erciyes University, 38039 Kayseri (Turkey)

    2010-01-15

    Speciation of chromium in some plant leaves, soil and sediment samples was carried out by selective leaching of Cr(VI) using a sodium carbonate leaching procedure. Total chromium was extracted from the samples using aqua regia and oxidative acid digestion, respectively. The concentrations of chromium species in the extracts were determined by graphite furnace atomic absorption spectrometry (GFAAS). Uncoated graphite furnace tubes were used as the atomizer. Due to the presence of relatively high amounts of Na{sub 2}CO{sub 3} in the resulting samples, the possible influence of Na{sub 2}CO{sub 3} on the absorbance signals was checked. There is no interference of Na{sub 2}CO{sub 3} on the chromium absorbance up to 0.1 mol L{sup -1} Na{sub 2}CO{sub 3}. The limit of detection (LOD) for the determination of Cr(VI) in 0.1 mol L{sup -1} Na{sub 2}CO{sub 3} solution by GFAAS was found to be 0.93 {mu}g L{sup -1}. The procedure was applied to environmental samples. The relative standard deviation (R.S.D.), as precision, for 10 replicate measurements of 20 {mu}g L{sup -1} Cr in a processed soil sample was 4.2%.

  19. Selective extraction of chromium(VI) using a leaching procedure with sodium carbonate from some plant leaves, soil and sediment samples.

    Science.gov (United States)

    Elci, Latif; Divrikli, Umit; Akdogan, Abdullah; Hol, Aysen; Cetin, Ayse; Soylak, Mustafa

    2010-01-15

    Speciation of chromium in some plant leaves, soil and sediment samples was carried out by selective leaching of Cr(VI) using a sodium carbonate leaching procedure. Total chromium was extracted from the samples using aqua regia and oxidative acid digestion, respectively. The concentrations of chromium species in the extracts were determined by graphite furnace atomic absorption spectrometry (GFAAS). Uncoated graphite furnace tubes were used as the atomizer. Due to the presence of relatively high amounts of Na(2)CO(3) in the resulting samples, the possible influence of Na(2)CO(3) on the absorbance signals was checked. There is no interference of Na(2)CO(3) on the chromium absorbance up to 0.1 mol L(-1) Na(2)CO(3). The limit of detection (LOD) for the determination of Cr(VI) in 0.1 mol L(-1) Na(2)CO(3) solution by GFAAS was found to be 0.93 microg L(-1). The procedure was applied to environmental samples. The relative standard deviation (R.S.D.), as precision, for 10 replicate measurements of 20 microg L(-1) Cr in a processed soil sample was 4.2%.

  20. Failure mode and effects analysis: an empirical comparison of failure mode scoring procedures.

    Science.gov (United States)

    Ashley, Laura; Armitage, Gerry

    2010-12-01

    To empirically compare 2 different commonly used failure mode and effects analysis (FMEA) scoring procedures with respect to their resultant failure mode scores and prioritization: a mathematical procedure, where scores are assigned independently by FMEA team members and averaged, and a consensus procedure, where scores are agreed on by the FMEA team via discussion. A multidisciplinary team undertook a Healthcare FMEA of chemotherapy administration. This included mapping the chemotherapy process, identifying and scoring failure modes (potential errors) for each process step, and generating remedial strategies to counteract them. Failure modes were scored using both an independent mathematical procedure and a team consensus procedure. Almost three-fifths of the 30 failure modes generated were scored differently by the 2 procedures, and for just more than one-third of cases, the score discrepancy was substantial. Using the Healthcare FMEA prioritization cutoff score, almost twice as many failure modes were prioritized by the consensus procedure than by the mathematical procedure. This is the first study to empirically demonstrate that different FMEA scoring procedures can score and prioritize failure modes differently. It found considerable variability in individual team members' opinions on scores, which highlights the subjective and qualitative nature of failure mode scoring. A consensus scoring procedure may be most appropriate for FMEA as it allows variability in individuals' scores and rationales to become apparent and to be discussed and resolved by the team. It may also yield team learning and communication benefits unlikely to result from a mathematical procedure.
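
    The two scoring procedures compared in this study differ only in how a team turns individual severity, occurrence, and detectability ratings into one risk priority number (RPN) per failure mode: averaging independently assigned scores versus agreeing on a single consensus score. The sketch below contrasts the two with made-up ratings to show how prioritisation can diverge; the failure-mode names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical ratings (1-10) from four team members for two failure modes:
# columns are severity, occurrence, detectability.
individual = {
    "wrong dose drawn up": np.array([[8, 3, 4], [6, 2, 5], [9, 4, 4], [7, 3, 6]]),
    "label mismatch":      np.array([[5, 5, 3], [6, 4, 2], [5, 6, 3], [4, 5, 4]]),
}
# Hypothetical consensus ratings agreed in discussion for the same failure modes.
consensus = {
    "wrong dose drawn up": (9, 3, 5),
    "label mismatch":      (5, 5, 3),
}

for mode, scores in individual.items():
    s, o, d = scores.mean(axis=0)          # mathematical procedure: average, then multiply
    rpn_math = s * o * d
    rpn_cons = np.prod(consensus[mode])    # consensus procedure: multiply the agreed scores
    print(f"{mode}: mathematical RPN = {rpn_math:.1f}, consensus RPN = {rpn_cons}")
```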

  1. Spatially variable natural selection and the divergence between parapatric subspecies of lodgepole pine (Pinus contorta, Pinaceae).

    Science.gov (United States)

    Eckert, Andrew J; Shahi, Hurshbir; Datwyler, Shannon L; Neale, David B

    2012-08-01

    Plant populations arrayed across sharp environmental gradients are ideal systems for identifying the genetic basis of ecologically relevant phenotypes. A series of five uplifted marine terraces along the northern coast of California represents one such system where morphologically distinct populations of lodgepole pine (Pinus contorta) are distributed across sharp soil gradients ranging from fertile soils near the coast to podzolic soils ca. 5 km inland. A total of 92 trees was sampled across four coastal marine terraces (N = 10-46 trees/terrace) located in Mendocino County, California and sequenced for a set of 24 candidate genes for growth and responses to various soil chemistry variables. Statistical analyses relying on patterns of nucleotide diversity were employed to identify genes whose diversity patterns were inconsistent with three null models. Most genes displayed patterns of nucleotide diversity that were consistent with null models (N = 19) or with the presence of paralogs (N = 3). Two genes, however, were exceptional: an aluminum responsive ABC-transporter with F(ST) = 0.664 and an inorganic phosphate transporter characterized by divergent haplotypes segregating at intermediate frequencies in most populations. Spatially variable natural selection along gradients of aluminum and phosphate ion concentrations likely accounted for both outliers. These results shed light on some of the genetic components comprising the extended phenotype of this ecosystem, as well as highlight ecotones as fruitful study systems for the detection of adaptive genetic variants.

  2. Do birds of a feather flock together? The variable bases for African American, Asian American, and European American adolescents' selection of similar friends.

    Science.gov (United States)

    Hamm, J V

    2000-03-01

    Variability in adolescent-friend similarity is documented in a diverse sample of African American, Asian American, and European American adolescents. Similarity was greatest for substance use, modest for academic orientations, and low for ethnic identity. Compared with Asian American and European American adolescents, African American adolescents chose friends who were less similar with respect to academic orientation or substance use but more similar with respect to ethnic identity. For all three ethnic groups, personal endorsement of the dimension in question and selection of cross-ethnic-group friends heightened similarity. Similarity was a relative rather than an absolute selection criterion: Adolescents did not choose friends with identical orientations. These findings call for a comprehensive theory of friendship selection sensitive to diversity in adolescents' experiences. Implications for peer influence and self-development are discussed.

  3. Evaluating measurement models in clinical research: covariance structure analysis of latent variable models of self-conception.

    Science.gov (United States)

    Hoyle, R H

    1991-02-01

    Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.

  4. The selective external carotid arterial embolization treatment of uncontrollable epistaxis

    International Nuclear Information System (INIS)

    Yao Qunli; Liu Yizhi; Ni Caifang

    2004-01-01

    Objective: To evaluate the selective external carotid arterial embolization of uncontrollable epistaxis. Methods: 27 procedures of super-selective external carotid arterial embolization were performed with absorbable gelfoam by using Seldinger's method in 26 cases with uncontrollable epistaxis. Results: 27 procedures of super-selective intra-arterial embolization of uncontrollable epistaxis were all successful without any serious complication. Conclusions: Selective external carotid arterial embolization is safe, effective and successful in the treatment of severe epistaxis. (authors)

  5. Assessment and modifications of digestion procedures to determine trace elements in urine of hypertensive and diabetes mellitus patients

    Directory of Open Access Journals (Sweden)

    Awad Abdalla Momen

    2013-01-01

    Full Text Available Context: There is accumulating evidence that the metabolism of several trace elements such as Cr, Cu, Pb, Cd, Co, Mn and Zn might have specific roles in the pathogenesis and progress of many diseases such as hypertension (HTN) and diabetes mellitus (DM). Objectives: To provide a fast, efficient, sensitive, and reliable analytical procedure for trace element determination in urine samples of HTN and DM patients using inductively coupled plasma optical emission spectrometry (ICP-OES). Setting and Design: The ICP-OES operating conditions were optimised and carefully selected in order to maximise sensitivity, precision and accuracy. Factors affecting analytical and biological variability of the concentrations under study were discussed and carefully optimised. Materials and Methods: Different digestion procedures with acids and oxidising reagents were tested. The most suitable procedure for ICP-OES was selected, carefully modified and applied. The validity and accuracy for the different elements were determined by spiking samples with known amounts of a multi-element standard solution. Statistical Analysis: Student's t-test and analysis of variance (ANOVA) were used for analysis. Microsoft Excel was used to assess the significance of the difference between variables. The concentrations obtained were expressed as mean value ± standard deviation (P = 0.05). Results: The results of this study showed that the mean concentrations of Cd, Zn, Pb, Cu, Cr and Mn in urine from both HTN (study group A) and DM (study group B) patients were higher than the corresponding values observed in the control group. However, while the mean value of Co was lower than in the control group, the differences found were not significant (P = 0.05). Conclusion: The method used had excellent sensitivity, and multi-element data could be obtained with a very short acquisition time. The elements Cr, Cd, Pb and Zn might have specific roles in the pathogenesis and progress of HTN and DM. Further

  6. Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis

    Directory of Open Access Journals (Sweden)

    Gustavo Rech

    2013-03-01

    Full Text Available Separation procedures in drug distribution centers (DCs) are manual activities prone to failures such as shipping exchanged, expired or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of potential failure modes incurred by the selected operators. This article integrates learning curves (LCs) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. The LC parameters enable generating an index to identify the recommended operators to perform the procedures. The FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. It also deploys the traditional FMEA severity index into two sub-indexes, related to financial issues and damage to the company's image, in order to characterize failure severity. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
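
    The learning-curve step described here fits each operator's separation times to a power-law (Wright-type) curve and derives an index from the fitted parameters to rank operators for allocation. The sketch below fits such a curve by log-log least squares to hypothetical cycle times; the ranking index and all numbers are illustrative, not the paper's exact formulation.

```python
import numpy as np

def fit_wright_curve(times):
    """Fit t_x = t_1 * x**b to observed cycle times by log-log least squares."""
    x = np.arange(1, len(times) + 1)
    b, log_t1 = np.polyfit(np.log(x), np.log(times), 1)
    return np.exp(log_t1), b            # t_1 (first-cycle time), learning exponent b (< 0)

# Hypothetical separation times (seconds per order) for two operators over 12 repetitions.
operators = {
    "operator_A": np.array([120, 104, 96, 90, 88, 84, 82, 80, 79, 78, 77, 76.0]),
    "operator_B": np.array([100, 97, 95, 94, 93, 92, 92, 91, 91, 90, 90, 89.0]),
}

for name, times in operators.items():
    t1, b = fit_wright_curve(times)
    plateau = times[-3:].mean()
    # Illustrative allocation index: faster learners (more negative b) with a lower
    # plateau time rank higher for assignment to the separation procedure.
    index = -b / plateau
    print(f"{name}: t1 = {t1:.1f}s, exponent b = {b:.3f}, plateau = {plateau:.1f}s, index = {index:.4f}")
```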

  7. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.
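
    The estimator discussed here builds on the familiar two-stage instrumental-variable recipe: regress the exposure on the instrument, then regress the outcome on the fitted exposure. The sketch below implements plain two-stage least squares on simulated data with a genetic-style instrument and an unmeasured confounder; it does not include the survival-selection correction that is the paper's contribution.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 20_000

g = rng.binomial(2, 0.3, n)                  # instrument (e.g. allele count)
u = rng.normal(0, 1, n)                      # unmeasured confounder
exposure = 0.5 * g + u + rng.normal(0, 1, n)
outcome = 0.3 * exposure + u + rng.normal(0, 1, n)   # true causal effect = 0.3

def ols(y, X):
    """Ordinary least squares with an intercept; returns [intercept, slopes...]."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# Stage 1: predict the exposure from the instrument.
a0, a1 = ols(exposure, g)
exposure_hat = a0 + a1 * g
# Stage 2: regress the outcome on the predicted exposure.
b0, b1 = ols(outcome, exposure_hat)

naive = ols(outcome, exposure)[1]
print(f"naive OLS effect = {naive:.3f} (confounded), 2SLS effect = {b1:.3f} (close to 0.3)")
```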

  8. Firefly as a novel swarm intelligence variable selection method in spectroscopy.

    Science.gov (United States)

    Goodarzi, Mohammad; dos Santos Coelho, Leandro

    2014-12-10

    A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Up to now, many feature selection techniques have been developed. Among the different types of feature selection techniques, those based on swarm intelligence optimization methodologies are particularly interesting, since they are usually modeled on animal and insect behavior, e.g., finding the shortest path between a food source and the nest. Such decisions are made collectively by a swarm, leading to a more robust model that is less prone to getting trapped in local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data, leading to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with the genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrate improved prediction results in comparison to a PLS model built using all wavelengths. Results show that the firefly algorithm, as a novel swarm paradigm, leads to a smaller number of selected wavelengths while the prediction performance of the resulting PLS model stays the same. Copyright © 2014. Published by Elsevier B.V.
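
    Whatever swarm optimiser is used (firefly, particle swarm, or a genetic algorithm), the core of wavelength selection is the fitness evaluation: encode a candidate subset as a binary mask over wavelengths and score it by cross-validated PLS prediction error. The sketch below shows that fitness function with scikit-learn on synthetic spectra; the firefly update rules themselves are omitted, so this is the building block the optimiser would call, not the complete algorithm from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic "spectra": 60 samples x 200 wavelengths, where only a small band is informative.
n_samples, n_wavelengths = 60, 200
X = rng.normal(size=(n_samples, n_wavelengths))
y = X[:, 50:60].sum(axis=1) + 0.1 * rng.normal(size=n_samples)

def fitness(mask, n_components=3):
    """Cross-validated RMSE of a PLS model restricted to the wavelengths where mask is True."""
    if mask.sum() < n_components:
        return np.inf
    pls = PLSRegression(n_components=n_components)
    scores = cross_val_score(pls, X[:, mask], y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

full_mask = np.ones(n_wavelengths, dtype=bool)
informative_mask = np.zeros(n_wavelengths, dtype=bool)
informative_mask[45:65] = True

print("RMSE with all wavelengths:      %.3f" % fitness(full_mask))
print("RMSE with a 20-wavelength band: %.3f" % fitness(informative_mask))
# A swarm optimiser would iterate over many candidate masks, moving "fireflies"
# (masks) toward brighter ones (lower RMSE) until the selection stabilises.
```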

  9. Variable Work Hours--The MONY Experience

    Science.gov (United States)

    Fields, Cynthia J.

    1974-01-01

    An experiment with variable work hours in one department of a large company was so successful that it has become standard procedure in various corporate areas, both staff and line. The result? Increased production, fewer errors, improved employee morale, and a significant reduction in lateness and absenteeism. (Author)

  10. 48 CFR 231.205 - Selected costs.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Selected costs. 231.205... OF DEFENSE GENERAL CONTRACTING REQUIREMENTS CONTRACT COST PRINCIPLES AND PROCEDURES Contracts With Commercial Organizations 231.205 Selected costs. ...

  11. Comparison of Three Plot Selection Methods for Estimating Change in Temporally Variable, Spatially Clustered Populations.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife

    2001-07-01

    Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.

  12. Selection, de-selection and progression in German football talent promotion.

    Science.gov (United States)

    Güllich, Arne

    2014-01-01

    This study explored to what extent the development of German professional football players is based on early talent identification (TID) and long-term nurture in talent promotion (TP) programmes, or on their emergence in the course of repeated procedures of player selection and de-selection in these programmes through childhood and youth. The annual turnover of squad members in national junior teams (2001-2013) and youth elite academies was calculated; national U-team members were followed up with regard to nominations through subsequent seasons and to the success level eventually achieved at senior age; and all current Bundesliga players were analysed retrospectively regarding their earlier involvement in TID/TP programmes. Analyses revealed that the mean annual turnover of squad members was 24.5% (youth academies) and 41.0% (national U-teams), respectively. At any age, the probability of persisting in the programme three years later was <50%. Among current Bundesliga players, the age of recruitment into the TID/TP programme was widely evenly distributed across childhood and youth. Accordingly, the number of (future) Bundesliga players who were involved in TID/TP was built up continuously through all age categories. The observations suggest that the collective of professional players emerged from repeated procedures of selection and de-selection through childhood and youth rather than from early selection and long-term continuous nurture in TID/TP programmes. The findings are discussed with regard to the uncertainty of TID and of interventions applied to the selected players, and they are related to the individualistic and collectivistic approaches to TP.

  13. Exploratory Spectroscopy of Magnetic Cataclysmic Variables Candidates and Other Variable Objects

    Science.gov (United States)

    Oliveira, A. S.; Rodrigues, C. V.; Cieslinski, D.; Jablonski, F. J.; Silva, K. M. G.; Almeida, L. A.; Rodríguez-Ardila, A.; Palhares, M. S.

    2017-04-01

    The increasing number of synoptic surveys made by small robotic telescopes, such as the photometric Catalina Real-Time Transient Survey (CRTS), provides a unique opportunity to discover variable sources and improves the statistical samples of such classes of objects. Our goal is the discovery of magnetic Cataclysmic Variables (mCVs). These are rare objects that probe interesting accretion scenarios controlled by the white-dwarf magnetic field. In particular, improved statistics of mCVs would help to address open questions on their formation and evolution. We performed an optical spectroscopy survey to search for signatures of magnetic accretion in 45 variable objects selected mostly from the CRTS. In this sample, we found 32 CVs, 22 being mCV candidates, 13 of which were previously unreported as such. If the proposed classifications are confirmed, it would represent an increase of 4% in the number of known polars and 12% in the number of known IPs. A fraction of our initial sample was classified as extragalactic sources or other types of variable stars by the inspection of the identification spectra. Despite the inherent complexity in identifying a source as an mCV, variability-based selection, followed by spectroscopic snapshot observations, has proved to be an efficient strategy for their discoveries, being a relatively inexpensive approach in terms of telescope time. Based on observations obtained at the Observatório do Pico dos Dias/LNA, and at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).

  14. The Taiwanese-American occultation survey project stellar variability. III. Detection of 58 new variable stars

    Energy Technology Data Exchange (ETDEWEB)

    Ishioka, R.; Wang, S.-Y.; Zhang, Z.-W.; Lehner, M. J.; Cook, K. H.; King, S.-K.; Lee, T.; Marshall, S. L.; Schwamb, M. E.; Wang, J.-H.; Wen, C.-Y. [Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan (China); Alcock, C.; Protopapas, P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Axelrod, T. [Steward Observatory, 933 North Cherry Avenue, Room N204, Tucson, AZ 85721 (United States); Bianco, F. B. [Center for Cosmology and Particle Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Byun, Y.-I. [Department of Astronomy and University Observatory, Yonsei University, 134 Shinchon, Seoul 120-749 (Korea, Republic of); Chen, W. P.; Ngeow, C.-C. [Institute of Astronomy, National Central University, No. 300, Jhongda Road, Jhongli City, Taoyuan County 320, Taiwan (China); Kim, D.-W. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Rice, J. A., E-mail: ishioka@asiaa.sinica.edu.tw [Department of Statistics, University of California Berkeley, 367 Evans Hall, Berkeley, CA 94720 (United States)

    2014-04-01

    The Taiwanese-American Occultation Survey project is designed for the detection of stellar occultations by small-size Kuiper Belt Objects, and it has monitored selected fields along the ecliptic plane by using four telescopes with a 3 deg{sup 2} field of view on the sky since 2005. We have analyzed data accumulated during 2005-2012 to detect variable stars. Sixteen fields with observations of more than 100 epochs were examined. We recovered 85 variables among a total of 158 known variable stars in these 16 fields. Most of the unrecovered variables are located in the fields observed less frequently. We also detected 58 variable stars which are not listed in the International Variable Star Index of the American Association of Variable Star Observers. These variable stars are classified as 3 RR Lyrae, 4 Cepheid, 1 δ Scuti, 5 Mira, 15 semi-regular, and 27 eclipsing binaries based on the periodicity and the profile of the light curves.

  15. Extension of Latin hypercube samples with correlated variables

    Energy Technology Data Exchange (ETDEWEB)

    Sallaberry, C.J. [Sandia National Laboratories, Department 6784, MS 0776, Albuquerque, NM 87185-0776 (United States); Helton, J.C. [Department of Mathematics and Statistics, Arizona State University, Tempe, AZ 85287-1804 (United States)], E-mail: jchelto@sandia.gov; Hora, S.C. [University of Hawaii at Hilo, Hilo, HI 96720-4091 (United States)

    2008-07-15

    A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.

  16. Extension of Latin hypercube samples with correlated variables

    International Nuclear Information System (INIS)

    Sallaberry, C.J.; Helton, J.C.; Hora, S.C.

    2008-01-01

    A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations

  17. Extension of latin hypercube samples with correlated variables.

    Energy Technology Data Exchange (ETDEWEB)

    Hora, Stephen Curtis (University of Hawaii at Hilo, HI); Helton, Jon Craig (Arizona State University, Tempe, AZ); Sallaberry, Cedric J. PhD. (.; .)

    2006-11-01

    A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.
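
    A Latin hypercube sample stratifies each variable into m equiprobable intervals and draws exactly one value per interval, pairing intervals across variables by random permutation. The sketch below generates a basic LHS with numpy; the rank-correlation control and the doubling-to-2m extension described in these records require an additional restricted-pairing step (Iman-Conover style) that is not shown here.

```python
import numpy as np

def latin_hypercube(m, n_vars, rng):
    """Basic m-point Latin hypercube on [0, 1]^n_vars (no correlation control)."""
    sample = np.empty((m, n_vars))
    for j in range(n_vars):
        # One uniform draw inside each of the m equiprobable strata, then shuffle strata.
        strata = (np.arange(m) + rng.uniform(size=m)) / m
        sample[:, j] = rng.permutation(strata)
    return sample

rng = np.random.default_rng(8)
lhs = latin_hypercube(10, 3, rng)

# Each column hits every decile exactly once, which is the defining LHS property.
print(np.sort((lhs * 10).astype(int), axis=0).T)
```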

  18. Relationship of Powder Feedstock Variability to Microstructure and Defects in Selective Laser Melted Alloy 718

    Science.gov (United States)

    Smith, T. M.; Kloesel, M. F.; Sudbrack, C. K.

    2017-01-01

    Powder-bed additive manufacturing processes use fine powders to build parts layer by layer. For selective laser melted (SLM) Alloy 718, the powders that are available off-the-shelf are in the 10-45 or 15-45 micron size range. A comprehensive investigation of sixteen powders from these typical ranges and two off-nominal-sized powders is underway to gain insight into the impact of feedstock on processing, durability and performance of 718 SLM space-flight hardware. This talk emphasizes one aspect of this work: the impact of powder variability on the microstructure and defects observed in the as-fabricated and fully heat-treated material, where lab-scale components were built using vendor-recommended parameters. These typical powders exhibit variation in composition, percentage of fines, roughness, morphology and particle size distribution. How these differences relate to melt-pool size, porosity, grain structure, precipitate distributions, and inclusion content will be presented and discussed in the context of build quality and powder acceptance.

  19. Operational decisionmaking and action selection under psychological stress in nuclear power plants

    International Nuclear Information System (INIS)

    Gertman, D.I.; Haney, L.N.; Jenkins, J.P.; Blackman, H.S.

    1985-05-01

    An extensive review of literature on individual and group performance and decisionmaking under psychological stress was conducted and summarized. Specific stress-related variables relevant to reactor operation were pinpointed and incorporated in an experiment to assess the performance of reactor operators under psychological stress. The decisionmaking performance of 24 reactor operators under differing levels of workload, conflicting information, and detail of available written procedures was assessed in terms of selecting immediate, subsequent, and nonapplicable actions in response to 12 emergency scenarios resulting from a severe seismic event at a pressurized water reactor. Specific personality characteristics of the operators suggested by the literature to be related to performance under stress were assessed and correlated to decisionmaking under stress. The experimental results were statistically analyzed, and findings indicated that operator decisionmaking under stress was more accurate under lower levels of workload, with the availability of detailed procedures, and in the presence of high conflicting information

  20. The non-appearance of the selection procedure and possibilities of legal protection of the unsuccessful bidder; Das Ausbleiben des Auswahlverfahrens und Rechtsschutzmoeglichkeiten des unterlegenen Bieters

    Energy Technology Data Exchange (ETDEWEB)

    Meyer-Hetling, Astrid; Templin, Wolf [Kanzlei Becker Buettner Held, Berlin (Germany)

    2012-02-15

    A municipality's violation of the relevant guidelines for awarding concessions may have legal consequences, in particular with regard to an already completed selection process. The authors of this contribution focus on the complete absence of a selection process as required under concession law. First, the requirements and precepts of energy law, competition law and European law against which the concession-granting municipality has offended are presented. Subsequently, the authors examine whether this violation immediately results in the nullity of the concession contract, as well as what claims the company that was not considered has against the municipality. Furthermore, the procedural and antitrust-law instruments available to companies that were not considered are presented.