WorldWideScience

Sample records for selection operator lasso

  1. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and show an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the adaptive lasso's performance becomes identical to that of the lasso.
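
    The normalisation point in this abstract rests on a standard trick: the adaptive lasso can be computed by folding the adaptive weights into the predictors and running an ordinary lasso. The numpy sketch below is illustrative only (a textbook coordinate-descent solver, pilot-OLS weights, and made-up data; it is not the authors' algorithm, and the penalty level is arbitrary):

```python
import numpy as np

def soft_threshold(z, t):
    # elementwise soft-thresholding, the scalar lasso update
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # plain coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]
    return beta

def adaptive_lasso(X, y, lam, gamma=1.0):
    # adaptive weights from a pilot OLS fit, folded into the columns of X
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / (np.abs(b_ols) ** gamma + 1e-8)
    beta_w = lasso_cd(X / w, y, lam)   # ordinary lasso on reweighted predictors
    return beta_w / w                  # map back to the original scale

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
beta_true = np.array([2.0, 0.0, 0.0, 1.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = adaptive_lasso(X, y, lam=0.1)
```

    With the pilot-OLS weights folded in, the irrelevant columns are shrunk so hard that the lasso zeroes them exactly, while the true coefficients are only lightly biased.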

  2. Inference for feature selection using the Lasso with high-dimensional data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper; Ekstrøm, Claus Thorn

    2014-01-01

    Penalized regression models such as the Lasso have proved useful for variable selection in many fields, especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference for the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios ...
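
    The randomization idea can be caricatured in a few lines: permute the candidate column to break its link with the response, refit the lasso, and compare coefficient magnitudes. This is an illustrative sketch under assumed names, data and penalty levels, not the authors' actual test (`lasso_ista` is a generic proximal-gradient lasso solver):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=300):
    # generic proximal-gradient (ISTA) solver for (1/2n)||y - Xb||^2 + lam*||b||_1
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        z = b - (X.T @ (X @ b - y)) / (n * L)  # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
    return b

def permutation_pvalue(X, y, j, lam, n_perm=100, seed=1):
    # null hypothesis: column j is unrelated to y; permuting it samples that null
    rng = np.random.default_rng(seed)
    obs = abs(lasso_ista(X, y, lam)[j])
    hits = 0
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        hits += abs(lasso_ista(Xp, y, lam)[j]) >= obs
    return (1 + hits) / (1 + n_perm)

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 4))
y = 1.5 * X[:, 0] + 0.2 * rng.standard_normal(60)
p_signal = permutation_pvalue(X, y, 0, lam=0.1)   # true predictor
p_noise = permutation_pvalue(X, y, 1, lam=0.1)    # pure-noise predictor
```

    The true predictor gets a small p-value while the noise predictor does not; the actual paper controls the error rate with a more careful rephrasing of the null.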

  3. Efficient methods for overlapping group lasso.

    Science.gov (United States)

    Yuan, Lei; Liu, Jun; Ye, Jieping

    2013-09-01

    The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving a smooth and convex dual problem, which allows the use of gradient-descent-type algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the l_q norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on capped-norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic dataset and a breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. The results also demonstrate the effectiveness of the nonconvex formulation for the overlapping group Lasso.
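
    For context, the proximal operator that the paper generalizes has a simple closed form when the groups do not overlap: block soft-thresholding. A small numpy sketch of that baseline (the overlapping case, as the abstract notes, instead requires solving a smooth convex dual problem):

```python
import numpy as np

def group_prox(v, groups, t):
    # prox of t * sum_g ||v_g||_2 for DISJOINT groups:
    # each block is either zeroed or shrunk toward the origin
    out = v.astype(float)
    for g in groups:
        norm = np.linalg.norm(out[g])
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * out[g]
    return out

v = np.array([3.0, 4.0, 0.5])
shrunk = group_prox(v, [[0, 1], [2]], t=1.0)  # [2.4, 3.2, 0.0]
```

    The first block has norm 5, so it is scaled by 1 - 1/5; the second block's norm is below the threshold, so it is set to zero entirely.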

  4. The use of vector bootstrapping to improve variable selection precision in Lasso models

    NARCIS (Netherlands)

    Laurin, C.; Boomsma, D.I.; Lubke, G.H.

    2016-01-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections.
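
    A common version of this idea is to record how often each variable is selected across pairs-bootstrap refits. The sketch below is generic (a textbook proximal-gradient lasso on synthetic data, not the authors' vector bootstrap, and the penalty level is arbitrary):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=400):
    # proximal-gradient lasso solver for (1/2n)||y - Xb||^2 + lam*||b||_1
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n
    b = np.zeros(p)
    for _ in range(n_iter):
        z = b - (X.T @ (X @ b - y)) / (n * L)
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

def selection_frequency(X, y, lam, B=100, seed=0):
    # pairs bootstrap: resample rows, refit, count nonzero coefficients
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.integers(0, n, size=n)
        counts += lasso_ista(X[idx], y[idx], lam) != 0
    return counts / B

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 5))
y = 2.0 * X[:, 0] + 0.3 * rng.standard_normal(80)
freq = selection_frequency(X, y, lam=0.1)  # freq[0] near 1, others low
```

    Variables that are selected in nearly every bootstrap replicate are more trustworthy than ones that flicker in and out of the model.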

  5. Introduction to the LASSO

    Indian Academy of Sciences (India)

    ... the LASSO method as a constrained quadratic programming problem, and ... solve the LASSO problem. We also ... The problem (2) is equivalent to the best subset selection. ... least absolute shrinkage and selection operator (LASSO), which is based on the following key concepts: ...

  6. Using Multivariate Regression Model with Least Absolute Shrinkage and Selection Operator (LASSO) to Predict the Incidence of Xerostomia after Intensity-Modulated Radiotherapy for Head and Neck Cancer

    Science.gov (United States)

    Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

    Purpose The aim of this study was to develop a multivariate logistic regression model with the least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials Quality-of-life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used for the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with a bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, omnibus test, Hosmer-Lemeshow test, and the AUC. Results Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by the Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, omnibus test, and Nagelkerke R2 was satisfactory and corresponded well with the expected values. Conclusions ...

  7. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    The "least absolute shrinkage and selection operator" (Lasso) method has recently been adapted for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure that ensure the network Lasso is accurate. By leveraging concepts from compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee that the network Lasso, for a particular loss function, delivers an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by the network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.

  8. Identifying the Prognosis Factors in Death after Liver Transplantation via Adaptive LASSO in Iran

    Directory of Open Access Journals (Sweden)

    Hadi Raeisi Shahraki

    2016-01-01

    Despite the widespread use of liver transplantation as a routine therapy for liver diseases, the factors affecting its outcomes are still controversial. This study attempted to identify the factors most strongly associated with death after liver transplantation. For this purpose, a modified least absolute shrinkage and selection operator (LASSO), called the adaptive LASSO, was utilized. One of the advantages of this method is that it can handle a high number of factors. In a historical cohort study from 2008 to 2013, the clinical findings of 680 patients undergoing liver transplant surgery were considered. Ridge and adaptive LASSO regression methods were then implemented to identify the factors most associated with death. To compare the performance of the two models, the receiver operating characteristic (ROC) curve was used. According to the results, 12 factors were significant in ridge regression and 9 in adaptive LASSO regression. The area under the ROC curve (AUC) of the adaptive LASSO was 89% (95% CI: 86%-91%), significantly greater than that of ridge regression (64%, 95% CI: 61%-68%) (p<0.001). In conclusion, the significant factors and the performance criteria revealed the superiority of the adaptive LASSO method as a penalized model over the traditional regression model in the present study.

  9. Consistent and Conservative Model Selection with the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2016-01-01

    We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...

  10. Toward Probabilistic Diagnosis and Understanding of Depression Based on Functional MRI Data Analysis with Logistic Group LASSO.

    Directory of Open Access Journals (Sweden)

    Yu Shimizu

    Diagnosis of psychiatric disorders based on brain imaging data is highly desirable in clinical applications. However, a common problem in applying machine learning algorithms is that the number of imaging data dimensions often greatly exceeds the number of available training samples. Furthermore, interpretability of the learned classifier with respect to brain function and anatomy is an important, but non-trivial, issue. We propose the use of logistic regression with a least absolute shrinkage and selection operator (LASSO) to capture the most critical input features. In particular, we consider application of group LASSO to select brain areas relevant to diagnosis. An additional advantage of LASSO is its probabilistic output, which allows evaluation of diagnosis certainty. To verify our approach, we obtained semantic and phonological verbal fluency fMRI data from 31 depression patients and 31 control subjects, and compared the performance of group LASSO (gLASSO) and sparse group LASSO (sgLASSO) to that of standard LASSO (sLASSO), support vector machine (SVM), and random forest. Over 90% classification accuracy was achieved with gLASSO, sgLASSO, and SVM; however, in contrast to SVM, the LASSO approaches allow for identification of the most discriminative weights and estimation of prediction reliability. Semantic task data revealed contributions to the classification from the left precuneus, left precentral gyrus, left inferior frontal cortex (pars triangularis), and left cerebellum (crus I). Weights for the phonological task indicated contributions from the left inferior frontal operculum, left postcentral gyrus, left insula, left middle frontal cortex, bilateral middle temporal cortices, bilateral precuneus, left inferior frontal cortex (pars triangularis), and left precentral gyrus. The distribution of normalized odds ratios further showed that predictions with absolute odds ratios higher than 0.2 could be regarded as certain.

  11. Matlab implementation of LASSO, LARS, the elastic net and SPCA

    DEFF Research Database (Denmark)

    2005-01-01

    There are a number of interesting variable selection methods available besides the regular forward selection and stepwise selection methods. Such approaches include LASSO (least absolute shrinkage and selection operator), least angle regression (LARS), and elastic net (LARS-EN) regression. There also exists a method for calculating principal components with sparse loadings. This software package contains Matlab implementations of these functions. The standard implementations of these functions are available as add-on packages in S-Plus and R.
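
    The elastic net mentioned here combines the lasso's l1 penalty with a ridge term, and its naive coordinate-descent update has a simple closed form. A numpy sketch with made-up data, not the package's LARS-EN implementation:

```python
import numpy as np

def enet_cd(X, y, lam, l1_ratio=0.5, n_iter=300):
    # coordinate descent for
    # (1/2n)||y - Xb||^2 + lam*(l1_ratio*||b||_1 + (1-l1_ratio)/2*||b||_2^2)
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # partial residual
            z = X[:, j] @ r_j / n
            num = np.sign(z) * max(abs(z) - lam * l1_ratio, 0.0)
            b[j] = num / (col_sq[j] / n + lam * (1.0 - l1_ratio))
    return b

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
# two nearly identical predictors plus one pure-noise predictor
X = np.column_stack([x, x + 0.01 * rng.standard_normal(100),
                     rng.standard_normal(100)])
y = X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(100)
b = enet_cd(X, y, lam=0.1, l1_ratio=0.5)
```

    Unlike the lasso, which tends to pick one of two highly correlated predictors, the ridge part of the elastic net spreads the weight across both of them.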

  12. Dictionary-Based Image Denoising by Fused-Lasso Atom Selection

    Directory of Open Access Journals (Sweden)

    Ao Li

    2014-01-01

    We propose an efficient image denoising scheme based on the fused lasso with dictionary learning. The scheme makes two main contributions. First, we learn a patch-based adaptive dictionary by principal component analysis (PCA), clustering the image into many subsets, which better preserves local geometric structure. Second, we code the patches in each subset by the fused lasso with the cluster-learned dictionary and propose an iterative split Bregman method to solve it rapidly. We demonstrate the scheme's capabilities with several experiments. The results show that the proposed scheme is competitive with some excellent denoising algorithms.
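
    The fused-lasso signal approximator at the core of such schemes can be illustrated on a raw 1-D signal, without the learned dictionary: the ADMM below (equivalent to split Bregman for this problem) solves min_x 0.5*||x - y||^2 + lam*||Dx||_1 with D the first-difference operator. Penalty level and data are made up for illustration:

```python
import numpy as np

def fused_lasso_1d(y, lam, rho=1.0, n_iter=200):
    # ADMM for 0.5*||x - y||^2 + lam*||D x||_1 (1-D fused lasso / TV denoising)
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n first-difference matrix
    A = np.eye(n) + rho * D.T @ D         # x-update system matrix
    x = y.copy()
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))        # quadratic step
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
        u = u + Dx - z                                          # dual update
    return x

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(30), 2.0 * np.ones(30), np.zeros(30)])
noisy = truth + 0.3 * rng.standard_normal(90)
denoised = fused_lasso_1d(noisy, lam=1.0)
```

    The l1 penalty on the differences drives the estimate toward a piecewise-constant signal, smoothing noise within segments while keeping the jumps.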

  13. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped sparse group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are first obtained; these include the power spectrum, time-domain statistics, AR model, and wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses a logistic regression model with a sparse group Lasso penalty function. The model is fitted on the training data, and parameter estimates are obtained by modified blockwise coordinate descent and coordinate gradient descent methods. The best parameters and feature subset are selected using 10-fold cross-validation. Finally, the test data are classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on data from the international BCI Competition IV reached 84.72%.
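
    The sparse group Lasso penalty used here adds an elementwise l1 term to the group penalty; for disjoint groups its proximal operator has a known closed form that composes elementwise and blockwise soft-thresholding. A small numpy check of that closed form (not the paper's blockwise coordinate descent fitting procedure):

```python
import numpy as np

def sparse_group_prox(v, groups, t, alpha):
    # prox of t*(alpha*||v||_1 + (1-alpha)*sum_g ||v_g||_2), disjoint groups:
    # first elementwise soft-threshold, then block soft-threshold
    s = np.sign(v) * np.maximum(np.abs(v) - t * alpha, 0.0)
    out = s.copy()
    for g in groups:
        norm = np.linalg.norm(s[g])
        out[g] = 0.0 if norm <= t * (1 - alpha) else (1 - t * (1 - alpha) / norm) * s[g]
    return out

v = np.array([3.0, -4.0, 0.2, 0.1])
res = sparse_group_prox(v, [[0, 1], [2, 3]], t=1.0, alpha=0.5)
```

    The second group is wiped out entirely (group-level sparsity), while the surviving group is also sparsified and shrunk elementwise, which is exactly the "select channels, then features within channels" behavior the abstract describes.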

  14. Recommendations for the Implementation of the LASSO Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vogelmann, Andrew M [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, Xiaoping [National University of Defense Technology, China; Endo, Satoshi [Brookhaven National Lab. (BNL), Upton, NY (United States); Krishna, Bhargavi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Zhijin [California Inst. of Technology (CalTech), La Canada Flintridge, CA (United States). Jet Propulsion Lab.; University of California, Los Angeles; Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Xiao, Heng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-11-15

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM's extensive suite of measurements. This modeling capability, envisioned in the ARM Decadal Vision (U.S. Department of Energy 2014), has subsequently been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project, and it has an initial focus on shallow convection at the ARM Southern Great Plains (SGP) atmospheric observatory. This report documents the recommendations resulting from the pilot project to be considered by ARM for implementation into routine operations. During the pilot phase, LASSO has evolved from the initial vision outlined in the pilot project white paper (Gustafson and Vogelmann 2015) to what is recommended in this report. Further details on the overall LASSO project are available at https://www.arm.gov/capabilities/modeling/lasso. Feedback regarding LASSO and the recommendations in this report can be directed to William Gustafson, the project principal investigator (PI), and Andrew Vogelmann, the co-principal investigator (Co-PI), via lasso@arm.gov.

  15. Factors associated with performing tuberculosis screening of HIV-positive patients in Ghana: LASSO-based predictor selection in a large public health data set

    Directory of Open Access Journals (Sweden)

    Susanne Mueller-Using

    2016-07-01

    Abstract Background The purpose of this study is to propose the least absolute shrinkage and selection operator (LASSO) procedure as an alternative to conventional variable selection models, as it allows for easy interpretation and handles multicollinearities. We developed a model on the basis of LASSO-selected parameters in order to link associated demographic, socio-economic, clinical and immunological factors to the performance of tuberculosis screening in HIV-positive patients in Ghana. Methods Applying the LASSO method and multivariate logistic regression analysis to a large public health data set, we selected relevant predictors related to tuberculosis screening. Results One thousand and ninety-five patients infected with HIV were enrolled in this study, with 691 (63.2%) of them having tuberculosis screening documented in their patient folders. Predictors found to be significantly associated with performance of tuberculosis screening can be classified into factors related to the clinician's perception of the clinical state, as well as those related to PLHIV's awareness. These factors include newly diagnosed HIV infections (n = 354 (32.42%), aOR 1.84), current CD4+ T cell count (aOR 0.92), non-availability of HIV type (n = 787 (72.07%), aOR 0.56), chronic cough (n = 32 (2.93%), aOR 5.07), intake of co-trimoxazole (n = 271 (24.82%), aOR 2.31), vitamin supplementation (n = 220 (20.15%), aOR 2.64), as well as the use of mosquito bed nets (n = 613 (56.14%), aOR 1.53). Conclusions Accelerated TB screening among newly diagnosed HIV patients indicates that application of the WHO screening form for intensifying tuberculosis case finding among HIV-positive individuals in resource-limited settings is increasingly adopted. However, screening for TB in PLHIV is still impacted by the clinician's perception of the patient's health state and PLHIV's health awareness. Education of staff, counselling of PLHIV and sufficient financing are ...

  16. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, even though many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, the statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows that EPS-LASSO outperforms existing methods, with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to that of other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index-associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. Contact: hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online.

  17. Oracle Efficient Estimation and Forecasting with the Adaptive LASSO and the Adaptive Group LASSO in Vector Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Callot, Laurent

    We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...

  18. Breast cancer tumor classification using LASSO method selection approach

    International Nuclear Information System (INIS)

    Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M.

    2016-10-01

    Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed at automatically classifying tumor lesions as malign or benign, as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with a high probability of malignant findings, aiding in the detection of potentially malign cases as a second opinion to the radiologist. (Author)

  19. Breast cancer tumor classification using LASSO method selection approach

    Energy Technology Data Exchange (ETDEWEB)

    Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed at automatically classifying tumor lesions as malign or benign, as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with a high probability of malignant findings, aiding in the detection of potentially malign cases as a second opinion to the radiologist. (Author)

  20. Bayesian LASSO, scale space and decision making in association genetics.

    Science.gov (United States)

    Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J

    2015-01-01

    LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters, instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.

  1. Lasso and probabilistic inequalities for multivariate point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent

    2015-01-01

    Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select ... for multivariate Hawkes processes are proven, which allows us to check these assumptions by considering general dictionaries based on histograms, Fourier or wavelet bases. Motivated by problems of neuronal activity inference, we finally carry out a simulation study for multivariate Hawkes processes and compare our methodology with the adaptive Lasso procedure proposed by Zou in (J. Amer. Statist. Assoc. 101 (2006) 1418-1429). We observe an excellent behavior of our procedure. We rely on theoretical aspects for the essential question of tuning our methodology. Unlike the adaptive Lasso of (J. Amer. Statist. Assoc. 101 (2006) ...

  2. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    Science.gov (United States)

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going-concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural networks (NN), classification and regression trees (CART), and support vector machines (SVM). The samples of this study include 48 GCD listed companies and 124 non-GCD (NGCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross-validation in order to assess prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96% (Type I error rate: 12.22%; Type II error rate: 7.50%), the prediction accuracy of the LASSO-CART model is 88.75% (Type I error rate: 13.61%; Type II error rate: 14.17%), and the prediction accuracy of the LASSO-SVM model is 89.79% (Type I error rate: 10.00%; Type II error rate: 15.83%).

  3. FFTLasso: Large-Scale LASSO in the Fourier Domain

    KAUST Repository

    Bibi, Adel Aamer

    2017-11-09

    In this paper, we revisit the LASSO sparse representation problem, which has been studied and used in a variety of different areas, ranging from signal processing and information theory to computer vision and machine learning. In the vision community, it has found its way into many important applications, including face recognition, tracking, super-resolution, and image denoising, to name a few. Despite advances in efficient sparse algorithms, solving large-scale LASSO problems remains a challenge. To circumvent this difficulty, people tend to downsample and subsample the problem (e.g., via dimensionality reduction) to maintain a manageably sized LASSO, which usually comes at the cost of losing solution accuracy. This paper proposes a novel circulant reformulation of the LASSO that lifts the problem to a higher dimension, where ADMM can be efficiently applied to its dual form. Because of this lifting, all optimization variables are updated using only basic element-wise operations, the most computationally expensive of which is a 1D FFT. In this way, there is no need for a linear system solver or matrix-vector multiplication. Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g., on a GPU). The attractive computational properties of FFTLasso are verified by extensive experiments on synthetic and real data and on the face recognition task. They demonstrate that FFTLasso scales much more effectively than a state-of-the-art solver.
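
    The circulant reformulation works because circulant matrices are diagonalized by the DFT, so a circulant matrix-vector product reduces to elementwise multiplication in the Fourier domain. A minimal numpy check of that identity (illustrative only, not the FFTLasso solver itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)   # first column of the circulant matrix
# C[i, j] = c[(i - j) mod n]: each row is a cyclic shift of the first
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
x = rng.standard_normal(n)

direct = C @ x                                              # O(n^2) dense product
via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real   # O(n log n) product
```

    The two products agree to machine precision; this is what lets every ADMM update in the lifted problem run as elementwise operations plus a 1D FFT.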

  5. Supervised group Lasso with applications to microarray data analysis

    Directory of Open Access Journals (Sweden)

    Huang Jian

    2007-02-01

    Background: A tremendous amount of effort has been devoted to identifying genes for the diagnosis and prognosis of diseases using microarray gene expression data. It has been demonstrated that gene expression data have a cluster structure, where the clusters consist of co-regulated genes that tend to have coordinated functions. However, most available statistical methods for gene selection do not take this cluster structure into consideration. Results: We propose a supervised group Lasso approach that takes into account the cluster structure in gene expression data for gene selection and predictive model building. For gene expression data without biological cluster information, we first divide genes into clusters using the K-means approach and determine the optimal number of clusters using the Gap method. The supervised group Lasso consists of two steps. In the first step, we identify important genes within each cluster using the Lasso method. In the second step, we select important clusters using the group Lasso. Tuning parameters are determined using V-fold cross validation at both steps to allow for further flexibility. Prediction performance is evaluated using leave-one-out cross validation. We apply the proposed method to disease classification and survival analysis with microarray data. Conclusion: We analyze four microarray data sets using the proposed approach: two cancer data sets with binary cancer occurrence as outcomes and two lymphoma data sets with survival outcomes. The results show that the proposed approach is capable of identifying a small number of influential gene clusters and important genes within those clusters, and has better prediction performance than existing methods.
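    The first step of the procedure (cluster genes, then run a Lasso within each cluster) can be sketched with scikit-learn on synthetic data. The group Lasso across clusters (step 2) has no scikit-learn implementation and is omitted here, so this is an illustrative skeleton, not the authors' code; all sizes and tuning values are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 60, 40                               # 60 arrays, 40 "genes"
X = rng.standard_normal((n, p))
y = X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(n)  # genes 0 and 1 drive the outcome

# Cluster genes (columns) by their expression profiles, as with K-means in the paper
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X.T)

# Step 1: within each cluster, keep genes the Lasso assigns a nonzero weight
kept = []
for c in range(4):
    idx = np.flatnonzero(labels == c)
    coef = Lasso(alpha=0.1).fit(X[:, idx], y).coef_
    kept.extend(idx[coef != 0])

print(sorted(int(j) for j in kept))
```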

  6. Least absolute shrinkage and selection operator type methods for the identification of serum biomarkers of overweight and obesity: simulation and application

    Directory of Open Access Journals (Sweden)

    Monica M. Vasquez

    2016-11-01

    Background: The study of circulating biomarkers and their association with disease outcomes has become progressively complex due to advances in the measurement of these biomarkers through multiplex technologies. The Least Absolute Shrinkage and Selection Operator (LASSO) is a data analysis method that may be utilized for biomarker selection in these high-dimensional data. However, it is unclear which LASSO-type method is preferable when considering data scenarios that may be present in serum biomarker research, such as high correlation between biomarkers, weak associations with the outcome, and a sparse number of true signals. The goal of this study was to compare the LASSO to five LASSO-type methods given these scenarios. Methods: A simulation study was performed to compare the LASSO, Adaptive LASSO, Elastic Net, Iterated LASSO, Bootstrap-Enhanced LASSO, and Weighted Fusion for the binary logistic regression model. The simulation study was designed to reflect the data structure of the population-based Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD), specifically the sample size (N = 1000 for total population, 500 for sub-analyses), correlation of biomarkers (0.20, 0.50, 0.80), prevalence of overweight (40%) and obese (12%) outcomes, and the association of outcomes with standardized serum biomarker concentrations (log-odds ratio = 0.05-1.75). Each LASSO-type method was then applied to the TESAOD data of 306 overweight, 66 obese, and 463 normal-weight subjects with a panel of 86 serum biomarkers. Results: Based on the simulation study, no method had an overall superior performance. The Weighted Fusion correctly identified more true signals, but incorrectly included more noise variables. The LASSO and Elastic Net correctly identified many true signals and excluded more noise variables. In the application study, biomarkers of overweight and obesity selected by all methods were Adiponectin, Apolipoprotein H, Calcitonin, CD

  7. Least absolute shrinkage and selection operator type methods for the identification of serum biomarkers of overweight and obesity: simulation and application.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Chen, Zhao; Halonen, Marilyn; Guerra, Stefano

    2016-11-14

    The study of circulating biomarkers and their association with disease outcomes has become progressively complex due to advances in the measurement of these biomarkers through multiplex technologies. The Least Absolute Shrinkage and Selection Operator (LASSO) is a data analysis method that may be utilized for biomarker selection in these high dimensional data. However, it is unclear which LASSO-type method is preferable when considering data scenarios that may be present in serum biomarker research, such as high correlation between biomarkers, weak associations with the outcome, and sparse number of true signals. The goal of this study was to compare the LASSO to five LASSO-type methods given these scenarios. A simulation study was performed to compare the LASSO, Adaptive LASSO, Elastic Net, Iterated LASSO, Bootstrap-Enhanced LASSO, and Weighted Fusion for the binary logistic regression model. The simulation study was designed to reflect the data structure of the population-based Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD), specifically the sample size (N = 1000 for total population, 500 for sub-analyses), correlation of biomarkers (0.20, 0.50, 0.80), prevalence of overweight (40%) and obese (12%) outcomes, and the association of outcomes with standardized serum biomarker concentrations (log-odds ratio = 0.05-1.75). Each LASSO-type method was then applied to the TESAOD data of 306 overweight, 66 obese, and 463 normal-weight subjects with a panel of 86 serum biomarkers. Based on the simulation study, no method had an overall superior performance. The Weighted Fusion correctly identified more true signals, but incorrectly included more noise variables. The LASSO and Elastic Net correctly identified many true signals and excluded more noise variables. In the application study, biomarkers of overweight and obesity selected by all methods were Adiponectin, Apolipoprotein H, Calcitonin, CD14, Complement 3, C-reactive protein, Ferritin
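    Two of the six compared penalties, the plain LASSO and the Elastic Net, are available directly in scikit-learn's logistic regression and can be sketched on synthetic data sized like the biomarker panel. The data, class balance, and regularization strengths below are illustrative assumptions, not the TESAOD data or the authors' settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 86 standardized serum biomarkers, ~500 subjects
X, y = make_classification(n_samples=500, n_features=86, n_informative=6,
                           weights=[0.6], random_state=1)
X = StandardScaler().fit_transform(X)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=0.1, max_iter=5000).fit(X, y)

n_lasso = int(np.sum(lasso.coef_ != 0))   # biomarkers retained by the LASSO
n_enet = int(np.sum(enet.coef_ != 0))     # the Elastic Net often retains more
print(n_lasso, n_enet)
```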

  8. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression.

    Science.gov (United States)

    Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris

    2016-09-01

    Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, high dimensionality, sparsity, and class imbalance of electronic health data and the complexity of risk quantification, challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population, by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the international classification of diseases 9th-revision clinical modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from California, State Inpatient Databases, Healthcare Cost and Utilization Project between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression resulting in models that are easier to interpret by fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the use of a Tree-Lasso model was as competitive in terms of accuracy (measured by area under the receiver operating characteristic curve-AUC) as the traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided more interpretable models in terms of high-level diagnoses. Additionally, interpretations of models are in accordance with existing medical understanding of pediatric readmission. Best performing models have

  9. Description of the LASSO Alpha 2 Release

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vogelmann, Andrew M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, Xiaoping [Univ. of California, Los Angeles, CA (United States); Endo, Satoshi [Brookhaven National Lab. (BNL), Upton, NY (United States); Krishna, Bhargavi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Z. [Univ. of California, Los Angeles, CA (United States); Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Xiao, H. [Univ. of California, Los Angeles, CA (United States)

    2017-09-01

    The Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM’s extensive suite of measurements. This modeling capability has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project. The initial focus of LASSO is on shallow convection at the ARM Southern Great Plains (SGP) Climate Research Facility. The availability of LES simulations with concurrent observations will serve many purposes. LES helps bridge the scale gap between DOE ARM observations and models, and the use of routine LES adds value to observations. It provides a self-consistent representation of the atmosphere and a dynamical context for the observations. Further, it elucidates unobservable processes and properties. LASSO will generate a simulation library for researchers that enables statistical approaches beyond a single-case mentality. It will also provide tools necessary for modelers to reproduce the LES and conduct their own sensitivity experiments. Many different uses are envisioned for the combined LASSO LES and observational library. For an observationalist, LASSO can help inform instrument remote sensing retrievals, conduct Observation System Simulation Experiments (OSSEs), and test implications of radar scan strategies or flight paths. For a theoretician, LASSO will help calculate estimates of fluxes and co-variability of values, and test relationships without having to run the model yourself. For a modeler, LASSO will help one know ahead of time which days have good forcing, have co-registered observations at high-resolution scales, and have simulation inputs and corresponding outputs to test parameterizations. Further details on the overall LASSO project are available at https://www.arm.gov/capabilities/modeling/lasso.

  10. OPTIMAL WAVELENGTH SELECTION ON HYPERSPECTRAL DATA WITH FUSED LASSO FOR BIOMASS ESTIMATION OF TROPICAL RAIN FOREST

    Directory of Open Access Journals (Sweden)

    T. Takayama

    2016-06-01

    Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by a small-sample-size problem, which widely manifests as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to limitations on the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and local or global shifts of peaks due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity provides the dimensionality reduction that addresses the small-sample-size problem, and the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, fusing spectral information with spatial information derived from a texture index increased the prediction accuracy to an RMSE of 62.62 t/ha. This analysis proves the efficiency of the fused lasso and image texture in biomass estimation of tropical forests.

  11. Description of the LASSO Alpha 1 Release

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vogelmann, Andrew M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, Xiaoping [Univ. of California, Los Angeles, CA (United States); Endo, Satoshi [Brookhaven National Lab. (BNL), Upton, NY (United States); Krishna, Bhargavi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Zhijin [Univ. of California, Los Angeles, CA (United States); Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Xiao, H [Univ. of California, Los Angeles, CA (United States)

    2017-07-31

    The Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM’s extensive suite of measurements. This modeling capability has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project. The availability of LES simulations with concurrent observations will serve many purposes. LES helps bridge the scale gap between DOE ARM observations and models, and the use of routine LES adds value to observations. It provides a self-consistent representation of the atmosphere and a dynamical context for the observations. Further, it elucidates unobservable processes and properties. LASSO will generate a simulation library for researchers that enables statistical approaches beyond a single-case mentality. It will also provide tools necessary for modelers to reproduce the LES and conduct their own sensitivity experiments. Many different uses are envisioned for the combined LASSO LES and observational library. For an observationalist, LASSO can help inform instrument remote-sensing retrievals, conduct Observation System Simulation Experiments (OSSEs), and test implications of radar scan strategies or flight paths. For a theoretician, LASSO will help calculate estimates of fluxes and co-variability of values, and test relationships without having to run the model yourself. For a modeler, LASSO will help one know ahead of time which days have good forcing, have co-registered observations at high-resolution scales, and have simulation inputs and corresponding outputs to test parameterizations. Further details on the overall LASSO project are available at http://www.arm.gov/science/themes/lasso.

  12. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    Science.gov (United States)

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives: In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods: Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods, including conventional LASSO, Bolasso, stepwise and stability selection models, was evaluated using intensive simulation. In addition, methods were compared by using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results: The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. In total, the two newly proposed procedures were stable with respect to various scenarios of simulation, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In the empirical analysis, the proposed procedures, yielding a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance and showed that the procedures were able to select a more stringent set of factors. The individual history of hepatitis B vaccination and family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions: The newly proposed procedures improve the identification of
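    The bootstrap-ranking idea, refitting an L1-penalized model on resamples and keeping variables by how often they receive nonzero coefficients, can be sketched as follows. This is in the spirit of Bolasso and the paper's bootstrap ranking procedure, not the exact algorithm; the data, resample count, and 80% threshold are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=15, n_informative=3,
                           n_redundant=0, random_state=0)
rng = np.random.default_rng(0)

B = 50
counts = np.zeros(15)
for _ in range(B):                                   # bootstrap resamples
    idx = rng.integers(0, len(y), len(y))
    m = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    m.fit(X[idx], y[idx])
    counts += (m.coef_.ravel() != 0)

stable = np.flatnonzero(counts / B >= 0.8)           # kept in >=80% of resamples
print(stable)
```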

  13. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensionality. For efficiency, they rely on tuning a parameter trading data fitting versus sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature, for instance Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties for the Concomitant Lasso formulation, we propose a modification we coin the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than the one for the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules to achieve speed efficiency, by eliminating early irrelevant features.
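    Coordinate descent, the core ingredient of the fast solvers mentioned above, can be shown in a bare-bones form for the standard Lasso objective (1/2n)·||y − Xw||² + α·||w||₁. Screening rules and the concomitant noise estimate are omitted; the data and α below are illustrative assumptions.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the coordinate-wise prox of the l1 norm."""
    return np.sign(z) * max(abs(z) - t, 0.0)

rng = np.random.default_rng(0)
n, p = 200, 40
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:3] = 2.0
y = X @ w_true + 0.1 * rng.standard_normal(n)

alpha = 0.1
w = np.zeros(p)
col_sq = (X ** 2).sum(axis=0) / n
r = y - X @ w                                  # running residual
for _ in range(100):                           # cyclic sweeps over coordinates
    for j in range(p):
        r += X[:, j] * w[j]                    # remove coordinate j's contribution
        rho = X[:, j] @ r / n
        w[j] = soft(rho, alpha) / col_sq[j]    # closed-form 1D minimisation
        r -= X[:, j] * w[j]

print(int(np.sum(np.abs(w) > 1e-6)))
```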

  14. Comparison of partial least squares and lasso regression techniques as applied to laser-induced breakdown spectroscopy of geological samples

    International Nuclear Information System (INIS)

    Dyar, M.D.; Carmosino, M.L.; Breves, E.A.; Ozanne, M.V.; Clegg, S.M.; Wiens, R.C.

    2012-01-01

    A remote laser-induced breakdown spectrometer (LIBS) designed to simulate the ChemCam instrument on the Mars Science Laboratory Rover Curiosity was used to probe 100 geologic samples at a 9-m standoff distance. ChemCam consists of an integrated remote LIBS instrument that will probe samples up to 7 m from the mast of the rover and a remote micro-imager (RMI) that will record context images. The elemental compositions of 100 igneous and highly-metamorphosed rocks are determined with LIBS using three variations of multivariate analysis, with a goal of improving the analytical accuracy. Two forms of partial least squares (PLS) regression are employed with finely-tuned parameters: PLS-1 regresses a single response variable (elemental concentration) against the observation variables (spectra, or intensity at each of 6144 spectrometer channels), while PLS-2 simultaneously regresses multiple response variables (concentrations of the ten major elements in rocks) against the observation predictor variables, taking advantage of natural correlations between elements. Those results are contrasted with those from the multivariate regression technique of the least absolute shrinkage and selection operator (lasso), which is a penalized shrunken regression method that selects the specific channels for each element that explain the most variance in the concentration of that element. To make this comparison, we use results of cross-validation and of held-out testing, and employ unscaled and uncentered spectral intensity data because all of the input variables are already in the same units. Results demonstrate that the lasso, PLS-1, and PLS-2 all yield comparable results in terms of accuracy for this dataset. However, the interpretability of these methods differs greatly in terms of fundamental understanding of LIBS emissions. 
PLS techniques generate principal components, linear combinations of intensities at any number of spectrometer channels, which explain as much variance in the

  15. LASSO NTCP predictors for the incidence of xerostomia in patients with head and neck squamous cell carcinoma and nasopharyngeal carcinoma

    Science.gov (United States)

    Lee, Tsair-Fwu; Liou, Ming-Hsiang; Huang, Yu-Jie; Chao, Pei-Ju; Ting, Hui-Min; Lee, Hsiao-Yi

    2014-01-01

    To predict the incidence of moderate-to-severe patient-reported xerostomia among head and neck squamous cell carcinoma (HNSCC) and nasopharyngeal carcinoma (NPC) patients treated with intensity-modulated radiotherapy (IMRT). Multivariable normal tissue complication probability (NTCP) models were developed using quality-of-life questionnaire datasets from 152 patients with HNSCC and 84 patients with NPC. The primary endpoint was defined as moderate-to-severe xerostomia after IMRT. The number of predictive factors for a multivariable logistic regression model was determined using the least absolute shrinkage and selection operator (LASSO) with a bootstrapping technique. Four predictive models were achieved by LASSO with the smallest number of factors while preserving predictive value with higher AUC performance. For all models, the dosimetric factors for the mean dose given to the contralateral and ipsilateral parotid glands were selected as the most significant predictors. These were followed by different clinical and socio-economic factors, namely age, financial status, T stage, and education, selected for the different models. The predicted incidence of xerostomia for HNSCC and NPC patients can be improved by using multivariable logistic regression models with the LASSO technique. The predictive model developed in HNSCC cannot be generalized to the NPC cohort treated with IMRT without validation, and vice versa. PMID:25163814

  16. IPF-LASSO: Integrative L1-Penalized Regression with Penalty Factors for Prediction Based on Multi-Omics Data

    Directory of Open Access Journals (Sweden)

    Anne-Laure Boulesteix

    2017-01-01

    As modern biotechnologies advance, it has become increasingly frequent that different modalities of high-dimensional molecular data (termed “omics” data in this paper), such as gene expression, methylation, and copy number, are collected from the same patient cohort to predict the clinical outcome. While prediction based on omics data has been widely studied in the last fifteen years, little has been done in the statistical literature on the integration of multiple omics modalities to select a subset of variables for prediction, which is a critical task in personalized medicine. In this paper, we propose a simple penalized regression method to address this problem by assigning different penalty factors to different data modalities for feature selection and prediction. The penalty factors can be chosen in a fully data-driven fashion by cross-validation or by taking practical considerations into account. In simulation studies, we compare the prediction performance of our approach, called IPF-LASSO (Integrative LASSO with Penalty Factors) and implemented in the R package ipflasso, with the standard LASSO and the sparse group LASSO. The use of IPF-LASSO is also illustrated through applications to two real-life cancer datasets. All data and codes are available on the companion website to ensure reproducibility.
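    The penalty-factor idea can be sketched without the R package: a lasso with per-feature penalty factors f_j is equivalent to a standard lasso on columns divided by f_j, with coefficients mapped back by 1/f_j. The sketch below uses that rescaling trick on two synthetic "modalities"; the sizes, factors, and α are illustrative assumptions, not the ipflasso implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 120
X_expr = rng.standard_normal((n, 30))    # modality 1, e.g. gene expression
X_meth = rng.standard_normal((n, 20))    # modality 2, e.g. methylation
beta = np.concatenate([np.r_[2.0, 2.0, np.zeros(28)], np.zeros(20)])
y = np.hstack([X_expr, X_meth]) @ beta + 0.1 * rng.standard_normal(n)

factors = np.r_[np.ones(30), 2.0 * np.ones(20)]   # penalise modality 2 twice as hard
X = np.hstack([X_expr, X_meth]) / factors          # rescale column j by 1/f_j

fit = Lasso(alpha=0.1).fit(X, y)
coef = fit.coef_ / factors                         # map back to the original scale
print(int(np.sum(coef[:30] != 0)), int(np.sum(coef[30:] != 0)))
```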

  17. Controlling the local false discovery rate in the adaptive Lasso

    KAUST Repository

    Sampson, J. N.

    2013-04-09

    The Lasso shrinkage procedure achieved its popularity, in part, by its tendency to shrink estimated coefficients to zero, and its ability to serve as a variable selection procedure. Using data-adaptive weights, the adaptive Lasso modified the original procedure to increase the penalty terms for those variables estimated to be less important by ordinary least squares. Although this modified procedure attained the oracle properties, the resulting models tend to include a large number of "false positives" in practice. Here, we adapt the concept of local false discovery rates (lFDRs) so that it applies to the sequence, λn, of smoothing parameters for the adaptive Lasso. We define the lFDR for a given λn to be the probability that the variable added to the model by decreasing λn to λn - δ is not associated with the outcome, where δ is a small value. We derive the relationship between the lFDR and λn, show lFDR = 1 for traditional smoothing parameters, and show how to select λn so as to achieve a desired lFDR. We compare the smoothing parameters chosen to achieve a specified lFDR and those chosen to achieve the oracle properties, as well as their resulting estimates for model coefficients, with both simulation and an example from a genetic study of prostate specific antigen.
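    The adaptive Lasso itself, penalising each coefficient by a data-adaptive weight 1/|β̂_OLS,j|, can be sketched via the usual column-rescaling trick. The lFDR-based choice of λn from this paper is not shown; the data and α below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 150, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:2] = 3.0
y = X @ beta + 0.1 * rng.standard_normal(n)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
wts = 1.0 / np.abs(beta_ols)             # adaptive weights: big penalty where OLS is small
fit = Lasso(alpha=0.1).fit(X / wts, y)   # standard lasso on reweighted columns
coef = fit.coef_ / wts                   # map coefficients back to the original scale

print(int(np.sum(np.abs(coef) > 1e-8)))  # near-OLS noise variables get penalised away
```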

  19. Association between biomarkers and clinical characteristics in chronic subdural hematoma patients assessed with lasso regression.

    Directory of Open Access Journals (Sweden)

    Are Hugo Pripp

    Full Text Available Chronic subdural hematoma (CSDH) is characterized by an "old" encapsulated collection of blood and blood breakdown products between the brain and its outermost covering (the dura). Recognized risk factors for development of CSDH are head injury, old age and use of anticoagulation medication, but its underlying pathophysiological processes are still unclear. It is assumed that a complex local process of interrelated mechanisms including inflammation, neomembrane formation, angiogenesis and fibrinolysis could be related to its development and propagation. However, the association between the biomarkers of inflammation and angiogenesis, and the clinical and radiological characteristics of CSDH patients, needs further investigation. The high number of biomarkers compared to the number of observations, the correlation between biomarkers, missing data and skewed distributions may limit the usefulness of classical statistical methods. We therefore explored lasso regression to assess the association between 30 biomarkers of inflammation and angiogenesis at the site of lesions, and selected clinical and radiological characteristics in a cohort of 93 patients. Lasso regression performs both variable selection and regularization to improve the predictive accuracy and interpretability of the statistical model. The lasso regression analysis showed a lack of robust statistical association between the biomarkers in hematoma fluid and age, gender, brain infarct, neurological deficiencies and volume of hematoma. However, there were associations between several of the biomarkers and postoperative recurrence requiring reoperation. The statistical analysis with lasso regression supported previous findings that the immunological characteristics of CSDH are local. The relationship between the biomarkers, the radiological appearance of lesions and recurrence requiring reoperation had been inconclusive using classical statistical methods on these data.
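The kind of lasso-based variable selection described above can be sketched with scikit-learn on synthetic data; the "biomarker" setup below (93 observations, 30 predictors, three true effects) is a hypothetical stand-in, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 93, 30                          # observations and biomarkers, mirroring the cohort size
X = rng.normal(size=(n, p))

# only three "biomarkers" truly drive the outcome
beta = np.zeros(p)
beta[[2, 7, 15]] = [1.5, -2.0, 1.0]
y = X @ beta + rng.normal(scale=0.5, size=n)

# LassoCV picks the regularization strength by cross-validation,
# then shrinks irrelevant coefficients exactly to zero
model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(sorted(selected.tolist()))
```

The zero/nonzero pattern of `model.coef_` is the variable selection; the surviving coefficients are simultaneously shrunk, which is the regularization the abstract refers to.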

  20. Sparse inverse covariance estimation with the graphical lasso.

    Science.gov (United States)

    Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert

    2008-07-01

    We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
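A minimal illustration of the graphical lasso idea, using scikit-learn's `GraphicalLasso` on data simulated from a known sparse precision matrix (a 4-node chain graph, not the paper's cell-signaling data):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)

# true 4-node chain graph: the precision (inverse covariance) matrix is tridiagonal
prec = np.diag(np.full(4, 2.0))
for i in range(3):
    prec[i, i + 1] = prec[i + 1, i] = 0.9
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(4), cov, size=2000)

# the lasso penalty on the inverse covariance zeroes out absent edges
gl = GraphicalLasso(alpha=0.05).fit(X)
est = gl.precision_
print(np.round(est, 2))
```

Edges of the estimated graph are the nonzero off-diagonal entries of `est`; entries absent from the chain (e.g. between nodes 0 and 3) are shrunk to or near zero.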

  1. Logistic LASSO regression for the diagnosis of breast cancer using clinical demographic data and the BI-RADS lexicon for ultrasonography.

    Science.gov (United States)

    Kim, Sun Mi; Kim, Yongdai; Jeong, Kuhwan; Jeong, Heeyeong; Kim, Jiyoung

    2018-01-01

    The aim of this study was to compare the performance of image analysis for predicting breast cancer using two distinct regression models and to evaluate the usefulness of incorporating clinical and demographic data (CDD) into the image analysis in order to improve the diagnosis of breast cancer. This study included 139 solid masses from 139 patients who underwent an ultrasonography-guided core biopsy and had available CDD between June 2009 and April 2010. Three breast radiologists retrospectively reviewed 139 breast masses and described each lesion using the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We applied and compared two regression methods-stepwise logistic (SL) regression and logistic least absolute shrinkage and selection operator (LASSO) regression-in which the BI-RADS descriptors and CDD were used as covariates. We investigated the performances of these regression methods and the agreement of radiologists in terms of test misclassification error and the area under the curve (AUC) of the tests. Logistic LASSO regression was superior (P<0.05) to SL regression, regardless of whether CDD was included in the covariates, in terms of test misclassification errors (0.234 vs. 0.253, without CDD; 0.196 vs. 0.258, with CDD) and AUC (0.785 vs. 0.759, without CDD; 0.873 vs. 0.735, with CDD). However, it was inferior (P<0.05) to the agreement of three radiologists in terms of test misclassification errors (0.234 vs. 0.168, without CDD; 0.196 vs. 0.088, with CDD) and the AUC without CDD (0.785 vs. 0.844, P<0.001), but was comparable to the AUC with CDD (0.873 vs. 0.880, P=0.141). Logistic LASSO regression based on BI-RADS descriptors and CDD showed better performance than SL in predicting the presence of breast cancer. The use of CDD as a supplement to the BI-RADS descriptors significantly improved the prediction of breast cancer using logistic LASSO regression.

  2. Lasso and probabilistic inequalities for multivariate point processes

    OpenAIRE

    Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent

    2012-01-01

    Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select coefficients, we propose an adaptive $\\ell_{1}$-penalization methodology, where data-driven weights of the penalty are derived from new Bernstein type inequalities for martingales. Oracle inequalities...

  3. COMPARISON OF LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR AND PARTIAL LEAST SQUARES ANALYSES (Case Study: Microarray Data)

    Directory of Open Access Journals (Sweden)

    KADEK DWI FARMANI

    2012-09-01

    Full Text Available Linear regression analysis is one of the parametric statistical methods that utilize the relationship between two or more quantitative variables. In linear regression analysis, several assumptions must be met: the errors are normally distributed, there is no correlation between the errors, and the error variance is constant (homogeneous). Some constraints can prevent these assumptions from being met, for example, correlation between independent variables (multicollinearity) and limits on the number of observations that can be obtained relative to the number of independent variables. When the number of samples obtained is less than the number of independent variables, the data are called microarray data. Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to handle microarray data, overfitting, and multicollinearity. This study therefore compares the LASSO and PLS methods. It uses data from coronary heart disease and stroke patients, which are microarray data containing multicollinearity. For data with these two characteristics, where most independent variables are only weakly correlated with each other, the LASSO method produces a better model than PLS, as judged by RMSEP.

  4. Discovery and replication of gene influences on brain structure using LASSO regression

    Directory of Open Access Journals (Sweden)

    Omid eKohannim

    2012-08-01

    Full Text Available We implemented LASSO (least absolute shrinkage and selection operator) regression to evaluate gene effects in genome-wide association studies (GWAS) of brain images, using an MRI-derived temporal lobe volume measure from 729 subjects scanned as part of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Sparse groups of SNPs in individual genes were selected by LASSO, which identifies efficient sets of variants influencing the data. These SNPs were considered jointly when assessing their association with neuroimaging measures. We discovered 22 genes that passed genome-wide significance for influencing temporal lobe volume. This was a substantially greater number of significant genes compared to those found with standard, univariate GWAS. These top genes are all expressed in the brain and include genes previously related to brain function or neuropsychiatric disorders such as MACROD2, SORCS2, GRIN2B, MAGI2, NPAS3, CLSTN2, GABRG3, NRXN3, PRKAG2, GAS7, RBFOX1, ADARB2, CHD4 and CDH13. The top genes we identified with this method also displayed significant and widespread post-hoc effects on voxelwise, tensor-based morphometry (TBM) maps of the temporal lobes. The most significantly associated gene was an autism susceptibility gene known as MACROD2. We were able to successfully replicate the effect of the MACROD2 gene in an independent cohort of 564 young, healthy Australian adult twins and siblings scanned with MRI (mean age: 23.8±2.2 SD years). In exploratory analyses, three selected SNPs in the MACROD2 gene were also significantly associated with performance intelligence quotient (PIQ). Our approach powerfully complements univariate techniques in detecting influences of genes on the living brain.

  5. Validating the LASSO algorithm by unmixing spectral signatures in multicolor phantoms

    Science.gov (United States)

    Samarov, Daniel V.; Clarke, Matthew; Lee, Ji Yoon; Allen, David; Litorja, Maritoni; Hwang, Jeeseong

    2012-03-01

    As hyperspectral imaging (HSI) sees increased implementation in the biological and medical fields, it becomes increasingly important that the algorithms being used to analyze the corresponding output be validated. While certainly important under any circumstance, as this technology begins to see a transition from benchtop to bedside, ensuring that the measurements being given to medical professionals are accurate and reproducible is critical. In order to address these issues, work has been done in generating a collection of datasets which could act as a test bed for algorithm validation. Using a microarray spot printer, a collection of three food color dyes, acid red 1 (AR), brilliant blue R (BBR) and erioglaucine (EG), are mixed together at different concentrations in varying proportions at different locations on a microarray chip. With the concentration and mixture proportions known at each location, an algorithm should in principle, based on abundance estimates obtained using HSI, be able to determine the concentrations and proportions of each dye at each location on the chip. These types of data are particularly important in the context of medical measurements, as the resulting estimated abundances will be used to make critical decisions which can have a serious impact on an individual's health. In this paper we present a novel algorithm for processing and analyzing HSI data based on the LASSO algorithm (similar to "basis pursuit"). The LASSO is a statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundances in an HSI scene, these so-called "sparse" representations provided by the LASSO are appropriate, as not every pixel will be expected to contain every endmember. The algorithm we present takes the general framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. We show our algorithm's improvement
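The core unmixing step, sparse non-negative regression of a pixel spectrum onto an endmember dictionary, can be sketched as below. The Gaussian-bump "spectra" and the abundance values are hypothetical stand-ins for the measured dye signatures, and this sketch omits the paper's spatial extension:

```python
import numpy as np
from sklearn.linear_model import Lasso

x = np.arange(200)                       # spectral bands
# hypothetical endmember "spectra" for three dyes: well-separated Gaussian bands
E = np.stack([np.exp(-0.5 * ((x - c) / 10.0) ** 2) for c in (50, 100, 150)], axis=1)

true_abund = np.array([0.7, 0.0, 0.3])   # the pixel holds only dyes 1 and 3
rng = np.random.default_rng(3)
pixel = E @ true_abund + rng.normal(scale=0.01, size=200)

# non-negative lasso: abundances are sparse because not every pixel
# contains every endmember, and they cannot be negative
fit = Lasso(alpha=1e-3, positive=True, fit_intercept=False).fit(E, pixel)
print(np.round(fit.coef_, 2))
```

The estimated coefficients recover the mixing proportions, with the absent dye driven to (near) zero by the L1 penalty.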

  6. On the Oracle Property of the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    We show that the Adaptive LASSO is oracle efficient in stationary and non-stationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...
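The adaptive LASSO referred to here is a two-step construction: a pilot estimate supplies data-driven weights, and the weighted ℓ1 penalty is implemented by rescaling the predictors before an ordinary lasso fit. A sketch on a synthetic (cross-sectional, not autoregressive) example, with a ridge pilot as one common choice:

```python
import numpy as np
from sklearn.linear_model import LassoCV, Ridge

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]              # only the first three variables are relevant
y = X @ beta + rng.normal(size=n)

# step 1: pilot estimate; ridge is a common choice when OLS is ill-conditioned
pilot = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / np.abs(pilot)                  # adaptive weights (gamma = 1)

# step 2: an ordinary lasso on rescaled predictors X_j / w_j is equivalent to
# minimizing ||y - Xb||^2 + lambda * sum_j w_j |b_j|
fit = LassoCV(cv=5).fit(X / w, y)
beta_hat = fit.coef_ / w                 # undo the rescaling
print(np.flatnonzero(beta_hat))
```

Large pilot coefficients get small weights (little shrinkage), while near-zero pilot coefficients are penalized heavily, which is what drives the consistent sparsity-pattern selection the abstract describes.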

  7. Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor

    2012-03-01

    We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly equal to that induced by the connected components of the estimated concentration graph, obtained by solving the graphical lasso problem for the same λ. This characterizes a very interesting property of a path of graphical lasso solutions. Furthermore, this simple rule, when used as a wrapper around existing algorithms for the graphical lasso, leads to enormous performance gains. For a range of values of λ, our proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem. We illustrate the graceful scalability of our proposal via synthetic and real-life microarray examples.

  8. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable as compared to single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the amount of additional sample material required, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers is re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.

  9. Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure.

    Science.gov (United States)

    Li, Yanming; Nan, Bin; Zhu, Ji

    2015-06-01

    We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functional groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. © 2015, The International Biometric Society.

  10. Detection of radionuclides from weak and poorly resolved spectra using Lasso and subsampling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Er-Wei, E-mail: er-wei-bai@uiowa.edu [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 (United States); Chan, Kung-sik, E-mail: kung-sik-chan@uiowa.edu [Department of Statistical and Actuarial Science, University of Iowa, Iowa City, IA 52242 (United States); Eichinger, William, E-mail: william-eichinger@uiowa.edu [Department of Civil and Environmental Engineering, University of Iowa, Iowa City, IA 52242 (United States); Kump, Paul [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 (United States)

    2011-10-15

    We consider a problem of identification of nuclides from weak and poorly resolved spectra. A two-stage algorithm is proposed and tested based on the principle of majority voting. The idea is to model gamma-ray counts as Poisson processes. The average part is taken to be the model, and the difference between the observed gamma-ray counts and the average is considered as random noise. In the linear part, the unknown coefficients indicate whether isotopes of interest are present or absent. Lasso-type algorithms are applied to find non-vanishing coefficients. Since Lasso, or any prediction-error-based algorithm, does not select variables consistently for finite data lengths, an estimate of the parameter distribution based on subsampling techniques is combined with Lasso. Simulation examples are provided in which traditional peak detection algorithms fail to work and the proposed two-stage algorithm performs well in terms of both false negative and false positive errors. - Highlights: > Identification of nuclides from weak and poorly resolved spectra. > An algorithm is proposed and tested based on the principle of majority voting. > Lasso-type algorithms are applied to find non-vanishing coefficients. > An estimate of the parameter distribution based on subsampling techniques is included. > Simulations compare the results of the proposed method with those of peak detection.

  11. Detection of radionuclides from weak and poorly resolved spectra using Lasso and subsampling techniques

    International Nuclear Information System (INIS)

    Bai, Er-Wei; Chan, Kung-sik; Eichinger, William; Kump, Paul

    2011-01-01

    We consider a problem of identification of nuclides from weak and poorly resolved spectra. A two-stage algorithm is proposed and tested based on the principle of majority voting. The idea is to model gamma-ray counts as Poisson processes. The average part is taken to be the model, and the difference between the observed gamma-ray counts and the average is considered as random noise. In the linear part, the unknown coefficients indicate whether isotopes of interest are present or absent. Lasso-type algorithms are applied to find non-vanishing coefficients. Since Lasso, or any prediction-error-based algorithm, does not select variables consistently for finite data lengths, an estimate of the parameter distribution based on subsampling techniques is combined with Lasso. Simulation examples are provided in which traditional peak detection algorithms fail to work and the proposed two-stage algorithm performs well in terms of both false negative and false positive errors. - Highlights: → Identification of nuclides from weak and poorly resolved spectra. → An algorithm is proposed and tested based on the principle of majority voting. → Lasso-type algorithms are applied to find non-vanishing coefficients. → An estimate of the parameter distribution based on subsampling techniques is included. → Simulations compare the results of the proposed method with those of peak detection.

  12. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    Summary We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO-penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050

  13. The Los Alamos Space Science Outreach (LASSO) Program

    Science.gov (United States)

    Barker, P. L.; Skoug, R. M.; Alexander, R. J.; Thomsen, M. F.; Gary, S. P.

    2002-12-01

    The Los Alamos Space Science Outreach (LASSO) program features summer workshops in which K-14 teachers spend several weeks at LANL learning space science from Los Alamos scientists and developing methods and materials for teaching this science to their students. The program is designed to provide hands-on space science training to teachers as well as assistance in developing lesson plans for use in their classrooms. The program supports an instructional model based on education research and cognitive theory. Students and teachers engage in activities that encourage critical thinking and a constructivist approach to learning. LASSO is run through the Los Alamos Science Education Team (SET). SET personnel have many years of experience in teaching, education research, and science education programs. Their involvement ensures that the teacher workshop program is grounded in sound pedagogical methods and meets current educational standards. Lesson plans focus on current LANL satellite projects to study the solar wind and the Earth's magnetosphere. LASSO is an umbrella program for space science education activities at Los Alamos National Laboratory (LANL) that was created to enhance the science and math interests and skills of students from New Mexico and the nation. The LASSO umbrella allows maximum leveraging of EPO funding from a number of projects (and thus maximum educational benefits to both students and teachers), while providing a format for the expression of the unique science perspective of each project.

  14. The joint graphical lasso for inverse covariance estimation across multiple classes.

    Science.gov (United States)

    Danaher, Patrick; Wang, Pei; Witten, Daniela M

    2014-03-01

    We consider the problem of estimating multiple related Gaussian graphical models from a high-dimensional data set with observations belonging to distinct classes. We propose the joint graphical lasso, which borrows strength across the classes in order to estimate multiple graphical models that share certain characteristics, such as the locations or weights of nonzero edges. Our approach is based upon maximizing a penalized log likelihood. We employ generalized fused lasso or group lasso penalties, and implement a fast ADMM algorithm to solve the corresponding convex optimization problems. The performance of the proposed method is illustrated through simulated and real data examples.

  15. Tracing the breeding farm of domesticated pig using feature selection (

    Directory of Open Access Journals (Sweden)

    Taehyung Kwon

    2017-11-01

    Full Text Available Objective Increasing food safety demands in the animal product market have created a need for a system to trace the food distribution process, from the manufacturer to the retailer, and genetic traceability is an effective method to trace the origin of animal products. In this study, we successfully achieved farm tracing of 6,018 multi-breed pigs, using single nucleotide polymorphism (SNP) markers strictly selected through least absolute shrinkage and selection operator (LASSO) feature selection. Methods We performed farm tracing of domesticated pigs (Sus scrofa) from SNP markers and selected the most relevant features for accurate prediction. Considering the multi-breed composition of our data, we performed feature selection using LASSO penalization on 4,002 SNPs that are shared between breeds, which also includes 179 SNPs with small between-breed differences. The 100 highest-scored features were extracted from iterative simulations and then evaluated using machine-learning-based classifiers. Results We selected 1,341 SNPs from over 45,000 SNPs through iterative LASSO feature selection, to minimize between-breed differences. We subsequently selected the 100 highest-scored SNPs from iterative scoring, and observed high statistical measures in classification of breeding farms by cross-validation using only these SNPs. Conclusion The study represents a successful application of LASSO feature selection to multi-breed pig SNP data to trace farm information, which provides a valuable method and possibilities for further research on genetic traceability.

  16. Implementations of geographically weighted lasso in spatial data with multicollinearity (Case study: Poverty modeling of Java Island)

    Science.gov (United States)

    Setiyorini, Anis; Suprijadi, Jadi; Handoko, Budhi

    2017-03-01

    Geographically Weighted Regression (GWR) is a regression model that takes into account the spatial heterogeneity effect. In applications of GWR, inference on regression coefficients is often of interest, as is estimation and prediction of the response variable. Empirical research and studies have demonstrated that local correlation between explanatory variables can lead to estimated regression coefficients in GWR that are strongly correlated, a condition named multicollinearity. This results in large standard errors for the estimated regression coefficients and is hence problematic for inference on relationships between variables. Geographically Weighted Lasso (GWL) is a method capable of dealing with spatial heterogeneity and local multicollinearity in spatial data sets. GWL is a further development of the GWR method, which adds a LASSO (Least Absolute Shrinkage and Selection Operator) constraint in parameter estimation. In this study, GWL is applied using a fixed exponential kernel weight matrix to establish a poverty model of Java Island, Indonesia. The results of applying GWL to the poverty datasets show that this method stabilizes regression coefficients in the presence of multicollinearity and produces lower prediction and estimation error of the response variable than GWR does.

  17. Sparse EEG/MEG source estimation via a group lasso.

    Directory of Open Access Journals (Sweden)

    Michael Lim

    Full Text Available Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally defined regions of interest for the definition of physiologically meaningful groups within a functionally based common space. Detailed simulations using realistic source geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches.

  18. Channel selection for simultaneous and proportional myoelectric prosthesis control of multiple degrees-of-freedom

    Science.gov (United States)

    Hwang, Han-Jeong; Hahne, Janne Mathias; Müller, Klaus-Robert

    2014-10-01

    Objective. Recent studies have shown the possibility of simultaneous and proportional control of electrically powered upper-limb prostheses, but there has been little investigation on optimal channel selection. The objective of this study is to find a robust channel selection method and the channel subsets most suitable for simultaneous and proportional myoelectric prosthesis control of multiple degrees-of-freedom (DoFs). Approach. Ten able-bodied subjects and one person with congenital upper-limb deficiency took part in this study, and performed wrist movements with various combinations of two DoFs (flexion/extension and radial/ulnar deviation). During the experiment, high density electromyographic (EMG) signals and the actual wrist angles were recorded with an 8 × 24 electrode array and a motion tracking system, respectively. The wrist angles were estimated from EMG features with ridge regression using the subsets of channels chosen by three different channel selection methods: (1) least absolute shrinkage and selection operator (LASSO), (2) sequential feature selection (SFS), and (3) uniform selection (UNI). Main results. SFS generally showed higher estimation accuracy than LASSO and UNI, but LASSO always outperformed SFS in terms of robustness, such as noise addition, channel shift and training data reduction. It was also confirmed that about 95% of the original performance obtained using all channels can be retained with only 12 bipolar channels individually selected by LASSO and SFS. Significance. From the analysis results, it can be concluded that LASSO is a promising channel selection method for accurate simultaneous and proportional prosthesis control. We expect that our results will provide a useful guideline to select optimal channel subsets when developing clinical myoelectric prosthesis control systems based on continuous movements with multiple DoFs.
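The two-stage scheme described here, LASSO for channel selection followed by ridge regression on the selected channels, can be sketched on synthetic data; the "EMG feature" channels, wrist angles, and effect sizes below are simulated stand-ins, not the study's recordings:

```python
import numpy as np
from sklearn.linear_model import LassoCV, Ridge

rng = np.random.default_rng(6)
n, n_ch = 300, 48                        # samples and hypothetical EMG feature channels
X = rng.normal(size=(n, n_ch))
w = np.zeros(n_ch)
w[[1, 5, 9, 20]] = [1.0, -0.8, 0.6, 1.2] # only four channels carry information
angle = X @ w + rng.normal(scale=0.2, size=n)   # stand-in for the measured wrist angle

# stage 1: the lasso's sparsity pattern picks a channel subset
sel = np.flatnonzero(LassoCV(cv=5).fit(X, angle).coef_)

# stage 2: ridge regression restricted to the selected channels, as in the study
ridge = Ridge(alpha=1.0).fit(X[:, sel], angle)
print(sel, ridge.score(X[:, sel], angle))
```

Refitting with ridge on the reduced channel set removes the lasso's shrinkage bias from the retained coefficients while keeping the small, robust subset.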

  19. LES ARM Symbiotic Simulation and Observation (LASSO) Implementation Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson Jr., WI [Pacific Northwest National Laboratory; Vogelmann, AM [Brookhaven National Laboratory

    2015-09-01

    This document illustrates the design of the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) workflow to provide a routine, high-resolution modeling capability to augment the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s high-density observations. LASSO will create a powerful new capability for furthering ARM’s mission to advance understanding of cloud, radiation, aerosol, and land-surface processes. The combined observational and modeling elements will enable a new level of scientific inquiry by connecting processes and context to observations and providing needed statistics for details that cannot be measured. The result will be improved process understanding that facilitates concomitant improvements in climate model parameterizations. The initial LASSO implementation will be for ARM’s Southern Great Plains site in Oklahoma and will focus on shallow convection, which is poorly simulated by climate models due in part to clouds’ typically small spatial scale compared to model grid spacing, and because the convection involves complicated interactions of microphysical and boundary layer processes.

  20. The Bayesian group lasso for confounded spatial data

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.

    2017-01-01

    Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.

  1. Logistic LASSO regression for the diagnosis of breast cancer using clinical demographic data and the BI-RADS lexicon for ultrasonography

    Directory of Open Access Journals (Sweden)

    Sun Mi Kim

    2018-01-01

    Full Text Available Purpose The aim of this study was to compare the performance of image analysis for predicting breast cancer using two distinct regression models and to evaluate the usefulness of incorporating clinical and demographic data (CDD) into the image analysis in order to improve the diagnosis of breast cancer. Methods This study included 139 solid masses from 139 patients who underwent an ultrasonography-guided core biopsy and had available CDD between June 2009 and April 2010. Three breast radiologists retrospectively reviewed 139 breast masses and described each lesion using the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We applied and compared two regression methods-stepwise logistic (SL) regression and logistic least absolute shrinkage and selection operator (LASSO) regression-in which the BI-RADS descriptors and CDD were used as covariates. We investigated the performances of these regression methods and the agreement of radiologists in terms of test misclassification error and the area under the curve (AUC) of the tests. Results Logistic LASSO regression was superior (P<0.05) to SL regression, regardless of whether CDD was included in the covariates, in terms of test misclassification errors (0.234 vs. 0.253, without CDD; 0.196 vs. 0.258, with CDD) and AUC (0.785 vs. 0.759, without CDD; 0.873 vs. 0.735, with CDD). However, it was inferior (P<0.05) to the agreement of three radiologists in terms of test misclassification errors (0.234 vs. 0.168, without CDD; 0.196 vs. 0.088, with CDD) and the AUC without CDD (0.785 vs. 0.844, P<0.001), but was comparable to the AUC with CDD (0.873 vs. 0.880, P=0.141). Conclusion Logistic LASSO regression based on BI-RADS descriptors and CDD showed better performance than SL in predicting the presence of breast cancer. The use of CDD as a supplement to the BI-RADS descriptors significantly improved the prediction of breast cancer using logistic LASSO regression.
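Logistic LASSO regression is ordinary logistic regression with an L1 penalty, which both shrinks and selects covariates. A minimal sketch with scikit-learn on synthetic covariates; the number of predictors, effect sizes, and penalty level below are hypothetical, not the study's BI-RADS/CDD setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 139, 20                 # mirrors the 139 masses; 20 stand-in descriptors
X = rng.normal(size=(n, p))
logit = X[:, 0] * 1.5 - X[:, 3] * 2.0          # only two descriptors matter
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# penalty="l1" turns logistic regression into logistic LASSO;
# C is the inverse regularization strength
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
kept = np.flatnonzero(clf.coef_.ravel())
print(kept)
```

Descriptors whose coefficients are driven exactly to zero drop out of the model, which is how the LASSO variant avoids the instability of stepwise selection.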

  2. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    Science.gov (United States)

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
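The FIM-based sensitivity ranking described in this record can be illustrated on a toy parametric model. The model, the finite-difference sensitivities, and the diagonal-of-FIM scoring rule below are illustrative assumptions, not the authors' exact procedure:

```python
# Sketch of FIM-based parameter ranking: rank parameters by how strongly the
# simulated output responds to each one. Toy model; unit noise variance assumed.
import numpy as np

def simulate(theta, t):
    # Hypothetical response: theta = (gain, decay, unused); the third
    # parameter deliberately has no influence on the output.
    gain, decay, _unused = theta
    return gain * np.exp(-decay * t)

t = np.linspace(0.0, 5.0, 100)
theta0 = np.array([2.0, 0.8, 1.0])

# Finite-difference sensitivity matrix S[i, j] = d y(t_i) / d theta_j.
eps = 1e-6
y0 = simulate(theta0, t)
S = np.column_stack([
    (simulate(theta0 + eps * np.eye(3)[j], t) - y0) / eps for j in range(3)
])

fim = S.T @ S                       # Fisher information (unit noise variance)
scores = np.diag(fim)               # per-parameter sensitivity scores
ranking = np.argsort(scores)[::-1]  # most identifiable parameters first
print(ranking, scores)
```

A parameter with near-zero score (like the unused one here) is a candidate for fixing at its preliminary estimate rather than estimating, which is the dimension-reduction step the record describes.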

  3. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

Full Text Available Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of the variables which contribute to group classification is a crucial step, especially in metabolomics studies focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (the RH and PH studies). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA), without and with multiple testing correction, and the least absolute shrinkage and selection operator (LASSO) were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on the VIP criterion using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on the VIP criterion using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant under Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on the VIP criterion under Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of results, were demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables, respectively, with reproducibility between 99.3% and 100%. However, despite the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for the PLS-DA loadings.
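The LASSO-with-bootstrap reproducibility idea in this record can be sketched as follows: refit the LASSO on bootstrap resamples and report, for each variable, the fraction of resamples in which it is selected. The synthetic data and penalty below are illustrative:

```python
# Sketch of bootstrap reproducibility for LASSO variable selection.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 120, 30
X = rng.normal(size=(n, p))
# Only the first two variables carry signal in this toy dataset.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

n_boot, alpha = 100, 0.25
selected = np.zeros(p)
for _ in range(n_boot):
    idx = rng.integers(0, n, size=n)           # bootstrap resample
    fit = Lasso(alpha=alpha).fit(X[idx], y[idx])
    selected += fit.coef_ != 0

reproducibility = selected / n_boot            # selection frequency per variable
print(np.round(reproducibility[:5], 2))
```

Variables with selection frequency near 100% across resamples are the reproducible candidates, mirroring the 99.3-100% reproducibility the study reports for its LASSO-selected metabolites.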

  4. LASSO observations at McDonald and OCA/CERGA: A preliminary analysis

    Science.gov (United States)

    Veillet, CH.; Fridelance, P.; Feraudy, D.; Boudon, Y.; Shelus, P. J.; Ricklefs, R. L.; Wiant, J. R.

    1993-01-01

The Laser Synchronization from Synchronous Orbit (LASSO) observations between the USA and Europe were made possible by the move of Meteosat 3/P2 toward 50 deg W. Two Lunar Laser Ranging stations participated in the observations: the MLRS at McDonald Observatory (Texas, USA) and OCA/CERGA (Grasse, France). Common sessions have been performed since 30 April 1992 and will continue until the next move of Meteosat 3/P2 further west (planned for January 1993). The preliminary analysis of the data collected by the end of November 1992 shows that the precision obtainable from LASSO is better than 100 ps; the accuracy depends on how well the stations maintain their time metrology, as well as on the quality of the calibration (still to be made). To extract such precision from the data, the processing has been changed drastically compared with the initial LASSO data analysis. It takes into account all the measurements made, both timings on board and echoes at each station. This complete use of the data dramatically increased confidence in the synchronization results.

  5. Fast empirical Bayesian LASSO for multiple quantitative trait locus mapping

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-05-01

Full Text Available Abstract Background The Bayesian shrinkage technique has been applied to multiple quantitative trait locus (QTL) mapping to estimate the genetic effects of QTLs on quantitative traits from a very large set of possible effects, including the main and epistatic effects of QTLs. Although the recently developed empirical Bayes (EB) method significantly reduces computation compared with the fully Bayesian approach, its speed and accuracy are limited by the fact that numerical optimization is required to estimate the variance components in the QTL model. Results We developed a fast empirical Bayesian LASSO (EBLASSO) method for multiple QTL mapping. The fact that the EBLASSO can estimate the variance components in closed form, along with other algorithmic techniques, renders the EBLASSO method more efficient and accurate. Compared with the EB method, our simulation study demonstrated that the EBLASSO method could substantially improve computational speed and detect more QTL effects without increasing the false positive rate. In particular, the EBLASSO algorithm running on a personal computer could easily handle a linear QTL model with more than 100,000 variables in our simulation study. Real data analysis also demonstrated that the EBLASSO method detected more reasonable effects than the EB method. Compared with the LASSO, our simulation showed that the current version of the EBLASSO implemented in Matlab had similar speed to the LASSO implemented in Fortran, and that the EBLASSO detected the same number of true effects as the LASSO but a much smaller number of false positive effects. Conclusions The EBLASSO method can handle a large number of effects, possibly including the main and epistatic QTL effects, environmental effects and the effects of gene-environment interactions. It will be a very useful tool for multiple QTL mapping.

  6. LASSO-ligand activity by surface similarity order: a new tool for ligand based virtual screening.

    Science.gov (United States)

    Reid, Darryl; Sadjad, Bashir S; Zsoldos, Zsolt; Simon, Aniko

    2008-01-01

Virtual Ligand Screening (VLS) has become an integral part of the drug discovery process for many pharmaceutical companies. Ligand similarity searches provide a very powerful method of screening large databases of ligands to identify possible hits. If these hits belong to new chemotypes, the method is deemed even more successful. eHiTS LASSO uses a new interacting surface point types (ISPT) molecular descriptor that is generated from the 3D structure of the ligand but, unlike most 3D descriptors, is conformation independent. Combined with a neural network machine learning technique, LASSO screens molecular databases at an ultra-fast speed of 1 million structures in under 1 min on a standard PC. The results obtained from eHiTS LASSO trained on relatively small training sets of just 2, 4 or 8 actives are presented using the diverse Directory of Useful Decoys (DUD) dataset. It is shown that, over a wide range of receptor families, eHiTS LASSO is consistently able to enrich screened databases and provides scaffold-hopping ability.

  7. Structural Graphical Lasso for Learning Mouse Brain Connectivity

    KAUST Repository

    Yang, Sen

    2015-08-07

    Investigations into brain connectivity aim to recover networks of brain regions connected by anatomical tracts or by functional associations. The inference of brain networks has recently attracted much interest due to the increasing availability of high-resolution brain imaging data. Sparse inverse covariance estimation with lasso and group lasso penalty has been demonstrated to be a powerful approach to discover brain networks. Motivated by the hierarchical structure of the brain networks, we consider the problem of estimating a graphical model with tree-structural regularization in this paper. The regularization encourages the graphical model to exhibit a brain-like structure. Specifically, in this hierarchical structure, hundreds of thousands of voxels serve as the leaf nodes of the tree. A node in the intermediate layer represents a region formed by voxels in the subtree rooted at that node. The whole brain is considered as the root of the tree. We propose to apply the tree-structural regularized graphical model to estimate the mouse brain network. However, the dimensionality of whole-brain data, usually on the order of hundreds of thousands, poses significant computational challenges. Efficient algorithms that are capable of estimating networks from high-dimensional data are highly desired. To address the computational challenge, we develop a screening rule which can quickly identify many zero blocks in the estimated graphical model, thereby dramatically reducing the computational cost of solving the proposed model. It is based on a novel insight on the relationship between screening and the so-called proximal operator that we first establish in this paper. We perform experiments on both synthetic data and real data from the Allen Developing Mouse Brain Atlas; results demonstrate the effectiveness and efficiency of the proposed approach.
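Plain sparse inverse-covariance estimation, the building block this record extends with tree-structured regularization, can be sketched with scikit-learn's graphical lasso. The 5-node "region" network, chain structure, and penalty below are illustrative stand-ins for the voxel-level graphs discussed above:

```python
# Sketch: graphical lasso recovers a sparse precision (inverse covariance)
# matrix, i.e. a conditional-independence network, from samples.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
# Ground-truth sparse precision matrix for 5 "regions": a simple chain graph.
prec = np.eye(5)
for i in range(4):
    prec[i, i + 1] = prec[i + 1, i] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_
print(np.round(est_prec, 2))  # off-chain entries are shrunk toward zero
```

The screening rule proposed in the record serves exactly to avoid solving this optimization over hundreds of thousands of voxels by identifying zero blocks of `est_prec` in advance.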

  8. Ultrahigh Dimensional Variable Selection for Interpolation of Point Referenced Spatial Data: A Digital Soil Mapping Case Study

    Science.gov (United States)

    Lamb, David W.; Mengersen, Kerrie

    2016-01-01

Modern soil mapping is characterised by the need to interpolate point-referenced (geostatistical) observations and by the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields, such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized multiple linear regression models, and demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment. PMID:27603135
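Fitting a LASSO model with the LAR solver on a problem of the same shape as in this record (many more candidate covariates than observations) can be sketched with scikit-learn; the synthetic data and penalty are illustrative, not the soil-carbon dataset:

```python
# Sketch: LAR-solved LASSO on a p >> n problem (800 covariates, 60 sites).
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(3)
n, p = 60, 800                      # far more candidate covariates than sites
X = rng.normal(size=(n, p))
# Two hypothetical informative covariates; the rest are noise.
y = 3.0 * X[:, 10] - 2.0 * X[:, 200] + rng.normal(scale=0.5, size=n)

# LAR computes the whole LASSO path cheaply; one alpha picks a sparse model.
fit = LassoLars(alpha=0.2).fit(X, y)
active = np.flatnonzero(fit.coef_)
print(len(active), active[:10])
```

The LAR path costs roughly one least-squares fit, which is why selection from 800 terms is "trivial" here where exhaustive subset search would be infeasible.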

  9. Scoring relevancy of features based on combinatorial analysis of Lasso with application to lymphoma diagnosis

    Directory of Open Access Journals (Sweden)

    Zare Habil

    2013-01-01

Full Text Available Abstract One challenge in applying bioinformatic tools to clinical or biological data is the high number of features that might be provided to the learning algorithm without any prior knowledge of which ones should be used. In such applications, the number of features can drastically exceed the number of training instances, which is often limited by the number of samples available for the study. The Lasso is one of many regularization methods that have been developed to prevent overfitting and improve prediction performance in high-dimensional settings. In this paper, we propose a novel algorithm for feature selection based on the Lasso, and our hypothesis is that defining a scoring scheme that measures the "quality" of each feature can provide a more robust feature selection method. Our approach is to generate several samples from the training data by bootstrapping, determine the best relevance-ordering of the features for each sample, and finally combine these relevance-orderings to select highly relevant features. In addition to the theoretical analysis of our feature scoring scheme, we provide empirical evaluations on six real datasets from different fields to confirm the superiority of our method in exploratory data analysis and prediction performance. For example, we applied FeaLect, our feature scoring algorithm, to a lymphoma dataset, and according to a human expert, our method led to selecting more meaningful features than those commonly used in the clinics. This case study built a basis for discovering interesting new criteria for lymphoma diagnosis. Furthermore, to facilitate the use of our algorithm in other applications, the source code that implements our algorithm has been released as FeaLect, a documented R package in CRAN.
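The bootstrap relevance-ordering idea can be sketched as follows: on each bootstrap sample, order features by when they enter the LASSO path (via LARS), then average those ranks into a relevance score. This illustrates the scheme in the record, not the FeaLect implementation itself; data and scoring weights are invented:

```python
# Sketch: score features by their average order of entry on bootstrap
# LASSO paths; early, consistent entry means high relevance.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(4)
n, p = 100, 20
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)

n_boot = 50
scores = np.zeros(p)
for _ in range(n_boot):
    idx = rng.integers(0, n, size=n)
    _, active, _ = lars_path(X[idx], y[idx], method="lasso")
    # Earlier entry on the path -> higher score; unentered features score 0.
    for rank, j in enumerate(active):
        scores[j] += p - rank

scores /= n_boot
print(np.round(scores[:5], 1))  # features 0 and 3 should score highest
```

Combining orderings across resamples is what makes the score robust: a feature that enters early only on a few lucky resamples is down-weighted relative to one that enters early every time.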

  10. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    Science.gov (United States)

    Kong, Shengchun; Nan, Bin

    2014-01-01

We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. The existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive non-asymptotic oracle inequalities for the lasso-penalized Cox regression, using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.
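The lasso-penalized Cox objective analyzed in this record can be minimized with a short proximal-gradient sketch. Everything below is a minimal illustration under simplifying assumptions (Breslow partial likelihood, no tied event times, synthetic data), not the paper's estimator or proof machinery:

```python
# Sketch: lasso-penalized Cox regression via proximal gradient descent on the
# negative log partial likelihood (Breslow form, no tie handling).
import numpy as np

def cox_lasso(X, time, event, lam, step=0.1, n_iter=500):
    n, p = X.shape
    order = np.argsort(-time)              # sort by descending survival time
    Xs, ev = X[order], event[order].astype(bool)
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = Xs @ beta
        eta = eta - eta.max()              # numerical stability; ratios unchanged
        w = np.exp(eta)
        cumw = np.cumsum(w)                # risk-set sums via prefix sums
        cumwx = np.cumsum(w[:, None] * Xs, axis=0)
        # Gradient of the (1/n-scaled) negative log partial likelihood.
        grad = -(Xs[ev] - cumwx[ev] / cumw[ev, None]).sum(axis=0) / n
        beta = beta - step * grad
        # Proximal (soft-thresholding) step for the l1 penalty.
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
    return beta

rng = np.random.default_rng(5)
n, p = 300, 10
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[0] = 1.0                          # only the first covariate matters
time = rng.exponential(scale=np.exp(-X @ true_beta))
event = np.ones(n)                          # fully observed events in this toy
beta = cox_lasso(X, time, event, lam=0.1)
print(np.round(beta, 2))
```

Censored subjects would simply contribute to the risk-set sums (`cumw`, `cumwx`) without adding their own event term, which the boolean `ev` mask already handles.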

  11. Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation

    KAUST Repository

    Sun, Ying; Wang, Huixia J.; Fuentes, Montserrat

    2015-01-01

    and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without

  12. Sungsanpin, a lasso peptide from a deep-sea streptomycete.

    Science.gov (United States)

    Um, Soohyun; Kim, Young-Joo; Kwon, Hyuknam; Wen, He; Kim, Seong-Hwan; Kwon, Hak Cheol; Park, Sunghyouk; Shin, Jongheon; Oh, Dong-Chan

    2013-05-24

    Sungsanpin (1), a new 15-amino-acid peptide, was discovered from a Streptomyces species isolated from deep-sea sediment collected off Jeju Island, Korea. The planar structure of 1 was determined by 1D and 2D NMR spectroscopy, mass spectrometry, and UV spectroscopy. The absolute configurations of the stereocenters in this compound were assigned by derivatizations of the hydrolysate of 1 with Marfey's reagents and 2,3,4,6-tetra-O-acetyl-β-d-glucopyranosyl isothiocyanate, followed by LC-MS analysis. Careful analysis of the ROESY NMR spectrum and three-dimensional structure calculations revealed that sungsanpin possesses the features of a lasso peptide: eight amino acids (-Gly(1)-Phe-Gly-Ser-Lys-Pro-Ile-Asp(8)-) that form a cyclic peptide and seven amino acids (-Ser(9)-Phe-Gly-Leu-Ser-Trp-Leu(15)) that form a tail that loops through the ring. Sungsanpin is thus the first example of a lasso peptide isolated from a marine-derived microorganism. Sungsanpin displayed inhibitory activity in a cell invasion assay with the human lung cancer cell line A549.

  13. Similarity regularized sparse group lasso for cup to disc ratio computation.

    Science.gov (United States)

    Cheng, Jun; Zhang, Zhuo; Tao, Dacheng; Wong, Damon Wing Kee; Liu, Jiang; Baskaran, Mani; Aung, Tin; Wong, Tien Yin

    2017-08-01

Automatic cup-to-disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image based on a set of reference disc images by integrating the similarity between the testing and the reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated using 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under the curve in the diagnostic test reach 0.843 and 0.837 when manually and automatically segmented discs are used, respectively, again better than other methods.

  14. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

Full Text Available As a pivotal tool for building interpretable models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like the lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting the lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in the lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
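Stability selection with a lasso base learner can be sketched as follows: fit the lasso on many random half-samples over a grid Λ of penalties and record, for each variable, the highest selection frequency attained on the grid. A plain linear model stands in here for the Cox model used in the record; the grid, threshold, and data are illustrative:

```python
# Sketch: stability selection (subsampling + lasso over a penalty grid).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(size=n)

lambdas = [0.2, 0.4, 0.8]                     # the regularization region Lambda
n_sub = 100
freq = np.zeros((len(lambdas), p))
for _ in range(n_sub):
    half = rng.choice(n, size=n // 2, replace=False)   # random half-sample
    for k, lam in enumerate(lambdas):
        coef = Lasso(alpha=lam).fit(X[half], y[half]).coef_
        freq[k] += coef != 0

stability = freq.max(axis=0) / n_sub          # max selection frequency over Lambda
stable_set = np.flatnonzero(stability >= 0.8) # threshold pi = 0.8
print(stable_set)
```

The record's point is that the endpoints of `lambdas` (in particular λmin) must be chosen carefully: a grid reaching too far into weak penalties lets noise variables accumulate high selection frequencies.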

  15. YM2: Continuum expectations, lattice convergence, and lassos

    International Nuclear Information System (INIS)

    Driver, B.K.

    1989-01-01

The two dimensional Yang-Mills theory (YM2) is analyzed in both the continuum and the lattice. In the complete axial gauge the continuum theory may be defined in terms of a Lie algebra valued white noise, and parallel translation may be defined by stochastic differential equations. This machinery is used to compute the expectations of gauge invariant functions of the parallel translation operators along a collection of curves C. The expectation values are expressed as finite dimensional integrals with densities that are products of the heat kernel on the structure group. The time parameters of the heat kernels are determined by the areas enclosed by the collection C, and the arguments are determined by the crossing topologies of the curves in C. The expectations for the Wilson lattice models have a similar structure, and from this it follows that in the limit of small lattice spacing the lattice expectations converge to the continuum expectations. It is also shown that the lasso variables advocated by L. Gross exist and are sufficient to generate all the measurable functions on the YM2-measure space. (orig.)

  16. Pierced Lasso Bundles are a new class of knot-like motifs.

    Directory of Open Access Journals (Sweden)

    Ellinor Haglund

    2014-06-01

Full Text Available A four-helix bundle is a well-characterized motif often used as a target for designed pharmaceutical therapeutics and nutritional supplements. Recently, we discovered a new structural complexity within this motif created by a disulphide bridge in the long-chain helical bundle cytokine leptin. When oxidized, leptin contains a disulphide bridge creating a covalent loop through which part of the polypeptide chain is threaded (as seen in knotted proteins). We explored whether other proteins contain a similar intriguing knot-like structure as in leptin and discovered 11 structurally homologous proteins in the PDB. We call this new helical family class the Pierced Lasso Bundle (PLB) and the knot-like threaded structural motif a Pierced Lasso (PL). In the current study, we use structure-based simulation to investigate the threading/folding mechanisms for all the PLBs along with three unthreaded homologs, as the covalent loop (or lasso) in leptin is important in folding dynamics and activity. We find that the presence of a small covalent loop leads to a mechanism where structural elements slipknot to thread through the covalent loop. Larger loops use a piercing mechanism where the free terminal plugs through the covalent loop. Remarkably, the position of the loop as well as its size influences the native state dynamics, which can impact receptor binding and biological activity. This previously unrecognized complexity of knot-like proteins within the helical bundle family comprises a completely new class within the knot family, and the hidden complexity we unraveled in the PLBs is expected to be found in other protein structures outside the four-helix bundles. The insights gained here provide critical new elements for future investigation of this emerging class of proteins, where function and the energetic landscape can be controlled by hidden topology, and should be taken into account in ab initio predictions of newly identified protein targets.

  17. Variance Component Selection With Applications to Microbiome Taxonomic Data

    Directory of Open Access Journals (Sweden)

    Jing Zhai

    2018-03-01

Full Text Available High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One approach is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Other approaches consider all taxa in a joint model and achieve selection via penalization, which ignores phylogenetic information. In this paper, we consider regression analysis that treats bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods, for example, the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV)-infected patients. We implement our method using the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.

  18. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchal GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.

  19. Genetic risk prediction using a spatial autoregressive model with adaptive lasso.

    Science.gov (United States)

    Wen, Yalu; Shen, Xiaoxi; Lu, Qing

    2018-05-31

    With rapidly evolving high-throughput technologies, studies are being initiated to accelerate the process toward precision medicine. The collection of the vast amounts of sequencing data provides us with great opportunities to systematically study the role of a deep catalog of sequencing variants in risk prediction. Nevertheless, the massive amount of noise signals and low frequencies of rare variants in sequencing data pose great analytical challenges on risk prediction modeling. Motivated by the development in spatial statistics, we propose a spatial autoregressive model with adaptive lasso (SARAL) for risk prediction modeling using high-dimensional sequencing data. The SARAL is a set-based approach, and thus, it reduces the data dimension and accumulates genetic effects within a single-nucleotide variant (SNV) set. Moreover, it allows different SNV sets having various magnitudes and directions of effect sizes, which reflects the nature of complex diseases. With the adaptive lasso implemented, SARAL can shrink the effects of noise SNV sets to be zero and, thus, further improve prediction accuracy. Through simulation studies, we demonstrate that, overall, SARAL is comparable to, if not better than, the genomic best linear unbiased prediction method. The method is further illustrated by an application to the sequencing data from the Alzheimer's Disease Neuroimaging Initiative. Copyright © 2018 John Wiley & Sons, Ltd.
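The adaptive-lasso step inside SARAL can be illustrated in its generic form: get initial effect estimates from a ridge fit, rescale each covariate by the magnitude of its initial estimate, run a standard lasso on the rescaled data, then map the coefficients back. The ridge-based weights, data, and penalties below are illustrative choices, not the SARAL spatial autoregressive model itself:

```python
# Sketch of the adaptive lasso: per-feature penalty weights from an initial fit.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(7)
n, p = 150, 40
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

beta_init = Ridge(alpha=1.0).fit(X, y).coef_
w = np.abs(beta_init) + 1e-8          # adaptive weights (gamma = 1)
Xw = X * w                            # rescaling implements per-feature penalties

fit = Lasso(alpha=0.1).fit(Xw, y)
beta = fit.coef_ * w                  # back-transform to the original scale
print(np.flatnonzero(beta))
```

Because features with tiny initial estimates get tiny weights, their effective penalty is huge and they are shrunk exactly to zero, which is how the adaptive lasso zeroes out the "noise SNV sets" described above.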

  20. Integrative Sparse K-Means With Overlapping Group Lasso in Genomic Applications for Disease Subtype Discovery.

    Science.gov (United States)

    Huo, Zhiguang; Tseng, George

    2017-06-01

Cancer subtype discovery is the first step toward delivering personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration with incorporation of rich existing biological knowledge is essential for deciphering the biological mechanisms behind complex diseases. In this manuscript, we propose an integrative sparse K-means (is-Kmeans) approach to discover disease subtypes with the guidance of prior biological knowledge via a sparse overlapping group lasso. An algorithm using the alternating direction method of multipliers (ADMM) is applied for fast optimization. Simulations and three real applications in breast cancer and leukemia are used to compare is-Kmeans with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features, and computing efficiency.

  1. The Bayesian Covariance Lasso.

    Science.gov (United States)

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods, since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full-rank data.

  2. Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.

    Science.gov (United States)

    Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui

    2017-07-15

New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically-feasible manner. Developing computational methods and tools for analysis and translation of such genomic data into clinically-relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis by using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while a penalty can be added for feature selection. A more powerful approach, however, would be to incorporate data by considering relationships among omics datatypes. Here we developed two methods: a SKI-Cox method and a wLASSO-Cox method to incorporate the association among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox models select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma. Copyright © 2017. Published by Elsevier

  3. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regression models, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from an ultra-high dimension to a size of the same order as that of the true model, with the selected model covering the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite-sample performance of the proposed approach.
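
The two-step recipe above can be sketched with scikit-learn. Note this sketch substitutes squared-error loss for the paper's quantile loss for brevity, and the data, penalty levels, and the column-rescaling trick used to emulate adaptive weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 500                       # n << p (ultra-high dimension)
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                        # the true model has 5 covariates
y = X @ beta + rng.standard_normal(n)

# Step 1: a plain LASSO screen reduces the model to a manageable size.
step1 = Lasso(alpha=0.3).fit(X, y)
keep = np.flatnonzero(step1.coef_)

# Step 2: adaptive LASSO on the reduced model. Rescaling column j by
# |beta_hat_j| and fitting an ordinary LASSO is equivalent to using
# adaptive weights w_j = 1 / |beta_hat_j|.
scale = np.abs(step1.coef_[keep])
step2 = Lasso(alpha=0.1).fit(X[:, keep] * scale, y)
selected = keep[step2.coef_ != 0]
print("selected covariates:", sorted(selected.tolist()))
```

The first step only needs to cover the true model; the adaptive weights in the second step then penalize the weakly supported covariates more heavily.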

  4. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
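
As a concrete illustration of the "matrix extension" mentioned above, here is a minimal sketch (not from the talk itself) of sparse inverse-covariance estimation with scikit-learn's GraphicalLasso on synthetic chain-structured data; the chain structure and penalty value are assumptions for the demo.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
# Precision (inverse covariance) matrix with a single chain of dependencies.
prec = np.eye(5)
for i in range(4):
    prec[i, i + 1] = prec[i + 1, i] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=2000)

# The Graphical LASSO penalizes the off-diagonal precision entries with an
# L1 term, so conditionally independent pairs are shrunk toward zero.
model = GraphicalLasso(alpha=0.05).fit(X)
est = model.precision_
print(np.round(est, 2))
```

Entries far from the chain (e.g. the (0, 4) pair) should come out much smaller in magnitude than the true chain links such as (0, 1).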

  5. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    Science.gov (United States)

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, so regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but relies on computationally intensive Monte Carlo/Expectation-Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm that combines Empirical Bayes with iterative coordinate descent to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods recover complicated source setups more accurately, with more robust estimation of the hyperparameters and better behavior under different sparsity scenarios, than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods
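
This is not the authors' Bayesian algorithm, but the ENET constraint itself (an L1 plus L2 penalty) can be sketched on a toy source-estimation problem with scikit-learn's ElasticNet. The "leadfield" matrix, source patch, and penalty settings below are all synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(10)
n_sensors, n_sources = 128, 300
L = rng.standard_normal((n_sensors, n_sources))    # toy "leadfield" matrix
j = np.zeros(n_sources)
j[100:103] = 1.0                                   # one small active patch
v = L @ j + 0.05 * rng.standard_normal(n_sensors)  # sensor measurements

# l1_ratio blends LASSO-style sparsity (1.0) with ridge-style smoothing (0.0),
# which is the structured compromise the ENET prior encodes.
enet = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=100_000).fit(L, v)
strongest = int(np.argmax(np.abs(enet.coef_)))
print("strongest recovered source index:", strongest)
```

With far fewer sensors than candidate sources the problem is underdetermined, and it is the combined penalty that concentrates the estimate on the true patch.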

  6. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
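
The evaluation protocol described here, an L1-penalized model scored by repeated cross-validation for a fair comparison, can be sketched as follows. The data are synthetic stand-ins for a binary complication endpoint such as xerostomia, and the penalty strength C is an arbitrary illustrative choice rather than the authors' tuned value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(2)
n, p = 200, 20
X = rng.standard_normal((n, p))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]       # two informative predictors
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# L1 (LASSO-type) logistic model; liblinear supports the L1 penalty.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

# Repeated CV gives a distribution of scores rather than one noisy split.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(lasso, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC over {len(scores)} folds: {scores.mean():.3f}")
```

The same repeated-CV scheme would be applied to each competing learner (stepwise, LASSO, BMA) so their scores are directly comparable.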

  7. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  8. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  9. Genome-wide prediction of traits with different genetic architecture through efficient variable selection.

    Science.gov (United States)

    Wimmer, Valentin; Lehermeier, Christina; Albrecht, Theresa; Auinger, Hans-Jürgen; Wang, Yu; Schön, Chris-Carolin

    2013-10-01

    In genome-based prediction there is considerable uncertainty about the statistical model and method required to maximize prediction accuracy. For traits influenced by a small number of quantitative trait loci (QTL), predictions are expected to benefit from methods performing variable selection [e.g., BayesB or the least absolute shrinkage and selection operator (LASSO)] compared to methods distributing effects across the genome [ridge regression best linear unbiased prediction (RR-BLUP)]. We investigate the assumptions underlying successful variable selection by combining computer simulations with large-scale experimental data sets from rice (Oryza sativa L.), wheat (Triticum aestivum L.), and Arabidopsis thaliana (L.). We demonstrate that variable selection can be successful when the number of phenotyped individuals is much larger than the number of causal mutations contributing to the trait. We show that the sample size required for efficient variable selection increases dramatically with decreasing trait heritabilities and increasing extent of linkage disequilibrium (LD). We contrast and discuss contradictory results from simulation and experimental studies with respect to superiority of variable selection methods over RR-BLUP. Our results demonstrate that due to long-range LD, medium heritabilities, and small sample sizes, superiority of variable selection methods cannot be expected in plant breeding populations even for traits like FRIGIDA gene expression in Arabidopsis and flowering time in rice, assumed to be influenced by a few major QTL. We extend our conclusions to the analysis of whole-genome sequence data and infer upper bounds for the number of causal mutations which can be identified by LASSO. Our results have major impact on the choice of statistical method needed to make credible inferences about genetic architecture and prediction accuracy of complex traits.
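
A toy version of the sparse-architecture argument above can be run in a few lines. The assumptions are simulated unlinked SNPs, equal QTL effects, and fixed penalties standing in for RR-BLUP's variance-component machinery, so this only caricatures the regime where variable selection wins (many phenotyped individuals, few causal loci).

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, p, n_qtl = 500, 1000, 10
X = rng.binomial(2, 0.5, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[:n_qtl] = 1.0                                    # a few causal loci
y = X @ beta + rng.standard_normal(n)                 # high-heritability trait

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
r2_lasso = Lasso(alpha=0.1).fit(Xtr, ytr).score(Xte, yte)
r2_ridge = Ridge(alpha=10.0).fit(Xtr, ytr).score(Xte, yte)
print(f"LASSO R^2 = {r2_lasso:.2f}, ridge (RR-BLUP-like) R^2 = {r2_ridge:.2f}")
```

Lowering the heritability or adding linkage disequilibrium among markers erodes the LASSO's advantage, which is the paper's central caveat.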

  10. Comparison of methods used to identify superior individuals in genomic selection in plant breeding.

    Science.gov (United States)

    Bhering, L L; Junqueira, V S; Peixoto, L A; Cruz, C D; Laviola, B G

    2015-09-10

    The aim of this study was to evaluate different methods used in genomic selection, and to verify those that select a higher proportion of individuals with superior genotypes. Thus, F2 populations of different sizes were simulated (100, 200, 500, and 1000 individuals) with 10 replications each. These consisted of 10 linkage groups (LG) of 100 cM each, containing 100 equally spaced markers per linkage group, of which 200 controlled the characteristics, defined as the first 20 markers of each LG. Genetic and phenotypic values were simulated assuming a binomial distribution of effects for each LG, and the absence of dominance. For phenotypic values, heritabilities of 20, 50, and 80% were considered. To compare methodologies, the analysis processing time, the coefficient of coincidence (selection of 5, 10, and 20% of superior individuals), and the Spearman correlation between true genetic values and the genomic values predicted by each methodology were determined. Considering processing time, the three methodologies were statistically different: rrBLUP was the fastest, and Bayesian LASSO the slowest. Spearman correlation revealed that the rrBLUP and GBLUP methodologies were equivalent, while Bayesian LASSO provided the lowest correlation values. Similar results were obtained for the coincidence variables among the individuals selected, in which Bayesian LASSO differed statistically and presented a lower value than the other methodologies. Therefore, for the scenarios evaluated, rrBLUP is the best methodology for the selection of genetically superior individuals.
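
The two comparison metrics used above are easy to compute directly: the Spearman correlation between true and predicted genetic values, and the coincidence of the selected top fraction of individuals. The data below are illustrative noisy predictions, not output of rrBLUP/GBLUP/Bayesian LASSO.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
true_gv = rng.standard_normal(1000)                   # true genetic values
pred_gv = true_gv + 0.5 * rng.standard_normal(1000)   # noisy genomic predictions

rho, _ = spearmanr(true_gv, pred_gv)

def coincidence(true_vals, pred_vals, frac):
    """Fraction of the truly top individuals recovered by selecting on predictions."""
    k = int(len(true_vals) * frac)
    top_true = set(np.argsort(true_vals)[-k:])
    top_pred = set(np.argsort(pred_vals)[-k:])
    return len(top_true & top_pred) / k

print(f"Spearman rho = {rho:.2f}")
for frac in (0.05, 0.10, 0.20):
    print(f"coincidence at {frac:.0%}: {coincidence(true_gv, pred_gv, frac):.2f}")
```

A methodology can score well on rank correlation yet still miss top individuals at a stringent selection fraction, which is why the study reports both.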

  11. A Novel SCCA Approach via Truncated ℓ1-norm and Truncated Group Lasso for Brain Imaging Genetics.

    Science.gov (United States)

    Du, Lei; Liu, Kefei; Zhang, Tuo; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Han, Junwei; Guo, Lei; Saykin, Andrew J; Shen, Li

    2017-09-18

    Brain imaging genetics, which studies the linkage between genetic variations and structural or functional measures of the human brain, has become increasingly important in recent years. Discovering the bi-multivariate relationship between genetic markers such as single-nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is one major task in imaging genetics. Sparse Canonical Correlation Analysis (SCCA) has been a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants to induce sparsity. The ℓ0-norm penalty is a perfect sparsity-inducing tool but leads to an NP-hard problem. In this paper, we propose the truncated ℓ1-norm penalized SCCA to improve the performance and effectiveness of the ℓ1-norm based SCCA methods. In addition, we propose an efficient optimization algorithm to solve this novel SCCA problem. The proposed method is an adaptive shrinkage method via tuning τ. It can avoid time-intensive parameter tuning if given a reasonably small τ. Furthermore, we extend it to the truncated group lasso (TGL), and propose the TGL-SCCA model to improve the group-lasso-based SCCA methods. The experimental results, compared with four benchmark methods, show that our SCCA methods identify better or similar correlation coefficients, and better canonical loading profiles than the competing methods. This demonstrates the effectiveness and efficiency of our methods in discovering interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/tlpscca/. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  12. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2012-01-01

    and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group

  13. Improved intact soil-core carbon determination applying regression shrinkage and variable selection techniques to complete spectrum laser-induced breakdown spectroscopy (LIBS).

    Science.gov (United States)

    Bricklemyer, Ross S; Brown, David J; Turk, Philip J; Clegg, Sam M

    2013-10-01

    Laser-induced breakdown spectroscopy (LIBS) provides a potential method for rapid, in situ soil C measurement. In previous research on the application of LIBS to intact soil cores, we hypothesized that ultraviolet (UV) spectrum LIBS (200-300 nm) might not provide sufficient elemental information to reliably discriminate between soil organic C (SOC) and inorganic C (IC). In this study, using a custom complete spectrum (245-925 nm) core-scanning LIBS instrument, we analyzed 60 intact soil cores from six wheat fields. Predictive multi-response partial least squares (PLS2) models using full and reduced spectrum LIBS were compared for directly determining soil total C (TC), IC, and SOC. Two regression shrinkage and variable selection approaches, the least absolute shrinkage and selection operator (LASSO) and sparse multivariate regression with covariance estimation (MRCE), were tested for soil C predictions and the identification of wavelengths important for soil C prediction. Using complete spectrum LIBS for PLS2 modeling reduced the calibration standard error of prediction (SEP) 15 and 19% for TC and IC, respectively, compared to UV spectrum LIBS. The LASSO and MRCE approaches provided significantly improved calibration accuracy and reduced SEP 32-55% over UV spectrum PLS2 models. We conclude that (1) complete spectrum LIBS is superior to UV spectrum LIBS for predicting soil C for intact soil cores without pretreatment; (2) LASSO and MRCE approaches provide improved calibration prediction accuracy over PLS2 but require additional testing with increased soil and target analyte diversity; and (3) measurement errors associated with analyzing intact cores (e.g., sample density and surface roughness) require further study and quantification.

  14. Selecting a Cable System Operator.

    Science.gov (United States)

    Cable Television Information Center, Washington, DC.

    Intended to assist franchising authorities with the process of selecting a cable television system operator from franchise applicants, this document provides a framework for analysis of individual applications. Section 1 deals with various methods which can be used to select an operator. The next section covers the application form, the vehicle a…

  15. Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation

    KAUST Repository

    Sun, Ying

    2015-09-01

    Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performances of the proposed methods are evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying spatial association between PM concentrations and adverse human health effects. © 2016 American Statistical Association and the American Society for Quality.

  16. AN ANALYTIC OUTLOOK OF THE MADRIGAL MORO LASSO AL MIO DUOLO BY GESUALDO DA VENOSA

    Directory of Open Access Journals (Sweden)

    MURARU AUREL

    2015-09-01

    Full Text Available The analysis of the madrigal Moro lasso al mio duolo reveals a melancholic, thoughtful and grieving atmosphere, generating shady, silent, sometimes dark soundscapes. Gesualdo shapes the polyphony through chromatic licenses in order to create a tense musical discourse, permanently yearning for stability and balance amidst a harmonic construction lacking any attempt at resolution. Thus the strange harmonies of Gesualdo take shape, giving birth to a unique musical style, full of dissonances and endless musical tension.

  17. Quality optimization of H.264/AVC video transmission over noisy environments using a sparse regression framework

    Science.gov (United States)

    Pandremmenou, K.; Tziortziotis, N.; Paluri, S.; Zhang, W.; Blekas, K.; Kondi, L. P.; Kumar, S.

    2015-03-01

    We propose the use of the Least Absolute Shrinkage and Selection Operator (LASSO) regression method in order to predict the Cumulative Mean Squared Error (CMSE) incurred by the loss of individual slices in video transmission. We extract a number of quality-relevant features from the H.264/AVC video sequences, which are given as input to the LASSO. This method has the benefit of not only keeping a subset of the features that have the strongest effects on video quality, but also producing accurate CMSE predictions. In particular, we study LASSO regression through two different architectures: the Global LASSO (G.LASSO) and the Local LASSO (L.LASSO). In G.LASSO, a single regression model is trained for all slice types together, while in L.LASSO, motivated by the fact that the values of some features depend closely on the considered slice type, each slice type has its own regression model, in an effort to improve LASSO's prediction capability. Based on the predicted CMSE values, we group the video slices into four priority classes. Additionally, we consider a video transmission scenario over a noisy channel, where Unequal Error Protection (UEP) is applied to all prioritized slices. The provided results demonstrate the efficiency of LASSO in estimating CMSE with high accuracy, using only a few features.
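
The prioritization step described above can be sketched as follows: LASSO-predicted slice CMSE values are binned into four priority classes for UEP. Quartile boundaries and the feature/CMSE relationship are illustrative assumptions; the paper's exact class boundaries are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n_slices, n_features = 300, 12
feats = rng.standard_normal((n_slices, n_features))   # quality-relevant features
cmse = 3.0 * feats[:, 0] ** 2 + feats[:, 1] + 0.2 * rng.standard_normal(n_slices)

# Fit the sparse CMSE predictor, then predict per-slice CMSE.
model = Lasso(alpha=0.05).fit(feats, cmse)
pred = model.predict(feats)

# Four UEP priority classes: class 0 = most important (highest predicted CMSE).
edges = np.quantile(pred, [0.25, 0.5, 0.75])
priority = 3 - np.digitize(pred, edges)
print("slices per class:", np.bincount(priority, minlength=4))
```

The channel coder would then assign the strongest protection to class 0 and progressively weaker protection to the remaining classes.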

  18. Efficient Selection of Multiple Objects on a Large Scale

    DEFF Research Database (Denmark)

    Stenholt, Rasmus

    2012-01-01

    The task of multiple object selection (MOS) in immersive virtual environments is important and still largely unexplored. The difficulty of efficient MOS increases with the number of objects to be selected. E.g. in small-scale MOS, only a few objects need to be simultaneously selected. This may … consuming. Instead, we have implemented and tested two of the existing approaches to 3-D MOS, a brush and a lasso, as well as a new technique, a magic wand, which automatically selects objects based on local proximity to other objects. In a formal user evaluation, we have studied how the performance
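
One plausible reading of the "magic wand" behaviour described above is a flood-fill over object proximity: starting from the object the user points at, the selection spreads to any object within a radius of an already-selected one. The abstract does not give the algorithm, so this sketch is an assumption; `magic_wand`, the radius, and the clusters are all invented for illustration.

```python
from collections import deque

import numpy as np

def magic_wand(positions, seed, radius):
    """Return indices of all objects reachable from `seed` via hops <= radius."""
    positions = np.asarray(positions, dtype=float)
    selected = {seed}
    queue = deque([seed])
    while queue:
        cur = queue.popleft()
        dists = np.linalg.norm(positions - positions[cur], axis=1)
        for j in np.flatnonzero(dists <= radius):
            if j not in selected:
                selected.add(int(j))
                queue.append(int(j))
    return sorted(selected)

# Two clusters of objects; the wand grabs only the cluster under the cursor.
cluster_a = [(0, 0, 0), (0.5, 0, 0), (1.0, 0.2, 0)]
cluster_b = [(10, 0, 0), (10.5, 0, 0)]
print(magic_wand(cluster_a + cluster_b, seed=0, radius=1.0))
```

Unlike a brush or lasso, a single pick then selects a whole spatial group, which is why such a technique scales to large selections.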

  19. Prediction of genetic values of quantitative traits with epistatic effects in plant breeding populations.

    Science.gov (United States)

    Wang, D; Salah El-Basyoni, I; Stephen Baenziger, P; Crossa, J; Eskridge, K M; Dweikat, I

    2012-11-01

    Though epistasis has long been postulated to have a critical role in genetic regulation of important pathways as well as provide a major source of variation in the process of speciation, the importance of epistasis for genomic selection in the context of plant breeding is still being debated. In this paper, we report the results on the prediction of genetic values with epistatic effects for 280 accessions in the Nebraska Wheat Breeding Program using adaptive mixed least absolute shrinkage and selection operator (LASSO). The development of adaptive mixed LASSO, originally designed for association mapping, for the context of genomic selection is reported. The results show that adaptive mixed LASSO can be successfully applied to the prediction of genetic values while incorporating both marker main effects and epistatic effects. Especially, the prediction accuracy is substantially improved by the inclusion of two-locus epistatic effects (more than onefold in some cases as measured by cross-validation correlation coefficient), which is observed for multiple traits and planting locations. This points to significant potential in using non-additive genetic effects for genomic selection in crop breeding practices.

  20. Selection/licensing of nuclear power plant operators

    International Nuclear Information System (INIS)

    Saari, L.M.

    1983-07-01

    An important aspect of nuclear power plant (NPP) safety is the reactor operator in the control room. The operators are the first individuals to deal with an emergency situation, and thus, effective performance on their part is essential for safe plant operations. Important issues pertaining to NPP reactor operators would fall within the personnel subsystem of our safety system analysis. While there are many potential aspects of the personnel subsystem, a key first step is the selection of individuals - attempting to choose individuals for the job of reactor operator who will safely perform the job. This requires a valid (job-related) selection process. Some background information on the Nuclear Regulatory Commission (NRC) licensing process used for selecting NPP reactor operators is briefly presented, and a description of a research endeavor now underway at Battelle to develop a valid reactor operator licensing examination is included.

  1. Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.

    Science.gov (United States)

    Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack

    2017-06-01

    In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to capture the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
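
The stability-selection ingredient of the approach above can be sketched in a few lines: refit the LASSO on random half-samples and keep features whose selection frequency clears a threshold. The penalty, subsample count, and 0.8 threshold are illustrative settings, not PROMISE's calibrated choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n, p = 150, 300
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] + rng.standard_normal(n)

# Selection frequency of each feature over random half-samples.
n_draws, freq = 100, np.zeros(p)
for _ in range(n_draws):
    idx = rng.choice(n, size=n // 2, replace=False)
    coef = Lasso(alpha=0.2).fit(X[idx], y[idx]).coef_
    freq += coef != 0
freq /= n_draws

# Keep only features selected in at least 80% of the draws.
stable = np.flatnonzero(freq >= 0.8)
print("stable features:", stable, "frequencies:", freq[stable].round(2))
```

Spurious features rarely survive many resamples, which is how this step suppresses false positives; PROMISE then couples it with CV so prediction accuracy is not sacrificed.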

  2. Basis of valve operator selection for SMART

    International Nuclear Information System (INIS)

    Kang, H. S.; Lee, D. J.; See, J. K.; Park, C. K.; Choi, B. S.

    2000-05-01

    SMART, an integral reactor with enhanced safety and operability, is under development for use of nuclear energy. The valve operators for the SMART system were selected through a data survey and a technical review of potential valve fabrication vendors, supporting the establishment and optimization of the basic system design of SMART. To establish and optimize the basic system design of SMART, the basis for selecting the valve operator type was derived from the basic design requirements. The basis of valve operator selection for SMART will be used as basic technical data for the SMART basic and detailed design and as a foundation for future reactor development

  3. Basis of valve operator selection for SMART

    Energy Technology Data Exchange (ETDEWEB)

    Kang, H. S.; Lee, D. J.; See, J. K.; Park, C. K.; Choi, B. S

    2000-05-01

    SMART, an integral reactor with enhanced safety and operability, is under development for use of nuclear energy. The valve operators for the SMART system were selected through a data survey and a technical review of potential valve fabrication vendors, supporting the establishment and optimization of the basic system design of SMART. To establish and optimize the basic system design of SMART, the basis for selecting the valve operator type was derived from the basic design requirements. The basis of valve operator selection for SMART will be used as basic technical data for the SMART basic and detailed design and as a foundation for future reactor development.

  4. Operator psychological selection system for nuclear power plant

    International Nuclear Information System (INIS)

    He Xuhong; Huang Xiangrui

    2004-01-01

    Based on a detailed job analysis of the nuclear power plant operator position, including analysis of operating procedures, interviews with personnel familiar with the operator's job, and analysis of 9 past plant events involving operator error, several operator work characteristics and performance-influencing factors are obtained. According to these specific characteristics and factors, and referring to psychological selection research results in other related critical occupational fields, a full psychological selection system for nuclear power plant operators is proposed in this paper, comprising 21 dimensions in 3 facets: general psychological ability, personality, and psychological health. Practical measurement methods for the proposed selection dimensions are discussed at the end.

  5. Allele frequency changes due to hitch-hiking in genomic selection programs

    DEFF Research Database (Denmark)

    Liu, Huiming; Sørensen, Anders Christian; Meuwissen, Theo H E

    2014-01-01

    of inbreeding due to changes in allele frequencies and hitch-hiking. This study aimed at understanding the impact of using long-term genomic selection on changes in allele frequencies, genetic variation and the level of inbreeding. Methods Selection was performed in simulated scenarios with a population of 400......-BLUP, Genomic BLUP and Bayesian Lasso. Changes in allele frequencies at QTL, markers and linked neutral loci were investigated for the different selection criteria and different scenarios, along with the loss of favourable alleles and the rate of inbreeding measured by pedigree and runs of homozygosity. Results...

  6. Statistically Modeling I-V Characteristics of CNT-FET with LASSO

    Science.gov (United States)

    Ma, Dongsheng; Ye, Zuochang; Wang, Yan

    2017-08-01

    With the advent of the Internet of Things (IoT), the need to study new materials and devices for various applications is increasing. Traditionally, we build compact models for transistors on the basis of physics. But physical models are expensive and need a very long time to adjust for non-ideal effects. As the vision for the application of many novel devices is not certain or the manufacturing process is not mature, deriving generalized, accurate physical models for such devices is very strenuous, whereas statistical modeling is becoming a potential alternative because of its data-oriented nature and fast implementation. In this paper, one classical statistical regression method, LASSO, is used to model the I-V characteristics of a CNT-FET, and a pseudo-PMOS inverter simulation based on the trained model is implemented in Cadence. The normalized relative mean square prediction error of the trained model against experimental sample data, together with the simulation results, shows that the model is acceptable for digital circuit static simulation. Such a modeling methodology can be extended to general devices.
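
The paper's idea can be sketched in scikit-learn terms: regress drain current on polynomial terms of the terminal voltages and let the LASSO prune the expansion. The toy square-law device below is a stand-in, not measured CNT-FET data, and the degree and penalty are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(8)
vgs = rng.uniform(0, 1.2, 400)
vds = rng.uniform(0, 1.2, 400)
# Square-law-like toy device with a little measurement noise.
ids = 0.5 * np.clip(vgs - 0.4, 0, None) ** 2 * np.tanh(3 * vds)
ids += 0.002 * rng.standard_normal(400)

X = np.column_stack([vgs, vds])
model = make_pipeline(
    PolynomialFeatures(degree=4, include_bias=False),  # candidate I-V terms
    StandardScaler(),
    Lasso(alpha=1e-4, max_iter=50_000),                # prunes weak terms
)
model.fit(X, ids)
r2 = model.score(X, ids)
n_terms = int(np.sum(model.named_steps["lasso"].coef_ != 0))
print(f"train R^2 = {r2:.3f} using {n_terms} of "
      f"{model.named_steps['lasso'].coef_.size} polynomial terms")
```

The retained terms form a compact closed-form surrogate that a circuit simulator can evaluate cheaply, which is the appeal over hand-tuned physical models.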

  7. Pierced Lasso Proteins

    Science.gov (United States)

    Jennings, Patricia

    Entanglement and knots occur naturally; in the microscopic world, knots in DNA and homopolymers are well characterized. The most complex knots are observed in proteins, which are harder to investigate, as proteins are heteropolymers composed of a combination of 20 different amino acids with different individual biophysical properties. As new knotted topologies and new proteins containing knots continue to be discovered and characterized, the investigation of knots in proteins has gained intense interest. Thus far, the principal focus has been on the evolutionary origin of tying a knot, with the most insight gained on how a protein chain `self-ties' into a knot, what mechanism(s) contribute to threading, and the biological relevance and functional implications of a knotted topology in vivo. Efforts to study the fully untied and unfolded chain indicate that the knot is highly stable, remaining intact in the unfolded state orders of magnitude longer than first anticipated. The persistence of ``stable'' knots in the unfolded state, together with the challenge of distinguishing an unfolded and untied chain from an unfolded and knotted chain, complicates the study of fully untied proteins in vitro. Our discovery of a new class of knotted proteins, the Pierced Lasso (PL) loop topology, simplifies the knotting approach. While PLs are not easily recognizable by the naked eye, they have now been identified in many proteins in the PDB through the use of computational tools. PL topologies are diverse proteins found in all kingdoms of life, performing a large variety of biological roles such as cell signaling, immune responses, transporters and inhibitors (http://lassoprot.cent.uw.edu.pl/). Many of these PL topologies are secreted proteins, extracellular proteins, as well as redox sensors, enzymes and metal- and co-factor-binding proteins; all of which provide a favorable environment for the formation of the disulphide bridge. In the PL

  8. Statistical learning and selective inference.

    Science.gov (United States)

    Taylor, Jonathan; Tibshirani, Robert J

    2015-06-23

    We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
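The cost of cherry-picking can be made concrete with a small simulation (a minimal numpy sketch, not taken from the paper): among many pure-noise predictors, the one with the strongest sample correlation looks far more convincing than a pre-specified predictor would.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)        # y is independent of every column of X

# Absolute sample correlation of each (pure-noise) predictor with y.
cors = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
max_cor = cors.max()              # the "cherry-picked" predictor
mean_cor = cors.mean()            # what a pre-specified predictor shows on average
```

Any naive p-value attached to `max_cor` ignores the search over 50 candidates; selective inference raises the bar to account for exactly that search.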

  9. Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models

    Science.gov (United States)

    Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny

    2016-09-01

    Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only the most influential of them toward perceptual quality. For comparison, we apply feature selection in the complete feature sets and ridge regression on the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.
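The LASSO fit underlying this kind of feature selection is typically computed by cyclic coordinate descent with soft-thresholding. A generic, self-contained sketch on toy data (the paper's video-quality features are not reproduced here):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the core of LASSO coordinate descent."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]      # partial residual excluding j
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# Toy data: only the first two of ten features influence the response.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)
beta = lasso_cd(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(beta) > 1e-6)    # indices kept by the LASSO
```

The penalty zeroes out the eight irrelevant coefficients exactly, which is the property the RR and NR models above exploit to keep only the most quality-relevant features.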

  10. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk.

    Science.gov (United States)

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2016-07-23

This study focuses on multiple linear regression models relating six climate indices (temperature humidity THI, environmental stress ESI, equivalent temperature index ETI, heat load HLI, modified HLI (HLInew), and respiratory rate predictor RRP) with three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage selection operator (LASSO) and the Akaike information criterion (AIC) techniques are applied to select the best model for milk predictands with the smallest number of climate predictors. Uncertainty estimation is employed by applying bootstrapping through resampling. Cross validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months from April to September, 2002 to 2010 are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors with p-value < 0.001 and R² (0.50, 0.49), respectively. In summer, milk yield with independent variables of THI, ETI, and ESI shows the highest relation (p-value < 0.001) with R² (0.69). For fat and protein the results are only marginal. This method is suggested for impact studies of climate variability/change in the agriculture and food science fields when short time series or data with large uncertainty are available.
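The AIC screening step can be illustrated with invented data: fit nested linear models and keep the one with the lowest AIC = n·log(RSS/n) + 2k (Gaussian likelihood, up to an additive constant). All variable names here are hypothetical stand-ins for the study's climate indices and milk components.

```python
import numpy as np

def aic_linear(X, y):
    """AIC for an OLS fit, up to an additive constant shared by all models."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    k = X.shape[1] + 1                      # coefficients + noise variance
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(2)
n = 120
x1 = rng.standard_normal(n)                 # a relevant "climate index"
x2 = rng.standard_normal(n)                 # an irrelevant one
y = 1.5 * x1 + rng.standard_normal(n)       # "milk yield" driven by x1 only

ones = np.ones((n, 1))
aic_null = aic_linear(ones, y)
aic_x1 = aic_linear(np.column_stack([ones, x1]), y)
aic_full = aic_linear(np.column_stack([ones, x1, x2]), y)
```

The model containing the relevant predictor wins by a wide margin over the intercept-only fit, while adding the irrelevant predictor only pays the 2k penalty for a negligible RSS reduction.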

  11. Improved Sparse Channel Estimation for Cooperative Communication Systems

    Directory of Open Access Journals (Sweden)

    Guan Gui

    2012-01-01

Full Text Available Accurate channel state information (CSI) is necessary at the receiver for coherent detection in amplify-and-forward (AF) cooperative communication systems. To estimate the channel, traditional methods, that is, least squares (LS) and least absolute shrinkage and selection operator (LASSO), are based on assumptions of either a dense channel or a globally sparse channel. However, the LS-based linear method neglects the inherent sparse structure information, while the LASSO-based sparse channel method cannot take full advantage of the prior information. Based on the partial sparse assumption of the cooperative channel model, we propose an improved channel estimation method with a partial sparse constraint. First, by using sparse decomposition theory, channel estimation is formulated as a compressive sensing problem. Second, the cooperative channel is reconstructed by LASSO with the partial sparse constraint. Finally, numerical simulations are carried out to confirm the superiority of the proposed method over global sparse channel estimation methods.
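The global-sparsity LASSO baseline that the authors compare against can be sketched as an l1-regularized recovery problem solved with ISTA (iterative soft-thresholding). The pilot matrix, tap positions, and noise level below are invented, and the paper's partial-sparsity constraint is not reproduced:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """ISTA for min 0.5 * ||A h - y||^2 + lam * ||h||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    h = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ h - y)               # gradient of the quadratic term
        z = h - g / L
        h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return h

rng = np.random.default_rng(3)
n_pilots, n_taps = 64, 128
A = rng.standard_normal((n_pilots, n_taps)) / np.sqrt(n_pilots)  # pilot matrix
h_true = np.zeros(n_taps)
h_true[[5, 40, 90]] = [1.0, -0.8, 0.6]      # a 3-tap sparse channel
y = A @ h_true + 0.01 * rng.standard_normal(n_pilots)

h_hat = ista(A, y, lam=0.05)
err = np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true)
```

With only 64 pilot observations for 128 taps, the l1 penalty still recovers the three active taps, which is the compressive-sensing formulation the abstract refers to.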

  12. Selecting Operations for Assembler Encoding

    Directory of Open Access Journals (Sweden)

    Tomasz Praczyk

    2010-04-01

Full Text Available Assembler Encoding is a neuro-evolutionary method in which a neural network is represented in the form of a simple program called an Assembler Encoding Program. The task of the program is to create the so-called Network Definition Matrix, which maintains all the information necessary to construct the network. To generate Assembler Encoding Programs and the subsequent neural networks, evolutionary techniques are used.
    The performance of Assembler Encoding strongly depends on operations used in Assembler Encoding Programs. To select the most effective operations, experiments in the optimization and the predator-prey problem were carried out. In the experiments, Assembler Encoding Programs equipped with different types of operations were tested. The results of the tests are presented at the end of the paper.

  13. Economy system and operation of a selected retail chain

    OpenAIRE

    KALUSOVÁ, Monika

    2011-01-01

The goal of the thesis Economy system and operation of a selected retail chain is to explore and analyze the sphere in which the retail chain operates and to evaluate its financial situation. At the same time, the selected retail chain is compared with selected competitors in the sector. The first part of the thesis covers the theoretical background, in particular the definition of basic terms of trade and retail. The second part of the thesis includes the application of theoretical knowledg...

  14. EM Adaptive LASSO – A Multilocus Modeling Strategy for Detecting SNPs Associated With Zero-inflated Count Phenotypes

    Directory of Open Access Journals (Sweden)

    Himel eMallick

    2016-03-01

Full Text Available Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeros due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely

  15. Detection of Independent Associations of Plasma Lipidomic Parameters with Insulin Sensitivity Indices Using Data Mining Methodology.

    Directory of Open Access Journals (Sweden)

    Steffi Kopprasch

Full Text Available Glucolipotoxicity is a major pathophysiological mechanism in the development of insulin resistance and type 2 diabetes mellitus (T2D). We aimed to detect subtle changes in the circulating lipid profile by shotgun lipidomics analyses and to associate them with four different insulin sensitivity indices. The cross-sectional study comprised 90 men with a broad range of insulin sensitivity including normal glucose tolerance (NGT, n = 33), impaired glucose tolerance (IGT, n = 32), and newly detected T2D (n = 25). Prior to oral glucose challenge, plasma was obtained and quantitatively analyzed for 198 lipid molecular species from 13 different lipid classes including triacylglycerols (TAGs), phosphatidylcholine plasmalogens/ethers (PC O-s), sphingomyelins (SMs), and lysophosphatidylcholines (LPCs). To identify a lipidomic signature of individual insulin sensitivity we applied three data mining approaches, namely least absolute shrinkage and selection operator (LASSO), Support Vector Regression (SVR), and Random Forests (RF), for the following insulin sensitivity indices: homeostasis model of insulin resistance (HOMA-IR), glucose insulin sensitivity index (GSI), insulin sensitivity index (ISI), and disposition index (DI). The LASSO procedure offers high prediction accuracy and easier interpretability than SVR and RF. After LASSO selection, the plasma lipidome explained 3% (DI) to maximally 53% (HOMA-IR) of the variability of the sensitivity indices. Among the lipid species with the highest positive LASSO regression coefficients were TAG 54:2 (HOMA-IR), PC O- 32:0 (GSI), and SM 40:3:1 (ISI). The highest negative regression coefficients were obtained for LPC 22:5 (HOMA-IR), TAG 51:1 (GSI), and TAG 58:6 (ISI). Although a substantial part of the lipid molecular species showed a significant correlation with insulin sensitivity indices, we were able to identify a limited number of lipid metabolites of particular importance based on the LASSO approach. These few selected lipids with the closest

  16. Distractor Inhibition: Principles of Operation during Selective Attention

    Science.gov (United States)

    Wyatt, Natalie; Machado, Liana

    2013-01-01

    Research suggests that although target amplification acts as the main determinant of the efficacy of selective attention, distractor inhibition contributes under some circumstances. Here we aimed to gain insight into the operating principles that regulate the use of distractor inhibition during selective attention. The results suggest that, in…

  17. Design of an operations manager selection system in service encounter

    Directory of Open Access Journals (Sweden)

    Tanawin Nunthaphanich

    2015-10-01

Full Text Available The purpose of this study is to provide criteria for selecting operations managers at the ‘service encounter’ for mobile telecommunication companies, and to develop a system for this multi-criteria decision-making scheme based on the Analytical Hierarchy Process (AHP). There are three main criteria for evaluating the capability of service-encounter operations managers: (1) the ability to design service processes; (2) the ability to operate service processes; (3) the ability to conduct improvement. The AHP operations manager selection tool was developed based on the complex problems at the service encounter. It was created as a decision support system used to recruit operations managers and to evaluate their capability for the purpose of career advancement.
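The AHP step at the core of such a tool derives priority weights for the criteria from a pairwise comparison matrix via its principal eigenvector, with a consistency check. The comparison values below are invented for illustration, not taken from the study:

```python
import numpy as np

# Pairwise comparisons of the three criteria (design vs operate vs improve).
# A[i, j] = how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)            # principal (Perron) eigenvalue
w = np.abs(vecs[:, k].real)
w = w / w.sum()                     # normalized priority weights

# Saaty consistency ratio: CI / RI, with random index RI = 0.58 for n = 3.
ci = (vals.real.max() - 3) / (3 - 1)
cr = ci / 0.58
```

A consistency ratio below 0.1 is the usual threshold for accepting the judgments; here the invented matrix is nearly consistent, so the weights can be used to score candidates.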

  18. Plant operator selection system for evaluating employment candidates' potential for success in electric power plant operations positions

    International Nuclear Information System (INIS)

    Dunnette, M.D.

    1982-01-01

The Plant Operator Selection System is a battery of tests and questionnaires that can be administered to job candidates in less than three hours. Various components of the battery measure what a job candidate has accomplished in previous educational and work situations, how well a candidate compares with others on a number of important aptitudes or abilities, and whether or not a candidate possesses the kind of personal stability required in power plant operations positions. A job candidate's answers to the tests and questionnaires of the Plant Operator Selection System are scored and converted to an OVERALL POTENTIAL INDEX. Values of the OVERALL POTENTIAL INDEX [OPI] range between 0 and 15. Candidates with high OPI values are much more likely to become effective and successful plant operators than candidates with low OPI values. It is possible to estimate the financial advantages to a company of using the Plant Operator Selection System in evaluating candidates for plant operations jobs.

  19. Research on professional adaptability psychological selection indices of nuclear power plant operators

    International Nuclear Information System (INIS)

    Liu Jingquan; Li Zhe; Li Maoyou

    2010-01-01

Based on the analysis of the work characteristics of nuclear power plant operators and a comparison of professional psychological selection indices for different occupations, the indices of a psychological selection system applicable to nuclear power plant operators are proposed in this paper, using the method named 'taking classes, cross-comparison'. The indices of the suggested psychological selection system reflect all the professional requirements on nuclear power plant operators, and can also be used for recruitment, training, and retraining programs for operators. (authors)

  20. [Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].

    Science.gov (United States)

    Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang

    2011-12-01

To investigate the method of "multi-activity-index evaluation and combination optimization of multiple components" for Chinese herbal formulas. Following the scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, an evolutionary optimization algorithm, and validation experiments, we optimized the combination of Jiangzhi granules based on the activity indices of blood serum ALT, AST, TG, TC, HDL, and LDL, the TG level of liver tissues, and the ratio of liver tissue to body. Analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) for multi-activity-index evaluation was more reasonable and objective: it reflected both the ordering information of the activity indices and the objective sample data. LASSO modeling could accurately capture the relationship between different combinations of Jiangzhi granules and the comprehensive activity indices. The optimized combination of Jiangzhi granules showed better values of the comprehensive activity indices than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.

  1. Program management aid for redundancy selection and operational guidelines

    Science.gov (United States)

    Hodge, P. W.; Davis, W. L.; Frumkin, B.

    1972-01-01

Although this criterion was developed specifically for use on the shuttle program, it has application to many other multi-mission programs (e.g., aircraft or mechanisms). The methodology employed is directly applicable even if the tools (nomographs and equations) are for mission-peculiar cases. The redundancy selection criterion was developed to ensure that both the design and operational cost impacts (life cycle costs) were considered in selecting the quantity of operational redundancy. These tools were developed as aids in expediting the decision process and are not intended as automatic decision makers. This approach to redundancy selection is unique in that it enables a pseudo systems analysis to be performed on an equipment basis without waiting for all designs to be hardened.

  2. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive amount of shrinkage on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.

  3. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) penalties, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the real diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  4. Performance Analysis of Hospitals Affiliated to Mashhad University of Medical Sciences Using the Pabon Lasso Model: A Six-Year-Trend Study

    Directory of Open Access Journals (Sweden)

    Kalhor

    2016-08-01

Full Text Available Background Nowadays, productivity and efficiency are considered a culture and a perspective in both life and work environments; this is the starting point of human development. Objectives The aim of the present study was to investigate the performance of hospitals affiliated to Mashhad University of Medical Sciences using the Pabon Lasso Model. Methods The present study was a descriptive-analytic research with a cross-sectional design, conducted over six years (2009 - 2014) at selected hospitals. The hospitals studied were 21 public hospitals affiliated to Mashhad University of Medical Sciences. The data were obtained from the Treatment Deputy of Khorasan Razavi province. Results The results showed that only 19% of the studied hospitals were located in zone 3 of the diagram, indicating perfect performance. Twenty-eight percent were in zone 1, 19% in zone 2, and 28% in zone 4. Conclusions According to the findings, only a few hospitals are in the desirable zone (zone 3); the rest fell in other zones, which could be a result of poor performance and poor management of hospital resources. Most of the hospitals were in zones 1 and 4, whose characteristics are low bed turnover and longer stays, indicating a higher bed supply than demand for healthcare services, longer hospitalization, less outpatient equipment use, and higher costs.
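The Pabon Lasso classification itself is simple: each hospital is assigned to one of four zones by comparing its bed occupancy and bed turnover against the group means (zone 3, high on both, is the efficient one). A sketch with invented figures:

```python
import numpy as np

def pabon_lasso_zone(occupancy, turnover, occ_mean, turn_mean):
    """Assign a hospital to a Pabon Lasso zone (1-4)."""
    if occupancy < occ_mean and turnover < turn_mean:
        return 1          # underused: low occupancy, low turnover
    if occupancy < occ_mean:
        return 2          # high turnover but low occupancy (short stays)
    if turnover >= turn_mean:
        return 3          # efficient: high occupancy, high turnover
    return 4              # high occupancy, low turnover (long stays)

occ = np.array([55.0, 62.0, 88.0, 91.0])     # % bed occupancy (invented)
turn = np.array([10.0, 35.0, 40.0, 12.0])    # patients per bed per year (invented)
zones = [pabon_lasso_zone(o, t, occ.mean(), turn.mean())
         for o, t in zip(occ, turn)]
```

With these invented figures the four hospitals land in zones 1 through 4, matching the abstract's reading that zones 1 and 4 share low bed turnover while differing in occupancy.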

  5. Integrative Modeling and Inference in High Dimensional Genomic and Metabolic Data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper

    in Manuscript I preserves the attributes of the compounds found in LC–MS samples while identifying genes highly associated with these. The main obstacles that must be overcome with this approach are dimension reduction and variable selection, here done with PARAFAC and LASSO respectively. One important drawback...... of the LASSO has been the lack of inference, the variables selected could potentially just be the most important from a set of non–important variables. Manuscript II addresses this problem with a permutation based significance test for the variables chosen by the LASSO. Once a set of relevant variables has......, particularly it scales to many lists and it provides an intuitive interpretation of the measure....

  6. Gene expression network reconstruction by convex feature selection when incorporating genetic perturbations.

    Directory of Open Access Journals (Sweden)

    Benjamin A Logsdon

Full Text Available Cellular gene expression measurements contain regulatory information that can be used to discover novel network relationships. Here, we present a new algorithm for network reconstruction powered by the adaptive lasso, a theoretically and empirically well-behaved method for selecting the regulatory features of a network. Any algorithm designed for network discovery that makes use of directed probabilistic graphs requires perturbations, produced by either experiments or naturally occurring genetic variation, to successfully infer unique regulatory relationships from gene expression data. Our approach makes use of appropriately selected cis-expression Quantitative Trait Loci (cis-eQTL), which provide a sufficient set of independent perturbations for maximum network resolution. We compare the performance of our network reconstruction algorithm to four other approaches: the PC-algorithm, QTLnet, the QDG algorithm, and the NEO algorithm, all of which have been used to reconstruct directed networks among phenotypes leveraging QTL. We show that the adaptive lasso can outperform these algorithms for networks of ten genes and ten cis-eQTL, and is competitive with the QDG algorithm for networks with thirty genes and thirty cis-eQTL, with rich topologies and hundreds of samples. Using this novel approach, we identify unique sets of directed relationships in Saccharomyces cerevisiae when analyzing genome-wide gene expression data for an intercross between a wild strain and a lab strain. We recover novel putative network relationships between a tyrosine biosynthesis gene (TYR1) and genes involved in endocytosis (RCY1), the spindle checkpoint (BUB2), sulfonate catabolism (JLP1), and cell-cell communication (PRM7). Our algorithm provides a synthesis of feature selection methods and graphical model theory that has the potential to reveal new directed regulatory relationships from the analysis of population-level genetic and gene expression data.

  7. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    Science.gov (United States)

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.

  8. Psychophysiological and psychological criteria for professional selection of reactor operators - a review

    International Nuclear Information System (INIS)

    Jonkova, A.

    1992-01-01

The professional activity of reactor operators is characterized according to some modern classification schemes. The NPP operation staff classification containing groups of operators-supervisors, operators-manipulators, and operators-managers (Zinchenko and Munipov) is discussed. The author specifies the functions of the operators-supervisors in more detail. The requirements for the selection of operators are given, and the significance of mind, memory, attention, differentiative sensomotorics, and emotional and stress stability is emphasized. A set of criteria for the professional selection of reactor operators is proposed, based on the definition of reliability concepts: adequacy, timeliness, and lack of outages. The requirements for the psychological structure of the personality are also grounded in the collective character of this kind of professional activity. 28 refs., 1 fig. (A.B.)

  9. Improving inspection reliability through operator selection and training

    International Nuclear Information System (INIS)

    McGrath, Bernard; Carter, Luke

    2013-01-01

A number of years ago the UK's Health and Safety Executive sponsored a series of three PANI projects investigating the application of manual ultrasonics, which endeavoured to establish the necessary steps to ensure a reliable inspection is performed. The results of the three projects were each reported separately on completion and were also presented at a number of international conferences. This paper summarises the results of these projects from the point of view of operator performance. The correlation of operator ultrasonic performance with the results of aptitude tests is presented, along with observations on the impact of training and qualifications of the operators. The results lead to conclusions on how the selection and training of operators could be modified to improve the reliability of inspections.

  10. Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels (Conference Presentation)

    Science.gov (United States)

    Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey

    2017-02-01

Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most of the previous diagnostic algorithms have typically been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber selection based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably-sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or the least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total cohort of 645 confirmed lesions from 573 patients encompassing skin cancers, precancers and benign skin lesions were included. Lesion measurements were divided into a training cohort (n = 518) and a testing cohort (n = 127) according to the measurement time. Result: The area under the receiver operating characteristic (ROC) curve improved from 0.861-0.891 to 0.891-0.911 and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.

  11. Geographically weighted lasso (GWL) study for modeling the diarrheic to achieve open defecation free (ODF) target

    Science.gov (United States)

    Arumsari, Nurvita; Sutidjo, S. U.; Brodjol; Soedjono, Eddy S.

    2014-03-01

Diarrhea has been one of the main causes of morbidity and mortality among children around the world, especially in developing countries. The available data showed that implementation of sanitary and healthy lifestyles by the inhabitants was not yet good. Inadequate environmental conditions and limited availability of health services were suspected factors influencing the occurrence of diarrhea, reflected in a heightened percentage of diarrheic cases. This research is aimed at modelling the diarrheic by using the Geographically Weighted Lasso (GWL) method. The existence of spatial heterogeneity, tested by Breusch-Pagan, showed that modelling the diarrheic with weighted regression, specifically GWR and GWL, can explain the variation in each location. However, the absence of multicollinearity among the predictor variables affecting the diarrheic resulted in the GWR and GWL models being essentially identical, as shown by the resulting MSE values, while the R² value, which is usually higher for the GWL model, indicated significant predictor variables based on more parametric shrinkage.
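The geographic weighting at the heart of GWR/GWL gives each location its own regression, with observations down-weighted by distance, commonly through a Gaussian kernel. A sketch with invented coordinates and bandwidth:

```python
import numpy as np

def gaussian_weights(coords, target, bandwidth):
    """Gaussian kernel weights of all observations relative to one location."""
    d = np.linalg.norm(coords - target, axis=1)   # distances to the target site
    return np.exp(-0.5 * (d / bandwidth) ** 2)

# Three observation sites (invented map coordinates).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
w = gaussian_weights(coords, np.array([0.0, 0.0]), bandwidth=2.0)
```

These weights enter the local (lasso-penalized, in GWL) least-squares fit at each location, so nearby districts dominate the local estimate while distant ones contribute almost nothing.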

  12. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
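
    The reweighting idea behind the adaptive LASSO can be illustrated in closed form. This is a hedged sketch, not the paper's time-series setup: for an orthonormal design the lasso reduces to coordinate-wise soft-thresholding of the OLS estimate, and the adaptive lasso simply rescales the threshold per coordinate with data-driven weights w_j = 1/|b_j|^gamma. All data, `lam`, and `gamma` below are invented for the example.

```python
import numpy as np

def soft(z, t):
    # elementwise soft-thresholding S(z, t) = sign(z) * max(|z| - t, 0)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(200, 5)))
X = np.sqrt(200) * Q                        # orthonormal design: X.T @ X = 200 * I
true = np.array([3.0, -2.0, 0.0, 0.0, 0.0])
y = X @ true + 0.1 * rng.normal(size=200)

ols = X.T @ y / 200                         # OLS coordinates
lam, gamma = 0.5, 1.0
plain = soft(ols, lam)                      # lasso: one threshold everywhere
adaptive = soft(ols, lam / np.abs(ols) ** gamma)  # adaptive weights relax the
print(plain.round(2), adaptive.round(2))          # penalty on strong signals
```

The output shows why the adaptive weights matter: both variants zero out the null coordinates, but the adaptive version shrinks the large coefficients far less, which is the intuition behind the oracle property.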

  13. Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection

    KAUST Repository

    Chen, Lisha

    2012-12-01

    The reduced-rank regression is an effective method in predicting multiple response variables from the same set of predictor variables. It reduces the number of model parameters and takes advantage of interrelations between the response variables and hence improves predictive accuracy. We propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty. We apply a group-lasso type penalty that treats each row of the matrix of the regression coefficients as a group and show that this penalty satisfies certain desirable invariance properties. We develop two numerical algorithms to solve the penalized regression problem and establish the asymptotic consistency of the proposed method. In particular, the manifold structure of the reduced-rank regression coefficient matrix is considered and studied in our theoretical analysis. In our simulation study and real data analysis, the new method is compared with several existing variable selection methods for multivariate regression and exhibits competitive performance in prediction and variable selection. © 2012 American Statistical Association.

  14. Model selection emphasises the importance of non-chromosomal information in genetic studies.

    Directory of Open Access Journals (Sweden)

    Reda Rawi

    Full Text Available Ever since the case of the missing heritability was highlighted some years ago, scientists have been investigating various possible explanations for the issue. However, none of these explanations include non-chromosomal genetic information. Here we describe explicitly how chromosomal and non-chromosomal modifiers collectively influence the heritability of a trait, in this case, the growth rate of yeast. Our results show that the non-chromosomal contribution can be large, adding another dimension to the estimation of heritability. We also discovered, combining the strength of LASSO with model selection, that the interaction of chromosomal and non-chromosomal information is essential in describing phenotypes.

  15. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF‐weights). This procedure directly...... shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF‐weights) and call the combination of weights the doubly weighted SVR. We illustrate...... the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...

  16. On the effect of emotional states on operator thinking. [psychological test for operator selection

    Science.gov (United States)

    Solodkova, A. V.

    1975-01-01

    A combined sonic and electrical skin-stimulus stress test is reported that is suitable for the psychological selection of individuals to perform operator functions. The behavior of suitable individuals is characterized by a fighting spirit, increased work capacity, minimum expenditure of strength, and insignificant fatigue.

  17. Risk Prediction Using Genome-Wide Association Studies on Type 2 Diabetes

    Directory of Open Access Journals (Sweden)

    Sungkyoung Choi

    2016-12-01

    Full Text Available The success of genome-wide association studies (GWASs) has enabled us to improve risk assessment and provide novel genetic variants for diagnosis, prevention, and treatment. However, most variants discovered by GWASs have been reported to have very small effect sizes on complex human diseases, which has been a big hurdle in building risk prediction models. Recently, many statistical approaches based on penalized regression have been developed to solve the “large p and small n” problem. In this report, we evaluated the performance of several statistical methods for predicting a binary trait: stepwise logistic regression (SLR), the least absolute shrinkage and selection operator (LASSO), and Elastic-Net (EN). We first built a prediction model by combining variable selection and prediction methods for type 2 diabetes using the Affymetrix Genome-Wide Human SNP Array 5.0 from the Korean Association Resource project. We assessed the risk prediction performance using the area under the receiver operating characteristic curve (AUC) for the internal and external validation datasets. In the internal validation, SLR-LASSO and SLR-EN tended to yield more accurate predictions than other combinations. During the external validation, the SLR-SLR and SLR-EN combinations achieved the highest AUC of 0.726. We propose these combinations as a potentially powerful risk prediction model for type 2 diabetes.
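
    A hedged sketch of the LASSO/Elastic-Net comparison on a binary trait, using scikit-learn's penalized logistic regression and AUC. The synthetic 0/1/2 "SNP" matrix, the effect sizes, and the penalty settings below are invented stand-ins for the KARE data, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for SNP data: minor-allele counts in {0, 1, 2};
# only the first 5 of 100 "SNPs" carry signal.
rng = np.random.default_rng(7)
X = rng.integers(0, 3, size=(600, 100)).astype(float)
logit = X[:, :5] @ np.array([0.8, -0.6, 0.5, 0.7, -0.9])
y = (rng.random(600) < 1 / (1 + np.exp(-(logit - logit.mean())))).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LASSO": LogisticRegression(penalty="l1", C=0.5, solver="liblinear"),
    "EN": LogisticRegression(penalty="elasticnet", C=0.5, l1_ratio=0.5,
                             solver="saga", max_iter=5000),
}
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
print(aucs)   # both comfortably above 0.5 on this toy signal
```

On real GWAS scales a pre-screening step (the SLR stage above) would precede the penalized fit; here the penalized models are fit directly for brevity.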

  18. Objective ARX Model Order Selection for Multi-Channel Human Operator Identification

    NARCIS (Netherlands)

    Roggenkämper, N; Pool, D.M.; Drop, F.M.; van Paassen, M.M.; Mulder, M.

    2016-01-01

    In manual control, the human operator primarily responds to visual inputs but may elect to make use of other available feedback paths such as physical motion, adopting a multi-channel control strategy. Human operator identification procedures generally require a priori selection of the model

  19. A Focus on Far Eastern Tourists – Tour Operator Selection Criteria

    Directory of Open Access Journals (Sweden)

    Ayşe Çelik

    2014-03-01

    Full Text Available Tour operators are becoming more important in the long-haul destination market. Identifying tour operator selection criteria is crucial for orienting marketing strategies. The aim of the study was to determine the tour operator selection criteria of package holiday makers visiting Turkey from the Far East according to nationality. Data were collected and analyzed from tourists from three countries, namely Japan, South Korea, and China, between February and April 2013 in Cappadocia. A quantitative methodology employing one-way ANOVA analysis was used. Analysis of the tour operator selection criteria data showed that nationality was not a meaningful differentiator in tourists' assessment of the “Service quality” and “Opportunity to interact with other people” items referred to in the survey questionnaire. The other items in the resulting data gave meaningful differences in cross-cultural behaviour. Results from the study provide important cues for tour operator managers to consider developing different promotional strategy initiatives to engage and attract more Japanese, South Korean, and Chinese tourists to Turkey.

  20. Poster: Brush, Lasso, or Magic Wand? Picking the Right Tool for Large-Scale Multiple Object Selection Tasks

    DEFF Research Database (Denmark)

    Stenholt, Rasmus; Madsen, Claus B.

    2012-01-01

    are presented with a range of different geometric layouts of selection targets, to investigate the pros and cons of each of the MOS techniques. The evaluation shows that the magic wand is significantly faster to use than the other techniques, however the quality of the magic wand's selections is highly...

  1. Relationships Between the External and Internal Training Load in Professional Soccer: What Can We Learn From Machine Learning?

    Science.gov (United States)

    Jaspers, Arne; Beéck, Tim Op De; Brink, Michel S; Frencken, Wouter G P; Staes, Filip; Davis, Jesse J; Helsen, Werner F

    2017-12-28

    Machine learning may contribute to understanding the relationship between the external load and internal load in professional soccer. Therefore, the relationship between external load indicators and the rating of perceived exertion (RPE) was examined using machine learning techniques on a group and individual level. Training data were collected from 38 professional soccer players over two seasons. The external load was measured using global positioning system technology and accelerometry. The internal load was obtained using the RPE. Predictive models were constructed using two machine learning techniques, artificial neural networks (ANNs) and least absolute shrinkage and selection operator (LASSO), and one naive baseline method. The predictions were based on a large set of external load indicators. Using each technique, one group model involving all players and one individual model for each player was constructed. These models' performance on predicting the reported RPE values for future training sessions was compared to the naive baseline's performance. Both the ANN and LASSO models outperformed the baseline. Additionally, the LASSO model made more accurate predictions for the RPE than the ANN model. Furthermore, decelerations were identified as important external load indicators. Regardless of the applied machine learning technique, the group models resulted in equivalent or better predictions for the reported RPE values than the individual models. Machine learning techniques may have added value in predicting the RPE for future sessions to optimize training design and evaluation. Additionally, these techniques may be used in conjunction with expert knowledge to select key external load indicators for load monitoring.
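
    The LASSO-versus-naive-baseline comparison described above can be sketched as follows. This assumes scikit-learn is available; the indicator matrix, coefficients, and `alpha` are illustrative placeholders, not the study's GPS/accelerometer data or tuning.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical external-load indicators -> RPE; all names, sizes and
# coefficients here are invented for the sketch.
rng = np.random.default_rng(3)
n = 300
X = rng.normal(size=(n, 20))                 # e.g. distances, deceleration counts
rpe = np.clip(5 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * rng.normal(size=n), 1, 10)
X_tr, X_te, y_tr, y_te = X[:200], X[200:], rpe[:200], rpe[200:]

model = Lasso(alpha=0.05).fit(X_tr, y_tr)    # sparse linear map, load -> RPE
mae_lasso = np.mean(np.abs(model.predict(X_te) - y_te))
mae_naive = np.mean(np.abs(y_tr.mean() - y_te))   # baseline: always predict mean RPE
print(mae_lasso, mae_naive)                  # the sparse model should do better
```

The nonzero entries of `model.coef_` play the role of the "key external load indicators" the abstract mentions: indicators the L1 penalty does not zero out.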

  2. Novel high-resolution computed tomography-based radiomic classifier for screen-identified pulmonary nodules in the National Lung Screening Trial.

    Science.gov (United States)

    Peikert, Tobias; Duan, Fenghai; Rajagopalan, Srinivasan; Karwoski, Ronald A; Clay, Ryan; Robb, Richard A; Qin, Ziling; Sicks, JoRean; Bartholmai, Brian J; Maldonado, Fabien

    2018-01-01

    Optimization of the clinical management of screen-detected lung nodules is needed to avoid unnecessary diagnostic interventions. Herein we demonstrate the potential value of a novel radiomics-based approach for the classification of screen-detected indeterminate nodules. Independent quantitative variables assessing various radiologic nodule features such as sphericity, flatness, elongation, spiculation, lobulation and curvature were developed from the NLST dataset using 726 indeterminate nodules (all ≥ 7 mm, benign, n = 318 and malignant, n = 408). Multivariate analysis was performed using the least absolute shrinkage and selection operator (LASSO) method for variable selection and regularization in order to enhance the prediction accuracy and interpretability of the multivariate model. The bootstrapping method was then applied for the internal validation and the optimism-corrected AUC was reported for the final model. Eight of the originally considered 57 quantitative radiologic features were selected by LASSO multivariate modeling. These 8 features include variables capturing Location: vertical location (Offset carina centroid z), Size: volume estimate (Minimum enclosing brick), Shape: flatness, Density: texture analysis (Score Indicative of Lesion/Lung Aggression/Abnormality (SILA) texture), and surface characteristics: surface complexity (Maximum shape index and Average shape index), and estimates of surface curvature (Average positive mean curvature and Minimum mean curvature), all with significant P values. This radiomics-based approach to screen-detected nodule characterization appears extremely promising; however, independent external validation is needed.

  3. A Method Based on Intuitionistic Fuzzy Dependent Aggregation Operators for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Fen Wang

    2013-01-01

    Full Text Available Recently, resolving the decision making problem of evaluating and ranking potential suppliers has become a key strategic factor for business firms. In this paper, two new intuitionistic fuzzy aggregation operators are developed: the dependent intuitionistic fuzzy ordered weighted averaging (DIFOWA) operator and the dependent intuitionistic fuzzy hybrid weighted aggregation (DIFHWA) operator. Some of their main properties are studied. A method based on the DIFHWA operator for intuitionistic fuzzy multiple attribute decision making is presented. Finally, an illustrative example concerning supplier selection is given.

  4. Selective application of revised source terms to operating nuclear power plants

    International Nuclear Information System (INIS)

    Moon, Joo Hyun; Song, Jae Hyuk; Lee, Young Wook; Ko, Hyun Seok; Kang, Chang Sun

    2001-01-01

    More than 30 years after TID-14844 was promulgated in 1962, there has been a major change in the US NRC's regulatory position on using accident source terms for radiological assessment following a design basis accident (DBA). To replace the instantaneous source terms of TID-14844, the time-dependent source terms of NUREG-1465 were published in 1995. In the meantime, the radiological acceptance criteria for reactor site evaluation in 10 CFR Part 100 were also revised. In particular, the concept of total effective dose equivalent has been incorporated in accordance with the radiation protection standards set forth in the revised 10 CFR Part 20. Subsequently, the publication of Regulatory Guide 1.183 and the revision of Standard Review Plan 15.0.1 followed in 2000, providing licensees of operating nuclear power reactors with acceptable guidance for applying the revised source terms. The guidance allowed the holder of an operating license issued prior to January 10, 1997 to voluntarily revise the accident source terms used in the radiological consequence analyses of DBAs. Regarding the type of application, full and selective applications were suggested. Whether full or selective, based upon the scope and nature of the associated plant modifications being proposed, the actual application of the revised source terms to an operating plant is expected to have a large impact on its facility design basis. Considering the scope and cost of the analyses required for licensing, selective application seems more appealing to a licensee of an operating plant than full application. In this paper, therefore, the selective application methodology is reviewed and actually applied to the assessment of the offsite radiological consequences following a LOCA at Ulchin Units 3 and 4, in order to identify and analyze the potential impacts of applying the revised source terms and to assess the considerations taken in each application prior to its actual

  5. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of selecting true ODE models, the efficiency of the parameter estimation compared to simply using the full and true models, and coverage probabilities of the estimated confidence intervals for ODE parameters, all of which have satisfactory performances. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system among some well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.

  6. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages to the similar methodology for the proportional hazards model. One complication compared...... and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number...... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare...

  7. Finger vein recognition with personalized feature selection.

    Science.gov (United States)

    Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Meng, Xianjing

    2013-08-22

    Finger veins are a promising biometric pattern for personalized identification in terms of their advantages over existing biometrics. Based on the spatial pyramid representation and the combination of more effective information such as gray, texture and shape, this paper proposes a simple but powerful feature, called Pyramid Histograms of Gray, Texture and Orientation Gradients (PHGTOG). For a finger vein image, PHGTOG can reflect the global spatial layout and local details of gray, texture and shape. To further improve the recognition performance and reduce the computational complexity, we select a personalized subset of features from PHGTOG for each subject by using the sparse weight vector, which is trained by using LASSO and called PFS-PHGTOG. We conduct extensive experiments to demonstrate the promise of PHGTOG and PFS-PHGTOG; experimental results on our databases show that PHGTOG outperforms the other existing features. Moreover, PFS-PHGTOG can further boost the performance in comparison with PHGTOG.

  8. Finger Vein Recognition with Personalized Feature Selection

    Directory of Open Access Journals (Sweden)

    Xianjing Meng

    2013-08-01

    Full Text Available Finger veins are a promising biometric pattern for personalized identification in terms of their advantages over existing biometrics. Based on the spatial pyramid representation and the combination of more effective information such as gray, texture and shape, this paper proposes a simple but powerful feature, called Pyramid Histograms of Gray, Texture and Orientation Gradients (PHGTOG). For a finger vein image, PHGTOG can reflect the global spatial layout and local details of gray, texture and shape. To further improve the recognition performance and reduce the computational complexity, we select a personalized subset of features from PHGTOG for each subject by using the sparse weight vector, which is trained by using LASSO and called PFS-PHGTOG. We conduct extensive experiments to demonstrate the promise of PHGTOG and PFS-PHGTOG; experimental results on our databases show that PHGTOG outperforms the other existing features. Moreover, PFS-PHGTOG can further boost the performance in comparison with PHGTOG.

  9. Regularized rare variant enrichment analysis for case-control exome sequencing data.

    Science.gov (United States)

    Larson, Nicholas B; Schaid, Daniel J

    2014-02-01

    Rare variants have recently garnered an immense amount of attention in genetic association analysis. However, unlike methods traditionally used for single marker analysis in GWAS, rare variant analysis often requires some method of aggregation, since single marker approaches are poorly powered for typical sequencing study sample sizes. Advancements in sequencing technologies have rendered next-generation sequencing platforms a realistic alternative to traditional genotyping arrays. Exome sequencing in particular not only provides base-level resolution of genetic coding regions, but also a natural paradigm for aggregation via genes and exons. Here, we propose the use of penalized regression in combination with variant aggregation measures to identify rare variant enrichment in exome sequencing data. In contrast to marginal gene-level testing, we simultaneously evaluate the effects of rare variants in multiple genes, focusing on gene-based least absolute shrinkage and selection operator (LASSO) and exon-based sparse group LASSO models. By using gene membership as a grouping variable, the sparse group LASSO can be used as a gene-centric analysis of rare variants while also providing a penalized approach toward identifying specific regions of interest. We apply extensive simulations to evaluate the performance of these approaches with respect to specificity and sensitivity, comparing these results to multiple competing marginal testing methods. Finally, we discuss our findings and outline future research. © 2013 WILEY PERIODICALS, INC.
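
    A minimal sketch of the group-penalty idea above: proximal gradient descent with a group soft-threshold, so all variants in a "gene" (group) enter or leave the model together. This is a generic group lasso in plain NumPy, not the authors' gene-based LASSO or sparse group LASSO implementation; the data, `lam`, and group layout are invented.

```python
import numpy as np

def group_lasso(X, y, groups, lam=0.5, n_iter=500):
    """Proximal-gradient (ISTA) group lasso: each prox step shrinks
    whole coefficient groups toward zero, zeroing out entire groups."""
    n, p = X.shape
    beta = np.zeros(p)
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n)[-1]   # 1 / Lipschitz constant
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y)) / n    # gradient step
        for g in np.unique(groups):
            idx = groups == g
            norm = np.linalg.norm(z[idx])
            scale = max(0.0, 1.0 - step * lam * np.sqrt(idx.sum()) / (norm + 1e-12))
            beta[idx] = scale * z[idx]                  # group soft-threshold
    return beta

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 12))
groups = np.repeat(np.arange(4), 3)                     # 4 "genes", 3 variants each
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.normal(size=120)
beta = group_lasso(X, y, groups)
print([round(float(np.linalg.norm(beta[groups == g])), 3) for g in range(4)])
```

Only the first group (the signal-carrying "gene") should retain a sizable coefficient norm; the sparse group LASSO of the abstract adds a within-group L1 term on top of this penalty.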

  10. Schwarzian conditions for linear differential operators with selected differential Galois groups

    International Nuclear Information System (INIS)

    Abdelaziz, Y; Maillard, J-M

    2017-01-01

    We show that non-linear Schwarzian differential equations emerging from covariance symmetry conditions imposed on linear differential operators with hypergeometric function solutions can be generalized to arbitrary order linear differential operators with polynomial coefficients having selected differential Galois groups. For order three and order four linear differential operators we show that this pullback invariance up to conjugation eventually reduces to symmetric powers of an underlying order-two operator. We give, precisely, the conditions to have modular correspondences solutions for such Schwarzian differential equations, which was an open question in a previous paper. We analyze in detail a pullbacked hypergeometric example generalizing modular forms, that ushers a pullback invariance up to operator homomorphisms. We finally consider the more general problem of the equivalence of two different order-four linear differential Calabi–Yau operators up to pullbacks and conjugation, and clarify the cases where they have the same Yukawa couplings. (paper)

  11. Schwarzian conditions for linear differential operators with selected differential Galois groups

    Science.gov (United States)

    Abdelaziz, Y.; Maillard, J.-M.

    2017-11-01

    We show that non-linear Schwarzian differential equations emerging from covariance symmetry conditions imposed on linear differential operators with hypergeometric function solutions can be generalized to arbitrary order linear differential operators with polynomial coefficients having selected differential Galois groups. For order three and order four linear differential operators we show that this pullback invariance up to conjugation eventually reduces to symmetric powers of an underlying order-two operator. We give, precisely, the conditions to have modular correspondences solutions for such Schwarzian differential equations, which was an open question in a previous paper. We analyze in detail a pullbacked hypergeometric example generalizing modular forms, that ushers a pullback invariance up to operator homomorphisms. We finally consider the more general problem of the equivalence of two different order-four linear differential Calabi-Yau operators up to pullbacks and conjugation, and clarify the cases where they have the same Yukawa couplings.

  12. Diseño de un modelo de descripción, valoración, clasificación y remuneración de puestos para la empresa Novacero S.A., planta Lasso

    OpenAIRE

    Cajas Garzón, Alexandra Maribel

    2012-01-01

    208 leaves : illustrations, 29 x 21 cm. The objective of this degree project is to design a model for job description, evaluation, classification and compensation, applying the HAY methodology of job evaluation by profiles and scales, for the company NOVACERO S.A., Planta Lasso. A process map was defined, considering the processes oriented to satisfying the needs of internal and external customers, which is a basic input to proceed with the identification ...

  13. Selective use of peri-operative steroids in pituitary tumor surgery: escape from dogma

    Directory of Open Access Journals (Sweden)

    Jacqueline Marie Regan

    2013-03-01

    Full Text Available Objective: Traditional neurosurgical practice calls for administration of peri-operative stress-dose steroids for patients with sellar-suprasellar masses undergoing operative treatment. This practice is considered critical to prevent peri-operative complications associated with hypoadrenalism, such as hypotension and circulatory collapse. However, stress-dose steroids complicate the management of these patients. It has been our routine practice to use stress steroids during surgery only if the patient has clinical or biochemical evidence of hypocortisolism pre-operatively. We wanted to be certain that this practice was safe. Methods: We present our retrospective analysis from a consecutive series of 114 operations in 109 patients with sellar and/or suprasellar tumors, the majority of whom were managed without empirical stress-dose steroid coverage. Only patients who were hypoadrenal pre-operatively or who had suffered apoplexy were given stress-dose coverage during surgery. We screened for biochemical evidence of hypoadrenalism as a result of surgery by measuring immediate post-operative AM serum cortisol levels. Results: There were no adverse events related to the selective use of cortisol replacement in this patient population. Conclusions: Our experience demonstrates that selective use of corticosteroid replacement is safe; it simplifies the management of the patients, and has advantages over empiric dogmatic steroid coverage.

  14. Do Red Edge and Texture Attributes from High-Resolution Satellite Data Improve Wood Volume Estimation in a Semi-Arid Mountainous Region?

    DEFF Research Database (Denmark)

    Schumacher, Paul; Mislimshoeva, Bunafsha; Brenning, Alexander

    2016-01-01

    to overcome this issue. However, clear recommendations on the suitability of specific proxies to provide accurate biomass information in semi-arid to arid environments are still lacking. This study contributes to the understanding of using multispectral high-resolution satellite data (RapidEye), specifically...... red edge and texture attributes, to estimate wood volume in semi-arid ecosystems characterized by scarce vegetation. LASSO (Least Absolute Shrinkage and Selection Operator) and random forest were used as predictive models relating in situ-measured aboveground standing wood volume to satellite data...

  15. Detection of shielded radionuclides from weak and poorly resolved spectra using group positive RIVAL

    International Nuclear Information System (INIS)

    Kump, Paul; Bai, Er-Wei; Chan, Kung-Sik; Eichinger, William

    2013-01-01

    This paper is concerned with the identification of nuclides from weak and poorly resolved spectra in the presence of unknown radiation shielding materials such as carbon, water, concrete and lead. Since a shield will attenuate lower energies more so than higher ones, isotope sub-spectra must be introduced into models and into detection algorithms. We propose a new algorithm for detection, called group positive RIVAL, that encourages the selection of groups of sub-spectra rather than the selection of individual sub-spectra that may be from the same parent isotope. Indeed, the proposed algorithm incorporates group positive LASSO, and, as such, we supply the consistency results of group positive LASSO and adaptive group positive LASSO. In an example employing various shielding materials and material thicknesses, group positive RIVAL is shown to perform well in all scenarios with the exception of ones in which the shielding material is lead. - Highlights: ► Identification of nuclides from weak and poorly resolved spectra. ► Shielding materials such as carbon, water, concrete, and lead are considered. ► Isotope spectra are decomposed into their sub-spectra. ► A variable selection algorithm is proposed that encourages group selection. ► Simulations demonstrate the proposed method's performance when nuclides have been shielded

  16. Adaptive L1/2 Shooting Regularization Method for Survival Analysis Using Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Xiao-Ying Liu

    2013-01-01

    Full Text Available A new adaptive L1/2 shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. This adaptive L1/2 shooting algorithm can be easily obtained by optimizing a reweighted iterative series of L1 penalties with a shooting strategy for the L1/2 penalty. Simulation results based on high-dimensional artificial data show that the adaptive L1/2 shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. Results from a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
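
    The "shooting" strategy referenced above is cyclic coordinate descent with a soft-threshold update. Below is a plain-NumPy sketch of the standard L1 shooting algorithm, the building block that the paper reweights; the adaptive L1/2 reweighting itself is not reproduced, and the data are synthetic.

```python
import numpy as np

def soft_threshold(z, t):
    # S(z, t) = sign(z) * max(|z| - t, 0): the scalar lasso update
    return np.sign(z) * max(abs(z) - t, 0.0)

def shooting_lasso(X, y, lam=0.1, n_sweeps=100):
    """Fu's 'shooting' algorithm: cyclically update one coefficient at a
    time, soft-thresholding its partial-residual least-squares value."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n                  # per-coordinate curvature
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]     # partial residual
            beta[j] = soft_threshold(X[:, j] @ r_j / n, lam) / col_ss[j]
    return beta

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 8))
y = 2.5 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=150)
beta = shooting_lasso(X, y)
print(np.round(beta, 2))    # large weights only at positions 0 and 3
```

Reweighting `lam` per coordinate across outer iterations (as the adaptive scheme does) reuses exactly this inner loop.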

  17. Interval 2-Tuple Linguistic Distance Operators and Their Applications to Supplier Evaluation and Selection

    Directory of Open Access Journals (Sweden)

    Meng-Meng Shan

    2016-01-01

    Full Text Available With respect to multicriteria supplier selection problems with interval 2-tuple linguistic information, a new decision making approach that uses distance measures is proposed. Motivated by the ordered weighted distance (OWD) measures, in this paper, we develop some interval 2-tuple linguistic distance operators such as the interval 2-tuple weighted distance (ITWD), the interval 2-tuple ordered weighted distance (ITOWD), and the interval 2-tuple hybrid weighted distance (ITHWD) operators. These aggregation operators are very useful for the treatment of input data in the form of interval 2-tuple linguistic variables. We study some desirable properties of the ITOWD operator and further generalize it by using the generalized and the quasi-arithmetic means. Finally, the new approach is utilized to complete a supplier selection study for an actual hospital from the healthcare industry.

  18. Controlling Working Memory Operations by Selective Gating: The Roles of Oscillations and Synchrony

    Science.gov (United States)

    Dipoppa, Mario; Szwed, Marcin; Gutkin, Boris S.

    2016-01-01

    Working memory (WM) is a primary cognitive function that corresponds to the ability to update, stably maintain, and manipulate short-term memory (STM) rapidly to perform ongoing cognitive tasks. A prevalent neural substrate of WM coding is persistent neural activity, the property of neurons to remain active after having been activated by a transient sensory stimulus. This persistent activity allows for online maintenance of memory as well as its active manipulation necessary for task performance. WM is tightly capacity limited. Therefore, selective gating of sensory and internally generated information is crucial for WM function. While the exact neural substrate of selective gating remains unclear, increasing evidence suggests that it might be controlled by modulating ongoing oscillatory brain activity. Here, we review experiments and models that linked selective gating, persistent activity, and brain oscillations, putting them in the more general mechanistic context of WM. We do so by defining several operations necessary for successful WM function and then discussing how such operations may be carried out by mechanisms suggested by computational models. We specifically show how oscillatory mechanisms may provide a rapid and flexible active gating mechanism for WM operations. PMID:28154616

  19. Joint effect of unlinked genotypes: application to type 2 diabetes in the EPIC-Potsdam case-cohort study.

    Science.gov (United States)

    Knüppel, Sven; Meidtner, Karina; Arregui, Maria; Holzhütter, Hermann-Georg; Boeing, Heiner

    2015-07-01

    Analyzing multiple single nucleotide polymorphisms (SNPs) is a promising approach to finding genetic effects beyond single-locus associations. We proposed the use of multilocus stepwise regression (MSR) to screen for allele combinations as a method to model joint effects, and compared the results with the often-used genetic risk score (GRS), conventional stepwise selection, and the shrinkage method LASSO. In contrast to MSR, the GRS, conventional stepwise selection, and LASSO model each genotype by its risk allele dose. We reanalyzed 20 unlinked SNPs related to type 2 diabetes (T2D) in the EPIC-Potsdam case-cohort study (760 cases, 2193 noncases). No SNP-SNP interactions and no nonlinear effects were found. Two SNP combinations selected by MSR (Nagelkerke's R² = 0.050 and 0.048) included eight SNPs with a mean allele combination frequency of 2%. GRS and stepwise selection selected nearly the same SNP combinations, consisting of 12 and 13 SNPs (Nagelkerke's R² ranged from 0.020 to 0.029). LASSO showed similar results. The MSR method showed the best model fit as measured by Nagelkerke's R², suggesting that further improvement may render this method a useful tool in genetic research. However, our comparison suggests that the GRS is a simple way to model genetic effects when, as here, there is no linkage and there are no SNP-SNP interactions or non-linear effects. © 2015 John Wiley & Sons Ltd/University College London.
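
    A genetic risk score of the kind compared here is just a (possibly weighted) sum of risk-allele doses. The sketch below is a minimal illustration under that assumption; the function name and the example doses are hypothetical, not taken from the EPIC-Potsdam data.

```python
def genetic_risk_score(doses, weights=None):
    """Weighted genetic risk score.

    doses   : risk-allele counts per SNP (0, 1, or 2)
    weights : per-SNP effect sizes; defaults to an unweighted allele count
    """
    if weights is None:
        weights = [1.0] * len(doses)
    return sum(d * w for d, w in zip(doses, weights))
```

    As the abstract notes, such an additive score ignores linkage, SNP-SNP interactions, and non-linear dose effects, which is exactly why it is adequate only in settings where no such effects are present.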

  20. Pairwise Constraint-Guided Sparse Learning for Feature Selection.

    Science.gov (United States)

    Liu, Mingxia; Zhang, Daoqiang

    2016-01-01

    Feature selection aims to identify the most informative features for a compact and accurate data representation. As typical supervised feature selection methods, Lasso and its variants using L1-norm-based regularization terms have received much attention in recent studies, most of which use class labels as supervised information. Besides class labels, there are other types of supervised information, e.g., pairwise constraints that specify whether a pair of data samples belong to the same class (must-link constraint) or different classes (cannot-link constraint). However, most of existing L1-norm-based sparse learning methods do not take advantage of the pairwise constraints that provide us weak and more general supervised information. For addressing that problem, we propose a pairwise constraint-guided sparse (CGS) learning method for feature selection, where the must-link and the cannot-link constraints are used as discriminative regularization terms that directly concentrate on the local discriminative structure of data. Furthermore, we develop two variants of CGS, including: 1) semi-supervised CGS that utilizes labeled data, pairwise constraints, and unlabeled data and 2) ensemble CGS that uses the ensemble of pairwise constraint sets. We conduct a series of experiments on a number of data sets from University of California-Irvine machine learning repository, a gene expression data set, two real-world neuroimaging-based classification tasks, and two large-scale attribute classification tasks. Experimental results demonstrate the efficacy of our proposed methods, compared with several established feature selection methods.

  1. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    Science.gov (United States)

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
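
    The group LASSO at the core of LLARRMA-dawg penalizes the Euclidean norm of each SNP's coefficient group (e.g., its additive and dominance effects together), so a whole group is either retained or zeroed out jointly. Its key ingredient is the groupwise soft-thresholding (proximal) step sketched below; this is the generic textbook operator, not the authors' full fractional-resampling procedure.

```python
import math

def group_soft_threshold(v, t):
    """Proximal operator of t * ||v||_2 (the group LASSO penalty).

    Shrinks the whole coefficient group v toward the origin and sets it
    exactly to zero when its norm falls below the threshold t -- which
    is how the penalty selects or discards a SNP's effects jointly.
    """
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= t:
        return [0.0] * len(v)
    scale = 1.0 - t / norm
    return [scale * x for x in v]
```

    For example, a group with additive and dominance coefficients (3, 4) has norm 5: a threshold of 1 shrinks it to (2.4, 3.2), while any threshold of 5 or more removes the SNP entirely.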

  2. A Bayesian Framework That Integrates Heterogeneous Data for Inferring Gene Regulatory Networks

    Energy Technology Data Exchange (ETDEWEB)

    Santra, Tapesh, E-mail: tapesh.santra@ucd.ie [Systems Biology Ireland, University College Dublin, Dublin (Ireland)

    2014-05-20

    Reconstruction of gene regulatory networks (GRNs) from experimental data is a fundamental challenge in systems biology. A number of computational approaches have been developed to infer GRNs from mRNA expression profiles. However, expression profiles alone are proving to be insufficient for inferring GRN topologies with reasonable accuracy. Recently, it has been shown that integration of external data sources (such as gene and protein sequence information, gene ontology data, protein–protein interactions) with mRNA expression profiles may increase the reliability of the inference process. Here, I propose a new approach that incorporates transcription factor binding sites (TFBS) and physical protein interactions (PPI) among transcription factors (TFs) in a Bayesian variable selection (BVS) algorithm which can infer GRNs from mRNA expression profiles subjected to genetic perturbations. Using real experimental data, I show that the integration of TFBS and PPI data with mRNA expression profiles leads to significantly more accurate networks than those inferred from expression profiles alone. Additionally, the performance of the proposed algorithm is compared with a series of least absolute shrinkage and selection operator (LASSO) regression-based network inference methods that can also incorporate prior knowledge in the inference framework. The results of this comparison suggest that BVS can outperform LASSO regression-based method in some circumstances.

  3. A Bayesian Framework That Integrates Heterogeneous Data for Inferring Gene Regulatory Networks

    International Nuclear Information System (INIS)

    Santra, Tapesh

    2014-01-01

    Reconstruction of gene regulatory networks (GRNs) from experimental data is a fundamental challenge in systems biology. A number of computational approaches have been developed to infer GRNs from mRNA expression profiles. However, expression profiles alone are proving to be insufficient for inferring GRN topologies with reasonable accuracy. Recently, it has been shown that integration of external data sources (such as gene and protein sequence information, gene ontology data, protein–protein interactions) with mRNA expression profiles may increase the reliability of the inference process. Here, I propose a new approach that incorporates transcription factor binding sites (TFBS) and physical protein interactions (PPI) among transcription factors (TFs) in a Bayesian variable selection (BVS) algorithm which can infer GRNs from mRNA expression profiles subjected to genetic perturbations. Using real experimental data, I show that the integration of TFBS and PPI data with mRNA expression profiles leads to significantly more accurate networks than those inferred from expression profiles alone. Additionally, the performance of the proposed algorithm is compared with a series of least absolute shrinkage and selection operator (LASSO) regression-based network inference methods that can also incorporate prior knowledge in the inference framework. The results of this comparison suggest that BVS can outperform LASSO regression-based method in some circumstances.

  4. A Ranking Approach to Genomic Selection.

    Science.gov (United States)

    Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori

    2015-01-01

    Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
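
    NDCG can be computed directly from predicted scores and true breeding values used as relevances. Below is a minimal sketch assuming nonnegative relevances and the standard log2 discount; the exact variant used in the paper may differ in detail (e.g., gain transformation or truncation level k).

```python
import math

def dcg(relevances):
    # discounted cumulative gain: positions further down the ranking
    # contribute less, via a 1/log2(rank + 1) discount
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(scores, relevances, k=None):
    # rank individuals by predicted score, then compare the DCG of that
    # ordering with the DCG of the ideal (true-relevance) ordering
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranked = [relevances[i] for i in order][:k]
    ideal = sorted(relevances, reverse=True)[:k]
    return dcg(ranked) / dcg(ideal)
```

    A model that ranks individuals exactly by true breeding value scores 1.0; mistakes near the top of the ranking cost more than mistakes near the bottom, which is what makes NDCG reward accurate selection of individuals with high breeding value.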

  5. Operational plans for life science payloads - From experiment selection through postflight reporting

    Science.gov (United States)

    Mccollum, G. W.; Nelson, W. G.; Wells, G. W.

    1976-01-01

    Key features of operational plans developed in a study of the Space Shuttle era life science payloads program are presented. The data describes the overall acquisition, staging, and integration of payload elements, as well as program implementation methods and mission support requirements. Five configurations were selected as representative payloads: (a) carry-on laboratories - medical emphasis experiments, (b) mini-laboratories - medical/biology experiments, (c) seven-day dedicated laboratories - medical/biology experiments, (d) 30-day dedicated laboratories - Regenerative Life Support Evaluation (RLSE) with selected life science experiments, and (e) Biomedical Experiments Scientific Satellite (BESS) - extended duration primate (Type I) and small vertebrate (Type II) missions. The recommended operational methods described in the paper are compared to the fundamental data which has been developed in the life science Spacelab Mission Simulation (SMS) test series. Areas assessed include crew training, experiment development and integration, testing, data-dissemination, organization interfaces, and principal investigator working relationships.

  6. The impact of pre-selected variance inflation factor thresholds on the ...

    African Journals Online (AJOL)

    It is basically an index that measures how much the variance of an estimated ... the literature were not considered, such as penalised regularisation methods like the Lasso ... Y = 1 if a customer has defaulted, otherwise Y = 0). ... methodology is applied, but different VIF-thresholds have to be satisfied during the collinearity.

  7. Selection of Forklift Unit for Warehouse Operation by Applying Multi-Criteria Analysis

    Directory of Open Access Journals (Sweden)

    Predrag Atanasković

    2013-07-01

    This paper presents research related to the choice of criteria that can be used to perform an optimal selection of a forklift unit for warehouse operation. The analysis was done with the aim of exploring the requirements and defining the relevant criteria that matter when an investment decision is made for forklift procurement and, based on the conducted research, of applying multi-criteria analysis to determine the appropriate parameters and their relative weights that form the input data and database for selection of the optimal handling unit. The paper presents an example of choosing the optimal forklift based on the selected criteria for the purpose of making the relevant investment decision.

  8. RECRUITMENT AND SELECTION PROCESS OF HUMAN RESOURCES: A SAMPLE OF TRAVEL AGENCIES OPERATING IN FETHİYE

    OpenAIRE

    ERGÜN, Emre; GAVCAR, Erdoğan

    2013-01-01

    The purpose of the study was to understand the recruitment and selection process for employees of travel agencies operating in Fethiye. The data were collected from department managers employed in travel agencies operating in Fethiye and analyzed using statistical package programs. According to the results, the selection criteria differed between incoming agencies and agencies that were not incoming. Besides, there was another difference for the agencies that have human resource...

  9. Diffusion Indexes With Sparse Loadings

    DEFF Research Database (Denmark)

    Kristensen, Johannes Tang

    2017-01-01

    The use of large-dimensional factor models in forecasting has received much attention in the literature with the consensus being that improvements on forecasts can be achieved when comparing with standard models. However, recent contributions in the literature have demonstrated that care needs...... to the problem by using the least absolute shrinkage and selection operator (LASSO) as a variable selection method to choose between the possible variables and thus obtain sparse loadings from which factors or diffusion indexes can be formed. This allows us to build a more parsimonious factor model...... in forecasting accuracy and thus find it to be an important alternative to PC. Supplementary materials for this article are available online....
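
    The LASSO step that produces sparse loadings solves an L1-penalized least-squares problem, most commonly by coordinate descent. Below is a minimal pure-Python sketch of that generic estimator (objective (1/2n)·||y − Xb||² + α·||b||₁); it illustrates the algorithm itself, not the paper's diffusion-index forecasting pipeline.

```python
def soft(x, t):
    # scalar soft-thresholding: the proximal map of the L1 penalty
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def lasso_cd(X, y, alpha, n_sweeps=200):
    """Coordinate-descent LASSO on row-major data X (n samples, p features)."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            # correlation of feature j with the partial residual
            # (response minus the fit of all other features)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k]
                      for k in range(p) if k != j)) for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft(rho, alpha) / z if z > 0 else 0.0
    return b
```

    Coefficients whose partial correlation stays below α are set exactly to zero, which is what yields sparse loadings: only the variables that carry signal enter the factor.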

  10. Evaluation of the Achieve Mapping Catheter in cryoablation for atrial fibrillation: a prospective randomized trial.

    Science.gov (United States)

    Gang, Yi; Gonna, Hanney; Domenichini, Giulia; Sampson, Michael; Aryan, Niloufar; Norman, Mark; Behr, Elijah R; Zuberi, Zia; Dhillon, Paramdeep; Gallagher, Mark M

    2016-03-01

    The purpose of this study is to establish the role of the Achieve Mapping Catheter in cryoablation for paroxysmal atrial fibrillation (PAF) in a randomized trial. A total of 102 patients undergoing their first ablation for PAF were randomized 2:1 to an Achieve- or Lasso-guided procedure. Study patients were systematically followed up for 12 months with Holter monitoring. The primary study endpoint was acute procedural success; the secondary endpoint was clinical outcome, assessed as freedom from AF at 6 and 12 months after the procedure. Acute procedural success was achieved in 99% of the 102 participants. In a subgroup analysis of procedures performed by experienced operators, the procedure duration was significantly shorter in the Achieve-guided group than in the Lasso-guided group (118 ± 18 vs. 129 ± 21 min, p < 0.05), as was the fluoroscopy duration (17 ± 5 vs. 20 ± 7 min, p < 0.05). In the whole study population, procedure and fluoroscopic durations were similar in the Achieve- (n = 68) and Lasso-guided (n = 34) groups. Transient phrenic nerve weakening was equally prevalent with the Achieve and the Lasso. No association was found between clinical outcomes and the mapping catheter used. The use of the second-generation cryoballoon (n = 68) reduced procedure time significantly compared to the first-generation balloon (n = 34); more patients were free of AF in the former than in the latter group during follow-up. The use of the Achieve Mapping Catheter can reduce procedure and fluoroscopic durations compared with Lasso catheters in cryoablation for PAF once operators have gained sufficient experience. The type of mapping catheter used does not affect procedure efficiency or safety with either model of cryoballoon.

  11. Selection of operating parameters on the basis of hydrodynamics in centrifugal partition chromatography for the purification of nybomycin derivatives.

    Science.gov (United States)

    Adelmann, S; Baldhoff, T; Koepcke, B; Schembecker, G

    2013-01-25

    The selection of solvent systems in centrifugal partition chromatography (CPC) is the most critical point in setting up a separation, and much research has therefore been devoted to the topic in recent decades. The selection of suitable operating parameters (mobile phase flow rate, rotational speed and mode of operation) with respect to hydrodynamics and the pressure drop limit in CPC, however, is still driven mainly by the experience of the chromatographer. In this work we used hydrodynamic analysis to predict the most suitable operating parameters. After selecting different solvent systems with respect to partition coefficients for the target compound, the hydrodynamics were visualized. Based on flow pattern and retention, the operating parameters were selected for the purification runs of nybomycin derivatives that were carried out with a 200 ml FCPC(®) rotor. The results have proven that the selection of optimized operating parameters by analysis of hydrodynamics alone is possible. As the hydrodynamics are predictable from the physical properties of the solvent system, the optimized operating parameters can be estimated, too. Additionally, we found that dispersion and especially retention are improved if the less viscous phase is mobile. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  12. Radiation dose estimates due to air particulate emissions from selected phosphate industry operations

    International Nuclear Information System (INIS)

    Partridge, J.E.; Horton, T.R.; Sensintaffar, E.L.; Boysen, G.A.

    1978-06-01

    The EPA Office of Radiation Programs has conducted a series of studies to determine the radiological impact of the phosphate mining and milling industry. This report describes the efforts to estimate the radiation doses due to airborne emissions of particulates from selected phosphate milling operations in Florida. Two wet process phosphoric acid plants and one ore drying facility were selected for this study. The 1976 Annual Operations/Emissions Report, submitted by each facility to the Florida Department of Environmental Regulation, and a field survey trip by EPA personnel to each facility were used to develop data for dose calculations. The field survey trip included sampling for stack emissions and ambient air samples collected in the general vicinity of each plant. Population and individual radiation dose estimates are made based on these sources of data.

  13. Variable Selection in Heterogeneous Datasets: A Truncated-rank Sparse Linear Mixed Model with Applications to Genome-wide Association Studies.

    Science.gov (United States)

    Wang, Haohan; Aragam, Bryon; Xing, Eric P

    2018-04-26

    A fundamental and important challenge in modern datasets of ever increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naïvely applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and human, and discuss the knowledge we discover with our method. Copyright © 2018. Published by Elsevier Inc.

  14. Operational decisionmaking and action selection under psychological stress in nuclear power plants

    International Nuclear Information System (INIS)

    Gertman, D.I.; Haney, L.N.; Jenkins, J.P.; Blackman, H.S.

    1985-05-01

    An extensive review of literature on individual and group performance and decisionmaking under psychological stress was conducted and summarized. Specific stress-related variables relevant to reactor operation were pinpointed and incorporated in an experiment to assess the performance of reactor operators under psychological stress. The decisionmaking performance of 24 reactor operators under differing levels of workload, conflicting information, and detail of available written procedures was assessed in terms of selecting immediate, subsequent, and nonapplicable actions in response to 12 emergency scenarios resulting from a severe seismic event at a pressurized water reactor. Specific personality characteristics of the operators suggested by the literature to be related to performance under stress were assessed and correlated to decisionmaking under stress. The experimental results were statistically analyzed, and findings indicated that operator decisionmaking under stress was more accurate under lower levels of workload, with the availability of detailed procedures, and in the presence of high conflicting information.

  15. Chemical agnostic hazard prediction: Statistical inference of toxicity pathways - data for Figure 2

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset comprises one SigmaPlot 13 file containing measured survival data and survival data predicted from the model coefficients selected by the LASSO...

  16. Lead extraction by selective operation of a nanosecond-pulsed 355nm laser

    Science.gov (United States)

    Herzog, Amir; Bogdan, Stefan; Glikson, Michael; Ishaaya, Amiel A.; Love, Charles

    2016-03-01

    Lead extraction (LE) is necessary for patients who are suffering from a lead-related infection, or for opening venous occlusions that prevent the insertion of an additional lead. In severe cases of fibrous encapsulation of the lead within a vein, laser-based cardiac LE has become one of the foremost methods of removal. In cases where the laser radiation (typically at 308 nm wavelength) interacts with the vein wall rather than with the fibrotic lesion, severe injury and subsequent bleeding may occur. Selective tissue ablation was previously demonstrated by a laser operating in the UV regime; however, it requires the use of sensitizers (e.g.: tetracycline). In this study, we present a preliminary examination of efficacy and safety aspects in the use of nanosecond-pulsed solid-state laser radiation, at 355 nm wavelength, guided in a catheter consisting of optical fibers, in LE. Specifically, we demonstrate a correlation between the tissue elasticity and the catheter advancement rate, in ex-vivo experiments. Our results indicate a selectivity property for specific parameters of the laser radiation and catheter design. The selectivity is attributed to differences in the mechanical properties of the fibrotic tissue and a normal vein wall, leading to a different photomechanical response of the tissue's extracellular matrix. Furthermore, we performed successful in-vivo animal trials, providing a basic proof of concept for using the suggested scheme in LE. Selective operation using a 355 nm laser may reduce the risk of blood vessel perforation as well as the incidence of major adverse events.

  17. Coupling bacterioplankton populations and environment to community function in coastal temperate waters

    DEFF Research Database (Denmark)

    Traving, S. J.; Bentzon-Tilia, Mikkel; Knudsen-Leerbeck, H.

    2016-01-01

    drivers of bacterioplankton community functions, taking into account the variability in community composition and environmental conditions over seasons, in two contrasting coastal systems. A Least Absolute Shrinkage and Selection Operator (LASSO) analysis of the biological and chemical data obtained from...... surface waters over a full year indicated that specific bacterial populations were linked to measured functions. Namely, Synechococcus (Cyanobacteria) was strongly correlated with protease activity. Both function and community composition showed seasonal variation. However, the pattern of substrate...... of common drivers of bacterioplankton community functions in two different systems indicates that the drivers may be of broader relevance in coastal temperate waters....

  18. Organizational Commitment and Job Satisfaction of Security Operatives in Selected Tertiary Institutions In Kwara State

    Directory of Open Access Journals (Sweden)

    Alade Y. Saliu

    2015-11-01

    The prevalence of civil disorder and cultism in higher institutions of learning in Nigeria, and the apparent inability of security operatives to stem the tide, has continued to be a source of concern to both government and individuals in recent times. This study examines the effect of organisational commitment on job satisfaction among security operatives working in Nigerian universities. In this study, a sample of three hundred (300) security operatives was selected from both public and private universities in Kwara State. Data were collected through a self-administered questionnaire and analysed through descriptive, comparative, and regression analysis and Spearman rank correlation. The findings revealed that, among these security operatives, a positive relationship exists between organisational commitment and job satisfaction, with affective commitment showing little or no significant relationship and continuance commitment showing a significant positive relationship. The study also found that the level of affective commitment was significantly lower than that of the other components. The study thus concludes that there is a significant positive relationship between organisational commitment and job satisfaction amongst the security operatives. Based on the findings, it is recommended that the selected universities focus on improving affective and normative commitment among security operatives in order to deal with the problems of high job turnover and poor performance.

  19. An Improved SPEA2 Algorithm with Adaptive Selection of Evolutionary Operators Scheme for Multiobjective Optimization Problems

    Directory of Open Access Journals (Sweden)

    Fuqing Zhao

    2016-01-01

    A fixed evolutionary mechanism is usually adopted in multiobjective evolutionary algorithms, and their operators are static during the evolutionary process, which prevents the algorithm from fully exploiting the search space and makes it easy to become trapped in local optima. In this paper, a SPEA2 algorithm based on adaptive selection of evolutionary operators (AOSPEA) is proposed. The proposed algorithm can adaptively select the simulated binary crossover, polynomial mutation, and differential evolution operators during the evolutionary process according to their contribution to the external archive. Meanwhile, the convergence performance of the proposed algorithm is analyzed with a Markov chain. Simulation results on the standard benchmark functions reveal that the proposed algorithm outperforms the other classical multiobjective evolutionary algorithms.
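
    Adaptive selection among crossover, mutation, and differential-evolution operators of the kind AOSPEA performs is often implemented by probability matching: each operator's selection probability tracks its recent contribution (here, to the external archive), with a floor so no operator starves. The sketch below is that generic scheme; the parameter names and the contribution bookkeeping are illustrative assumptions, not the paper's exact formulas.

```python
import random

def operator_probabilities(contributions, p_min=0.05):
    # probability matching with a minimum selection probability p_min
    # per operator, so even currently unproductive operators stay in play
    k = len(contributions)
    total = sum(contributions)
    if total <= 0:  # no feedback yet: choose uniformly
        return [1.0 / k] * k
    return [p_min + (1.0 - k * p_min) * c / total for c in contributions]

def select_operator(contributions, rng, p_min=0.05):
    # roulette-wheel draw over the matched probabilities
    probs = operator_probabilities(contributions, p_min)
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

    With contributions (0, 0, 10) and p_min = 0.05, the probabilities are (0.05, 0.05, 0.90): the operator that filled the archive dominates, yet the other two are still sampled occasionally in case the search landscape changes.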

  20. Genomic Selection for Drought Tolerance Using Genome-Wide SNPs in Maize

    Directory of Open Access Journals (Sweden)

    Thirunavukkarasu Nepolean

    2017-04-01

    Traditional breeding strategies for selecting superior genotypes based on phenotypic traits have proven to be of limited success, as this direct selection is hindered by low heritability, genetic interactions such as epistasis, genotype-environment interactions, and polygenic effects. With the advent of new genomic tools, breeders have paved a way for selecting superior breeds. Genomic selection (GS) has emerged as one of the most important approaches for predicting genotype performance. Here, we tested the breeding values of 240 subtropical maize lines phenotyped for drought in different environments using 29,619 curated SNPs. Prediction accuracies of seven genomic selection models (ridge regression, LASSO, elastic net, random forest, reproducing kernel Hilbert space, Bayes A, and Bayes B) were tested for their agronomic traits. Though the prediction accuracies of Bayes B, Bayes A, and RKHS were comparable, Bayes B outperformed the other models, giving the highest Pearson correlation coefficient in all three environments. From Bayes B, a set of the top 1053 significant SNPs with higher marker effects was selected across all datasets to validate the genes and QTLs. Out of these 1053 SNPs, 77 were associated with 10 drought-responsive transcription factors. These transcription factors were associated with different physiological and molecular functions (stomatal closure, root development, hormonal signaling, and photosynthesis). Of the several models, Bayes B showed the highest level of prediction accuracy for our datasets. Our experiments also highlighted several SNPs based on their performance and relative importance to drought tolerance. These results are important for the selection of superior genotypes and candidate genes for breeding drought-tolerant maize hybrids.
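
    Of the models compared, ridge regression has the simplest closed form: marker effects solve (XᵀX + λI)b = Xᵀy, shrinking all effects toward zero without selecting among them. Below is a minimal pure-Python sketch of that textbook formulation for small dense systems; it is not the authors' implementation, and λ is left as a free tuning parameter.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (Gauss-Jordan form),
    # adequate for the small dense systems used in this illustration
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * z for x, z in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ridge_effects(X, y, lam):
    # marker effects from the ridge normal equations (X'X + lam*I) b = X'y
    p = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)
```

    Predicted breeding values are then X_new · b, and prediction accuracy is the Pearson correlation between predicted and observed trait values. LASSO and elastic net replace the squared-norm shrinkage with penalties that also zero out markers, while Bayes A/B instead place marker-specific prior variances on the effects.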

  1. Eigentumors for prediction of treatment failure in patients with early-stage breast cancer using dynamic contrast-enhanced MRI: a feasibility study

    Science.gov (United States)

    Chan, H. M.; van der Velden, B. H. M.; E Loo, C.; Gilhuijs, K. G. A.

    2017-08-01

    We present a radiomics model to discriminate between patients at low risk and those at high risk of treatment failure at long-term follow-up based on eigentumors: principal components computed from volumes encompassing tumors in washin and washout images of pre-treatment dynamic contrast-enhanced (DCE-) MR images. Eigentumors were computed from the images of 563 patients from the MARGINS study. Subsequently, a least absolute shrinkage and selection operator (LASSO) selected candidates from the components that contained 90% of the variance of the data. The model for prediction of survival after treatment (median follow-up time 86 months) was based on logistic regression. Receiver operating characteristic (ROC) analysis was applied and area-under-the-curve (AUC) values were computed as measures of training and cross-validated performances. The discriminating potential of the model was confirmed using Kaplan-Meier survival curves and log-rank tests. From the 322 principal components that explained 90% of the variance of the data, the LASSO selected 28 components. The ROC curves of the model yielded AUC values of 0.88, 0.77 and 0.73, for the training, leave-one-out cross-validated and bootstrapped performances, respectively. The bootstrapped Kaplan-Meier survival curves confirmed significant separation for all tumors (P  <  0.0001). Survival analysis on immunohistochemical subgroups shows significant separation for the estrogen-receptor subtype tumors (P  <  0.0001) and the triple-negative subtype tumors (P  =  0.0039), but not for tumors of the HER2 subtype (P  =  0.41). The results of this retrospective study show the potential of early-stage pre-treatment eigentumors for use in prediction of treatment failure of breast cancer.
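
    Eigentumors are principal components of the vectorized tumor volumes, so the first stage is an ordinary PCA. The sketch below extracts the leading component by power iteration on the covariance in pure Python; it is a deliberately small-scale stand-in (real DCE-MRI volumes would go through an SVD in a numerical library), and the subsequent LASSO and logistic-regression stages are not shown.

```python
def leading_component(X, n_iter=200):
    """First principal component of row-major data X via power iteration."""
    n, p = len(X), len(X[0])
    # center each column, as PCA requires
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    v = [1.0] * p
    for _ in range(n_iter):
        # multiply by the covariance as Xc^T (Xc v), never forming it
        s = [sum(row[j] * v[j] for j in range(p)) for row in Xc]
        w = [sum(Xc[i][j] * s[i] for i in range(n)) for j in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

    Projecting each sample onto the leading components yields the low-dimensional scores from which a LASSO can then pick its predictors, as the abstract describes for the 28 selected components.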

  2. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
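The permutation testing recommended above can be sketched as follows; the placeholder metric and data are invented for illustration and are not the study's NTCP models:

```python
import random

def permutation_p_value(metric, scores, labels, n_perm=1000, seed=0):
    # Shuffle the outcome labels many times and count how often the
    # performance metric on shuffled labels matches or beats the
    # observed one; the +1 in numerator and denominator avoids p = 0.
    rng = random.Random(seed)
    observed = metric(scores, labels)
    shuffled = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if metric(scores, shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Placeholder metric: fraction of cases classified correctly at a 0.5 cut-off.
accuracy = lambda s, l: sum((si > 0.5) == bool(li) for si, li in zip(s, l)) / len(l)
p = permutation_p_value(accuracy,
                        [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1],
                        [1, 1, 1, 1, 0, 0, 0, 0], n_perm=500, seed=1)
```

A small p indicates the observed performance is unlikely to arise from a model fitted to unrelated outcomes, which is the sense in which permutation testing establishes statistical significance of model performance.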

  3. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van 't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
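Both records assess models by the area under the receiver operating characteristic curve; a minimal rank-based computation of that quantity (example scores invented):

```python
def roc_auc(scores, labels):
    # AUC equals the probability that a randomly chosen positive case
    # receives a higher score than a randomly chosen negative case,
    # with ties counting one half (the Mann-Whitney U formulation).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means perfect ranking of positives above negatives; 0.5 is chance level.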

  4. Experimental assessment of the accuracy of genomic selection in sugarcane.

    Science.gov (United States)

    Gouy, M; Rousselle, Y; Bastianelli, D; Lecomte, P; Bonnal, L; Roques, D; Efile, J-C; Rocher, S; Daugrois, J; Toubi, L; Nabeneza, S; Hervouet, C; Telismart, H; Denis, M; Thong-Chane, A; Glaszmann, J C; Hoarau, J-Y; Nibouche, S; Costet, L

    2013-10-01

    Sugarcane cultivars are interspecific hybrids with an aneuploid, highly heterozygous polyploid genome. The complexity of the sugarcane genome is the main obstacle to the use of marker-assisted selection in sugarcane breeding. Given the promising results of recent studies of plant genomic selection, we explored the feasibility of genomic selection in this complex polyploid crop. Genetic values were predicted in two independent panels, each composed of 167 accessions representing sugarcane genetic diversity worldwide. Accessions were genotyped with 1,499 DArT markers. One panel was phenotyped in Reunion Island and the other in Guadeloupe. Ten traits concerning sugar and bagasse contents, digestibility and composition of the bagasse, plant morphology, and disease resistance were used. We used four statistical predictive models: Bayesian LASSO, ridge regression, reproducing kernel Hilbert space, and partial least square regression. The accuracy of the predictions was assessed through the correlation between observed and predicted genetic values by cross validation within each panel and between the two panels. We observed equivalent accuracy among the four predictive models for a given trait, and marked differences were observed among traits. Depending on the trait concerned, within-panel cross validation yielded median correlations ranging from 0.29 to 0.62 in the Reunion Island panel and from 0.11 to 0.5 in the Guadeloupe panel. Cross validation between panels yielded correlations ranging from 0.13 for smut resistance to 0.55 for brix. This level of correlation is promising for future implementations. Our results provide the first validation of genomic selection in sugarcane.
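The accuracy measure used here, the correlation between observed and predicted genetic values, can be computed as below (the numbers are invented for illustration):

```python
from math import sqrt

def predictive_ability(observed, predicted):
    # Pearson correlation between observed phenotypes and
    # cross-validated genomic predictions - the usual "accuracy"
    # reported in genomic selection studies.
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    vo = sum((o - mo) ** 2 for o in observed)
    vp = sum((p - mp) ** 2 for p in predicted)
    return cov / sqrt(vo * vp)
```

A value near 1 means the ranking and spacing of predicted values track the observed ones closely; values like the 0.29 to 0.62 reported above indicate useful but imperfect prediction.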

  5. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    Science.gov (United States)

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, the nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects, and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine the various methods of each aspect to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the performance of the resulting calibration model.

  6. Methodology for selection of attributes and operating conditions for SVM-Based fault locator's

    Directory of Open Access Journals (Sweden)

    Debbie Johan Arredondo Arteaga

    2017-01-01

    Context: Energy distribution companies must employ strategies to deliver timely, high-quality service, and fault-locating techniques represent an agile alternative for restoring electric service in power distribution, given the generally large size of distribution systems and the usual interruptions in service. However, these techniques are not robust enough and present some limitations in both computational cost and the mathematical description of the models they use. Method: This paper performs an analysis based on a Support Vector Machine to evaluate the proper conditions to adjust and validate a fault locator for distribution systems, so that it is possible to determine the minimum number of operating conditions that allow a good performance to be achieved with low computational effort. Results: We tested the proposed methodology in a prototypical distribution circuit located in a rural area of Colombia. This circuit has a voltage of 34.5 kV and is subdivided into 20 zones. Additionally, the characteristics of the circuit allowed us to obtain a database of 630,000 records of single-phase faults under different operating conditions. As a result, we could determine that the locator showed a performance above 98% with 200 suitably selected operating conditions. Conclusions: It is possible to improve the performance of fault locators based on Support Vector Machines. Specifically, these improvements are achieved by properly selecting optimal operating conditions and attributes, since they directly affect performance in terms of efficiency and computational cost.

  7. Representation of the quantum Fourier transform on multilevel basic elements by a sequence of selective rotation operators

    Science.gov (United States)

    Ermilov, A. S.; Zobov, V. E.

    2007-12-01

    To experimentally realize quantum computations on d-level basic elements (qudits) at d > 2, it is necessary to develop schemes for the technical realization of elementary logical operators. We have found sequences of selective rotation operators that represent the operators of the quantum Fourier transform (Walsh-Hadamard matrices) for d = 3-10. For the prime numbers 3, 5, and 7, the well-known method of linear algebra is applied, whereas, for the factorable numbers 6, 9, and 10, the representation of virtual spins is used (which we previously applied for d = 4, 8). Selective rotations can be realized, for example, by means of pulses of an RF magnetic field for systems of quadrupole nuclei or laser pulses for atoms and ions in traps.
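The single-qudit quantum Fourier transform whose decomposition into selective rotations the record discusses is, as a matrix, straightforward to write down and check for unitarity. This is a sketch of the target matrix only, not of the rotation-sequence decomposition itself:

```python
import cmath

def qft_matrix(d):
    # Quantum Fourier transform on one d-level qudit:
    # F[j][k] = exp(2*pi*i*j*k/d) / sqrt(d).
    # For d = 2 this reduces to the familiar Hadamard gate.
    w = cmath.exp(2j * cmath.pi / d)
    s = d ** -0.5
    return [[s * w ** (j * k) for k in range(d)] for j in range(d)]

F = qft_matrix(3)
```

Verifying F multiplied by its conjugate transpose gives the identity confirms any proposed sequence of selective rotations reproducing F is a valid quantum gate.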

  8. Accuracy of genomic selection in biparental populations of flax (Linum usitatissimum L.)

    Directory of Open Access Journals (Sweden)

    Frank M. You

    2016-08-01

    Flax is an important economic crop for seed oil and stem fiber. Phenotyping of traits such as seed yield, seed quality, stem fiber yield, and quality characteristics is expensive and time consuming. Genomic selection (GS) refers to a breeding approach aimed at selecting preferred individuals based on genomic estimated breeding values predicted by a statistical model based on the relationship between phenotypes and genome-wide genetic markers. We evaluated the prediction accuracy of GS (rMP) and the efficiency of GS relative to phenotypic selection (RE) for three GS models: ridge regression best linear unbiased prediction (RR-BLUP), Bayesian LASSO (BL), and Bayesian ridge regression (BRR), for seed yield, oil content, iodine value, linoleic, and linolenic acid content with a full and a common set of genome-wide simple sequence repeat markers in each of three biparental populations. The three GS models generated similar rMP and RE, while BRR displayed a higher coefficient of determination (R2) of the fitted models than did RR-BLUP or BL. The mean rMP and RE varied for traits with different heritabilities and were affected by the genetic variation of the traits in the populations. GS for seed yield generated a mean RE of 1.52 across populations and marker sets, a value significantly superior to that for direct phenotypic selection. Our empirical results provide the first validation of GS in flax and demonstrate that GS could increase genetic gain per unit time for linseed breeding. Further studies for selection of training populations and markers are warranted.
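RR-BLUP-style prediction rests on ridge (L2-penalized) regression. A minimal coordinate-descent sketch with invented toy data (not the flax data) shows how the penalty shrinks the least-squares coefficient toward zero without the exact zeroing of the LASSO's L1 penalty:

```python
def ridge_cd(X, y, lam, n_iter=200):
    # Cyclic coordinate descent for (1/2n)||y - Xb||^2 + (lam/2)||b||^2,
    # the penalized regression at the core of RR-BLUP-style prediction.
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = rho / (norm + lam)
    return b

# Toy data (invented): ordinary least squares would give b[0] = 2.0;
# the ridge penalty shrinks it to 2.5/1.5 = 5/3.
X = [[-1.5, 1.0], [-0.5, -1.0], [0.5, -1.0], [1.5, 1.0]]
y = [-3.0, -1.0, 1.0, 3.0]
b = ridge_cd(X, y, lam=0.25)
```

In genomic prediction, spreading small shrunken effects over all markers in this way is what makes ridge-type models robust when many markers each explain a little variance.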

  9. Selected Lessons Learned through the ISS Design, Development, Assembly, and Operations: Applicability to International Cooperation for Standardization

    Science.gov (United States)

    Hirsch, David B.

    2009-01-01

    This slide presentation reviews selected lessons that were learned during the design, development, assembly and operation of the International Space Station. The critical importance of standards and common interfaces is emphasized to create a common operation environment that can lead to flexibility and adaptability.

  10. Perfectionism, selected demographic and job characteristics as predictors of burnout in operating suite nurses

    Directory of Open Access Journals (Sweden)

    Dorota Włodarczyk

    2013-12-01

    Background: The study was aimed at verifying the predictive power of perfectionism for professional burnout among nurses exposed to distress resulting from work in an operating suite, and at testing whether this effect persists after controlling for selected demographic and job characteristics. Material and Methods: The study group consisted of 100 nurses (93 women; mean age: 38.67 years). The majority in the group worked in public facilities (68%), in a duty system (62%), as operating (75%) or anesthesiology (25%) nurses. To test perfectionism, the Polish Adaptive and Maladaptive Perfectionism Questionnaire (AMPQ; Perfekcjonizm Adaptacyjny i Dezadaptacyjny - PAD), developed by Szczucka, was used. To examine burnout, the Oldenburg Burnout Inventory (OLBI) by Demerouti et al. was adopted. The effects of selected demographic and job characteristics were controlled. Results: The results of hierarchical regression analyses revealed that, after controlling for selected demographic and job characteristics, maladaptive perfectionism was a significant predictor of disengagement and exhaustion, whereas adaptive perfectionism predicted better work engagement. Significant predictors were also: education, number of workplaces, duty system and marital status. Conclusions: The study confirmed the hypothesis on the harmful role of maladaptive perfectionism in shaping burnout among operating suite nurses. The hypothesis on the protective function of adaptive perfectionism was confirmed only partially, with regard to disengagement. The results of the study also highlighted some risk factors of burnout which may exist in this occupational group. This confirms the need to continue research in this area. Med Pr 2013;64(6):761-773.

  11. Combination of radiological and gray level co-occurrence matrix textural features used to distinguish solitary pulmonary nodules by computed tomography.

    Science.gov (United States)

    Wu, Haifeng; Sun, Tao; Wang, Jingjing; Li, Xia; Wang, Wei; Huo, Da; Lv, Pingxin; He, Wen; Wang, Keyang; Guo, Xiuhua

    2013-08-01

    The objective of this study was to investigate a method combining radiological and textural features for the differentiation of malignant from benign solitary pulmonary nodules on computed tomography. Features including 13 gray level co-occurrence matrix textural features and 12 radiological features were extracted from 2,117 CT slices, which came from 202 (116 malignant and 86 benign) patients. Lasso-type regularization applied to a nonlinear regression model was used to select predictive features, and a BP artificial neural network was used to build the diagnostic model. Eight radiological and two textural features were obtained after the Lasso-type regularization procedure. The 12 radiological features alone reached an area under the ROC curve (AUC) of 0.84 in differentiating between malignant and benign lesions. The 10 selected features improved the AUC to 0.91. The evaluation results show that the method of selecting radiological and textural features appears to be more effective in distinguishing malignant from benign solitary pulmonary nodules on computed tomography.
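A gray level co-occurrence matrix and one classic Haralick texture feature (contrast) can be computed as below; the 3x3 image is invented for illustration and the study's exact feature set is not reproduced here:

```python
def glcm(image, dx, dy, levels):
    # Gray level co-occurrence matrix: count co-occurring gray-level
    # pairs (a, b) at a fixed pixel offset (dx, dy), then normalize
    # the counts to probabilities.
    h, w = len(image), len(image[0])
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(m):
    # Haralick contrast: large when co-occurring gray levels differ a lot.
    L = len(m)
    return sum((a - b) ** 2 * m[a][b] for a in range(L) for b in range(L))

img = [[0, 0, 1],
       [0, 0, 1],
       [0, 2, 2]]
c = contrast(glcm(img, dx=1, dy=0, levels=3))
```

Features of this kind, computed over each nodule, are the textural inputs that the Lasso-type regularization then prunes down to the two retained in the study.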

  12. Genomic Selection for Predicting Fusarium Head Blight Resistance in a Wheat Breeding Program

    Directory of Open Access Journals (Sweden)

    Marcio P. Arruda

    2015-11-01

    Genomic selection (GS) is a breeding method that uses marker-trait models to predict unobserved phenotypes. This study developed GS models for predicting traits associated with resistance to Fusarium head blight (FHB) in wheat (Triticum aestivum L.). We used genotyping-by-sequencing (GBS) to identify 5054 single-nucleotide polymorphisms (SNPs), which were then treated as predictor variables in GS analysis. We compared how the prediction accuracy of the genomic-estimated breeding values (GEBVs) was affected by (i) five genotypic imputation methods (random forest imputation [RFI], expectation maximization imputation [EMI], k-nearest neighbor imputation [kNNI], singular value decomposition imputation [SVDI], and mean imputation [MNI]); (ii) three statistical models (ridge-regression best linear unbiased predictor [RR-BLUP], least absolute shrinkage and selection operator [LASSO], and elastic net); (iii) marker density (n = 500, 1500, 3000, and 4500 SNPs); (iv) training population (TP) size (n = 96, 144, 192, and 218); (v) marker-based and pedigree-based relationship matrices; and (vi) control for relatedness in TPs and validation populations (VPs). No discernible differences in prediction accuracy were observed among imputation methods. The RR-BLUP outperformed other models in nearly all scenarios. Accuracies decreased substantially when marker number decreased to 3000 or 1500 SNPs, depending on the trait; when the sample size of the training set was less than 192; when using the pedigree-based instead of the marker-based matrix; or when no control for relatedness was implemented. Overall, moderate to high prediction accuracies were observed in this study, suggesting that GS is a very promising breeding strategy for FHB resistance in wheat.
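Of the five imputation methods compared, the simplest, mean imputation (MNI), can be sketched as follows (the genotype matrix is invented; None marks a missing call, and each marker is assumed to have at least one observed call):

```python
def mean_impute(genotypes):
    # Mean imputation (MNI): replace each missing genotype call (None)
    # with the per-marker mean across individuals.
    n_markers = len(genotypes[0])
    filled = [row[:] for row in genotypes]
    for j in range(n_markers):
        known = [row[j] for row in genotypes if row[j] is not None]
        mean = sum(known) / len(known)
        for row in filled:
            if row[j] is None:
                row[j] = mean
    return filled

# Invented 3-individual x 2-marker genotype matrix with one missing call.
filled = mean_impute([[0, 1], [2, None], [1, 1]])
```

The study's finding that the more elaborate imputation methods gave no discernible accuracy gain suggests that, for GBS-derived SNP sets like this one, even this baseline can suffice.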

  13. An enhancement of selection and crossover operations in real-coded genetic algorithm for large-dimensionality optimization

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, Noh Sung; Lee, Jongsoo [Yonsei University, Seoul (Korea, Republic of)

    2016-01-15

    The present study aims to implement a new selection method and a novel crossover operation in a real-coded genetic algorithm. The proposed selection method facilitates the establishment of a successively evolved population by combining several subpopulations: an elitist subpopulation, an offspring subpopulation and a mutated subpopulation. A probabilistic crossover is performed based on the measure of probabilistic distance between the individuals. The concept of 'allowance' is suggested to describe the level of variance in the crossover operation. A number of nonlinear/non-convex functions and engineering optimization problems are explored to verify the capacities of the proposed strategies. The results are compared with those obtained from other genetic and nature-inspired algorithms.
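The record does not give the operator's exact form; a common real-coded crossover in the same spirit (BLX-alpha, where an allowance parameter widens the sampling interval around the parents, loosely mirroring the paper's variance level) might look like:

```python
import random

def blend_crossover(p1, p2, allowance=0.5, rng=None):
    # BLX-alpha style real-coded crossover: each child gene is drawn
    # uniformly from the parents' interval widened on both sides by
    # `allowance` times the interval width.
    rng = rng or random.Random(0)
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(rng.uniform(lo - allowance * span, hi + allowance * span))
    return child

# Invented two-gene parents; allowance controls exploration beyond them.
child = blend_crossover([0.0, 1.0], [1.0, 3.0], allowance=0.5)
```

A larger allowance increases the variance of offspring around the parents, trading faster exploration against slower convergence.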

  14. Genomic selection in maritime pine.

    Science.gov (United States)

    Isik, Fikret; Bartholomé, Jérôme; Farjat, Alfredo; Chancerel, Emilie; Raffin, Annie; Sanchez, Leopoldo; Plomion, Christophe; Bouffier, Laurent

    2016-01-01

    A two-generation maritime pine (Pinus pinaster Ait.) breeding population (n = 661) was genotyped using 2500 SNP markers. The extent of linkage disequilibrium and the utility of genomic selection for growth and stem straightness improvement were investigated. The overall intra-chromosomal linkage disequilibrium was r² = 0.01. Linkage disequilibrium corrected for genomic relationships derived from markers was smaller (r²V = 0.006). Genomic BLUP, Bayesian ridge regression and Bayesian LASSO regression statistical models were used to obtain genomic estimated breeding values. Two validation methods (random sampling 50% of the population and 10% of the progeny generation as validation sets) were used with 100 replications. The average predictive ability across statistical models and validation methods was about 0.49 for stem sweep, and 0.47 and 0.43 for total height and tree diameter, respectively. The sensitivity analysis suggested that prior densities (variance explained by markers) had little or no discernible effect on posterior means (residual variance) in Bayesian prediction models. Sampling from the progeny generation for model validation increased the predictive ability of markers for tree diameter and stem sweep but not for total height. The results are promising despite low linkage disequilibrium and low marker coverage of the genome (∼1.39 markers/cM). Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  15. Molecular Classification Substitutes for the Prognostic Variables Stage, Age, and MYCN Status in Neuroblastoma Risk Assessment

    Directory of Open Access Journals (Sweden)

    Carolina Rosswog

    2017-12-01

    BACKGROUND: Current risk stratification systems for neuroblastoma patients consider clinical, histopathological, and genetic variables, and additional prognostic markers have been proposed in recent years. We here sought to select highly informative covariates in a multistep strategy based on consecutive Cox regression models, resulting in a risk score that integrates hazard ratios of prognostic variables. METHODS: A cohort of 695 neuroblastoma patients was divided into a discovery set (n = 75) for multigene predictor generation, a training set (n = 411) for risk score development, and a validation set (n = 209). Relevant prognostic variables were identified by stepwise multivariable L1-penalized least absolute shrinkage and selection operator (LASSO) Cox regression, followed by backward selection in multivariable Cox regression, and then integrated into a novel risk score. RESULTS: The variables stage, age, MYCN status, and two multigene predictors, NB-th24 and NB-th44, were selected as independent prognostic markers by LASSO Cox regression analysis. Following backward selection, only the multigene predictors were retained in the final model. Integration of these classifiers in a risk scoring system distinguished three patient subgroups that differed substantially in their outcome. The scoring system discriminated patients with diverging outcome in the validation cohort (5-year event-free survival, 84.9 ± 3.4 vs 63.6 ± 14.5 vs 31.0 ± 5.4; P < .001), and its prognostic value was validated by multivariable analysis. CONCLUSION: We here propose a translational strategy for developing risk assessment systems based on hazard ratios of relevant prognostic variables. Our final neuroblastoma risk score comprised two multigene predictors only, supporting the notion that molecular properties of the tumor cells strongly impact clinical courses of neuroblastoma patients.
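The integration of hazard ratios into a single score can be illustrated with the linear predictor of a Cox model; the hazard ratios and feature values below are invented placeholders, not the study's fitted estimates:

```python
from math import log

def risk_score(hazard_ratios, features):
    # Linear predictor of a Cox model: the sum of log hazard ratios
    # of the prognostic features present in a patient. Thresholds on
    # this score then define low/intermediate/high-risk groups.
    return sum(log(hr) for hr, present in zip(hazard_ratios, features) if present)

# Hypothetical example: two multigene predictors with hazard ratios
# of 2.0 and 4.0, both positive in this patient.
score = risk_score([2.0, 4.0], [1, 1])
```

Because log hazard ratios add, a patient positive for both predictors has the combined multiplicative hazard of the two, which is what allows cut-offs on the score to separate outcome groups.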

  16. Molecular Classification Substitutes for the Prognostic Variables Stage, Age, and MYCN Status in Neuroblastoma Risk Assessment.

    Science.gov (United States)

    Rosswog, Carolina; Schmidt, Rene; Oberthuer, André; Juraeva, Dilafruz; Brors, Benedikt; Engesser, Anne; Kahlert, Yvonne; Volland, Ruth; Bartenhagen, Christoph; Simon, Thorsten; Berthold, Frank; Hero, Barbara; Faldum, Andreas; Fischer, Matthias

    2017-12-01

    Current risk stratification systems for neuroblastoma patients consider clinical, histopathological, and genetic variables, and additional prognostic markers have been proposed in recent years. We here sought to select highly informative covariates in a multistep strategy based on consecutive Cox regression models, resulting in a risk score that integrates hazard ratios of prognostic variables. A cohort of 695 neuroblastoma patients was divided into a discovery set (n=75) for multigene predictor generation, a training set (n=411) for risk score development, and a validation set (n=209). Relevant prognostic variables were identified by stepwise multivariable L1-penalized least absolute shrinkage and selection operator (LASSO) Cox regression, followed by backward selection in multivariable Cox regression, and then integrated into a novel risk score. The variables stage, age, MYCN status, and two multigene predictors, NB-th24 and NB-th44, were selected as independent prognostic markers by LASSO Cox regression analysis. Following backward selection, only the multigene predictors were retained in the final model. Integration of these classifiers in a risk scoring system distinguished three patient subgroups that differed substantially in their outcome. The scoring system discriminated patients with diverging outcome in the validation cohort (5-year event-free survival, 84.9±3.4 vs 63.6±14.5 vs 31.0±5.4; P<.001), and its prognostic value was validated by multivariable analysis. We here propose a translational strategy for developing risk assessment systems based on hazard ratios of relevant prognostic variables. Our final neuroblastoma risk score comprised two multigene predictors only, supporting the notion that molecular properties of the tumor cells strongly impact clinical courses of neuroblastoma patients. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  17. IDENTIFYING OPERATIONAL REQUIREMENTS TO SELECT SUITABLE DECISION MODELS FOR A PUBLIC SECTOR EPROCUREMENT DECISION SUPPORT SYSTEM

    Directory of Open Access Journals (Sweden)

    Mohamed Adil

    2014-10-01

    Public sector procurement should be a transparent and fair process. Strict legal requirements are enforced on public sector procurement to make it a standardised process. To make fair decisions on selecting suppliers, a practical method that adheres to the legal requirements is important. The research underlying this paper aimed to identify a suitable Multi-Criteria Decision Analysis (MCDA) method for the specific legal and functional needs of the Maldivian public sector. To identify such operational requirements, a set of focus group interviews was conducted in the Maldives with public officials responsible for procurement decision making. Based on the operational requirements identified through the focus groups, a criteria-based evaluation of published MCDA methods was performed to identify those suitable for e-procurement decision making. This paper describes the identification of the operational requirements and the results of the evaluation to select suitable decision models for the Maldivian context.

  18. Developing control room operator selection procedures

    International Nuclear Information System (INIS)

    Bosshardt, M.J.; Bownas, D.A.

    1979-01-01

    PDRI is performing a two-year study to identify the tasks performed and the attributes required in electric power generating plant operating jobs, focusing on the control room operator position. Approximately 65 investor-owned utilities are participating in the study.

  19. Handling high predictor dimensionality in slope-unit-based landslide susceptibility models through LASSO-penalized Generalized Linear Model

    KAUST Repository

    Camilo, Daniela Castro; Lombardo, Luigi; Mai, Paul Martin; Dou, Jie; Huser, Raphaë l

    2017-01-01

    Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., 95% quantile of Elevation and 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performances. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
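Summarizing each continuous predictor within a slope unit by its quantiles can be done with a small helper; the interpolation convention and the sample values below are assumptions for illustration, not taken from the LUDARA script:

```python
def quantiles(values, probs):
    # Empirical quantiles by linear interpolation between order
    # statistics (R's default type-7 convention), one summary value
    # per requested probability.
    xs = sorted(values)
    n = len(xs)
    out = []
    for p in probs:
        h = (n - 1) * p
        i = int(h)
        out.append(xs[i] if i + 1 >= n else xs[i] + (h - i) * (xs[i + 1] - xs[i]))
    return out

# Per-slope-unit summary of an invented elevation sample.
elev = [120.0, 135.0, 150.0, 180.0, 240.0]
q = quantiles(elev, [0.05, 0.5, 0.95])
```

Computing such quantile summaries for every predictor in every slope unit is what inflates the predictor hyperspace that the LASSO penalty subsequently shrinks back down.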

  20. Handling high predictor dimensionality in slope-unit-based landslide susceptibility models through LASSO-penalized Generalized Linear Model

    KAUST Repository

    Camilo, Daniela Castro

    2017-08-30

    Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., 95% quantile of Elevation and 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performances. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.

  1. Thunder Bay Terminals Ltd. Site selection to operation: the management function

    Energy Technology Data Exchange (ETDEWEB)

    Cook, P.R.

    1979-08-01

    Thunder Bay Terminals Ltd. is a link in a new transportation system for Canada's natural resources stretching over 3000 miles from British Columbia's mountains to Ontario's lower Great Lakes. Thunder Bay Terminals' plant, now in operation, cost about $70 million and was completed on time and under budget. The paper is the project manager's account of this accomplishment. From site selection through feasibility, engineering and construction to realization, he emphasizes the necessary philosophies for the control of time and money. The computer as a tool is discussed, as well as techniques for procurement.

  2. A selective electrocatalyst-based direct methanol fuel cell operated at high concentrations of methanol.

    Science.gov (United States)

    Feng, Yan; Liu, Hui; Yang, Jun

    2017-06-01

    Owing to the serious crossover of methanol from the anode to the cathode through the polymer electrolyte membrane, direct methanol fuel cells (DMFCs) usually use dilute methanol solutions as fuel. However, the use of high-concentration methanol is highly demanded to improve the energy density of a DMFC system. Instead of the conventional strategies (for example, improving the fuel-feed system, membrane development, modification of electrode, and water management), we demonstrate the use of selective electrocatalysts to run a DMFC at high concentrations of methanol. In particular, at an operating temperature of 80°C, the as-fabricated DMFC with core-shell-shell Au@Ag2S@Pt nanocomposites at the anode and core-shell Au@Pd nanoparticles at the cathode produces a maximum power density of 89.7 mW cm−2 at a methanol feed concentration of 10 M and maintains good performance at a methanol concentration of up to 15 M. The high selectivity of the electrocatalysts achieved through structural construction accounts for the successful operation of the DMFC at high concentrations of methanol.

  3. A selective electrocatalyst–based direct methanol fuel cell operated at high concentrations of methanol

    Science.gov (United States)

    Feng, Yan; Liu, Hui; Yang, Jun

    2017-01-01

    Owing to the serious crossover of methanol from the anode to the cathode through the polymer electrolyte membrane, direct methanol fuel cells (DMFCs) usually use dilute methanol solutions as fuel. However, the use of high-concentration methanol is highly demanded to improve the energy density of a DMFC system. Instead of the conventional strategies (for example, improving the fuel-feed system, membrane development, modification of electrode, and water management), we demonstrate the use of selective electrocatalysts to run a DMFC at high concentrations of methanol. In particular, at an operating temperature of 80°C, the as-fabricated DMFC with core-shell-shell Au@Ag2S@Pt nanocomposites at the anode and core-shell Au@Pd nanoparticles at the cathode produces a maximum power density of 89.7 mW cm−2 at a methanol feed concentration of 10 M and maintains good performance at a methanol concentration of up to 15 M. The high selectivity of the electrocatalysts achieved through structural construction accounts for the successful operation of the DMFC at high concentrations of methanol. PMID:28695199

  4. Sleep duration, daytime napping, markers of obstructive sleep apnea and stroke in a population of southern China

    Science.gov (United States)

    Wen, Ye; Pi, Fu-Hua; Guo, Pi; Dong, Wen-Ya; Xie, Yu-Qing; Wang, Xiang-Yu; Xia, Fang-Fang; Pang, Shao-Jie; Wu, Yan-Chun; Wang, Yuan-Yuan; Zhang, Qing-Ying

    2016-01-01

    Sleep habits are associated with stroke in western populations, but this relation has rarely been investigated in China. Moreover, the differences among stroke subtypes remain unclear. This study aimed to explore the associations of total stroke, including the ischemic and hemorrhagic types, with sleep habits in a population in southern China. We performed a case-control study in patients admitted to the hospital with first stroke and community control subjects. A total of 333 patients (n = 223, 67.0%, with ischemic stroke; n = 110, 33.0%, with hemorrhagic stroke) and 547 controls were enrolled in the study. Participants completed a structured questionnaire to identify sleep habits and other stroke risk factors. Least absolute shrinkage and selection operator (Lasso) and multiple logistic regression were performed to identify risk factors for disease. Incidence of stroke and its subtypes was significantly associated with snorting/gasping, snoring, sleep duration, and daytime napping. Snorting/gasping was identified as an important risk factor in the Lasso logistic regression model (Lasso β = 0.84), and the result proved robust. This study showed the association between stroke and sleep habits in the southern Chinese population and might help in better detecting important sleep-related factors for stroke risk. PMID:27698374
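As a rough illustration of the kind of model this record describes (not the authors' code or data), an L1-penalized (Lasso) logistic regression can be fitted with scikit-learn to pick out risk factors; the data below are synthetic, and the penalty strength `C=0.2` is an arbitrary choice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.normal(size=(n, p))
# Synthetic outcome driven by only two of the ten candidate risk factors.
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

Xs = StandardScaler().fit_transform(X)
# L1 penalty shrinks irrelevant coefficients exactly to zero.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.2)
model.fit(Xs, y)
selected = np.flatnonzero(model.coef_[0])
print("selected predictors:", selected)
```

The nonzero coefficients identify the retained risk factors; smaller `C` (stronger penalty) yields a sparser model.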

  5. Interim action record of decision remedial alternative selection: TNX area groundwater operable unit

    International Nuclear Information System (INIS)

    Palmer, E.R.

    1994-10-01

    This document presents the selected interim remedial action for the TNX Area Groundwater Operable Unit at the Savannah River Site (SRS), which was developed in accordance with CERCLA of 1980, as amended by the Superfund Amendments and Reauthorization Act (SARA) of 1986, and, to the extent practicable, the National Oil and Hazardous Substances Pollution Contingency Plan (NCP). This decision is based on the Administrative Record File for this specific CERCLA unit.

  6. Selection of the optimum condition for electron capture detector operation

    International Nuclear Information System (INIS)

    Lasa, J.; Korus, A.

    1974-01-01

    A method for determining the optimal operating conditions for the electron capture detector is presented. Physical phenomena occurring in the detector, as well as the energy dependence of the electron attachment process, are taken into consideration. The influence of the kind of carrier gas, the temperature, and the parameters of the supply voltage, in both the direct and pulsed methods, on the average electron energy is described. The dependence of the sensitivity of the electron capture detector on the carrier gas and the polarizing voltage is illustrated for the Model DNW-300 electron capture detector produced in Poland. Practical indications for selecting the optimal operating conditions of the electron capture detector are given at the end of the paper. (author)

  7. The role of personality, disability and physical activity in the development of medication-overuse headache: a prospective observational study.

    Science.gov (United States)

    Mose, Louise S; Pedersen, Susanne S; Debrabant, Birgit; Jensen, Rigmor H; Gram, Bibi

    2018-05-25

    Factors associated with the development of medication-overuse headache (MOH) in migraine patients are not fully understood, but with respect to prevention, the ability to predict the onset of MOH is clinically important. The aims were to examine whether personality characteristics, disability and physical activity level are associated with the onset of MOH in a group of migraine patients, and to explore to what extent these factors combined can predict the onset of MOH. The study was a single-center prospective observational study of migraine patients. At inclusion, all patients completed questionnaires evaluating 1) personality (NEO Five-Factor Inventory), 2) disability (Migraine Disability Assessment), and 3) physical activity level (Physical Activity Scale 2.1). Diagnostic codes from patients' electronic health records confirmed whether they had developed MOH during the study period of 20 months. Analyses of associations were performed, and to identify which of the variables predict the onset of MOH, a multivariable least absolute shrinkage and selection operator (LASSO) logistic regression model was fitted to predict the presence or absence of MOH. Out of 131 participants, 12% (n = 16) developed MOH. Migraine disability score (OR = 1.02, 95% CI: 1.00 to 1.04), intensity of headache (OR = 1.49, 95% CI: 1.03 to 2.15) and headache frequency (OR = 1.02, 95% CI: 1.00 to 1.04) were associated with the onset of MOH, adjusting for age and gender. The predictive performance of the LASSO model (containing the predictors MIDAS score, MIDAS intensity and frequency, neuroticism score, time with moderate physical activity, educational level, hours of sleep daily and number of contacts with the headache clinic) in terms of area under the curve (AUC) was weak (apparent AUC = 0.62, 95% CI: 0.41-0.82). Disability, headache intensity and frequency were associated with the onset of MOH whereas personality and the
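The AUC evaluation described in this record can be sketched in scikit-learn (a minimal illustration on synthetic data, not the study's cohort): fit an L1-penalized logistic model and score out-of-fold predicted probabilities with ROC AUC:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a small clinical cohort with a few true predictors.
X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=1)
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l1", solver="liblinear", C=1.0))
# Out-of-fold predicted probabilities give a less optimistic AUC estimate
# than refitting and scoring on the same data.
prob = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(y, prob)
print("cross-validated AUC:", round(auc, 3))
```

An apparent (in-sample) AUC would typically exceed this cross-validated figure, which is why the study reports both the apparent value and a confidence interval.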

  8. Operational amplifiers

    CERN Document Server

    Dostal, Jiri

    1993-01-01

    This book provides the reader with the practical knowledge necessary to select and use operational amplifier devices. It presents an extensive treatment of applications and a practically oriented, unified theory of operational circuits.

  9. Selection of stirling engine parameter and modes of joint operation with the Topaz II

    International Nuclear Information System (INIS)

    Kirillov, E.Y.; Ogloblin, B.G.; Shalaev, A.I.

    1996-01-01

    In addition to a high-temperature thermionic conversion cycle, the application of a low-temperature machine cycle, such as the Stirling engine, is being considered. To select the optimum mode for joint operation of the Topaz II system and the Stirling engine, output electric parameters are obtained as a function of the thermal power released in the TFE fuel cores. The hydraulic diagram used for joint operation of the Topaz II and the Stirling engine is considered. Requirements for the hydraulic characteristics of the Stirling engine heat exchangers are formulated. The scope of modifications necessary to mount the Stirling engine on the Topaz II is estimated. copyright 1996 American Institute of Physics

  10. Coupling bacterioplankton populations and environment to community function in coastal temperate waters

    DEFF Research Database (Denmark)

    Traving, S. J.; Bentzon-Tilia, Mikkel; Knudsen-Leerbeck, H.

    2016-01-01

    Bacterioplankton play a key role in marine waters facilitating processes important for carbon cycling. However, the influence of specific bacterial populations and environmental conditions on bacterioplankton community performance remains unclear. The aim of the present study was to identify...... drivers of bacterioplankton community functions, taking into account the variability in community composition and environmental conditions over seasons, in two contrasting coastal systems. A Least Absolute Shrinkage and Selection Operator (LASSO) analysis of the biological and chemical data obtained from...... surface waters over a full year indicated that specific bacterial populations were linked to measured functions. Namely, Synechococcus (Cyanobacteria) was strongly correlated with protease activity. Both function and community composition showed seasonal variation. However, the pattern of substrate...
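The LASSO analysis described here, linking candidate populations and environmental variables to a measured community function, amounts to sparse linear regression; a minimal sketch with scikit-learn on synthetic data (the variable names and effect sizes are illustrative, not the study's):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n, p = 120, 15
X = rng.normal(size=(n, p))          # e.g. abundances of candidate populations
y = 2.0 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=n)  # measured rate

Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(Xs, y)     # penalty strength chosen by cross-validation
drivers = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print("putative drivers:", drivers)
```

The nonzero coefficients flag which populations are linked to the function, analogous to the population-function correlations the study reports.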

  11. Human Error Probabilities (HEPs) for generic tasks and Performance Shaping Factors (PSFs) selected for railway operations

    DEFF Research Database (Denmark)

    Thommesen, Jacob; Andersen, Henning Boje

    This report describes an HRA (Human Reliability Assessment) of six generic tasks and four Performance Shaping Factors (PSFs) targeted at railway operations commissioned by Banedanmark. The selection and characterization of generic tasks and PSFs are elaborated by DTU Management in close...

  12. Selected Aspects of Wear Affecting Keyed Joints and Spline Connections During Operation of Aircrafts

    Directory of Open Access Journals (Sweden)

    Gębura Andrzej

    2014-12-01

    Full Text Available The paper deals with selected deficiencies of spline connections, such as angular or parallel misalignment (eccentricity) and excessive play. It is emphasized how important these deficiencies are for the smooth operation of entire driving units. The aim of the study is to provide a reference list of such deficiencies, with visual symptoms of wear, specification of mechanical measurements for mating surfaces, mathematical descriptions of waveforms for the dynamic variability of motion in such connections, and visualizations of the connection behaviour acquired with the use of the FAM-C and FDM-A. Attention is paid to hazards to flight safety when excessively worn spline connections are operated for long periods of time.

  13. Safety Standard for Oxygen and Oxygen Systems: Guidelines for Oxygen System Design, Materials Selection, Operations, Storage, and Transportation

    Science.gov (United States)

    1996-01-01

    NASA's standard for oxygen system design, materials selection, operation, and transportation is presented. Minimum guidelines applicable to NASA Headquarters and all NASA Field Installations are contained.

  14. Topology of evolving, mutagenized viral populations: quasispecies expansion, compression, and operation of negative selection.

    Science.gov (United States)

    Ojosnegros, Samuel; Agudo, Rubén; Sierra, Macarena; Briones, Carlos; Sierra, Saleta; González-López, Claudia; Domingo, Esteban; Cristina, Juan

    2008-07-17

    The molecular events and evolutionary forces underlying lethal mutagenesis of virus (or virus extinction through an excess of mutations) are not well understood. Here we apply for the first time phylogenetic methods and Partition Analysis of Quasispecies (PAQ) to monitor genetic distances and intra-population structures of mutant spectra of foot-and-mouth disease virus (FMDV) quasispecies subjected to mutagenesis by base and nucleoside analogues. Phylogenetic and PAQ analyses have revealed a highly dynamic variation of intrapopulation diversity of FMDV quasispecies. The population diversity first suffers striking expansions in the presence of mutagens and then compressions either when the presence of the mutagenic analogue was discontinued or when a mutation that decreased sensitivity to a mutagen was selected. The pattern of mutations found in the populations was in agreement with the behavior of the corresponding nucleotide analogues with FMDV in vitro. Mutations accumulated at preferred genomic sites, and dn/ds ratios indicate the operation of negative (or purifying) selection in populations subjected to mutagenesis. No evidence of unusually elevated genetic distances has been obtained for FMDV populations approaching extinction. Phylogenetic and PAQ analysis provide adequate procedures to describe the evolution of viral sequences subjected to lethal mutagenesis. These methods define the changes of intra-population structure more precisely than mutation frequencies and Shannon entropies. PAQ is very sensitive to variations of intrapopulation genetic distances. Strong negative (or purifying) selection operates in FMDV populations subjected to enhanced mutagenesis. The quantifications provide evidence that extinction does not imply unusual increases of intrapopulation complexity, in support of the lethal defection model of virus extinction.

  15. Equipment related methods and means for professional selection of operators in the Kozloduy NPP

    International Nuclear Information System (INIS)

    Pandov, E.; Popandreeva, A.

    1993-01-01

    The principal methods of psychological testing for the selection of nuclear power plant operators are presented. The mobility of psychic processes, the stability and shifting of attention, short-term memory and the speed of sensory-motor reactions are evaluated by adopted testing procedures to assess the functional status of the applicants. A set of 11 tests, divided into 4 groups according to the qualities under evaluation, is described. The tests include various reactions to light and sound stimuli and a repetitive numerical test within a limited time. The differentiating bimodal response is considered the most conclusive for the assessment of the sensory-motor response, which is of importance in the work of nuclear reactor operators. 4 refs. (R.Ts.)

  16. Design Criteria, Operating Conditions, and Nickel-Iron Hydroxide Catalyst Materials for Selective Seawater Electrolysis.

    Science.gov (United States)

    Dionigi, Fabio; Reier, Tobias; Pawolek, Zarina; Gliech, Manuel; Strasser, Peter

    2016-05-10

    Seawater is an abundant water resource on our planet, and its direct electrolysis has the advantage that it would not compete with activities demanding fresh water. Oxygen selectivity is challenging when performing seawater electrolysis owing to competing chloride oxidation reactions. In this work we propose a design criterion based on thermodynamic and kinetic considerations that identifies alkaline conditions as preferable for obtaining high selectivity for the oxygen evolution reaction. The criterion states that catalysts sustaining the desired operating current at an overpotential low enough to avoid competing chloride oxidation can operate selectively; a nickel-iron hydroxide catalyst was evaluated against this criterion in a seawater-mimicking electrolyte. The catalyst was synthesized by a solvothermal method, and the activity, surface redox chemistry, and stability were tested electrochemically in alkaline and near-neutral conditions (borate buffer at pH 9.2), in both fresh and seawater-mimicking electrolytes. The Tafel slope at low current densities is not influenced by pH or the presence of chloride. On the other hand, the addition of chloride ions influences the temporal evolution of the nickel reduction peak and both the activity and stability at high current densities at pH 9.2. Faradaic efficiency close to 100% under the operating conditions predicted by our design criterion was proven using in situ electrochemical mass spectrometry. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. High-Dimensional Additive Hazards Regression for Oral Squamous Cell Carcinoma Using Microarray Data: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Omid Hamidi

    2014-01-01

    Full Text Available Microarray technology results in high-dimensional, low-sample-size data sets. Therefore, fitting sparse models is essential, because only a small number of influential genes can reliably be identified. A number of variable selection approaches have been proposed for high-dimensional time-to-event data based on Cox proportional hazards, where censoring is present. The present study applied three sparse variable selection techniques (Lasso, smoothly clipped absolute deviation, and the smooth integration of counting and absolute deviation) to gene expression survival time data using the additive risk model, which is adopted when the absolute effects of multiple predictors on the hazard function are of interest. The performance of the techniques was evaluated by time-dependent ROC curves and bootstrap .632+ prediction error curves. The genes selected by all methods were highly significant (P < 0.001). The Lasso showed the maximum median area under the ROC curve over time (0.95), and smoothly clipped absolute deviation showed the lowest prediction error (0.105). It was observed that the genes selected by all methods improved the prediction of the purely clinical model, indicating the valuable information contained in the microarray features. It was concluded that the approaches used can satisfactorily predict survival based on selected gene expression measurements.

  18. On the issue of selecting technical and operational parameters for buses in urban passenger routes

    Directory of Open Access Journals (Sweden)

    Rudzinskyi V.V.

    2017-10-01

    Full Text Available Problems of public transport bus service in urban areas were analyzed. The aim of the article is to determine the actual operational parameters of buses during passenger transportation in Zhytomyr. Ways of determining the technical and operational parameters of buses were developed using visual and tabular methods of registering city buses' real-time speed and acceleration performance with a GPS-monitoring system, with the help of the city's communicational and informational intelligent transport system. Experimental studies of city bus motion parameters were presented. A comprehensive survey of passenger traffic and the conditions of public transport functioning in Zhytomyr was carried out. The values of the technical and operational parameters of buses on city routes were obtained. Preliminary conclusions and recommendations concerning the criteria for selecting the optimal rolling stock for the city's bus network were suggested.

  19. The influence of selected design and operating parameters on the dynamics of the steam micro-turbine

    Science.gov (United States)

    Żywica, Grzegorz; Kiciński, Jan

    2015-10-01

    The topic of the article is the analysis of the influence of selected design parameters and operating conditions on a radial steam micro-turbine, which was adapted to operate with a low-boiling agent in the Organic Rankine Cycle (ORC). In the following parts of this article, the results of the thermal load analysis, the residual unbalance and the stiffness of the bearing supports are discussed. Advanced computational methods and numerical models have been used. The computational analysis showed that the steam micro-turbine is characterized by very good dynamic properties and is resistant to extreme operating conditions. The prototype micro-turbine has passed a series of test calculations. It has been found that it can be subjected to experimental research in the micro combined heat and power system.

  20. Topology of evolving, mutagenized viral populations: quasispecies expansion, compression, and operation of negative selection

    Directory of Open Access Journals (Sweden)

    Sierra Saleta

    2008-07-01

    Full Text Available Abstract Background The molecular events and evolutionary forces underlying lethal mutagenesis of virus (or virus extinction through an excess of mutations) are not well understood. Here we apply for the first time phylogenetic methods and Partition Analysis of Quasispecies (PAQ) to monitor genetic distances and intra-population structures of mutant spectra of foot-and-mouth disease virus (FMDV) quasispecies subjected to mutagenesis by base and nucleoside analogues. Results Phylogenetic and PAQ analyses have revealed a highly dynamic variation of intrapopulation diversity of FMDV quasispecies. The population diversity first suffers striking expansions in the presence of mutagens and then compressions either when the presence of the mutagenic analogue was discontinued or when a mutation that decreased sensitivity to a mutagen was selected. The pattern of mutations found in the populations was in agreement with the behavior of the corresponding nucleotide analogues with FMDV in vitro. Mutations accumulated at preferred genomic sites, and dn/ds ratios indicate the operation of negative (or purifying) selection in populations subjected to mutagenesis. No evidence of unusually elevated genetic distances has been obtained for FMDV populations approaching extinction. Conclusion Phylogenetic and PAQ analysis provide adequate procedures to describe the evolution of viral sequences subjected to lethal mutagenesis. These methods define the changes of intra-population structure more precisely than mutation frequencies and Shannon entropies. PAQ is very sensitive to variations of intrapopulation genetic distances. Strong negative (or purifying) selection operates in FMDV populations subjected to enhanced mutagenesis. The quantifications provide evidence that extinction does not imply unusual increases of intrapopulation complexity, in support of the lethal defection model of virus extinction.

  1. Regulatory principles, criteria and guidelines for site selection, design, construction and operation of uranium tailings retention systems

    International Nuclear Information System (INIS)

    Coady, J.R.; Henry, L.C.

    1978-01-01

    Principles, criteria and guidelines developed by the Atomic Energy Control Board for the management of uranium mill tailings are discussed. The application of these concepts is considered in relation to site selection, design and construction, operation and decommissioning of tailings retention facilities.

  2. Prediction models for solitary pulmonary nodules based on curvelet textural features and clinical parameters.

    Science.gov (United States)

    Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua

    2013-01-01

    Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
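The PCA-then-classify pipeline this record describes can be sketched with scikit-learn; here a public breast-cancer dataset stands in for the curvelet textural features, and the choice of 12 components mirrors the 12 principal components mentioned in the abstract (everything else is illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Public dataset as a stand-in for the radiological feature matrix.
X, y = load_breast_cancer(return_X_y=True)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=12),   # dimensionality reduction
                    SVC(kernel="rbf"))      # SVM on the reduced features
acc = cross_val_score(clf, X, y, cv=10).mean()
print("10-fold CV accuracy:", round(acc, 3))
```

Using a pipeline ensures the scaler and PCA are refitted inside each cross-validation fold, avoiding information leakage from the held-out fold.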

  3. Mapping Haplotype-haplotype Interactions with Adaptive LASSO

    Directory of Open Access Journals (Sweden)

    Li Ming

    2010-08-01

    Full Text Available Abstract Background The genetic etiology of complex diseases in humans has been commonly viewed as a complex process involving both genetic and environmental factors functioning in a complicated manner. Quite often the interactions among genetic variants play major roles in determining the susceptibility of an individual to a particular disease. Statistical methods for modeling interactions underlying complex diseases between single genetic variants (e.g. single nucleotide polymorphisms, or SNPs) have been extensively studied. Recently, haplotype-based analysis has gained popularity among genetic association studies. When multiple sequence or haplotype interactions are involved in determining an individual's susceptibility to a disease, it presents daunting challenges in the statistical modeling and testing of the interaction effects, largely due to the complicated higher-order epistatic complexity. Results In this article, we propose a new strategy for modeling haplotype-haplotype interactions under the penalized logistic regression framework with an adaptive L1-penalty. We consider interactions of sequence variants between haplotype blocks. The adaptive L1-penalty allows simultaneous effect estimation and variable selection in a single model. We propose a new parameter estimation method which estimates and selects parameters by the modified Gauss-Seidel method nested within the EM algorithm. Simulation studies show that it has a low false positive rate and reasonable power in detecting haplotype interactions. The method is applied to test haplotype interactions involved in mother and offspring genomes in a small-for-gestational-age (SGA) neonates data set, and significant interactions between different genomes are detected. Conclusions As demonstrated by the simulation studies and real data analysis, the approach developed provides an efficient tool for the modeling and testing of haplotype interactions.
The implementation of the method in R codes can be
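The adaptive L1 idea this record relies on (weights derived from an initial estimate reshaping the penalty) can be sketched in Python rather than the authors' R implementation. This is a generic adaptive-lasso sketch for linear regression, not the haplotype-interaction model itself; it uses the standard equivalence that a weighted-L1 problem reduces to an ordinary lasso on rescaled predictors:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(7)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Step 1: an initial consistent estimate (here OLS) defines the adaptive
# weights w_j = 1 / |beta_init_j|, so weak predictors get heavy penalties.
beta_init = LinearRegression().fit(X, y).coef_
w = 1.0 / np.abs(beta_init)

# Step 2: the weighted-L1 problem is equivalent to an ordinary lasso on the
# rescaled predictors X_j / w_j; rescale the solution back afterwards.
fit = Lasso(alpha=0.05).fit(X / w, y)
beta_adaptive = fit.coef_ / w
selected = np.flatnonzero(np.abs(beta_adaptive) > 1e-8)
print("selected:", selected)
```

Because the weights blow up for near-zero initial estimates, noise predictors are penalized almost out of the model while genuine effects are only lightly shrunk, which is the source of the adaptive lasso's oracle behavior.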

  4. Criteria Considered in Selecting Feed Items for Americium-241 Oxide Production Operations

    Energy Technology Data Exchange (ETDEWEB)

    Schulte, Louis D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-30

    The analysis in this document serves the purpose of defining a number of attributes in selection of feed items to be utilized in recovery/recycle of Pu and also production operations of 241AmO2 material intended to meet specification requirements. This document was written in response to a specific request on the part of the 2014 annual program review which took place over the dates of October 28-29, 2014. A number of feed attributes are noted including: (1) Non-interference with existing Pu recovery operations; (2) Content of sufficient 241Am to allow process efficiency in recovery operations; (3) Absence of indications that 243Am might be mixed in with the Pu/241Am material; (4) Absence of indications that Cm might be mixed in with the Pu/241Am material; (5) Absence of indications of other chemical elements that would present difficulty in chemical separation from 241Am; (6) Feed material not expected to present difficulty in dissolution; (7) Dose issues; (8) Process efficiency; (9) Size; (10) Hazard associated with items and package configuration in the vault; (11) Within existing NEPA documentation. The analysis in this document provides a baseline of attributes considered for feed materials, but does not presume to replace the need for technical expertise and judgment on the part of individuals responsible for selecting the material feed to be processed. This document is not comprehensive as regards all attributes that could prove to be important. The value of placing a formal QA hold point on accepting feed items versus more informal management of feed items is discussed in the summation of this analysis. The existing planned QA hold points on 241AmO2 products produced and packaged may be adequate as the entire project is based on QA of the product rather than QA of the process. The probability of introduction of items that would inherently cause the241

  5. A comparison of statistical methods for genomic selection in a mice population

    Directory of Open Access Journals (Sweden)

    Neves Haroldo HR

    2012-11-01

    Full Text Available Abstract Background The availability of high-density panels of SNP markers has opened new perspectives for marker-assisted selection strategies, such that genotypes for these markers are used to predict the genetic merit of selection candidates. Because the number of markers is often much larger than the number of phenotypes, marker effect estimation is not a trivial task. The objective of this research was to compare the predictive performance of ten different statistical methods employed in genomic selection, by analyzing data from a heterogeneous stock mice population. Results For the five traits analyzed (W6W: weight at six weeks, WGS: growth slope, BL: body length, %CD8+: percentage of CD8+ cells, CD4+/CD8+: ratio between CD4+ and CD8+ cells), within-family predictions were more accurate than across-family predictions, although this superiority in accuracy varied markedly across traits. For within-family prediction, two kernel methods, Reproducing Kernel Hilbert Spaces Regression (RKHS) and Support Vector Regression (SVR), were the most accurate for W6W, while a polygenic model also had comparable performance. A form of ridge regression assuming that all markers contribute to the additive variance (RR_GBLUP) figured among the most accurate for WGS and BL, while two variable selection methods (LASSO and Random Forest, RF) had the greatest predictive abilities for %CD8+ and CD4+/CD8+. RF, RKHS, SVR and RR_GBLUP outperformed the remaining methods in terms of bias and inflation of predictions. Conclusions Methods with large conceptual differences reached very similar predictive abilities, and a clear re-ranking of methods was observed as a function of the trait analyzed. Variable selection methods were more accurate than the remainder in the case of %CD8+ and CD4+/CD8+, and these traits are likely to be influenced by a smaller number of QTL than the remainder. Judged by their overall performance across traits and computational requirements, RR
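The kind of method comparison this record performs (shrinkage of all markers vs. variable selection vs. a nonparametric learner, scored by cross-validated predictive ability) can be sketched on synthetic data with scikit-learn; the sample sizes, penalty values, and the sparse architecture below are arbitrary illustrative choices:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_score

# Synthetic trait with a sparse genetic architecture (5 of 60 markers active).
X, y = make_regression(n_samples=150, n_features=60, n_informative=5,
                       noise=5.0, random_state=3)
models = {
    "ridge": Ridge(alpha=1.0),                      # all markers contribute
    "lasso": Lasso(alpha=1.0),                      # variable selection
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=3),
}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
for name in sorted(scores, key=scores.get, reverse=True):
    print(f"{name}: R^2 = {scores[name]:.2f}")
```

With a sparse simulated trait the selection-based lasso tends to rank highly, echoing the record's finding that the best method depends on how many loci actually influence the trait.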

  6. Application of multi-SNP approaches Bayesian LASSO and AUC-RF to detect main effects of inflammatory-gene variants associated with bladder cancer risk.

    Directory of Open Access Journals (Sweden)

    Evangelina López de Maturana

    Full Text Available The relationship between inflammation and cancer is well established in several tumor types, including bladder cancer. We performed an association study between 886 inflammatory-gene variants and bladder cancer risk in 1,047 cases and 988 controls from the Spanish Bladder Cancer (SBC/EPICURO) Study. A preliminary exploration with the widely used univariate logistic regression approach did not identify any significant SNP after correcting for multiple testing. We further applied two more comprehensive methods to capture the complexity of bladder cancer genetic susceptibility: Bayesian Threshold LASSO (BTL), a regularized regression method, and AUC-Random Forest (AUC-RF), a machine-learning algorithm. Both approaches explore the joint effect of markers. BTL analysis identified a signature of 37 SNPs in 34 genes showing an association with bladder cancer. AUC-RF detected an optimal predictive subset of 56 SNPs. 13 SNPs were identified by both methods in the total population. Using resources from the Texas Bladder Cancer study, we were able to replicate 30% of the SNPs assessed. The associations between inflammatory SNPs and bladder cancer were reexamined among non-smokers to eliminate the effect of tobacco, one of the strongest and most prevalent environmental risk factors for this tumor. A 9-SNP signature was detected by BTL. Here we report, for the first time, a set of SNPs in inflammatory genes jointly associated with bladder cancer risk. These results highlight the importance of the complex structure of genetic susceptibility associated with cancer risk.
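The AUC-RF step (rank markers by random-forest importance, then pick the subset size that maximizes AUC) can be approximated with a simplified scikit-learn sketch; the data are synthetic, and a single held-out split plus a fixed grid of subset sizes stands in for AUC-RF's full backward-search procedure:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic SNP-like data: 40 candidate markers, 6 informative.
X, y = make_classification(n_samples=600, n_features=40, n_informative=6,
                           random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

rf = RandomForestClassifier(n_estimators=300, random_state=5).fit(X_tr, y_tr)
ranking = np.argsort(rf.feature_importances_)[::-1]

# Evaluate nested top-k subsets by held-out AUC and keep the best one.
best_auc, best_k = 0.0, 0
for k in (5, 10, 20, 40):
    cols = ranking[:k]
    sub = RandomForestClassifier(n_estimators=300, random_state=5)
    sub.fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, sub.predict_proba(X_te[:, cols])[:, 1])
    if auc > best_auc:
        best_auc, best_k = auc, k
print("best subset size:", best_k, "AUC:", round(best_auc, 3))
```

Like the joint-effect methods in the record, this scores whole subsets of markers rather than testing each SNP in isolation, which is how weak individual signals can still contribute to a predictive signature.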

  7. Operation of the Selected Local Action Group

    Directory of Open Access Journals (Sweden)

    Lukáš Nevěděl

    2015-01-01

    The main objective of this article is to compare the current operation of a selected local action group with the concept of learning regions. This comparison is built on detailed knowledge and understanding of the operation of the local action group Podbrnensko citizens’ association (Podbrnensko CA) and of learning regions in general. This presupposes understanding community-based processes from the perspective of residents, the important stakeholders who influence the operation of communities or localities. The operation of local action groups is in line with the current concept of community-led local development (CLLD), which uses elements of the LEADER method. In this method the solution of development problems comes primarily from the inside, not from the outside, of the studied territory. The methods used for the collection of empirical data were mostly observation and interviews with all partners involved in the LAG (31 people), all mayors in the LAG (29 people) and 176 people from the region, i.e. methods that yield so-called deep data. Among the primary techniques applied in the research are participant observation, unstructured or semi-structured interviews, and public debates.

  8. Clinical utility of routine pre-operative axillary ultrasound and fine needle aspiration cytology in patient selection for sentinel lymph node biopsy.

    Science.gov (United States)

    Rattay, T; Muttalib, M; Khalifa, E; Duncan, A; Parker, S J

    2012-04-01

    In patients with operable breast cancer, pre-operative evaluation of the axilla may be of use in the selection of appropriate axillary surgery. Pre-operative axillary ultrasound (US) and fine needle aspiration cytology (FNAC) assessments have become routine practice in many breast units, although the evidence base is still gathering. This study assessed the clinical utility of US ± FNAC in patient selection for either axillary node clearance (ANC) or sentinel lymph node biopsy (SLNB) in patients undergoing surgery for operable breast cancer. Over a two-year period, 348 patients with a clinically negative axilla underwent axillary US; 67 patients with suspicious nodes on US also underwent FNAC. The sensitivity and specificity of axillary investigations to determine nodal involvement were 56% (confidence interval: 47-64%) and 90% (84-93%) for US alone, and 76% (61-87%) and 100% (65-100%) for FNAC combined with US, respectively. With a positive US, the post-test probability was 78%; a negative US carried a post-test probability of 25%. When FNAC was positive, the post-test probability approached unity; a negative FNAC yielded a post-test probability of 52%. All patients with positive FNAC and most patients with suspicious US were listed for ANC after consideration at the multi-disciplinary team (MDT) meeting. With pre-operative axillary US ± FNAC, 20% of patients were saved a potential second axillary procedure, facilitating a reduction in the overall re-operation rate to 12%. In this study, a positive pre-operative US ± FNAC directs patients towards ANC. When the result is negative, other clinico-pathological factors need to be taken into account in the selection of the appropriate axillary procedure.
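
The post-test probabilities quoted above follow from sensitivity, specificity and the pre-test probability (prevalence of nodal involvement) via Bayes' rule. A minimal sketch, not the authors' code; the pre-test probability of 0.39 is an assumed illustrative value chosen to roughly reproduce the reported figures for US alone.

```python
def post_test_positive(sens, spec, prev):
    """P(nodal involvement | positive test) = TP / (TP + FP)."""
    tp = sens * prev
    fp = (1.0 - spec) * (1.0 - prev)
    return tp / (tp + fp)

def post_test_negative(sens, spec, prev):
    """P(nodal involvement | negative test) = FN / (FN + TN)."""
    fn = (1.0 - sens) * prev
    tn = spec * (1.0 - prev)
    return fn / (fn + tn)

prev = 0.39  # hypothetical pre-test probability of nodal involvement
print(round(post_test_positive(0.56, 0.90, prev), 2))  # ~0.78 after positive US
print(round(post_test_negative(0.56, 0.90, prev), 2))  # ~0.24 after negative US
```

With FNAC's point-estimate specificity of 100% there are no false positives, so under these formulas a positive FNAC drives the post-test probability to 1 regardless of prevalence.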

  9. Multiscale Data Assimilation for Large-Eddy Simulations

    Science.gov (United States)

    Li, Z.; Cheng, X.; Gustafson, W. I., Jr.; Xiao, H.; Vogelmann, A. M.; Endo, S.; Toto, T.

    2017-12-01

    Large-eddy simulation (LES) is a powerful tool for understanding atmospheric turbulence, boundary layer physics and cloud development, and there is a great need for developing data assimilation methodologies that can constrain LES models. The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) User Facility has been developing the capability to routinely generate ensembles of LES. The LES ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/modeling/lasso) is generating simulations for shallow convection days at the ARM Southern Great Plains site in Oklahoma. One of the major objectives of LASSO is to develop the capability to observationally constrain LES using a hierarchy of ARM observations. We have implemented a multiscale data assimilation (MSDA) scheme, which allows data assimilation to be implemented separately for distinct spatial scales, so that localized observations can be effectively assimilated to constrain the mesoscale fields in the LES area of about 15 km in width. The MSDA analysis is used to produce forcing data that drive LES. With this LES workflow we have examined 13 days with shallow convection selected from the period May-August 2016. We will describe the implementation of MSDA, present LES results, and address challenges and opportunities for applying data assimilation to LES studies.

  10. A diagnostic signal selection scheme for planetary gearbox vibration monitoring under non-stationary operational conditions

    International Nuclear Information System (INIS)

    Feng, Ke; Wang, KeSheng; Zhang, Mian; Ni, Qing; Zuo, Ming J

    2017-01-01

    The planetary gearbox, due to its unique mechanical structures, is an important rotating machine for transmission systems. Its engineering applications are often in non-stationary operational conditions, such as helicopters, wind energy systems, etc. The unique physical structures and working conditions make the vibrations measured from planetary gearboxes exhibit a complex time-varying modulation and therefore yield complicated spectral structures. As a result, traditional signal processing methods, such as Fourier analysis, and the selection of characteristic fault frequencies for diagnosis face serious challenges. To overcome this drawback, this paper proposes a signal selection scheme for fault-emphasized diagnostics based upon two order tracking techniques. The basic procedures for the proposed scheme are as follows. (1) Computed order tracking is applied to reveal the order contents and identify the order(s) of interest. (2) Vold–Kalman filter order tracking is used to extract the order(s) of interest—these filtered order(s) constitute the so-called selected vibrations. (3) Time domain statistic indicators are applied to the selected vibrations for faulty information-emphasized diagnostics. The proposed scheme is explained and demonstrated in a signal simulation model and experimental studies and the method proves to be effective for planetary gearbox fault diagnosis.

  11. Spaceborne construction and operations planning - Decision rules for selecting EVA, telerobot, and combined work-systems

    Science.gov (United States)

    Smith, Jeffrey H.

    1992-01-01

    An approach is presented for selecting an appropriate work-system for performing construction and operations tasks by humans and telerobots. The decision to use extravehicular activity (EVA) performed by astronauts, extravehicular robotics (EVR), or a combination of EVA and EVR is determined by the ratio of the marginal costs of EVA, EVR, and IVA. The approach proposed here is useful for examining cost trade-offs between tasks and performing trade studies of task improvement techniques (human or telerobotic).

  12. An independent safety assessment of Department of Energy nuclear reactor facilities: Training of operating personnel and personnel selection

    International Nuclear Information System (INIS)

    Drain, J.F.

    1981-02-01

    This study has been prepared for the Department of Energy's Nuclear Facilities Personnel Qualification and Training (NFPQT) Committee. Its purpose is to provide the Committee with background information on, and assessment of, the selection, training, and qualification of nuclear reactor operating personnel at DOE-owned facilities

  13. Testing the Lag Structure of Assets’ Realized Volatility Dynamics

    Directory of Open Access Journals (Sweden)

    Francesco Audrino

    2017-12-01

    A (conservative) test is applied to investigate the optimal lag structure for modeling realized volatility dynamics. The testing procedure relies on recent theoretical results that show the ability of the adaptive least absolute shrinkage and selection operator (adaptive lasso) to combine efficient parameter estimation, variable selection, and valid inference for time series processes. In an application to several constituents of the S&P 500 index it is shown that (i) the optimal significant lag structure is time-varying and subject to drastic regime shifts that seem to happen across assets simultaneously; (ii) in many cases the relevant information for prediction is included in the first 22 lags, corroborating previous results concerning the accuracy and the difficulty of outperforming out-of-sample the heterogeneous autoregressive (HAR) model; and (iii) some common features of the optimal lag structure can be identified across assets belonging to the same market segment or showing a similar beta with respect to the market index.
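
A toy sketch of the adaptive lasso idea the test builds on: cyclic coordinate descent in which each lag coefficient carries its own penalty weight, typically the inverse of a consistent initial estimate, so weak lags are penalized heavily and dropped while strong lags are barely shrunk. The data and weights below are hypothetical; this is not the paper's implementation.

```python
def adaptive_lasso(X, y, lam, weights, n_iter=200):
    """Coordinate descent for squared-error loss with a weighted L1 penalty."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of predictor j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                      for k in range(p) if k != j)) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            t = lam * weights[j]  # per-coefficient penalty level
            if rho > t:           # soft-thresholding update
                beta[j] = (rho - t) / z
            elif rho < -t:
                beta[j] = (rho + t) / z
            else:
                beta[j] = 0.0
    return beta

# Hypothetical design: y depends on column 1 only; column 2 is noise.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.1], [4.0, -0.1]]
y = [1.1, 2.0, 3.1, 3.9]
# Adaptive weights: small for the strong lag, large for the weak one.
beta = adaptive_lasso(X, y, lam=0.5, weights=[0.5, 10.0])
print([round(b, 2) for b in beta])  # second coefficient is exactly 0
```

The large weight on the noise column zeroes it out exactly, which is the selection-consistency property the testing procedure exploits.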

  14. Evaluation of digital soil mapping approaches with large sets of environmental covariates

    Science.gov (United States)

    Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas

    2018-01-01

    The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6 % of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. 
RF tended to over
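
The weighted model averaging used as method 6 can be sketched as follows. Weighting each model's predictions by the inverse of its cross-validation error is one plausible scheme; the weighting rule and all numbers below are assumptions for illustration, not the study's code.

```python
def model_average(predictions, cv_errors):
    """Combine per-site predictions from several models.

    predictions: {model name: [prediction per site]}
    cv_errors:   {model name: cross-validation error (e.g. RMSE)}
    Weights are proportional to 1 / error, normalized to sum to 1.
    """
    inv = {m: 1.0 / e for m, e in cv_errors.items()}
    total = sum(inv.values())
    w = {m: v / total for m, v in inv.items()}
    n_sites = len(next(iter(predictions.values())))
    return [sum(w[m] * predictions[m][i] for m in predictions)
            for i in range(n_sites)]

# Hypothetical soil-pH predictions at two sites from three of the methods.
preds = {"lasso": [5.0, 6.0], "rf": [5.5, 6.5], "brt": [6.0, 7.0]}
errs = {"lasso": 1.0, "rf": 0.5, "brt": 1.0}  # hypothetical CV errors
print(model_average(preds, errs))  # rf gets half the weight
```

Here the random forest, with the smallest error, receives weight 0.5 and pulls the average toward its predictions, which is consistent with MA outcompeting RF only on some responses.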

  15. Robust estimation of the expected survival probabilities from high-dimensional Cox models with biomarker-by-treatment interactions in randomized clinical trials

    Directory of Open Access Journals (Sweden)

    Nils Ternès

    2017-05-01

    Background: Thanks to the advances in genomics and targeted treatments, more and more prediction models based on biomarkers are being developed to predict potential benefit from treatments in a randomized clinical trial. Although the methodological framework for the development and validation of prediction models in a high-dimensional setting is becoming increasingly established, no clear guidance exists yet on how to estimate expected survival probabilities in a penalized model with biomarker-by-treatment interactions. Methods: Based on a parsimonious biomarker selection in a penalized high-dimensional Cox model (lasso or adaptive lasso), we propose a unified framework to: estimate internally the predictive accuracy metrics of the developed model (using double cross-validation); estimate the individual survival probabilities at a given timepoint; construct confidence intervals thereof (analytical or bootstrap); and visualize them graphically (pointwise or smoothed with splines). We compared these strategies through a simulation study covering scenarios with or without biomarker effects. We applied the strategies to a large randomized phase III clinical trial that evaluated the effect of adding trastuzumab to chemotherapy in 1574 early breast cancer patients, for which the expression of 462 genes was measured. Results: In our simulations, penalized regression models using the adaptive lasso estimated the survival probability of new patients with low bias and standard error; bootstrapped confidence intervals had empirical coverage probability close to the nominal level across very different scenarios. The double cross-validation performed on the training data set closely mimicked the predictive accuracy of the selected models in external validation data. We also propose a useful visual representation of the expected survival probabilities using splines. In the breast cancer trial, the adaptive lasso penalty selected a prediction model with 4

  16. Machine Learning for Education: Learning to Teach

    Science.gov (United States)

    2016-12-01

    silhouette is high, and there are enough data points within each cluster. Of course, one could use a Bayesian prior (e.g., Chinese restaurant process...over k if one believes their data are well-suited to such an interpretation. Based on our data and the metrics shown in Figure 2, we select k = 4 for our...to evaluate during training for the shrinkage parameter, λ. Our approach combines a feature selection subroutine [31] as well as LASSO regression [38

  17. Application of a collaborative modelling and strategic fuzzy decision support system for selecting appropriate resilience strategies for seaport operations

    Directory of Open Access Journals (Sweden)

    Andrew John

    2014-06-01

    The selection of an appropriate resilience investment strategy to optimize the operational efficiency of a seaport is a challenging task, given that many criteria need to be considered and modelled under an uncertain environment. The design of such a complex decision system involves many subjective and imprecise parameters contained in different quantitative and qualitative forms. This paper proposes a fuzzy multi-attribute decision making methodology for the selection of an appropriate resilience investment strategy in a succinct and straightforward manner. The decision support model allows for collaborative modelling of the system by multiple analysts in a group decision making process. Fuzzy analytic hierarchy process (FAHP) was utilized to analyse the complex structure of the system to obtain the weights of all the criteria, while the fuzzy technique for order of preference by similarity to ideal solution (TOPSIS) was employed to facilitate the ranking of the resilience strategies. Given that it is often financially difficult to invest in all the resilience strategies, it is envisaged that the proposed approach could provide decision makers with a flexible and transparent tool for selecting appropriate resilience strategies aimed at increasing the resilience of seaport operations.

  18. Operation and Maintenance Plan for the 300-FF-5 Operable Unit

    International Nuclear Information System (INIS)

    Singleton, K.M.

    1996-09-01

    This document is the operation and maintenance plan for the 300-FF-5 groundwater operable unit. The purpose of this plan is to identify tasks necessary to verify the effectiveness of the selected alternative. This plan also describes the monitoring program and administrative tasks that will be used as the preferred alternative for the remediation of groundwater in the 300-FF-5 Operable Unit. The preferred alternative selected for remediation of groundwater consists of institutional controls

  19. Estimated Mortality of Selected Migratory Bird Species from Mowing and Other Mechanical Operations in Canadian Agriculture

    Directory of Open Access Journals (Sweden)

    Joerg Tews

    2013-12-01

    Mechanical operations such as mowing, tilling, seeding, and harvesting are well-known sources of direct avian mortality in agricultural fields. However, there are currently no mortality rate estimates available for any species group or larger jurisdiction. Even reviews of sources of mortality in birds have failed to address mechanical disturbance in farm fields. To overcome this information gap we provide estimates of total mortality rates by mechanical operations for five selected species across Canada. In our step-by-step modeling approach we (i) quantified the amount of various types of agricultural land in each Bird Conservation Region (BCR) in Canada, (ii) estimated population densities by region and agricultural habitat type for each selected species, (iii) estimated the average timing of mechanical agricultural activities, egg laying, and fledging, and (iv) used these values and additional demographic parameters to derive estimates of total mortality by species within each BCR. Based on our calculations the total annual estimated incidental take of young ranged from ~138,000 for Horned Lark (Eremophila alpestris) to as much as ~941,000 for Savannah Sparrow (Passerculus sandwichensis). Net losses to the fall flight of birds, i.e., those birds that would have fledged successfully in the absence of mechanical disturbance, were, for example, ~321,000 for Bobolink (Dolichonyx oryzivorus) and ~483,000 for Savannah Sparrow. Although our estimates are subject to an unknown degree of uncertainty, this assessment is a very important first step because it provides a broad estimate of incidental take for a set of species that may be particularly vulnerable to mechanical operations and a starting point for future refinements of model parameters if and when they become available.
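
The step-by-step approach above reduces, for one species in one BCR, to a chain of multiplications. A sketch with entirely hypothetical numbers, not the study's parameters:

```python
def incidental_take(area_ha, pairs_per_ha, frac_nests_exposed, young_per_nest):
    """Expected young lost: habitat area x nesting density x fraction of
    nests active when machinery passes x average brood size."""
    return area_ha * pairs_per_ha * frac_nests_exposed * young_per_nest

take = incidental_take(area_ha=1_000_000,       # hayfield area in the BCR
                       pairs_per_ha=0.05,       # breeding pairs per hectare
                       frac_nests_exposed=0.4,  # mowing overlaps 40% of nests
                       young_per_nest=4)        # eggs/young per nest
print(int(take))  # 80000 young at risk in this hypothetical region
```

Summing such products over habitat types and BCRs gives the national totals reported above; refining any one factor (e.g. mowing dates versus fledging dates) propagates directly through the chain.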

  20. Gas Reactor International Cooperative Program. Interim report. Construction and operating experience of selected European Gas-Cooled Reactors

    International Nuclear Information System (INIS)

    1978-09-01

    The construction and operating experience of selected European Gas-Cooled Reactors is summarized along with technical descriptions of the plants. Included in the report are the AVR Experimental Pebble Bed Reactor, the Dragon Reactor, AGR Reactors, and the Thorium High Temperature Reactor (THTR). The study demonstrates that the European experience has been favorable and forms a good foundation for the development of Advanced High Temperature Reactors

  1. Savings from controlled measurement operations during the selection of the methods of intensifying inflows

    Energy Technology Data Exchange (ETDEWEB)

    Pluden, I A

    1979-01-01

    The effectiveness of information support for selecting methods of intensifying oil recovery was investigated. Based on the methodology for determining the operating expenditure for each well and the cost of oil extraction developed by the All-Union Scientific Research Institute for Gas, formulae relating cost to well output are generalized to the case of flooded wells. Formulae suitable for calculating the savings from applying intensification methods are presented. Well output is treated as a random variable. Formulae are derived for the probability density and distribution function of the cost of oil extraction per well, and a graph of this distribution function is given. The need to account for measurement error in well output is pointed out: in practice this error leads to significant mistakes in selecting intensification methods and in evaluating their effectiveness, which ultimately carries a significant economic penalty. Recommendations are made for eliminating this disadvantage through rational organization of measurement control operations. A graph is given of the relationship between the duration of well-output measurements and their monthly frequency (periodicity) for various well outputs, for the case where the measurement error of the yields does not exceed ±5%.

  2. Accuracy of genomic selection for alfalfa biomass yield in different reference populations.

    Science.gov (United States)

    Annicchiarico, Paolo; Nazzicari, Nelson; Li, Xuehui; Wei, Yanling; Pecetti, Luciano; Brummer, E Charles

    2015-12-01

    Genomic selection based on genotyping-by-sequencing (GBS) data could accelerate alfalfa yield gains if it displayed moderate ability to predict parent breeding values. Its value would be further enhanced by an ability to predict for germplasm/reference populations other than those in which the model was defined. Prediction accuracy may be influenced by statistical models, SNP calling procedures and missing data imputation strategies. Landrace and variety material from two genetically-contrasting reference populations, i.e., 124 elite genotypes adapted to the Po Valley (sub-continental climate; PV population) and 154 genotypes adapted to Mediterranean-climate environments (Me population), were genotyped by GBS and phenotyped in separate environments for dry matter yield of their dense-planted half-sib progenies. Both populations showed no sub-population genetic structure. Predictive accuracy was higher with joint rather than separate SNP calling for the two data sets, and with random forest imputation of missing data. Highest accuracy was obtained using Support Vector Regression (SVR) for PV, and Ridge Regression BLUP and SVR for Me germplasm. Bayesian methods (Bayes A, Bayes B and Bayesian Lasso) tended to be less accurate. Random Forest Regression was the least accurate model. Accuracy attained about 0.35 for Me in the range of 0.30-0.50 missing data, and 0.32 for PV at 0.50 missing data, using at least 10,000 SNP markers. Cross-population predictions based on a smaller subset of common SNPs implied a relative loss of accuracy of about 25% for Me and 30% for PV. Genome-wide association analyses based on large subsets of M. truncatula-aligned markers revealed many SNPs with modest association with yield, and some genome areas hosting putative QTLs. A comparison of genomic vs. conventional selection for parent breeding value, assuming 1-year vs. 5-year selection cycles respectively, indicated over three-fold greater predicted yield gain per unit time for genomic selection

  3. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
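
The rotation (orthogonalization) of adjusted means can be illustrated numerically: given the error covariance V of the first-stage means, pre-multiplying by the inverse Cholesky factor of V whitens the errors so the second stage can treat them as approximately i.i.d. A toy 2x2 sketch, not the authors' code; V and the means are made-up numbers.

```python
import math

def chol2(V):
    """Cholesky factor L (lower triangular) of a 2x2 covariance matrix."""
    l11 = math.sqrt(V[0][0])
    l21 = V[1][0] / l11
    l22 = math.sqrt(V[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

def whiten2(means, V):
    """Rotate means: solve L z = means, i.e. z = L^{-1} means,
    by forward substitution. Errors of z have identity covariance."""
    L = chol2(V)
    z0 = means[0] / L[0][0]
    z1 = (means[1] - L[1][0] * z0) / L[1][1]
    return [z0, z1]

V = [[4.0, 2.0], [2.0, 5.0]]     # hypothetical error covariance of two means
z = whiten2([6.0, 7.0], V)
print(z)                         # rotated means with unit-variance errors
```

In the real procedure V comes from the first-stage mixed-model analysis, and boosting or lasso is then applied to the rotated means; this is what makes the errors approximately independently and identically distributed.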

  4. Technical report on semiautomatic segmentation using the Adobe Photoshop.

    Science.gov (United States)

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae; Lee, Yong Sook; Har, Dong-Hwan

    2005-12-01

    The purpose of this research is to enable users to semiautomatically segment the anatomical structures in magnetic resonance images (MRIs), computerized tomographs (CTs), and other medical images on a personal computer. The segmented images are used for making 3D images, which are helpful to medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was scanned to make 557 MRIs. On Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using MAGNETIC LASSO TOOL and manually corrected using either LASSO TOOL or DIRECT SELECTION TOOL to make 557 segmented images. In a similar manner, 13 anatomical structures in 8,590 anatomical images were segmented. Proper segmentation was verified by making 3D images from the segmented images. Semiautomatic segmentation using Adobe Photoshop is expected to be widely used for segmentation of anatomical structures in various medical images.

  5. Structured sparse canonical correlation analysis for brain imaging genetics: an improved GraphNet method.

    Science.gov (United States)

    Du, Lei; Huang, Heng; Yan, Jingwen; Kim, Sungeun; Risacher, Shannon L; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2016-05-15

    Structured sparse canonical correlation analysis (SCCA) models have been used to identify imaging genetic associations. These models either use group lasso or graph-guided fused lasso to conduct feature selection and feature grouping simultaneously. The group lasso based methods require prior knowledge to define the groups, which limits the capability when prior knowledge is incomplete or unavailable. The graph-guided methods overcome this drawback by using the sample correlation to define the constraint. However, they are sensitive to the sign of the sample correlation, which could introduce undesirable bias if the sign is wrongly estimated. We introduce a novel SCCA model with a new penalty, and develop an efficient optimization algorithm. Our method has a strong upper bound for the grouping effect for both positively and negatively correlated features. We show that our method performs better than or equally to three competing SCCA models on both synthetic and real data. In particular, our method identifies stronger canonical correlations and better canonical loading patterns, showing its promise for revealing interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/angscca/. Contact: shenli@iu.edu. Supplementary data are available at Bioinformatics online.

  6. Genomic Selection for Quantitative Adult Plant Stem Rust Resistance in Wheat

    Directory of Open Access Journals (Sweden)

    Jessica E. Rutkoski

    2014-11-01

    Quantitative adult plant resistance (APR) to stem rust (Puccinia graminis f. sp. tritici) is an important breeding target in wheat (Triticum aestivum L.) and a potential target for genomic selection (GS). To evaluate the relative importance of known APR loci in applying GS, we characterized a set of CIMMYT germplasm at important APR loci and on a genome-wide profile using genotyping-by-sequencing (GBS). Using this germplasm, we describe the genetic architecture and evaluate prediction models for APR using data from the international Ug99 stem rust screening nurseries. Prediction models incorporating markers linked to important APR loci and seedling phenotype scores as fixed effects were evaluated along with the classic prediction models: multiple linear regression (MLR), genomic best linear unbiased prediction (G-BLUP), Bayesian Lasso (BL), and Bayes Cπ (BCπ). We found the region to play an important role in APR in this germplasm. A model using linked markers as fixed effects in G-BLUP was more accurate than MLR with linked markers (p-value = 0.12) and ordinary G-BLUP (p-value = 0.15). Incorporating seedling phenotype information as fixed effects in G-BLUP did not consistently increase accuracy. Overall, levels of prediction accuracy found in this study indicate that GS can be effectively applied to improve stem rust APR in this germplasm, and if genotypes at linked markers are available, modeling these genotypes as fixed effects could lead to better predictions.

  7. Genome-Wide Association Mapping and Genomic Selection for Alfalfa (Medicago sativa) Forage Quality Traits.

    Science.gov (United States)

    Biazzi, Elisa; Nazzicari, Nelson; Pecetti, Luciano; Brummer, E Charles; Palmonari, Alberto; Tava, Aldo; Annicchiarico, Paolo

    2017-01-01

    Genetic progress for forage quality has been poor in alfalfa (Medicago sativa L.), the most-grown forage legume worldwide. This study aimed at exploring opportunities for marker-assisted selection (MAS) and genomic selection of forage quality traits based on breeding values of parent plants. Some 154 genotypes from a broadly-based reference population were genotyped by genotyping-by-sequencing (GBS), and phenotyped for leaf-to-stem ratio, leaf and stem contents of protein, neutral detergent fiber (NDF) and acid detergent lignin (ADL), and leaf and stem NDF digestibility after 24 hours (NDFD), of their dense-planted half-sib progenies in three growing conditions (summer harvest, full irrigation; summer harvest, suspended irrigation; autumn harvest). Trait-marker analyses were performed on progeny values averaged over conditions, owing to modest germplasm × condition interaction. Genomic selection exploited 11,450 polymorphic SNP markers, whereas a subset of 8,494 M. truncatula-aligned markers were used for a genome-wide association study (GWAS). GWAS confirmed the polygenic control of quality traits and, in agreement with phenotypic correlations, indicated substantially different genetic control of a given trait in stems and leaves. It detected several SNPs in different annotated genes that were highly linked to stem protein content. Also, it identified a small genomic region on chromosome 8 with high concentration of annotated genes associated with leaf ADL, including one gene probably involved in the lignin pathway. Three genomic selection models, i.e., Ridge-regression BLUP, Bayes B and Bayesian Lasso, displayed similar prediction accuracy, whereas SVR-lin was less accurate. Accuracy values were moderate (0.3-0.4) for stem NDFD and leaf protein content, modest for leaf ADL and NDFD, and low to very low for the other traits. Along with previous results for the same germplasm set, this study indicates that GBS data can be exploited to improve both quality traits

  8. Mission operations technology

    Science.gov (United States)

    Varsi, Giulio

    In the last decade, the operation of a spacecraft after launch has emerged as a major component of the total cost of the mission. This trend is sustained by the increasing complexity, flexibility, and data gathering capability of the space assets and by their greater reliability and consequent longevity. The trend can, however, be moderated by the progressive transfer of selected functions from the ground to the spacecraft and by application, on the ground, of new technology. Advances in ground operations derive from the introduction in the mission operations environment of advanced microprocessor-based workstations in the class of a few million instructions per second and from the selective application of artificial intelligence technology. In the last few years a number of these applications have been developed, tested in operational settings and successfully demonstrated to users. Some are now being integrated in mission operations facilities. An analysis of mission operations indicates that the key areas are: concurrent control of multiple missions; automated/interactive production of command sequences of high integrity at low cost; automated monitoring of spacecraft health and automated aides for fault diagnosis; automated allocation of resources; automated processing of science data; and high-fidelity, high-speed spacecraft simulation. Examples of major advances in selected areas are described.

  9. Selected financial and operating ratios of public power systems

    International Nuclear Information System (INIS)

    Moody, D.

    1993-01-01

    In 1992, the American Public Power Association published its fourth report on financial and operating ratios. Based on 1990 data for the largest public power distribution systems, the report examined 21 categories of ratio indicators that can be used by public power distribution systems to assess their performance relative to utilities of comparable size and in the same geographic region. The 394 utilities summarized in the report are those that are required to file financial statements with the Energy Information Administration (EIA). Ratios were calculated from financial and operating data reported by utilities to the EIA. Data are presented for the following ratios: (1) revenue per kWh; (2) revenue per customer; (3) debt to total assets; (4) operating ratio; (5) current ratio; (6) times interest earned; (7) net income per revenue dollar; (8) uncollectible accounts per revenue dollar; (9) retail MWh sales per nonpower generation employee; (10) retail customers per nonpower generation employee; (11) total operation and maintenance expense per kWh sold; (12) total operation and maintenance expense per retail customer; (13) total power supply expense per kWh sold; (14) purchased power cost per kWh; (15) production expense per net kWh; (16) retail customers per meter reader; (17) distribution operation and maintenance expenses per retail customer; (18) distribution operation and maintenance expenses per circuit mile; (19) customer accounting, customer service and sales expenses per retail customer; (20) administration and general expenses per retail customer; (21) labor expense per worker-hour; (22) OSHA incidence rate; and (23) the system average interruption duration index.

  10. Selection of operations staff, qualifications and experience

    International Nuclear Information System (INIS)

    Gutmann, H.

    1977-01-01

    Requirements and suggestions have been made by authorities and various organisations in a number of countries which define the necessary experience and training for the various groups of nuclear power plant personnel. For two countries, the USA and the FRG, a comparison has been made which shows that there is only a slight deviation, taking into account the different education systems. On-the-job training is described using the example of the Biblis nuclear power plant. The production (operation) department in particular is examined in more detail. The training is split up into several parts: a general part, covering nuclear physics, reactor physics and engineering, reactor safety, radiation protection and so on, and a plant-related part, covering the arrangement and mode of operation of the plant under normal and accident conditions, license conditions and so on. (orig.) [de]

  11. Development of a Numerical Fish Surrogate for Improved Selection of Fish Passage Design and Operation Alternatives for Lower Granite Dam: Phase I

    National Research Council Canada - National Science Library

    Nestler, John

    2000-01-01

    .... The overall goal of this research is to develop and apply an approach for integrating biological and hydraulic information to support selection of optimum designs and project operations for Surface...

  12. An expert machine tools selection system for turning operation

    NARCIS (Netherlands)

    Tan, C.F.; Khalil, S.N.; Karjanto, J.; Wahidin, L.S.; Chen, W.; Rauterberg, G.W.M.

    2015-01-01

    The turning machining process is an important process in the manufacturing industry. It is important to select the right tool for the turning process so that the manufacturing cost will be decreased. The main objective of this research is to select the most suitable machine tools with respect to

  13. Operator interface for vehicles

    Science.gov (United States)

    Bissontz, Jay E

    2015-03-10

    A control interface for drivetrain braking provided by a regenerative brake and a non-regenerative brake is implemented using a combination of switches and graphic interface elements. The control interface comprises a control system for allocating drivetrain braking effort between the regenerative brake and the non-regenerative brake, a first operator actuated control for enabling operation of the drivetrain braking, and a second operator actuated control for selecting a target braking effort for drivetrain braking. A graphic display displays to an operator the selected target braking effort and can be used to further display actual braking effort achieved by drivetrain braking.

  14. Identification of biomarkers for Mycobacterium tuberculosis infection and disease in BCG-vaccinated young children in Southern India

    DEFF Research Database (Denmark)

    Dhanasekaran, S; Jenum, S; Stavrum, R

    2013-01-01

    Pediatric tuberculosis (TB) often goes undiagnosed because of the lack of reliable diagnostic methods. With the aim of assessing biomarker(s) that can aid in the diagnosis of TB infection and disease, we investigated 746 Indian children with suspected TB. Whole-blood mRNA from 210 children...... or equal to 0.05) was downregulated in TB disease compared with uninfected controls, while transcription of RAB33A was downregulated in TB disease compared with both latent TB (Pcontrols (P....05) was upregulated in latent TB compared with that in controls. Using the Least Absolute Shrinkage and Selection Operator (lasso) model, RAB33A alone discriminated between TB disease and latent TB (area under the curve (AUC) 77.5%), whereas a combination of RAB33A, CXCL10, SEC14L1, FOXP3 and TNFRSF1A was effective...
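    The lasso-based discrimination described above (a sparse model selecting a few marker genes and evaluated by AUC) can be sketched with an L1-penalized logistic regression, which performs lasso-style selection inside a classifier. The data below are simulated; the gene names are used as column labels only and the effect sizes are assumptions, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)

# Synthetic stand-in for normalized mRNA levels of the candidate markers.
genes = ["RAB33A", "CXCL10", "SEC14L1", "FOXP3", "TNFRSF1A"]
n = 200
y = rng.integers(0, 2, n)          # 0 = latent TB, 1 = TB disease (simulated)
X = rng.normal(size=(n, len(genes)))
X[:, 0] -= 0.8 * y                 # simulate RAB33A downregulation in disease

# L1 penalty zeroes out uninformative markers, as in the lasso model.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"cross-validated AUC: {roc_auc_score(y, scores):.2f}")
```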

  15. Genome wide association studies for body conformation traits in the Chinese Holstein cattle population

    DEFF Research Database (Denmark)

    Wu, Xiaoping; Fang, Ming; Liu, Lin

    2013-01-01

    Results: The Illumina BovineSNP50 BeadChip was used to identify single nucleotide polymorphisms (SNPs) associated with body conformation traits. A least absolute shrinkage and selection operator (LASSO) was applied to detect multiple SNPs simultaneously for 29 body conformation traits with 1,314 Chinese...... Holstein cattle and 52,166 SNPs. In total, 59 genome-wide significant SNPs associated with 26 conformation traits were detected by genome-wide association analysis; five SNPs were within previously reported QTL regions (Animal Quantitative Trait Loci (QTL) database) and 11 were very close to the reported...... SNPs. Twenty-two SNPs were located within annotated gene regions, while the remainder were 0.6-826 kb away from known genes. Some of the genes had clear biological functions related to conformation traits. By combining information about the previously reported QTL regions and the biological functions...

  16. A multi-stage intelligent approach based on an ensemble of two-way interaction model for forecasting the global horizontal radiation of India

    International Nuclear Information System (INIS)

    Jiang, He; Dong, Yao; Xiao, Ling

    2017-01-01

    Highlights: • An ensemble learning system is proposed to forecast global solar radiation. • LASSO is utilized as the feature selection method for the subset models. • GSO is used to select the weight vector aggregating the responses of the subset models. • A simple and efficient algorithm is designed based on a thresholding function. • Theoretical analysis focusing on the error rate is provided. - Abstract: Forecasting of effective solar irradiation has attracted huge interest in recent decades, mainly due to its various applications in grid-connected photovoltaic installations. This paper develops and investigates an ensemble-learning-based multistage intelligent approach to forecast 5-day global horizontal radiation at four given locations in India. A two-way interaction model is considered with the purpose of detecting correlations between the features. The core of the novel method is ensemble learning, based on the divide-and-conquer principle, applied to enhance forecasting accuracy and model stability. An efficient feature selection method, LASSO, is performed in the input space, with the regularization parameter selected by cross-validation. A weight vector that best represents the importance of each individual model in the ensemble system is provided by glowworm swarm optimization. The combination of feature selection and parameter selection is helpful in creating the diversity of the ensemble learning. In order to illustrate the validity of the proposed method, the datasets at the four locations in India are split into training and test datasets. The results of the real-data experiments demonstrate the efficiency and efficacy of the proposed method compared with other competitors.
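    The two-stage idea above (LASSO with a cross-validated regularization parameter selects features; a weight vector then combines the responses of per-feature sub-models) can be sketched as follows. The data are synthetic, and as a stand-in for glowworm swarm optimization the combination weights are tuned by ordinary least squares on a validation split; that substitution and all variable names are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(7)

# Synthetic irradiation-like data: target depends on 3 of 10 candidate features.
n, p = 300, 10
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + X[:, 7] + rng.normal(0, 0.5, n)

# Stage 1: LASSO with cross-validated regularization selects the features.
lasso = LassoCV(cv=5).fit(X[:200], y[:200])
selected = np.flatnonzero(lasso.coef_)

# Stage 2: one sub-model per selected feature; the combination weights
# (tuned by GSO in the paper) are fit here by least squares on a
# held-out validation split.
subpreds = np.column_stack([
    LinearRegression().fit(X[:200, [j]], y[:200]).predict(X[200:, [j]])
    for j in selected
])
weights = np.linalg.lstsq(subpreds, y[200:], rcond=None)[0]
ensemble = subpreds @ weights
rmse = np.sqrt(np.mean((ensemble - y[200:]) ** 2))
print(f"selected features: {selected.tolist()}, validation RMSE: {rmse:.2f}")
```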

  17. Operational Dynamic Configuration Analysis

    Science.gov (United States)

    Lai, Chok Fung; Zelinski, Shannon

    2010-01-01

    Sectors may combine or split within areas of specialization in response to changing traffic patterns. This method of managing capacity and controller workload could be made more flexible by dynamically modifying sector boundaries. Much work has been done on methods for dynamically creating new sector boundaries [1-5]. Many assessments of dynamic configuration methods assume the current day baseline configuration remains fixed [6-7]. A challenging question is how to select a dynamic configuration baseline to assess potential benefits of proposed dynamic configuration concepts. Bloem used operational sector reconfigurations as a baseline [8]. The main difficulty is that operational reconfiguration data is noisy. Reconfigurations often occur frequently to accommodate staff training or breaks, or to complete a more complicated reconfiguration through a rapid sequence of simpler reconfigurations. Gupta quantified a few aspects of airspace boundary changes from this data [9]. Most of these metrics are unique to sector combining operations and not applicable to more flexible dynamic configuration concepts. To better understand what sort of reconfigurations are acceptable or beneficial, more configuration change metrics should be developed and their distribution in current practice should be computed. This paper proposes a method to select a simple sequence of configurations among operational configurations to serve as a dynamic configuration baseline for future dynamic configuration concept assessments. New configuration change metrics are applied to the operational data to establish current day thresholds for these metrics. These thresholds are then corroborated, refined, or dismissed based on airspace practitioner feedback. The dynamic configuration baseline selection method uses a k-means clustering algorithm to select the sequence of configurations and trigger times from a given day of operational sector combination data. The clustering algorithm selects a simplified
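    The k-means step described above (reducing a noisy day of sector-combination data to a simple sequence of representative configurations and trigger times) can be sketched as follows. The encoding of configurations as binary combination vectors, the slice counts and the cluster count are all assumptions for illustration, not the paper's data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic stand-in for one day of sector-combination data: 96 fifteen-minute
# slices, each a binary vector saying which of 6 sector pairs are combined.
slices = np.vstack([
    np.tile(np.array(cfg), (32, 1))
    for cfg in ([1, 1, 1, 0, 0, 0],   # night: heavily combined
                [0, 0, 1, 0, 0, 0],   # day: mostly split
                [1, 0, 1, 1, 0, 0])   # evening
]).astype(float)
slices += rng.normal(0, 0.05, slices.shape)   # noisy brief reconfigurations

# k-means reduces the noisy operational record to k representative
# configurations; label changes along the day give the trigger times.
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(slices)
triggers = np.flatnonzero(np.diff(labels)) + 1   # slice indices of changes
print(f"baseline uses {len(set(labels))} configurations, "
      f"changing at slices {triggers.tolist()}")
```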

  18. Operations and maintenance costs - Selected observations

    International Nuclear Information System (INIS)

    Kirk, M.W.

    1991-01-01

    The operations and maintenance (O and M) costs associated with nuclear power plants have been rising continuously over the past decade. The Nuclear Management and Resources Council (NUMARC) has undertaken an examination of this issue to determine what components of O and M costs are driven by regulatory activity. Observers from various perspectives within the nuclear industry have cited staffing, outages, training, and management structure, among others, as large contributors to O and M costs. NUMARC is currently analyzing utility cost data to isolate the regulatory components for further action.

  19. Dual mode operation, highly selective nanohole array-based plasmonic colour filters

    Science.gov (United States)

    Fouladi Mahani, Fatemeh; Mokhtari, Arash; Mehran, Mahdiyeh

    2017-09-01

    Taking advantage of nanostructured metal films as plasmonic colour filters (PCFs) has evolved remarkably as an alternative to conventional chemical colour-filtering technologies. However, most of the proposed PCFs exhibit poor colour purity, focusing on generating either the additive or the subtractive colours. In this paper, we present dual-mode-operation PCFs employing an opaque aluminium film patterned with sub-wavelength holes. Subtractive colours like cyan, magenta, and yellow result from the reflection mode of these filters, yielding optical efficiencies as high as 70%-80% and full widths at half maximum of the stop-bands as narrow as 40-50 nm. The colour selectivity of the transmission mode for the additive colours is also significant, due to the enhanced performance obtained by using a relatively thick aluminium film in contact with a modified dielectric environment. These filters provide a simple design with one-step lithography in addition to compatibility with conventional CMOS processes. Moreover, they are polarization insensitive due to their symmetric geometry. A complete palette of pure subtractive and additive colours has been realized, with potential applications such as multispectral imaging, CMOS image sensors, displays, and colour printing.

  20. Oracle Inequalities for High Dimensional Vector Autoregressions

    DEFF Research Database (Denmark)

    Callot, Laurent; Kock, Anders Bredahl

    This paper establishes non-asymptotic oracle inequalities for the prediction error and estimation accuracy of the LASSO in stationary vector autoregressive models. These inequalities are used to establish consistency of the LASSO even when the number of parameters is of a much larger order...

  1. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which in practice are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  2. Selection of index complex for the NPP operator activity efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Kolesnik, A I; Chertorizhskij, E A

    1984-01-01

    Preconditions for the choice of an index of NPP operator activity efficiency are determined. The results of the choice are given, and a method is considered for determining the generalized and particular parameters by means of which NPP operator activity efficiency can be estimated. An algorithm for diagnosing the reasons for operator failure, based on an assessment of psychological complexity factors, is suggested.

  3. Exploring Machine Learning to Correct Satellite-Derived Sea Surface Temperatures

    Directory of Open Access Journals (Sweden)

    Stéphane Saux Picart

    2018-02-01

    Full Text Available Machine learning techniques are attractive tools for establishing statistical models with a high degree of non-linearity. They require a large amount of data to be trained and are therefore particularly suited to analysing remote sensing data. This work is an attempt at using advanced statistical methods of machine learning to predict the bias between Sea Surface Temperature (SST) derived from infrared remote sensing and ground “truth” from drifting buoy measurements. A large dataset of collocations between satellite SST and in situ SST is explored. Four regression models are used: simple multi-linear regression, Least Absolute Shrinkage and Selection Operator (LASSO), Generalised Additive Model (GAM) and random forest. In the case of geostationary satellites, for which a large number of collocations is available, results show that the random forest model is the best model to predict the systematic errors, and it is computationally fast, making it a good candidate for operational processing. It is able to explain nearly 31% of the total variance of the bias (in comparison to about 24% for the multi-linear regression model).
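    The bias-correction setup above (regressing the satellite-minus-buoy SST difference on ancillary predictors and reporting the fraction of bias variance explained on held-out match-ups) can be sketched with a random forest. The predictors, effect shapes and numbers below are invented for illustration, not the paper's actual feature set or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic match-up dataset: satellite-minus-buoy SST bias as a nonlinear
# function of a few plausible predictors (names and functional form assumed).
n = 5000
sat_zenith = rng.uniform(0, 60, n)        # degrees
water_vapour = rng.uniform(10, 60, n)     # kg/m^2
wind_speed = rng.uniform(0, 15, n)        # m/s
X = np.column_stack([sat_zenith, water_vapour, wind_speed])
bias = 0.01 * water_vapour * np.cos(np.radians(sat_zenith)) - 0.02 * wind_speed
y = bias + rng.normal(0, 0.15, n)         # measurement noise

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X[:4000], y[:4000])

# Fraction of bias variance explained on held-out match-ups (cf. the ~31%
# figure quoted above; our number is purely synthetic).
r2 = rf.score(X[4000:], y[4000:])
print(f"explained variance (R^2): {r2:.2f}")
```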

  4. Liquid-Phase Hydrodeoxygenation of Guaiacol over Mo2C Supported on Commercial CNF. Effects of Operating Conditions on Conversion and Product Selectivity

    Directory of Open Access Journals (Sweden)

    Rui Moreira

    2018-03-01

    Full Text Available In this work, a Mo2C catalyst supported on commercial carbon nanofibers (CNF) was synthesized and tested in the hydrodeoxygenation (HDO) of guaiacol. The effects of operating conditions (temperature and pressure) and reaction time (2 and 4 h) on the conversion of guaiacol and product selectivity were studied. The major reaction products were cresol and phenol, followed by xylenols and toluene. The use of more severe operating conditions during the HDO of guaiacol caused a diversification in the reaction pathways, and consequently in the selectivity to products. The formation of phenol may have occurred by demethylation of guaiacol, followed by dehydroxylation of catechol, together with other reaction pathways, including direct guaiacol demethoxylation and demethylation of cresols. X-ray diffraction (XRD) analysis of spent catalysts did not reveal any significant changes as compared to the fresh catalyst.

  5. La-doped Al2O3 supported Au nanoparticles: highly active and selective catalysts for PROX under PEMFC operation conditions.

    Science.gov (United States)

    Lin, Qingquan; Qiao, Botao; Huang, Yanqiang; Li, Lin; Lin, Jian; Liu, Xiao Yan; Wang, Aiqin; Li, Wen-Cui; Zhang, Tao

    2014-03-14

    La-doped γ-Al2O3 supported Au catalysts show high activity and selectivity for the PROX reaction under PEMFC operation conditions. The superior performance is attributed to the formation of LaAlO3, which suppresses H2 oxidation and strengthens CO adsorption on Au sites, thereby improving competitive oxidation of CO at elevated temperature.

  6. Optimized ONO thickness for multi-level and 2-bit/cell operation for wrapped-select-gate (WSG) SONOS memory

    International Nuclear Information System (INIS)

    Wu, Woei-Cherng; Chao, Tien-Sheng; Yang, Tsung-Yu; Peng, Wu-Chin; Yang, Wen-Luh; Chen, Jian-Hao; Ma, Ming Wen; Lai, Chao-Sung; Lee, Chien-Hsing; Hsieh, Tsung-Min; Liou, Jhyy Cheng; Chen, Tzu Ping; Chen, Chien Hung; Lin, Chih Hung; Chen, Hwi Huang; Ko, Joe

    2008-01-01

    In this paper, highly reliable wrapped-select-gate (WSG) silicon–oxide–nitride–oxide–silicon (SONOS) memory cells with multi-level and 2-bit/cell operation have been successfully demonstrated. The source-side injection mechanism for WSG-SONOS memory with different ONO thicknesses was thoroughly investigated. The different programming efficiencies of the WSG-SONOS memory under different ONO thicknesses are explained by the lateral electric field extracted from the simulation results. Furthermore, multi-level storage is easily obtained, and a good VTH distribution is presented, for the WSG-SONOS memory with optimized ONO thickness. High program/erase speed (10 µs/5 ms) and low programming current (3.5 µA) are used to achieve multi-level operation with tolerable gate and drain disturbance, negligible second-bit effect, excellent data retention and good endurance performance

  7. Evaluation of operational, economic, and environmental performance of mixed and selective collection of municipal solid waste: Porto case study.

    Science.gov (United States)

    Teixeira, Carlos A; Russo, Mário; Matos, Cristina; Bentes, Isabel

    2014-12-01

    This article describes an accurate methodology for an operational, economic, and environmental assessment of municipal solid waste collection. The proposed methodological tool uses key performance indicators to evaluate independent operational and economic efficiency and performance of municipal solid waste collection practices. These key performance indicators are then used in life cycle inventories and life cycle impact assessment. Finally, the life cycle assessment environmental profiles provide the environmental assessment. We also report a successful application of this tool through a case study in the Portuguese city of Porto. Preliminary results demonstrate the applicability of the methodological tool to real cases. Some of the findings concern the significant differences between average mixed and selective collection in effective distance (2.14 km/t vs. 16.12 km/t), fuel consumption (3.96 L/t vs. 15.37 L/t), crew productivity (0.98 t/h per worker vs. 0.23 t/h per worker), cost (45.90 €/t vs. 241.20 €/t), and global warming impact (19.95 kg CO2eq/t vs. 57.47 kg CO2eq/t). Preliminary results consistently indicate: (a) higher global performance of mixed collection as compared with selective collection; (b) dependency of collection performance, even in urban areas, on the waste generation rate and density; (c) the decline of selective collection performance with decreasing source-separated material density and recycling collection rate; and (d) that the main threats to collection route efficiency are extensive collection distances, high-fuel-consumption vehicles, and reduced crew productivity. © The Author(s) 2014.
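    The per-tonne key performance indicators quoted above are simple normalisations of route totals, which can be sketched as follows. The route figures below are invented round numbers, not the Porto case-study data.

```python
# Minimal sketch: normalise route totals into the per-tonne indicators
# used in the text (distance, fuel, crew productivity, cost).
def collection_kpis(tonnes, km, fuel_l, crew, hours, cost_eur):
    """Return per-tonne KPIs for one collection route."""
    return {
        "distance_km_per_t": km / tonnes,
        "fuel_l_per_t": fuel_l / tonnes,
        "productivity_t_per_h_worker": tonnes / (hours * crew),
        "cost_eur_per_t": cost_eur / tonnes,
    }

# Illustrative mixed-collection route (numbers invented for the sketch).
kpis = collection_kpis(tonnes=10.5, km=22.5, fuel_l=41.6, crew=3,
                       hours=3.6, cost_eur=482.0)
for name, value in kpis.items():
    print(f"{name}: {value:.2f}")
```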

  8. Frit Development Efforts for Sludge Batch 4 (SB4): Operating Window Assessments of Scenarios Leading Up to the Selected Preparation Plan for SB4

    International Nuclear Information System (INIS)

    Peeler, D

    2006-01-01

    The objective of this report is to document technical information that has been provided to Defense Waste Processing Facility (DWPF) and Closure Business Unit (CBU) personnel as part of the frit development support for Sludge Batch 4 (SB4). The information presented in this report includes projected operating windows (expressed in terms of waste loading) for various sludge blending and/or washing options coupled with candidate frits of interest. Although the Nominal Stage assessment serves as the primary tool for these evaluations, select systems were also evaluated using a Variation Stage assessment in which compositional variations were introduced. In addition, assessments of the impacts of nepheline formation potential and the SO4 solubility limit on the projected operating windows are also provided. Although this information was used as part of the technical basis leading to CBU's development of the preferred SB4 preparation plan, none of the options presented in this report was selected as the preferred plan. Therefore, the information is presented without significant interpretation of the resulting operating windows, but the projected windows are provided so additional insight can be explored if desired. Detailed assessments of the projected operating windows (using both Nominal and Variation Stage assessments) of the preferred sludge preparation plan with candidate frits are to be documented elsewhere. The information provided in this report is focused solely on model-based projections of the operating windows for various SB4 blending strategies of interest. Although nepheline formation potential is monitored via model predictions as part of this assessment, experimental work investigating the impact of nepheline on glass quality is also being addressed in a parallel study. The results of this paper study and the experimental assessments of melt rate, SO4 solubility, and/or nepheline formation potential are all critical components of the inputs into

  9. A Systematic Evaluation of Feature Selection and Classification Algorithms Using Simulated and Real miRNA Sequencing Data

    Directory of Open Access Journals (Sweden)

    Sheng Yang

    2015-01-01

    Full Text Available Sequencing is widely used to discover associations between microRNAs (miRNAs) and diseases. However, the negative binomial (NB) distribution and high dimensionality of data obtained using sequencing can lead to low-power results and low reproducibility. Several statistical learning algorithms have been proposed to address sequencing data, and although evaluation of these methods is essential, such studies are relatively rare. The performance of seven feature selection (FS) algorithms, including baySeq, DESeq, edgeR, the rank sum test, lasso, particle swarm optimization decision tree, and random forest (RF), was compared by simulation under different conditions based on the difference of the means, the dispersion parameter of the NB, and the signal-to-noise ratio. Real data were used to evaluate the performance of RF, logistic regression, and support vector machines. Based on the simulation and real data, we discuss the behaviour of the FS and classification algorithms. The Apriori algorithm identified frequent item sets (mir-133a, mir-133b, mir-183, mir-937, and mir-96) from among the deregulated miRNAs of six datasets from The Cancer Genome Atlas. Taking these findings altogether and considering computational memory requirements, we propose a strategy that combines edgeR and DESeq for large sample sizes.

  10. A study on operation efficiency evaluation based on firm's financial index and benchmark selection: take China Unicom as an example

    Science.gov (United States)

    Wu, Zu-guang; Tian, Zhan-jun; Liu, Hui; Huang, Rui; Zhu, Guo-hua

    2009-07-01

    As the only telecom operator listed on the A-share market, China Unicom has attracted many institutional investors in recent years under the 3G concept, which itself represents an expectation of great technical progress. Do institutional investors, or the expectation of technical progress, have a significant effect on improving the firm's operating efficiency? Reviewing the literature on operating efficiency, we find that scholars study this problem using regression analysis based on traditional production functions, data envelopment analysis (DEA), financial index analysis, marginal functions, capital-labour ratio coefficients, etc. All these methods are mainly based on macro data. In this paper we use company micro data to evaluate operating efficiency. Using factor analysis based on financial indices and comparing the factor scores for the three years from 2005 to 2007, we find that China Unicom's operating efficiency is below the average level of the benchmark corporations and did not improve under the 3G concept from 2005 to 2007. In other words, institutional investors or the expectation of technical progress had only a faint effect on changes in China Unicom's operating efficiency. Selecting benchmark corporations as reference points for evaluating operating efficiency is a characteristic of this method, which is basically simple and direct. This method is also suitable for evaluating the operating efficiency of listed agriculture companies, because they likewise face technical progress and marketing concepts such as tax exemption.

  11. A methodology of selection of exercises for operator training on a control room simulator and its application to the data bank of exercises at the Dukovany NPP

    International Nuclear Information System (INIS)

    Holy, J.

    2005-07-01

    The report describes the preparation of a methodology for selecting scenarios to be used during operator training on a full-scope simulator. The scenarios are selected from a data bank of scenarios, which is under preparation based on feedback from operational history and theoretical analyses. The new methodology takes into account three basic attributes defining the priority for use within the training programme: frequency of occurrence, safety-related significance, and difficulty. The attributes are scored, and based on the joint score, the importance of including the scenario in the training programme is also scored. The methodology was applied to the data bank of scenarios for simulating abnormal states and incidents trained on the up-to-date simulator of the Dukovany NPP, and the results of this pilot application were made available to Dukovany operator training staff as a tool for preparing training plans for the years to come. The results of a PSA study are used for a non-trivial selection of the scenarios.
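    The three-attribute scoring scheme described above (frequency of occurrence, safety-related significance, difficulty, combined into a joint priority score) can be sketched as a simple weighted sum. The 1-5 rating scale, equal weights and scenario names below are assumptions for illustration, not the Dukovany data bank.

```python
# Minimal sketch of the joint-score idea: each scenario gets 1-5 ratings for
# frequency of occurrence, safety significance and difficulty; the joint
# score (here a weighted sum) ranks scenarios for inclusion in training.
def training_priority(frequency, safety, difficulty, weights=(1, 1, 1)):
    """Higher joint score = higher priority in the training programme."""
    w_f, w_s, w_d = weights
    return w_f * frequency + w_s * safety + w_d * difficulty

scenarios = {  # hypothetical scenarios with (frequency, safety, difficulty)
    "loss of feedwater": (4, 5, 4),
    "turbine trip":      (5, 3, 2),
    "SG tube rupture":   (2, 5, 5),
}
ranked = sorted(scenarios, key=lambda s: training_priority(*scenarios[s]),
                reverse=True)
print(ranked)  # highest-priority scenario first
```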

  12. An electronic image processing device featuring continuously selectable two-dimensional bipolar filter functions and real-time operation

    International Nuclear Information System (INIS)

    Charleston, B.D.; Beckman, F.H.; Franco, M.J.; Charleston, D.B.

    1981-01-01

    A versatile electronic-analogue image processing system has been developed for use in improving the quality of various types of images, with emphasis on those encountered in experimental and diagnostic medicine. The operational principle utilizes spatial filtering, which selectively controls the contrast of an image according to the spatial frequency content of relevant and non-relevant features of the image. Noise can be reduced or eliminated by selectively lowering the contrast of information in the high spatial frequency range. Edge sharpness can be enhanced by accentuating the upper midrange spatial frequencies. Both methods of spatial frequency control may be adjusted continuously in the same image to obtain maximum visibility of the features of interest. A precision video camera is used to view medical diagnostic images, either prints, transparencies or CRT displays. The output of the camera provides the analogue input signal for both the electronic processing system and the video display of the unprocessed image. The video signal input to the electronic processing system is processed by a two-dimensional spatial convolution operation. The system employs charge-coupled devices (CCDs), both tapped analogue delay lines (TADs) and serial analogue delay lines (SADs), to store information in the form of analogue potentials which are constantly being updated as new sampled analogue data arrive at the input. This information is convolved with a programmed bipolar radially symmetrical hexagonal function which may be controlled and varied at each radius by the operator in real time by adjusting a set of front-panel controls or by programmed microprocessor control. Two TV monitors are used, one for processed image display and the other for constant reference to the original image. The working prototype has a full-screen display matrix size of 200 picture elements per horizontal line by 240 lines. The matrix can be expanded vertically and horizontally for the

  13. LASSO—ligand activity by surface similarity order: a new tool for ligand based virtual screening

    Science.gov (United States)

    Reid, Darryl; Sadjad, Bashir S.; Zsoldos, Zsolt; Simon, Aniko

    2008-06-01

    Virtual Ligand Screening (VLS) has become an integral part of the drug discovery process for many pharmaceutical companies. Ligand similarity searches provide a very powerful method of screening large databases of ligands to identify possible hits. If these hits belong to new chemotypes, the method is deemed even more successful. eHiTS LASSO uses a new interacting surface point types (ISPT) molecular descriptor that is generated from the 3D structure of the ligand, but unlike most 3D descriptors it is conformation independent. Combined with a neural network machine learning technique, LASSO screens molecular databases at an ultra-fast speed of 1 million structures in under 1 min on a standard PC. The results obtained from eHiTS LASSO trained on relatively small training sets of just 2, 4 or 8 actives are presented using the diverse directory of useful decoys (DUD) dataset. It is shown that over a wide range of receptor families, eHiTS LASSO is consistently able to enrich screened databases and provides scaffold hopping ability.

  14. Ecole d'été de probabilités de Saint-Flour XLV

    CERN Document Server

    van de Geer, Sara

    2016-01-01

    Taking the Lasso method as its starting point, this book describes the main ingredients needed to study general loss functions and sparsity-inducing regularizers. It also provides a semi-parametric approach to establishing confidence intervals and tests. Sparsity-inducing methods have proven to be very useful in the analysis of high-dimensional data. Examples include the Lasso and group Lasso methods, and the least squares method with other norm-penalties, such as the nuclear norm. The illustrations provided include generalized linear models, density estimation, matrix completion and sparse principal components. Each chapter ends with a problem section. The book can be used as a textbook for a graduate or PhD course.

  15. Internet-Based Motivation Program for Women With Eating Disorders: Eating Disorder Pathology and Depressive Mood Predict Dropout

    Science.gov (United States)

    Hirschfeld, Gerrit; Rieger, Elizabeth; Schmidt, Ulrike; Kosfelder, Joachim; Hechler, Tanja; Schulte, Dietmar; Vocks, Silja

    2014-01-01

    Background: One of the main problems of Internet-delivered interventions for a range of disorders is the high dropout rate, yet little is known about the factors associated with this. We recently developed and tested a Web-based 6-session program to enhance motivation to change for women with anorexia nervosa, bulimia nervosa, or related subthreshold eating pathology. Objective: The aim of the present study was to identify predictors of dropout from this Web program. Methods: A total of 179 women took part in the study. We used survival analyses (Cox regression) to investigate the predictive effect of eating disorder pathology (assessed by the Eating Disorders Examination-Questionnaire; EDE-Q), depressive mood (Hopkins Symptom Checklist), motivation to change (University of Rhode Island Change Assessment Scale; URICA), and participants’ age at dropout. To identify predictors, we used the least absolute shrinkage and selection operator (LASSO) method. Results: The dropout rate was 50.8% (91/179) and was equally distributed across the 6 treatment sessions. The LASSO analysis revealed that higher scores on the Shape Concerns subscale of the EDE-Q, a higher frequency of binge eating episodes and vomiting, as well as higher depression scores significantly increased the probability of dropout. However, we did not find any effect of the URICA or age on dropout. Conclusions: Women with more severe eating disorder pathology and depressive mood had a higher likelihood of dropping out from a Web-based motivational enhancement program. Interventions such as ours need to address the specific needs of women with more severe eating disorder pathology and depressive mood and offer them additional support to prevent them from prematurely discontinuing treatment. PMID:24686856
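The least absolute shrinkage and selection operator named in this record adds an L1 penalty that shrinks some coefficients exactly to zero, which is why it can select predictors. As a hedged illustration of the estimator itself (a plain least-squares lasso via cyclic coordinate descent on synthetic data, not the penalized Cox survival model the authors fitted), a minimal sketch:

```python
import numpy as np

def soft_threshold(rho, lam):
    # Closed-form scalar solution of the one-dimensional lasso problem
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    # Cyclic coordinate descent for (1/2n)*||y - X b||^2 + lam*||b||_1
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual with feature j's contribution removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
# Only predictors 0 and 3 carry signal; the lasso should zero out the rest
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)
beta = lasso_cd(X, y, lam=0.1)
print(np.round(beta, 2))
```

The penalty level `lam` plays the role of the tuning parameter that in practice is chosen by cross-validation; the data and values here are illustrative only.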

  16. Internet-based motivation program for women with eating disorders: eating disorder pathology and depressive mood predict dropout.

    Science.gov (United States)

    von Brachel, Ruth; Hötzel, Katrin; Hirschfeld, Gerrit; Rieger, Elizabeth; Schmidt, Ulrike; Kosfelder, Joachim; Hechler, Tanja; Schulte, Dietmar; Vocks, Silja

    2014-03-31

    One of the main problems of Internet-delivered interventions for a range of disorders is the high dropout rate, yet little is known about the factors associated with this. We recently developed and tested a Web-based 6-session program to enhance motivation to change for women with anorexia nervosa, bulimia nervosa, or related subthreshold eating pathology. The aim of the present study was to identify predictors of dropout from this Web program. A total of 179 women took part in the study. We used survival analyses (Cox regression) to investigate the predictive effect of eating disorder pathology (assessed by the Eating Disorders Examination-Questionnaire; EDE-Q), depressive mood (Hopkins Symptom Checklist), motivation to change (University of Rhode Island Change Assessment Scale; URICA), and participants' age at dropout. To identify predictors, we used the least absolute shrinkage and selection operator (LASSO) method. The dropout rate was 50.8% (91/179) and was equally distributed across the 6 treatment sessions. The LASSO analysis revealed that higher scores on the Shape Concerns subscale of the EDE-Q, a higher frequency of binge eating episodes and vomiting, as well as higher depression scores significantly increased the probability of dropout. However, we did not find any effect of the URICA or age on dropout. Women with more severe eating disorder pathology and depressive mood had a higher likelihood of dropping out from a Web-based motivational enhancement program. Interventions such as ours need to address the specific needs of women with more severe eating disorder pathology and depressive mood and offer them additional support to prevent them from prematurely discontinuing treatment.

  17. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    Science.gov (United States)

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods, namely least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
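The kernel "trick" emphasized in this abstract can be made concrete: ridge regression in its dual form needs only the n-by-n kernel matrix, never an explicit feature expansion. The sketch below is a hedged illustration of RKHS/kernel-ridge regression with a Gaussian kernel on synthetic data; it is not the KRMM package's API, and all function names and parameter values are assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel: k(x, z) = exp(-gamma * ||x - z||^2)
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_ridge_fit(X, y, lam=0.05, gamma=1.0):
    # Dual ("kernel trick") form of ridge regression: solve
    # (K + lam*I) alpha = y using only pairwise kernel evaluations
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    # Prediction is a kernel-weighted combination of training responses
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(100, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(100)  # nonlinear "trait"
alpha = kernel_ridge_fit(X, y)
mse = float(np.mean((kernel_ridge_predict(X, alpha, X) - y) ** 2))
print(mse)
```

With a linear kernel this reduces to ordinary ridge regression (GBLUP in the genomic setting); nonlinear kernels such as the Gaussian one above are what let RKHS regression capture epistatic-like effects.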

  18. Your Lung Operation: After Your Operation

    Medline Plus

    Full Text Available

  19. Rapid selection of a pyrethroid metabolic enzyme CYP9K1 by operational malaria control activities.

    Science.gov (United States)

    Vontas, John; Grigoraki, Linda; Morgan, John; Tsakireli, Dimitra; Fuseini, Godwin; Segura, Luis; Niemczura de Carvalho, Julie; Nguema, Raul; Weetman, David; Slotman, Michel A; Hemingway, Janet

    2018-05-01

    Since 2004, indoor residual spraying (IRS) and long-lasting insecticide-impregnated bednets (LLINs) have reduced the malaria parasite prevalence in children on Bioko Island, Equatorial Guinea, from 45% to 12%. After target site-based (knockdown resistance; kdr) pyrethroid resistance was detected in 2004 in Anopheles coluzzii (formerly known as the M form of the Anopheles gambiae complex), the carbamate bendiocarb was introduced. Subsequent analysis showed that kdr alone was not operationally significant, so pyrethroid-based IRS was successfully reintroduced in 2012. In 2007 and 2014-2015, mass distribution of new pyrethroid LLINs was undertaken to increase the net coverage levels. The combined selection pressure of IRS and LLINs resulted in an increase in the frequency of pyrethroid resistance in 2015. In addition to a significant increase in kdr frequency, an additional metabolic pyrethroid resistance mechanism had been selected. Increased metabolism of the pyrethroid deltamethrin was linked with up-regulation of the cytochrome P450 CYP9K1. The increase in resistance prompted a reversion to bendiocarb IRS in 2016 to avoid a resurgence of malaria, in line with the national Malaria Control Program plan. Copyright © 2018 the Author(s). Published by PNAS.

  20. Program scheme using common source lines in channel stacked NAND flash memory with layer selection by multilevel operation

    Science.gov (United States)

    Kim, Do-Bin; Kwon, Dae Woong; Kim, Seunghyun; Lee, Sang-Ho; Park, Byung-Gook

    2018-02-01

    To obtain high channel boosting potential and reduce program disturbance in channel stacked NAND flash memory with layer selection by multilevel (LSM) operation, a new program scheme using a boosted common source line (CSL) is proposed. The proposed scheme is achieved by applying a proper bias to each layer through its own CSL. Technology computer-aided design (TCAD) simulations are performed to verify the validity of the new method in LSM. Through TCAD simulation, it is revealed that the program disturbance characteristics are effectively improved by the proposed scheme.

  1. Reactor operation plan preparing device

    International Nuclear Information System (INIS)

    Sano, Hiroki; Maruyama, Hiromi; Kinoshita, Mitsuo; Fukuzaki, Koji; Banto, Masaru; Fukazawa, Yukihisa.

    1993-01-01

    The device comprises a means for retrieving a control rod pattern capable of satisfying a thermal limit at the aimed power/minimum flow rate while providing minimum xenon, and a control rod pattern providing maximum xenon. It further comprises a means for selecting a control rod pattern corresponding to a xenon equilibrium condition, and selecting a control rod which provides a greater thermal margin, to provide a control rod operation sequence for each of the patterns. Further, the device comprises an outline plan preparing means and a correction means therefor, a simplified sequence table reference means operated along with sequence change, an operation limit region input means, a control rod operation preferential region changing means, a thermal margin evaluation region, and an input means. This can automatically prepare the operation plan and decrease the time required for preparation of detailed plans by using the outline plan preparing function, thereby remarkably shortening the time needed to prepare an operation plan. (N.H.)

  2. Genetic variation in the TP53 pathway and bladder cancer risk: a comprehensive analysis.

    Directory of Open Access Journals (Sweden)

    Silvia Pineda

    Full Text Available Germline variants in TP63 have been consistently associated with several tumors, including bladder cancer, indicating the importance of the TP53 pathway in cancer genetic susceptibility. However, variants in other related genes, including TP53 rs1042522 (Arg72Pro), still present controversial results. We carried out an in-depth assessment of associations between common germline variants in the TP53 pathway and bladder cancer risk. We investigated 184 tagSNPs from 18 genes in 1,058 cases and 1,138 controls from the Spanish Bladder Cancer/EPICURO Study. Cases were newly-diagnosed bladder cancer patients during 1998-2001. Hospital controls were age-, gender-, and area-matched to cases. SNPs were genotyped in blood DNA using Illumina GoldenGate and TaqMan assays. Cases were subphenotyped according to stage/grade and tumor p53 expression. We applied classical tests to assess individual SNP associations and least absolute shrinkage and selection operator (LASSO)-penalized logistic regression analysis to assess multiple SNPs simultaneously. Based on classical analyses, SNPs in BAK1 (1), IGF1R (5), P53AIP1 (1), PMAIP1 (2), SERPINB5 (3), TP63 (3), and TP73 (1) showed significant associations at p-value≤0.05. However, no evidence of association, either with overall risk or with specific disease subtypes, was observed after correction for multiple testing (p-value≥0.8). LASSO selected the SNP rs6567355 in SERPINB5 with 83% reproducibility. This SNP provided an OR = 1.21, 95%CI 1.05-1.38, p-value = 0.006, and a corrected p-value = 0.5 when controlling for over-estimation. We found no strong evidence that common variants in the TP53 pathway are associated with bladder cancer susceptibility. Our study suggests that it is unlikely that TP53 Arg72Pro is implicated in UCB in white Europeans. SERPINB5 and TP63 variation deserve further exploration in extended studies.
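LASSO-penalized logistic regression of the kind applied in this record can be sketched with a proximal-gradient (ISTA) solver: a gradient step on the log-loss followed by soft-thresholding. The synthetic "genotype" matrix below is purely illustrative (it is not the study's data, and the penalty level and step size are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l1_logistic(X, y, lam=0.05, step=0.1, n_iter=2000):
    # L1-penalized logistic regression via proximal gradient (ISTA):
    # minimize mean log-loss + lam * ||beta||_1
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ beta) - y) / n   # gradient of mean log-loss
        beta = beta - step * grad
        # Soft-thresholding: the proximal operator of the L1 penalty
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
    return beta

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 20))                 # 20 hypothetical tagSNPs
logit = 2.0 * X[:, 0] - 1.5 * X[:, 5]              # only two truly associated
y = (rng.uniform(size=500) < sigmoid(logit)).astype(float)
beta = l1_logistic(X, y)
support = np.nonzero(np.abs(beta) > 1e-6)[0]
print(support)
```

The nonzero coefficients form the selected set, which is how the LASSO can pick out a single SNP (as rs6567355 was selected here) from many candidates tested simultaneously.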

  3. The Art of Selection: Command Selection Failures, and a Better Way to Select Army Senior Leaders

    Science.gov (United States)

    2013-04-12

    and Effects (MFE), Force Sustainment (FS), and Operations Support (OS). Board members review board files in accordance with the instructions given to...Fires, and Effects (MFE), Operations Support (OS), and Force Sustainment (FS). The exact composition of a command selection board is governed by a...policy updated annually by the Military Personnel Management Directorate. For example, the MFE lieutenant colonel command board will be made up of one

  4. Predictive ability of genomic selection models for breeding value estimation on growth traits of Pacific white shrimp Litopenaeus vannamei

    Science.gov (United States)

    Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai

    2017-09-01

    Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of genomic estimated breeding value (GEBV). This study is a first attempt to understand the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6 359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBV, and their predictive ability was assessed by the reliability of the GEBV and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictability. The regression coefficients estimated by the three models were close to one, suggesting near to zero bias for the predictions. Therefore, when GS was applied in a L. vannamei population for the studied scenarios, all three models appeared practicable. Further analyses suggested that improved estimation of the genomic prediction could be realized by increasing the size of the training population as well as the density of SNPs.

  5. Differential evolution enhanced with multiobjective sorting-based mutation operators.

    Science.gov (United States)

    Wang, Jiahai; Liao, Jianjun; Zhou, Ying; Cai, Yiqiao

    2014-12-01

    Differential evolution (DE) is a simple and powerful population-based evolutionary algorithm. The salient feature of DE lies in its mutation mechanism. Generally, the parents in the mutation operator of DE are randomly selected from the population. Hence, all vectors are equally likely to be selected as parents without selective pressure at all. Additionally, the diversity information is always ignored. In order to fully exploit the fitness and diversity information of the population, this paper presents a DE framework with multiobjective sorting-based mutation operator. In the proposed mutation operator, individuals in the current population are firstly sorted according to their fitness and diversity contribution by nondominated sorting. Then parents in the mutation operators are proportionally selected according to their rankings based on fitness and diversity, thus, the promising individuals with better fitness and diversity have more opportunity to be selected as parents. Since fitness and diversity information is simultaneously considered for parent selection, a good balance between exploration and exploitation can be achieved. The proposed operator is applied to original DE algorithms, as well as several advanced DE variants. Experimental results on 48 benchmark functions and 12 real-world application problems show that the proposed operator is an effective approach to enhance the performance of most DE algorithms studied.
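The idea described above can be sketched minimally: rank the population by fitness and make parent selection in the DE/rand/1 mutation proportional to rank, instead of uniform. This is a simplified, fitness-only variant of the paper's multiobjective sorting (the diversity contribution is omitted), and the test objective and parameter values are assumptions:

```python
import numpy as np

def sphere(x):
    # Test objective: f(x) = sum(x_i^2), global minimum 0 at the origin
    return float(np.sum(x ** 2))

def de_rank_based(f, dim=10, pop_size=30, F=0.5, CR=0.9, n_gen=300, seed=3):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        order = np.argsort(fit)                  # sort individuals by fitness
        ranks = np.empty(pop_size)
        ranks[order] = np.arange(pop_size)       # rank 0 = current best
        # Selective pressure: better-ranked vectors are more likely parents
        probs = (pop_size - ranks) / (pop_size * (pop_size + 1) / 2)
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(pop_size, size=3, replace=False, p=probs)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
            cross = rng.uniform(size=dim) < CR           # binomial crossover
            cross[rng.integers(dim)] = True              # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                        # greedy replacement
                pop[i], fit[i] = trial, f_trial
    return float(fit.min())

best = de_rank_based(sphere)
print(best)
```

Setting `probs` to a uniform vector recovers classic DE; the rank-based weighting is what injects the selection pressure the abstract discusses, at the cost of some diversity unless a diversity term is added as in the paper.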

  6. Selected aspects of industrial psychology in reactor operation

    Energy Technology Data Exchange (ETDEWEB)

    Bohr, E; Thau, G

    1980-08-01

    An appropriate consideration of human factors in the operation of nuclear power plants may contribute to plant safety and availability. This requires an adequate evaluation of the role and the significance of human factors and a realistic understanding of human characteristics and needs. Some common misconceptions are discussed, and some outlines for future developments are proposed.

  7. Selected aspects of industrial psychology in reactor operation

    International Nuclear Information System (INIS)

    Bohr, E.; Thau, G.

    1980-01-01

    An appropriate consideration of human factors in the operation of nuclear power plants may contribute to plant safety and availability. This requires an adequate evaluation of the role and the significance of human factors and a realistic understanding of human characteristics and needs. Some common misconceptions are discussed, and some outlines for future developments are proposed. (orig.) [de]

  8. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data

    Directory of Open Access Journals (Sweden)

    Sungho Won

    2015-01-01

    Full Text Available Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these successes have substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a big hurdle to building disease prediction models. Recently, many statistical methods based on penalized regression have been proposed to tackle the so-called “large P and small N” problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
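The "large P and small N" constraint can be made concrete: with more predictors than samples, ordinary least squares is underdetermined, but the ridge-penalized normal equations are always solvable. A hedged numpy sketch with a synthetic SNP-like design matrix (all sizes and values are illustrative, not the paper's evaluation):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 1000                        # "large P, small N": p >> n
X = rng.standard_normal((n, p))         # hypothetical SNP design matrix
beta_true = np.zeros(p)
beta_true[:5] = 1.0                     # a handful of causal markers
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# X'X has rank at most n < p, so ordinary least squares has no unique
# solution; adding lam*I makes the system positive definite and solvable.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

mse_train = float(np.mean((X @ beta_ridge - y) ** 2))
# Ridge shrinks coefficients but (unlike the LASSO) does not zero them out
n_nonzero = int(np.sum(np.abs(beta_ridge) > 1e-8))
print(mse_train, n_nonzero)
```

This also shows the qualitative difference the abstract relies on: ridge returns a dense coefficient vector, whereas the LASSO's L1 penalty additionally performs variable selection by setting most coefficients exactly to zero.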

  9. Identifying predictive features in drug response using machine learning: opportunities and challenges.

    Science.gov (United States)

    Vidyasagar, Mathukumalli

    2015-01-01

    This article reviews several techniques from machine learning that can be used to study the problem of identifying a small number of features, from among tens of thousands of measured features, that can accurately predict a drug response. Prediction problems are divided into two categories: sparse classification and sparse regression. In classification, the clinical parameter to be predicted is binary, whereas in regression, the parameter is a real number. Well-known methods for both classes of problems are briefly discussed. These include the SVM (support vector machine) for classification and various algorithms such as ridge regression, LASSO (least absolute shrinkage and selection operator), and EN (elastic net) for regression. In addition, several well-established methods that do not directly fall into machine learning theory are also reviewed, including neural networks, PAM (pattern analysis for microarrays), SAM (significance analysis for microarrays), GSEA (gene set enrichment analysis), and k-means clustering. Several references indicative of the application of these methods to cancer biology are discussed.

  10. A Selective CPS Transformation

    DEFF Research Database (Denmark)

    Nielsen, Lasse Riechstein

    2001-01-01

    characterize this involvement as a control effect and we present a selective CPS transformation that makes functions and expressions continuation-passing if they have a control effect, and that leaves the rest of the program in direct style. We formalize this selective CPS transformation with an operational...

  11. Patient- and therapy-related factors associated with the incidence of xerostomia in nasopharyngeal carcinoma patients receiving parotid-sparing helical tomotherapy.

    Science.gov (United States)

    Lee, Tsair-Fwu; Liou, Ming-Hsiang; Ting, Hui-Min; Chang, Liyun; Lee, Hsiao-Yi; Wan Leung, Stephen; Huang, Chih-Jen; Chao, Pei-Ju

    2015-08-20

    We investigated the incidence of moderate to severe patient-reported xerostomia among nasopharyngeal carcinoma (NPC) patients treated with helical tomotherapy (HT) and identified patient- and therapy-related factors associated with acute and chronic xerostomia toxicity. The least absolute shrinkage and selection operator (LASSO) normal tissue complication probability (NTCP) models were developed using quality-of-life questionnaire datasets from 67 patients with NPC. For acute toxicity, the dosimetric factors of the mean doses to the ipsilateral submandibular gland (Dis) and the contralateral submandibular gland (Dcs) were selected as the first two significant predictors. For chronic toxicity, four predictive factors were selected: age, mean dose to the oral cavity (Doc), education, and T stage. The substantial sparing data can be used to avoid xerostomia toxicity. We suggest tolerance doses corresponding to a 20% incidence of complications (TD20) of Dis = 39.0 Gy, Dcs = 38.4 Gy, and Doc = 32.5 Gy, respectively, when mean doses to the parotid glands meet the QUANTEC 25 Gy sparing guidelines. To avoid patient-reported xerostomia toxicity, the mean doses to the parotid gland, submandibular gland, and oral cavity have to meet the sparing tolerance, although there is also a need to take inherent patient characteristics into consideration.

  12. Bayesian variable selection for multistate Markov models with interval-censored data in an ecological momentary assessment study of smoking cessation.

    Science.gov (United States)

    Koslovsky, Matthew D; Swartz, Michael D; Chan, Wenyaw; Leon-Novelo, Luis; Wilkinson, Anna V; Kendzor, Darla E; Businelle, Michael S

    2017-10-11

    The application of sophisticated analytical methods to intensive longitudinal data, collected with ecological momentary assessments (EMA), has helped researchers better understand smoking behaviors after a quit attempt. Unfortunately, the wealth of information captured with EMAs is typically underutilized in practice. Thus, novel methods are needed to extract this information in exploratory research studies. One of the main objectives of intensive longitudinal data analysis is identifying relations between risk factors and outcomes of interest. Our goal is to develop and apply expectation maximization variable selection for Bayesian multistate Markov models with interval-censored data to generate new insights into the relation between potential risk factors and transitions between smoking states. Through simulation, we demonstrate the effectiveness of our method in identifying associated risk factors and its ability to outperform the LASSO in a special case. Additionally, we use the expectation conditional-maximization algorithm to simplify estimation, a deterministic annealing variant to reduce the algorithm's dependence on starting values, and Louis's method to estimate unknown parameter uncertainty. We then apply our method to intensive longitudinal data collected with EMA to identify risk factors associated with transitions between smoking states after a quit attempt in a cohort of socioeconomically disadvantaged smokers who were interested in quitting. © 2017, The International Biometric Society.

  13. Remote operation: a selective review of research into visual depth perception.

    Science.gov (United States)

    Reinhardt-Rutland, A H

    1996-07-01

    Some perceptual motor operations are performed remotely; examples include the handling of life-threatening materials and surgical procedures. A camera conveys the site of operation to a TV monitor, so depth perception relies mainly on pictorial information, perhaps with enhancement of the occlusion cue by motion. However, motion information such as motion parallax is not likely to be important. The effectiveness of pictorial information is diminished by monocular and binocular information conveying flatness of the screen and by difficulties in scaling: Only a degree of relative depth can be conveyed. Furthermore, pictorial information can mislead. Depth perception is probably adequate in remote operation, if target objects are well separated, with well-defined edges and familiar shapes. Stereoscopic viewing systems are being developed to introduce binocular information to remote operation. However, stereoscopic viewing is problematic because binocular disparity conflicts with convergence and monocular information. An alternative strategy to improve precision in remote operation may be to rely on individuals who lack binocular function: There is redundancy in depth information, and such individuals seem to compensate for the lack of binocular function.

  14. Higher operational safety of nuclear power plants by evaluating the behaviour of operating personnel

    International Nuclear Information System (INIS)

    Mertins, M.; Glasner, P.

    1990-01-01

    In the GDR power reactors have been operated since 1966. Since that time operational experiences of 73 cumulative reactor years have been collected. The behaviour of operating personnel is an essential factor to guarantee the safety of operation of the nuclear power plant. Therefore a continuous analysis of the behaviour of operating personnel has been introduced at the GDR nuclear power plants. In the paper the overall system of the selection, preparation and control of the behaviour of nuclear power plant operating personnel is presented. The methods concerned are based on recording all errors of operating personnel and on analyzing them in order to find out the reasons. The aim of the analysis of reasons is to reduce the number of errors. By a feedback of experiences the nuclear safety of the nuclear power plant can be increased. All data necessary for the evaluation of errors are recorded and evaluated by a computer program. This method is explained thoroughly in the paper. Selected results of error analysis are presented. It is explained how the activities of the personnel are made safer by means of this analysis. Comparisons with other methods are made. (author). 3 refs, 4 figs

  15. T2L2 on JASON-2: First Evaluation of the Flying Model

    Science.gov (United States)

    2007-01-01

    The “Time Transfer by Laser Link” experiment T2L2 [1], under development at OCA (Observatoire de la Côte d’Azur) and CNES (Centre National d’Etudes Spatiales), France. [1] Experimental Astronomy, 7, 191-207. [2] P. Fridelance and C. Veillet, 1995, “Operation and data analysis in the LASSO experiment,” Metrologia.

  16. Economic sustainability in franchising: a model to predict franchisor success or failure

    OpenAIRE

    Calderón Monge, Esther; Pastor Sanz, Ivan .; Huerta Zavala, Pilar Angélica

    2017-01-01

    As a business model, franchising makes a major contribution to gross domestic product (GDP). A model that predicts franchisor success or failure is therefore necessary to ensure economic sustainability. In this study, such a model was developed by applying Lasso regression to a sample of franchises operating between 2002 and 2013. For franchises with the highest likelihood of survival, the franchise fees and the ratio of company-owned to franchised outlets were suited to the age ...

  17. Understanding the spectrum of residential energy-saving behaviours: French evidence using disaggregated data

    International Nuclear Information System (INIS)

    Belaïd, Fateh; Garcia, Thomas

    2016-01-01

    Analysing household energy-saving behaviours is crucial to improve energy consumption predictions and energy policy making. How should we quantitatively measure them? What are their determinants? This study explores the main factors influencing residential energy-saving behaviours based on a bottom-up multivariate statistical approach using data from the recent French PHEBUS survey. Firstly, we assess energy-saving behaviours on a one-dimensional scale using IRT. Secondly, we use linear regression with an innovative variable selection method via the adaptive lasso to tease out the effects of both macro and micro factors on the behavioural score. The results highlight the impact of five main attributes incentivizing energy-saving behaviours based on cross-variable analyses: energy price, household income, education level, age of head of household and dwelling energy performance. In addition, our results suggest that the analysis of the inverted U-shape impact of age enables the expansion of the energy consumption life cycle theory to energy-saving behaviours. - Highlights: • We examine the main factors influencing residential energy-saving behaviours. • We use data from the recent French PHEBUS survey. • We use IRT to assess energy-saving behaviours on a one-dimension scale. • We use linear regression with an innovative variable selection method via adaptive lasso. • We highlight the impact of five main attributes incentivizing energy-saving behaviours.
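Adaptive-lasso variable selection of the kind used in this study (and discussed in record 1 above) is a two-stage procedure: obtain initial coefficient estimates, weight the L1 penalty by their inverse magnitudes, and solve a standard lasso on rescaled columns. A minimal numpy sketch on synthetic data (the variables and tuning values are assumptions, not the PHEBUS analysis):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=3000):
    # Standard lasso via proximal gradient: (1/2n)||y - X b||^2 + lam*||b||_1
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)      # 1/L for the smooth part
    b = np.zeros(p)
    for _ in range(n_iter):
        b = b - step * (X.T @ (X @ b - y)) / n  # gradient step
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # prox step
    return b

def adaptive_lasso(X, y, lam=0.05, gamma=1.0):
    # Stage 1: an initial consistent fit (plain OLS here; requires n > p)
    b_init = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / (np.abs(b_init) ** gamma + 1e-8)  # adaptive penalty weights
    # Stage 2: rescaling columns turns the weighted penalty into a plain lasso
    b_w = lasso_ista(X / w, y, lam)
    return b_w / w                              # map back to original scale

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 8))
y = 2.0 * X[:, 1] - 1.0 * X[:, 4] + 0.2 * rng.standard_normal(300)
b = adaptive_lasso(X, y)
print(np.round(b, 2))
```

Because the weights penalize small initial coefficients heavily and large ones lightly, the adaptive lasso shrinks true signals less than the plain lasso does, which is the basis of its oracle property for consistent variable selection.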

  18. Predicting Kenya Short Rains Using the Indian Ocean SST

    Science.gov (United States)

    Peng, X.; Albertson, J. D.; Steinschneider, S.

    2017-12-01

    Rainfall over Eastern Africa is characterized by a typical bimodal monsoon system. The literature has shown that this monsoon system is closely connected with large-scale atmospheric motion, which is believed to be driven by sea surface temperature anomalies (SSTA). Therefore, the predictability of SSTA may be exploited in estimating the future Eastern Africa monsoon. In this study, we predicted the Kenya short rains (October, November, and December rainfall) based on the Indian Ocean SSTA. Least Absolute Shrinkage and Selection Operator (LASSO) regression is used to avoid over-fitting. Models for different lead times are trained on a 28-year training set (1979-2006) and tested on a 10-year test set (2007-2016). Satisfactory prediction skill is achieved at relatively long lead times (i.e., 8 and 10 months) in terms of correlation coefficient and sign accuracy. Unlike some previous work, the prediction models are obtained with a data-driven method. A limited number of predictors is selected for each model and can be used to understand the underlying physical connection. Still, further investigation is needed, since sampling variability cannot be excluded due to the limited sample size.
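    The two skill measures named in this record, correlation coefficient and sign accuracy, can be computed as below. The arrays stand in for observed and predicted short-rains anomalies over a held-out test period and are purely hypothetical.

```python
import numpy as np

def correlation_skill(obs, pred):
    """Pearson correlation between observed and predicted anomalies."""
    return float(np.corrcoef(obs, pred)[0, 1])

def sign_accuracy(obs, pred):
    """Fraction of years in which the predicted anomaly has the correct sign."""
    return float(np.mean(np.sign(obs) == np.sign(pred)))

# Hypothetical standardised OND rainfall anomalies for an 8-year test period.
obs  = np.array([ 1.2, -0.5,  0.8, -1.1,  0.3, -0.7,  1.5, -0.2])
pred = np.array([ 0.9, -0.3,  0.5, -0.8,  0.4, -0.5,  1.1,  0.1])
```

Sign accuracy is the more forgiving metric for seasonal forecasting: it rewards getting wet vs. dry years right even when amplitudes are off, which is why both measures are reported together.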

  19. Automated emergency operating procedures

    International Nuclear Information System (INIS)

    Perez-Ramirez, G.; Nelson, P.F.

    1990-01-01

    This paper describes the development of a training tool for the symptom-oriented emergency operating procedures (EOPs) used at the Laguna Verde Nuclear Power Plant. EOPs and operator training are intended to assist the operator in managing accident situations. A prototype expert system based on the EOPs has been developed for operator training. The demonstration expert system was built using a commercial shell. The knowledge base consists of two parts: the specific operator actions to be executed for 5 selected accident sequences, and the EOP steps for reactor pressure vessel control of water level, pressure, and power. The knowledge is expressed in the form of IF-THEN production rules. A typical training session displays a set of conditions and prompts the trainee to indicate the appropriate step to perform; this mode guides the trainee through selected accident sequences. A second mode prompts the trainee for the current plant conditions, and the expert system responds with the EOP steps required under those conditions. This allows the trainee to study "what if" situations.
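    The IF-THEN structure of such a knowledge base can be sketched in a few lines. The rules, condition names, and actions below are invented for illustration and do not reproduce actual Laguna Verde EOP content.

```python
# Minimal sketch of an IF-THEN production-rule knowledge base.
# Rule contents and condition names are hypothetical, not actual EOP text.
RULES = [
    {"if": {"rpv_level": "low"},      "then": "restore RPV water level"},
    {"if": {"rpv_pressure": "high"},  "then": "depressurize the RPV"},
    {"if": {"reactor_power": "high",
            "scram": "failed"},       "then": "initiate alternate shutdown actions"},
]

def advise(conditions):
    """Return the action of every rule whose conditions all hold."""
    return [rule["then"] for rule in RULES
            if all(conditions.get(key) == val for key, val in rule["if"].items())]
```

A training session in the second mode described above would collect the current plant conditions from the trainee and display `advise(conditions)`; the first mode would instead compare the trainee's chosen step against the matching rule.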

  20. One-carbon metabolism, cognitive impairment and CSF measures of Alzheimer pathology: homocysteine and beyond.

    Science.gov (United States)

    Dayon, Loïc; Guiraud, Seu Ping; Corthésy, John; Da Silva, Laeticia; Migliavacca, Eugenia; Tautvydaitė, Domilė; Oikonomidi, Aikaterini; Moullet, Barbara; Henry, Hugues; Métairon, Sylviane; Marquis, Julien; Descombes, Patrick; Collino, Sebastiano; Martin, François-Pierre J; Montoliu, Ivan; Kussmann, Martin; Wojcik, Jérôme; Bowman, Gene L; Popp, Julius

    2017-06-17

    Hyperhomocysteinemia is a risk factor for cognitive decline and dementia, including Alzheimer disease (AD). Homocysteine (Hcy) is a sulfur-containing amino acid and metabolite of the methionine pathway. The interrelated methionine, purine, and thymidylate cycles constitute the one-carbon metabolism that plays a critical role in the synthesis of DNA, neurotransmitters, phospholipids, and myelin. In this study, we tested the hypothesis that one-carbon metabolites beyond Hcy are relevant to cognitive function and cerebrospinal fluid (CSF) measures of AD pathology in older adults. Cross-sectional analysis was performed on matched CSF and plasma collected from 120 older community-dwelling adults with (n = 72) or without (n = 48) cognitive impairment. Liquid chromatography-mass spectrometry was performed to quantify one-carbon metabolites and their cofactors. Least absolute shrinkage and selection operator (LASSO) regression was initially applied to clinical and biomarker measures that generate the highest diagnostic accuracy of a priori-defined cognitive impairment (Clinical Dementia Rating-based) and AD pathology (i.e., CSF tau phosphorylated at threonine 181 [p-tau181]/β-Amyloid 1-42 peptide [Aβ1-42] > 0.0779) to establish a reference benchmark. Two other LASSO-determined models were generated that included the one-carbon metabolites in CSF and then plasma. Correlations of CSF and plasma one-carbon metabolites with CSF amyloid and tau were explored. LASSO-determined models were stratified by apolipoprotein E (APOE) ε4 carrier status. The diagnostic accuracy of cognitive impairment for the reference model was 80.8% and included age, years of education, Aβ1-42, tau, and p-tau181. A model including CSF cystathionine, methionine, S-adenosyl-L-homocysteine (SAH), S-adenosylmethionine (SAM), serine, cysteine, and 5-methyltetrahydrofolate (5-MTHF) improved the diagnostic accuracy to 87.4%. A second model derived from plasma included cystathionine
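    The a priori AD-pathology definition in this record is a simple ratio cutoff on two CSF biomarkers (p-tau181/Aβ1-42 > 0.0779); a sketch, with hypothetical concentration values:

```python
def ad_pathology_positive(p_tau181, abeta_1_42, cutoff=0.0779):
    """Flag CSF AD pathology using the study's a priori p-tau181/Aβ1-42 cutoff.

    Both inputs are CSF concentrations in the same units (e.g., pg/mL).
    """
    return (p_tau181 / abeta_1_42) > cutoff

# Hypothetical CSF values for two subjects.
flag_a = ad_pathology_positive(p_tau181=80.0, abeta_1_42=600.0)   # ratio 0.133
flag_b = ad_pathology_positive(p_tau181=40.0, abeta_1_42=900.0)   # ratio 0.044
```

This binary flag is the classification target against which the LASSO-selected biomarker panels are benchmarked for diagnostic accuracy.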

  1. Ion-selective electrode reviews

    CERN Document Server

    Thomas, J D R

    1982-01-01

    Ion-Selective Electrode Reviews, Volume 3, provides a review of articles on ion-selective electrodes (ISEs). The volume begins with an article on methods based on titration procedures for surfactant analysis, which have been developed for discrete batch operation and for continuous AutoAnalyser use. Separate chapters deal with detection limits of ion-selective electrodes; the possibility of using inorganic ion-exchange materials as ion-sensors; and the effect of solvent on potentials of cells with ion-selective electrodes. Also included is a chapter on advances in calibration procedures, the d

  2. A flexible framework for sparse simultaneous component based data integration

    Directory of Open Access Journals (Sweden)

    Van Deun Katrijn

    2011-11-01

    Full Text Available Abstract Background High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays often the challenge is to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. Results We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, sparse group lasso, and elitist lasso. In addition, the algorithmic results can be easily transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Conclusion Sparse simultaneous component analysis is a useful method for data integration: First, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses and second, interpretation of the results is highly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such

  3. A flexible framework for sparse simultaneous component based data integration.

    Science.gov (United States)

    Van Deun, Katrijn; Wilderjans, Tom F; van den Berg, Robert A; Antoniadis, Anestis; Van Mechelen, Iven

    2011-11-15

    High throughput data are complex and methods that reveal structure underlying the data are most useful. Principal component analysis, frequently implemented as a singular value decomposition, is a popular technique in this respect. Nowadays often the challenge is to reveal structure in several sources of information (e.g., transcriptomics, proteomics) that are available for the same biological entities under study. Simultaneous component methods are most promising in this respect. However, the interpretation of the principal and simultaneous components is often daunting because contributions of each of the biomolecules (transcripts, proteins) have to be taken into account. We propose a sparse simultaneous component method that makes many of the parameters redundant by shrinking them to zero. It includes principal component analysis, sparse principal component analysis, and ordinary simultaneous component analysis as special cases. Several penalties can be tuned that account in different ways for the block structure present in the integrated data. This yields known sparse approaches such as the lasso, the ridge penalty, the elastic net, the group lasso, sparse group lasso, and elitist lasso. In addition, the algorithmic results can be easily transposed to the context of regression. Metabolomics data obtained with two measurement platforms for the same set of Escherichia coli samples are used to illustrate the proposed methodology and the properties of different penalties with respect to sparseness across and within data blocks. Sparse simultaneous component analysis is a useful method for data integration: First, simultaneous analyses of multiple blocks offer advantages over sequential and separate analyses and second, interpretation of the results is highly facilitated by their sparseness. The approach offered is flexible and allows the block structure to be taken into account in different ways. As such, structures can be found that are exclusively tied to one data platform.
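    The shrinkage idea behind sparse (simultaneous) component analysis can be illustrated with a rank-1 penalized matrix decomposition in the style of sparse PCA: alternate between updating the score vector and soft-thresholding the loadings. This is a generic sketch on synthetic data, not the authors' algorithm; the penalty level `lam` is a hypothetical choice and must leave at least one loading nonzero.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_component(X, lam, n_iter=50):
    """Rank-1 sparse component: alternate score updates and L1-shrunken loadings."""
    u = X[:, 0] / np.linalg.norm(X[:, 0])      # initial score vector
    for _ in range(n_iter):
        v = soft_threshold(X.T @ u, lam)       # shrink small loadings to exactly zero
        v /= np.linalg.norm(v)                 # assumes lam keeps some loading alive
        u = X @ v
        u /= np.linalg.norm(u)
    return u, v

# Synthetic data: only the first 3 of 10 variables carry the component.
rng = np.random.default_rng(0)
scores = rng.standard_normal(50)
pattern = np.array([1.0] * 3 + [0.0] * 7)
X = 2.0 * np.outer(scores, pattern) + 0.1 * rng.standard_normal((50, 10))
u, v = sparse_component(X, lam=2.0)
```

The recovered loading vector `v` is exactly zero on the seven noise variables, which is the interpretability gain the abstract describes: each component is tied to a small, readable set of biomolecules.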

  4. Genomic Prediction Accuracy for Resistance Against Piscirickettsia salmonis in Farmed Rainbow Trout

    Directory of Open Access Journals (Sweden)

    Grazyella M. Yoshida

    2018-02-01

    Full Text Available Salmonid rickettsial syndrome (SRS), caused by the intracellular bacterium Piscirickettsia salmonis, is one of the main diseases affecting rainbow trout (Oncorhynchus mykiss) farming. To accelerate genetic progress, genomic selection methods can be used as an effective approach to control the disease. The aims of this study were: (i) to compare the accuracy of estimated breeding values using pedigree-based best linear unbiased prediction (PBLUP) with genomic BLUP (GBLUP), single-step GBLUP (ssGBLUP), Bayes C, and Bayesian Lasso (LASSO); and (ii) to test the accuracy of genomic prediction and PBLUP using different marker densities (0.5, 3, 10, 20, and 27 K) for resistance against P. salmonis in rainbow trout. Phenotypes were recorded as number of days to death (DD) and binary survival (BS) from 2416 fish challenged with P. salmonis. A total of 1934 fish were genotyped using a 57 K single-nucleotide polymorphism (SNP) array. All genomic prediction methods achieved higher accuracies than PBLUP. The relative increase in accuracy for different genomic models ranged from 28 to 41% for both DD and BS at 27 K SNP. Between different genomic models, the highest relative increase in accuracy was obtained with Bayes C (∼40%), where 3 K SNP was enough to achieve a similar accuracy to that of the 27 K SNP for both traits. For resistance against P. salmonis in rainbow trout, we showed that genomic predictions using GBLUP, ssGBLUP, Bayes C, and LASSO can increase accuracy compared with PBLUP. Moreover, it is possible to use relatively low-density SNP panels for genomic prediction without compromising prediction accuracy for resistance against P. salmonis in rainbow trout.

  5. Network-based group variable selection for detecting expression quantitative trait loci (eQTL)

    Directory of Open Access Journals (Sweden)

    Zhang Xuegong

    2011-06-01

    Full Text Available Abstract Background Analysis of expression quantitative trait loci (eQTL) aims to identify the genetic loci associated with the expression level of genes. Penalized regression with a proper penalty is suitable for high-dimensional biological data. Its performance should be enhanced when we incorporate biological knowledge of the gene expression network and the linkage disequilibrium (LD) structure between loci in a high-noise background. Results We propose a network-based group variable selection (NGVS) method for QTL detection. Our method simultaneously maps highly correlated expression traits sharing the same biological function to marker sets formed by LD. By grouping markers, the complex joint activity of multiple SNPs can be considered and the dimensionality of the eQTL problem is reduced dramatically. In order to demonstrate the power and flexibility of our method, we used it to analyze two simulations and a mouse obesity and diabetes dataset. We considered the gene co-expression network, grouped markers into marker sets and treated the additive and dominant effects of each locus as a group: as a consequence, we were able to replicate results previously obtained on the mouse linkage dataset. Furthermore, we observed several possible sex-dependent loci and interactions of multiple SNPs. Conclusions The proposed NGVS method is appropriate for problems with high-dimensional data and a high-noise background. On the eQTL problem, it outperforms the classical Lasso method, which does not consider biological knowledge. Introduction of proper gene expression and loci correlation information makes detecting causal markers more accurate. With reasonable model settings, NGVS can lead to novel biological findings.
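    The marker-set grouping above relies on a group-lasso-type penalty, whose proximal operator shrinks each marker set's coefficient block jointly: the whole block is either kept (and shrunk) or zeroed together. A sketch of that block soft-thresholding step, with hypothetical groups:

```python
import numpy as np

def group_soft_threshold(b, groups, t):
    """Proximal operator of the group-lasso penalty: per-group block shrinkage."""
    out = b.copy()
    for g in groups:
        norm = np.linalg.norm(b[g])
        # Zero the whole block if its norm is small; otherwise shrink it jointly.
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * b[g]
    return out

# Two marker sets: the first has a strong joint effect, the second only noise.
b = np.array([3.0, 4.0, 0.1, 0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = group_soft_threshold(b, groups, t=1.0)   # -> [2.4, 3.2, 0.0, 0.0]
```

This all-in-or-all-out behaviour is what lets NGVS treat the additive and dominant effects of a locus, or an LD-defined marker set, as a single selectable unit.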

  6. Directional migration of recirculating lymphocytes through lymph nodes via random walks.

    Directory of Open Access Journals (Sweden)

    Niclas Thomas

    Full Text Available Naive T lymphocytes exhibit extensive antigen-independent recirculation between blood and lymph nodes, where they may encounter dendritic cells carrying cognate antigen. We examine how long different T cells may spend in an individual lymph node by examining data from long-term cannulation of blood and efferent lymphatics of a single lymph node in the sheep. We determine empirically the distribution of transit times of migrating T cells by applying the Least Absolute Shrinkage & Selection Operator (LASSO) or regularised S-LASSO to fit experimental data describing the proportion of labelled infused cells in blood and efferent lymphatics over time. The optimal inferred solution reveals a distribution with high variance and strong skew. The mode transit time is typically between 10 and 20 hours, but a significant number of cells spend more than 70 hours before exiting. We complement the empirical machine-learning-based approach by modelling lymphocyte passage through the lymph node in silico. On the basis of previous two-photon analysis of lymphocyte movement, we optimised distributions which describe the transit times (first passage times) of discrete one-dimensional and continuous (Brownian) three-dimensional random walks with drift. The optimal fit is obtained when drift is small, i.e. the ratio of probabilities of migrating forward and backward within the node is close to one. These distributions are qualitatively similar to the inferred empirical distribution, with high variance and strong skew. In contrast, an optimised normal distribution of transit times (symmetrical around the mean) fitted the data poorly. The results demonstrate that the rapid recirculation of lymphocytes observed at a macro level is compatible with predominantly randomised movement within lymph nodes, and significant probabilities of long transit times. We discuss how this pattern of migration may contribute to facilitating interactions between low frequency T cells and antigen.
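    The discrete one-dimensional random walk with weak drift described above can be simulated directly to reproduce the qualitative finding: a first-passage-time distribution with high variance and strong right skew. The parameter values below (node depth, forward probability) are illustrative, not fitted to the sheep data.

```python
import numpy as np

def first_passage_times(n_cells=500, p_forward=0.52, depth=10,
                        max_steps=20_000, seed=1):
    """First-passage times of a reflected 1-D random walk with weak drift.

    Each walker starts at the node entry (position 0), steps forward with
    probability p_forward, and exits on reaching `depth`.
    """
    rng = np.random.default_rng(seed)
    times = np.empty(n_cells, dtype=np.int64)
    for i in range(n_cells):
        pos = t = 0
        while pos < depth and t < max_steps:
            t += 1
            pos += 1 if rng.random() < p_forward else -1
            pos = max(pos, 0)                  # reflecting entry boundary
        times[i] = t
    return times

times = first_passage_times()
```

With drift close to zero (forward/backward ratio near one), the long tail of the simulated distribution pulls the mean well above the median — the same skew the LASSO-inferred empirical distribution exhibits.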

  7. Selecting and calculating joint operation of oil and petroleum gas collection systems, and mechanized production methods

    Energy Technology Data Exchange (ETDEWEB)

    Guseva, L S; D' yachuk, A I; Davydova, L V; Maslov, V P; Salyautdinova, R M; Suslov, V M

    1979-01-01

    The possibility is examined of formalizing the indicated procedure in the process of performing step by step calculations. At the first step, considering limitations imposed by the dominating parameters, preliminary selection is performed of the acceptable combination of the type of collection system and methods of mechanized production for the development conditions examined. The second step provides for physical simulation at a well of an experimental section of time-variable conditions of field development. The values of the technological indices thus defined are then considered to be reliable information for technico-economic calculations. Parallel research is done on the technological features of operation of the collection systems chosen and their individual elements (pipeline system, separation units, etc.), which the experimental section is fitted with beforehand. Material is given which illustrates in detail the basic assumptions of the technique proposed and the calculation procedure.

  8. Psychological and Physiological Selection of Military Special Operations Forces Personnel (Selection psychologique et physiologique des militaires des forces d’operations speciales)

    Science.gov (United States)

    2012-10-01

    in the selection literature today is the Five Factor Model (FFM) or "Big 5" model of personality. This model includes: 1) Openness; 2) Conscientiousness; 3) Extraversion; 4) Agreeableness; and 5) Emotional Stability. Meta-analytic studies have found the FFM of personality to be predictive ... is a self-report measure of the FFM that has demonstrated reliability and validity in numerous studies [18]. Another FFM measure, the Trait Self

  9. Managing risk and expected financial return from selective expansion of operating room capacity: mean-variance analysis of a hospital's portfolio of surgeons.

    Science.gov (United States)

    Dexter, Franklin; Ledolter, Johannes

    2003-07-01

    Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk. Surgeon and subspecialty specific hospital financial data are uncertain, a fact that should be taken into account when making decisions about expanding operating room capacity. We show that mean-variance portfolio analysis can incorporate this uncertainty, thereby guiding operating room management decision-making and reducing the chance of a strategic decision being made based on spurious information.
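    The mean-variance framing in this record can be made concrete: for a candidate allocation of additional OR hours across surgeons, the portfolio's expected contribution margin per OR hour and its variance follow directly from the surgeons' estimated means and covariance. The figures below are hypothetical.

```python
import numpy as np

def portfolio_stats(weights, mean_cm, cov_cm):
    """Expected contribution margin per OR hour, and its variance, for a mix."""
    w = np.asarray(weights, dtype=float)
    return float(w @ mean_cm), float(w @ cov_cm @ w)

# Hypothetical estimates for two surgeons ($ per OR hour).
mean_cm = np.array([1500.0, 1000.0])
cov_cm = np.array([[400.0**2, 0.0],
                   [0.0,      100.0**2]])      # surgeon 1's estimate is far noisier

ret_all_in, var_all_in = portfolio_stats([1.0, 0.0], mean_cm, cov_cm)
ret_split,  var_split  = portfolio_stats([0.5, 0.5], mean_cm, cov_cm)
```

Here the 50/50 split gives up $250/h of expected margin but cuts the variance from 160,000 to 42,500 — the diversification effect that reduces the chance of an expansion decision resting on a spurious contribution-margin estimate.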

  10. Improving the efficiency of spatially selective operations for agricultural robotics in cropping field

    Directory of Open Access Journals (Sweden)

    Y. L. Li

    2013-01-01

    Full Text Available Cropping fields often have well-defined poor-performing patches due to spatial and temporal variability. In an attempt to increase crop performance on poor patches, spatially selective field operations may be performed by agricultural robotics to apply additional inputs with targeted requirements. This paper addresses the route planning problem for an agricultural robot that has to treat some poor-patches in a field with row crops, with respect to the minimization of the total non-working distance travelled during headland turnings and in-field travel distance. The traversal of patches in the field is expressed as the traversal of a mixed weighted graph, and then the problem of finding an optimal patch sequence is formulated as an asymmetric traveling salesman problem and solved by the partheno-genetic algorithm. The proposed method is applied on a cropping field located in Northwestern China. Research results show that by using optimum patch sequences, the total non-working distance travelled during headland turnings and in-field travel distance can be reduced. But the savings on the non-working distance inside the field interior depend on the size and location of patches in the field, and the introduction of agricultural robotics is beneficial to increase field efficiency.
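    The paper solves the asymmetric travelling-salesman formulation with a partheno-genetic algorithm; as a much simpler baseline for the same problem, the patch sequence can be built greedily with a nearest-neighbour heuristic over the asymmetric distance matrix. The distances below are hypothetical.

```python
def nearest_neighbour_sequence(dist, start=0):
    """Greedy patch ordering over an asymmetric non-working-distance matrix."""
    unvisited = set(range(len(dist))) - {start}
    seq = [start]
    while unvisited:
        current = seq[-1]
        nxt = min(unvisited, key=lambda j: dist[current][j])
        seq.append(nxt)
        unvisited.remove(nxt)
    return seq

# Asymmetric distances between 4 patches: headland turns cost more one way.
dist = [[0, 2, 9, 9],
        [9, 0, 2, 9],
        [9, 9, 0, 2],
        [2, 9, 9, 0]]
order = nearest_neighbour_sequence(dist)   # -> [0, 1, 2, 3]
```

A metaheuristic such as the partheno-genetic algorithm improves on this greedy tour precisely in the cases the abstract mentions, where the size and location of patches make local choices globally suboptimal.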

  11. Improving the efficiency of spatially selective operations for agricultural robotics in cropping field

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y. L.; Yi, S. P.

    2013-05-01

    Cropping fields often have well-defined poor-performing patches due to spatial and temporal variability. In an attempt to increase crop performance on poor patches, spatially selective field operations may be performed by agricultural robotics to apply additional inputs with targeted requirements. This paper addresses the route planning problem for an agricultural robot that has to treat some poor-patches in a field with row crops, with respect to the minimization of the total non-working distance travelled during headland turnings and in-field travel distance. The traversal of patches in the field is expressed as the traversal of a mixed weighted graph, and then the problem of finding an optimal patch sequence is formulated as an asymmetric traveling salesman problem and solved by the parthenogenetic algorithm. The proposed method is applied on a cropping field located in Northwestern China. Research results show that by using optimum patch sequences, the total non-working distance travelled during headland turnings and in-field travel distance can be reduced. But the savings on the non-working distance inside the field interior depend on the size and location of patches in the field, and the introduction of agricultural robotics is beneficial to increase field efficiency. (Author) 21 refs.

  12. Optimizing Warehouse Logistics Operations Through Site Selection Models: Istanbul, Turkey

    National Research Council Canada - National Science Library

    Erdemir, Ugur

    2003-01-01

    .... Given the dynamic environment surrounding the military operations, logistic sustainability requirements, rapid information technology developments, and budget-constrained Turkish DoD acquisition...

  13. Microbiological sampling plan based on risk classification to verify supplier selection and production of served meals in food service operation.

    Science.gov (United States)

    Lahou, Evy; Jacxsens, Liesbeth; Van Landeghem, Filip; Uyttendaele, Mieke

    2014-08-01

    Food service operations are confronted with a diverse range of raw materials and served meals. The implementation of a microbial sampling plan in the framework of verification of suppliers and their own production process (functionality of their prerequisite and HACCP program), demands selection of food products and sampling frequencies. However, these are often selected without a well described scientifically underpinned sampling plan. Therefore, an approach on how to set-up a focused sampling plan, enabled by a microbial risk categorization of food products, for both incoming raw materials and meals served to the consumers is presented. The sampling plan was implemented as a case study during a one-year period in an institutional food service operation to test the feasibility of the chosen approach. This resulted in 123 samples of raw materials and 87 samples of meal servings (focused on high risk categorized food products) which were analyzed for spoilage bacteria, hygiene indicators and food borne pathogens. Although sampling plans are intrinsically limited in assessing the quality and safety of sampled foods, it was shown to be useful to reveal major non-compliances and opportunities to improve the food safety management system in place. Points of attention deduced in the case study were control of Listeria monocytogenes in raw meat spread and raw fish as well as overall microbial quality of served sandwiches and salads. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. BSL-3 laboratory practices in the United States: comparison of select agent and non-select agent facilities.

    Science.gov (United States)

    Richards, Stephanie L; Pompei, Victoria C; Anderson, Alice

    2014-01-01

    New construction of biosafety level 3 (BSL-3) laboratories in the United States has increased in the past decade to facilitate research on potential bioterrorism agents. The Centers for Disease Control and Prevention inspect BSL-3 facilities and review commissioning documentation, but no single agency has oversight over all BSL-3 facilities. This article explores the extent to which standard operating procedures in US BSL-3 facilities vary between laboratories with select agent or non-select agent status. Comparisons are made for the following variables: personnel training, decontamination, personal protective equipment (PPE), medical surveillance, security access, laboratory structure and maintenance, funding, and pest management. Facilities working with select agents had more complex training programs and decontamination procedures than non-select agent facilities. Personnel working in select agent laboratories were likely to use powered air purifying respirators, while non-select agent laboratories primarily used N95 respirators. More rigorous medical surveillance was carried out in select agent workers (although not required by the select agent program) and a higher level of restrictive access to laboratories was found. Most select agent and non-select agent laboratories reported adequate structural integrity in facilities; however, differences were observed in personnel perception of funding for repairs. Pest management was carried out by select agent personnel more frequently than non-select agent personnel. Our findings support the need to promote high quality biosafety training and standard operating procedures in both select agent and non-select agent laboratories to improve occupational health and safety.

  15. Analysis of remote operating systems for space-based servicing operations, volume 1

    Science.gov (United States)

    1985-01-01

    A two phase study was conducted to analyze and develop the requirements for remote operating systems as applied to space based operations for the servicing, maintenance, and repair of satellites. Phase one consisted of the development of servicing requirements to establish design criteria for remote operating systems. Phase two defined preferred system concepts and development plans which met the requirements established in phase one. The specific tasks in phase two were to: (1) identify desirable operational and conceptual approaches for selected mission scenarios; (2) examine the potential impact of remote operating systems incorporated into the design of the space station; (3) address remote operating systems design issues, such as mobility, which are effected by the space station configuration; and (4) define the programmatic approaches for technology development, testing, simulation, and flight demonstration.

  16. Pulmonary vein isolation using the Rhythmia mapping system: Verification of intracardiac signals using the Orion mini-basket catheter.

    Science.gov (United States)

    Anter, Elad; Tschabrunn, Cory M; Contreras-Valdes, Fernando M; Li, Jianqing; Josephson, Mark E

    2015-09-01

    During pulmonary vein isolation (PVI), a circular lasso catheter is positioned at the junction between the left atrium (LA) and the pulmonary vein (PV) to confirm PVI. The Rhythmia mapping system uses the Orion mini-basket catheter with 64 electrodes instead of the lasso catheter. However, its feasibility to determine PVI has not been studied. The purpose of this study was to compare signals between the mini-basket and lasso catheters at the LA-PV junction. In 12 patients undergoing PVI using Rhythmia, the mini-basket and lasso catheters were placed simultaneously at the LA-PV junction for baseline and post-PVI signal assessment. Pacing from both catheters was performed to examine the presence of exit block. At baseline, recordings of LA and PV potentials were concordant in all PVs. However, after PVI, concordance between the catheters was only 68%. Discordance in all cases resulted from loss of PV potentials on the lasso catheter with persistence of PV potentials on the mini-basket catheter. In 9 of 13 PVs (69%), these potentials represented true PV potentials that were exclusively recorded with the smaller and closely spaced mini-basket electrodes. In the other 4 PVs (31%), these potentials originated from neighboring structures and resulted in underestimation of PVI. The use of the mini-basket catheter alone is sufficient to determine PVI. While it improves recording of PV potentials after incomplete ablation, it is also associated with frequent recording of "PV-like" potentials originating from neighboring structures. In these cases, pacing maneuvers are helpful to determine PVI and avoid excessive ablation. Copyright © 2015 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  17. Expert system for operational personnel support during power unit operation control in regulation range

    International Nuclear Information System (INIS)

    Yanitskij, V.A.

    1992-01-01

    The problems encountered when developing operator support systems for use during power unit operation are considered. An expert system for intelligent support of NPP personnel is described. It combines properties of artificial intelligence systems, including selection of the analysis method with account of the concrete technological situation, with the capability of applying algorithmic calculations of equipment characteristics, using information accumulated during system development, erection, and operation.

  18. Identifying Associations Between Brain Imaging Phenotypes and Genetic Factors via A Novel Structured SCCA Approach.

    Science.gov (United States)

    Du, Lei; Zhang, Tuo; Liu, Kefei; Yan, Jingwen; Yao, Xiaohui; Risacher, Shannon L; Saykin, Andrew J; Han, Junwei; Guo, Lei; Shen, Li

    2017-06-01

    Brain imaging genetics attracts increasing attention since it can reveal associations between genetic factors and the structure or function of the human brain. Sparse canonical correlation analysis (SCCA) is a powerful bi-multivariate association identification technique in imaging genetics. Many SCCA methods can capture different types of structured imaging genetic relationships: they either use the group lasso to recover group structure, or employ the graph/network-guided fused lasso to find network structure. However, group lasso methods have limited generalizability because prior knowledge is often incomplete or unavailable in the real world, and graph/network-guided methods are sensitive to the sign of the sample correlation, which may be incorrectly estimated. We introduce a new SCCA model using a novel graph-guided pairwise group lasso penalty, and propose an efficient optimization algorithm. The proposed method has a strong upper bound on the grouping effect for both positively and negatively correlated variables. We show that our method performs better than or comparably to two state-of-the-art SCCA methods on both synthetic and real neuroimaging genetics data. In particular, our method identifies stronger canonical correlations and captures better canonical loading profiles, showing its promise for revealing biologically meaningful imaging genetic associations.
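    The grouping effect comes from a penalty that couples each connected pair of variables. One illustrative form of such a pairwise group penalty (an assumption here, not necessarily the paper's exact formula) sums the Euclidean norms of the weight pairs along graph edges, which pushes connected variables to enter or leave the model together.

```python
import numpy as np

def pairwise_group_penalty(u, edges):
    """Sum of L2 norms of canonical-weight pairs over graph edges.

    One illustrative pairwise-group form, assumed here for the sketch.
    """
    return float(sum(np.hypot(u[i], u[j]) for i, j in edges))

# Toy canonical weights on 3 variables, with a chain graph 0-1-2.
u = np.array([3.0, 4.0, 0.0])
penalty = pairwise_group_penalty(u, [(0, 1), (1, 2)])   # hypot(3,4) + hypot(4,0) = 9.0
```

Because this penalty depends only on magnitudes, it behaves the same for positively and negatively correlated variable pairs — the sign-robustness the abstract contrasts with correlation-guided fused penalties.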

  19. Non-operative management of perforated peptic ulcer

    International Nuclear Information System (INIS)

    Rahman, M.M.; Ahsan, H.N.; Hossain, M.D.

    2003-01-01

    Objective: The aim of this study was to assess morbidity and mortality with non-operative management of peptic ulcer perforation in selected cases. Results: Of the 54 selected patients, the male:female ratio was 49:5. Nine had a history of NSAID intake. There was no mortality. Morbidity analysis showed that three patients had hepatic abscess, four had pelvic abscess, six took a prolonged time to improve, and in two cases conservative treatment had to be abandoned and laparotomy was performed during the same hospital admission. Conclusion: Non-operative management is a safe and effective measure for perforated peptic ulcer in selected cases. (author)

  20. Research of psychological characteristics and performance relativity of operators

    International Nuclear Information System (INIS)

    Fang Xiang; He Xuhong; Zhao Bingquan

    2008-01-01

    With the working tasks of an operator taken fully into account, this paper designs, on the one hand, a scale for measuring psychological characteristics through the selection of special dimensions and, on the other hand, a performance appraisal scale through the choice of suitable standards for an operator. The paper analyzes the results of both instruments, takes relevant nuclear power plant operators as the research subjects, and obtains the correlation between operators' psychological characteristics and performance. The research can serve as an important applied reference for the selection, evaluation and use of operators.

  1. Practical selection and method of operation of the sedimentation settling tanks in the clay mining industry; Praktische Auswahl und Betriebsart der Sedimentationsklaerbecken im Tonbergbau

    Energy Technology Data Exchange (ETDEWEB)

    Groborz, Withold-Simon [Sibelco Deutschland GmbH, Ransbach-Baumbach (Germany)

    2009-10-22

    The application of the "linear principles as optimisation basis for the technical planning of the sedimentation tanks in the clay mining industry" described in GLUeCKAUF 143 (2007), No. 10 permits rapid and simplified planning of the settling tanks required for this purpose, which is fully dependent on the size of the dirty water pump used. The geometrical tank size is specified in advance. The course of the sedimentation process can be clearly improved if there is more than one settling tank in operation, whereby the selection of the method of operation of the tanks can basically be left to the mine operator. Nevertheless, practical experience in this field has proved that connection of the tanks in series can be regarded as more effective for the sedimentation process. (orig.)

  2. The human factor in nuclear reactor operation

    International Nuclear Information System (INIS)

    Bertron, L.

    1982-05-01

    The principal operating characteristics of nuclear power plants are summarized. A description of the major hazards relating to operator fallibility in normal and abnormal operating conditions is followed by a specific analysis of control room hazards, shift organization, and the selection and training of management personnel.

  3. Assembly For Moving a Robotic Device Along Selected Axes

    Science.gov (United States)

    Nowlin, Brentley Craig (Inventor); Koch, Lisa Danielle (Inventor)

    2001-01-01

    An assembly for moving a robotic device along selected axes includes a programmable logic controller (PLC) for controlling movement of the device along selected axes to effect movement of the device to a selected disposition. The PLC includes a plurality of single axis motion control modules, and a central processing unit (CPU) in communication with the motion control modules. A human-machine interface is provided for operator selection of configurations of device movements and is in communication with the CPU. A motor drive is in communication with each of the motion control modules and is operable to effect movement of the device along the selected axes to obtain movement of the device to the selected disposition.

  4. Reactor operator screening test experiences

    International Nuclear Information System (INIS)

    O'Brien, W.J.; Penkala, J.L.; Witzig, W.F.

    1976-01-01

    When it became apparent to Duquesne Light Company of Pittsburgh, Pennsylvania, that the throughput of their candidate selection-Phase I training-reactor operator certification sequence was something short of acceptable, the utility decided to ask consultants to make recommendations with respect to candidate selection procedures. The recommendation implemented was to create a Nuclear Training Test that would predict the success of a candidate in completing Phase I training and subsequently qualify for reactor operator certification. The mechanics involved in developing and calibrating the Nuclear Training Test are described. An arbitration decision that resulted when a number of International Brotherhood of Electrical Workers union employees filed a grievance alleging that the selection examination was unfair, invalid, not job related, inappropriate, and discriminatorily evaluated is also discussed. The arbitration decision favored the use of the Nuclear Training Test

  5. Real-time operating system for selected Intel processors

    Science.gov (United States)

    Pool, W. R.

    1980-01-01

    The rationale for system development is given along with reasons for not using vendor supplied operating systems. Although many system design and performance goals were dictated by problems with vendor supplied systems, other goals surfaced as a result of a design for a custom system able to span multiple projects. System development and management problems and areas that required redesign or major code changes for system implementation are examined as well as the relative successes of the initial projects. A generic description of the actual project is provided and the ongoing support requirements and future plans are discussed.

  6. HARMONIC DRIVE SELECTION

    Directory of Open Access Journals (Sweden)

    Piotr FOLĘGA

    2014-03-01

    Full Text Available The variety of harmonic drive types and sizes currently in production makes their rational selection a problem. A properly selected harmonic drive must meet certain requirements during operation and achieve the anticipated service life. The paper discusses the problems associated with harmonic drive selection and presents an algorithm for the correct choice of a harmonic drive. The main objective of this study was to develop a computer program that enables the correct choice of a harmonic drive using the developed algorithm.

  7. Reproducibility of CT bone dosimetry: Operator versus automated ROI definition

    International Nuclear Information System (INIS)

    Louis, O.; Luypaert, R.; Osteaux, M.; Kalender, W.

    1988-01-01

    Intrasubject reproducibility with repeated determination of vertebral mineral density from a given set of CT images was investigated. The region of interest (ROI) in 10 patient scans was selected by four independent operators either manually or with an automated procedure separating cortical and spongeous bone, the operators being requested to interact in ROI selection. The mean intrasubject variation was found to be much lower with the automated process (0.3 to 0.6%) than with the conventional method (2.5 to 5.2%). In a second study, 10 patients were examined twice to determine the reproducibility of CT slice selection by the operator. The errors were of the same order of magnitude as in ROI selection. (orig.)

  8. Temperature Characteristics of Monolithically Integrated Wavelength-Selectable Light Sources

    International Nuclear Information System (INIS)

    Han Liang-Shun; Zhu Hong-Liang; Zhang Can; Ma Li; Liang Song; Wang Wei

    2013-01-01

    The temperature characteristics of monolithically integrated wavelength-selectable light sources are experimentally investigated. The wavelength-selectable light sources consist of four distributed feedback (DFB) lasers, a multimode interferometer coupler, and a semiconductor optical amplifier. The oscillating wavelength of a DFB laser can be modulated by adjusting the device operating temperature. A wavelength range covering over 8.0 nm is obtained with stable single-mode operation by selecting the appropriate laser and chip temperature. The thermal crosstalk caused by lateral heat spreading between simultaneously operating lasers is evaluated via the oscillating-wavelength shift; the thermal crosstalk decreases approximately exponentially with increasing distance between the lasers.

  9. Operating experience feedback in TVO

    Energy Technology Data Exchange (ETDEWEB)

    Piirto, A [Teollisuuden Voima Oy (Finland)

    1997-12-31

    TVO is a power company operating two 710 MW BWR units at Olkiluoto. For operating experience feedback, TVO has not established a separate organizational unit but rather relies on a group of persons representing various technical disciplines. The "Operating Experience Group" meets at about three-week intervals to handle the reports of events (in-plant and external) which have been selected for handling by an engineer responsible for experience feedback. 7 charts.

  10. Operator-based metric for nuclear operations automation assessment

    Energy Technology Data Exchange (ETDEWEB)

    Zacharias, G.L.; Miao, A.X.; Kalkan, A. [Charles River Analytics Inc., Cambridge, MA (United States)] [and others

    1995-04-01

    Continuing advances in real-time computational capabilities will support enhanced levels of smart automation and AI-based decision-aiding systems in the nuclear power plant (NPP) control room of the future. To support development of these aids, we describe in this paper a research tool, and more specifically, a quantitative metric, to assess the impact of proposed automation/aiding concepts in a manner that can account for a number of interlinked factors in the control room environment. In particular, we describe a cognitive operator/plant model that serves as a framework for integrating the operator's information-processing capabilities with his procedural knowledge, to provide insight as to how situations are assessed by the operator, decisions made, procedures executed, and communications conducted. Our focus is on the situation assessment (SA) behavior of the operator, the development of a quantitative metric reflecting overall operator awareness, and the use of this metric in evaluating automation/aiding options. We describe the results of a model-based simulation of a selected emergency scenario, and metric-based evaluation of a range of contemplated NPP control room automation/aiding options. The results demonstrate the feasibility of model-based analysis of contemplated control room enhancements, and highlight the need for empirical validation.

  11. Louis Boutet de Monvel, selected works

    CERN Document Server

    Sjöstrand, Johannes

    2017-01-01

    This book features a selection of articles by Louis Boutet de Monvel and presents his contributions to the theory of partial differential equations and analysis. The works selected here reveal his central role in the development of his field, including three cornerstones: firstly, analytic pseudodifferential operators, which have become a fundamental aspect of analytic microlocal analysis, and secondly the Boutet de Monvel calculus for boundary problems for elliptic partial differential operators, which is still an important tool also in index theory. Thirdly, Boutet de Monvel was one of the first people to recognize the importance of the existence of generalized functions, whose singularities are concentrated on a single ray in phase space, which led him to make essential contributions to hypoelliptic operators and to a very successful and influential calculus of Toeplitz operators with applications to spectral and index theory. Other topics treated here include microlocal analysis, star products and deforma...

  12. Breast cancer Ki67 expression preoperative discrimination by DCE-MRI radiomics features

    Science.gov (United States)

    Ma, Wenjuan; Ji, Yu; Qin, Zhuanping; Guo, Xinpeng; Jian, Xiqi; Liu, Peifang

    2018-02-01

    To investigate whether quantitative radiomics features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) are associated with the Ki67 expression of breast cancer. In this institutional review board approved retrospective study, we collected 377 cases of Chinese women who were diagnosed with invasive breast cancer in 2015. This cohort included 53 cases with low Ki67 expression (Ki67 proliferation index less than 14%) and 324 cases with high Ki67 expression (Ki67 proliferation index more than 14%). A binary classification of low- vs. high-Ki67 expression was performed. A set of 52 quantitative radiomics features, including morphological, gray-scale statistic, and texture features, were extracted from the segmented lesion area. Three common machine learning classification methods, including Naive Bayes, k-Nearest Neighbor and support vector machine with Gaussian kernel, were employed for the classification, and the least absolute shrinkage and selection operator (LASSO) method was used to select the most predictive feature set for the classifiers. Classification performance was evaluated by the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity and specificity. The model that used the Naive Bayes classification method achieved better performance than the other two methods, yielding a 0.773 AUC value, 0.757 accuracy, 0.777 sensitivity and 0.769 specificity. Our study showed that quantitative radiomics imaging features of breast tumors extracted from DCE-MRI are associated with breast cancer Ki67 expression. Future larger studies are needed in order to further evaluate the findings.
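    The pipeline described above — LASSO-based feature selection followed by a Naive Bayes classifier evaluated by AUC — can be sketched as follows. This is not the authors' code: the radiomics features and Ki67 labels are not available here, so a synthetic 377-by-52 dataset stands in for them, and scikit-learn's `SelectFromModel` wraps the LASSO selection step.

    ```python
    # Hedged sketch: LASSO feature selection + Gaussian Naive Bayes, on
    # synthetic stand-in data (not the paper's radiomics dataset).
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import Lasso
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    # 377 samples x 52 features, mirroring the cohort and feature counts;
    # class weights mimic the low/high Ki67 imbalance (53 vs. 324).
    X, y = make_classification(n_samples=377, n_features=52, n_informative=8,
                               weights=[0.14, 0.86], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # LASSO drives coefficients of uninformative features to exactly zero;
    # SelectFromModel keeps only the features with non-zero coefficients.
    selector = SelectFromModel(Lasso(alpha=0.01), threshold=1e-6).fit(X_tr, y_tr)
    X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

    clf = GaussianNB().fit(X_tr_sel, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te_sel)[:, 1])
    print(f"selected {X_tr_sel.shape[1]} of 52 features, AUC = {auc:.3f}")
    ```

    On real data one would tune `alpha` by cross-validation rather than fixing it as above.
    
    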

  13. 48 CFR 970.3102-05 - Selected costs.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Selected costs. 970.3102... SUPPLEMENTARY REGULATIONS DOE MANAGEMENT AND OPERATING CONTRACTS Contract Cost Principles and Procedures 970.3102-05 Selected costs. ...

  14. 48 CFR 218.201 - Contingency operation.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Contingency operation. 218... Flexibilities 218.201 Contingency operation. (1) Selection, appointment, and termination of appointment... in a contingency contracting force. See 201.603-2(2). (2) Policy for unique item identification...

  15. Asymptotically Honest Confidence Regions for High Dimensional

    DEFF Research Database (Denmark)

    Caner, Mehmet; Kock, Anders Bredahl

    While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

  16. Sampling Operations on Big Data

    Science.gov (United States)

    2015-11-29

    categories. These include edge sampling methods, where edges are selected by a predetermined criterion, and snowball sampling methods, where algorithms start... process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and

  17. DISIS: prediction of drug response through an iterative sure independence screening.

    Directory of Open Access Journals (Sweden)

    Yun Fang

    Full Text Available Prediction of drug response based on genomic alterations is an important task in the research of personalized medicine. The current elastic net model utilizes sure independence screening to select genomic features relevant to drug response, but it may neglect the combined effect of some marginally weak features. In this work, we applied an iterative sure independence screening scheme to select drug-response-relevant features from the Cancer Cell Line Encyclopedia (CCLE) dataset. For each drug in CCLE, we selected up to 40 features, including gene expression, mutation and copy number alterations of cancer-related genes; some of them are significantly strong features despite showing weak marginal correlation with the drug response vector. Lasso regression based on the selected features showed that our prediction accuracies are higher than those of elastic net regression for most drugs.
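    The two-stage idea — screen features by marginal correlation, then refit on residuals so that marginally weak features get a second chance, and finally run Lasso on the screened set — can be sketched as below. This is a rough illustration on synthetic data, not the DISIS implementation; the `screen` helper and the cap of 40 features (taken from the abstract) are the only assumptions.

    ```python
    # Hedged sketch of iterative sure independence screening + Lasso on
    # synthetic data (the CCLE data is not reproduced here).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 200, 1000                       # many more features than samples
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = [3.0, -2.0, 2.0, 1.5, -1.0]  # only 5 truly relevant features
    y = X @ beta + 0.5 * rng.standard_normal(n)

    def screen(X, r, k):
        """Keep the k features most correlated (in absolute value) with r."""
        corr = np.abs(X.T @ (r - r.mean())) / (np.linalg.norm(X, axis=0) + 1e-12)
        return np.argsort(corr)[-k:]

    # Iteration 1 screens against y; iteration 2 screens against the residual,
    # which can pick up features with weak *marginal* correlation to y.
    kept = screen(X, y, 20)
    model = Lasso(alpha=0.1).fit(X[:, kept], y)
    resid = y - model.predict(X[:, kept])
    kept = np.union1d(kept, screen(X, resid, 20))[:40]   # cap at 40 features
    model = Lasso(alpha=0.1).fit(X[:, kept], y)
    print("features kept:", len(kept),
          "nonzero Lasso coefficients:", int(np.sum(model.coef_ != 0)))
    ```
    
    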

  18. Policies and practices pertaining to the selection, qualification requirements, and training programs for nuclear-reactor operating personnel at the Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Culbert, W.H.

    1985-10-01

    This document describes the policies and practices of the Oak Ridge National Laboratory (ORNL) regarding the selection of and training requirements for reactor operating personnel at the Laboratory's nuclear-reactor facilities. The training programs, both for initial certification and for requalification, are described and provide the guidelines for ensuring that ORNL's research reactors are operated in a safe and reliable manner by qualified personnel. This document gives an overview of the reactor facilities and addresses the various qualifications, training, testing, and requalification requirements stipulated in DOE Order 5480.1A, Chapter VI (Safety of DOE-Owned Reactors); it is intended to be in compliance with this DOE Order, as applicable to ORNL facilities. Included also are examples of the documentation maintained amenable for audit

  19. Operational Research : Congress of APDIO, the Portuguese Operational Research Society

    CERN Document Server

    Almeida, João; Oliveira, José; Pinto, Alberto

    2018-01-01

    This proceedings book presents selected contributions from the XVIII Congress of APDIO (the Portuguese Association of Operational Research), held in Valença on June 28–30, 2017. Prepared by leading Portuguese and international researchers in the field of operations research, it covers a wide range of complex real-world applications of operations research methods using recent theoretical techniques, in order to narrow the gap between academic research and practical applications. Of particular interest are the applications of nonlinear and mixed-integer programming, data envelopment analysis, clustering techniques, hybrid heuristics, supply chain management, and lot sizing and job scheduling problems. In most chapters, the problems, methods and methodologies described are complemented by supporting figures, tables and algorithms. The XVIII Congress of APDIO marked the 18th installment of the regular biannual meetings of APDIO – the Portuguese Association of Operational Research. The meetings bring toget...

  20. Stimulant effects of adenosine antagonists on operant behavior: differential actions of selective A2A and A1 antagonists

    Science.gov (United States)

    Randall, Patrick A.; Nunes, Eric J.; Janniere, Simone L.; Stopper, Colin M.; Farrar, Andrew M.; Sager, Thomas N.; Baqi, Younis; Hockemeyer, Jörg; Müller, Christa E.

    2012-01-01

    Rationale Adenosine A2A antagonists can reverse many of the behavioral effects of dopamine antagonists, including actions on instrumental behavior. However, little is known about the effects of selective adenosine antagonists on operant behavior when these drugs are administered alone. Objective The present studies were undertaken to investigate the potential for rate-dependent stimulant effects of both selective and nonselective adenosine antagonists. Methods Six drugs were tested: two nonselective adenosine antagonists (caffeine and theophylline), two adenosine A1 antagonists (DPCPX and CPT), and two adenosine A2A antagonists (istradefylline (KW6002) and MSX-3). Two schedules of reinforcement were employed; a fixed interval 240-s (FI-240 sec) schedule was used to generate low baseline rates of responding and a fixed ratio 20 (FR20) schedule generated high rates. Results Caffeine and theophylline produced rate-dependent effects on lever pressing, increasing responding on the FI-240 sec schedule but decreasing responding on the FR20 schedule. The A2A antagonists MSX-3 and istradefylline increased FI-240 sec lever pressing but did not suppress FR20 lever pressing in the dose range tested. In fact, there was a tendency for istradefylline to increase FR20 responding at a moderate dose. A1 antagonists failed to increase lever pressing rate, but DPCPX decreased FR20 responding at higher doses. Conclusions These results suggest that adenosine A2A antagonists enhance operant response rates, but A1 antagonists do not. The involvement of adenosine A2A receptors in regulating aspects of instrumental response output and behavioral activation may have implications for the treatment of effort-related psychiatric dysfunctions, such as psychomotor slowing and anergia in depression. PMID:21347642

  1. Genomic-Enabled Prediction Based on Molecular Markers and Pedigree Using the Bayesian Linear Regression Package in R

    Directory of Open Access Journals (Sweden)

    Paulino Pérez

    2010-09-01

    Full Text Available The availability of dense molecular markers has made possible the use of genomic selection in plant and animal breeding. However, models for genomic selection pose several computational and statistical challenges and require specialized computer programs, not always available to the end user and not implemented in standard statistical software yet. The R-package BLR (Bayesian Linear Regression implements several statistical procedures (e.g., Bayesian Ridge Regression, Bayesian LASSO in a unified framework that allows including marker genotypes and pedigree data jointly. This article describes the classes of models implemented in the BLR package and illustrates their use through examples. Some challenges faced when applying genomic-enabled selection, such as model choice, evaluation of predictive ability through cross-validation, and choice of hyper-parameters, are also addressed.
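    The BLR package described above is an R package; as a rough Python stand-in (an assumption of this sketch, not the package itself), scikit-learn's `BayesianRidge` illustrates the same idea of shrinkage-based prediction from many marker covariates, with predictive ability assessed by cross-validation as the abstract recommends.

    ```python
    # Hedged sketch: Bayesian ridge prediction from a synthetic "lines x
    # markers" genotype matrix, evaluated by 5-fold cross-validation.
    import numpy as np
    from sklearn.linear_model import BayesianRidge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_lines, n_markers = 300, 500
    # Marker genotypes coded 0/1/2 (minor-allele counts), a common encoding.
    X = rng.binomial(2, 0.3, size=(n_lines, n_markers)).astype(float)
    # A small fraction of markers carry true effects; the rest are noise.
    effects = rng.normal(0, 0.2, n_markers) * (rng.random(n_markers) < 0.05)
    y = X @ effects + rng.normal(0, 1.0, n_lines)   # phenotype = genetics + noise

    # Cross-validated fit quality is a standard measure of genomic
    # predictive ability.
    scores = cross_val_score(BayesianRidge(), X, y, cv=5, scoring="r2")
    print("cross-validated R^2 per fold:", np.round(scores, 3))
    ```
    
    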

  2. Career path for operations personnel

    International Nuclear Information System (INIS)

    Asher, J.A.

    1985-01-01

    This paper explains how selected personnel can now obtain a Bachelor of Science degree in Physics with a Nuclear Power Operations option. The program went into effect the Fall of 1984. Another program was worked out in 1982 whereby students attending the Nuclear Operators Training Program could obtain an Associates of Science degree in Mechanical Engineering Technology at the end of two years of study. This paper presents tables and charts which describe these programs and outline the career path for operators

  3. Single-snapshot DOA estimation by using Compressed Sensing

    Science.gov (United States)

    Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin

    2014-12-01

    This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ0 minimization, the Sparse Iterative Covariance-based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of the adaptive algorithms (e.g., Capon and MUSIC) even in the single-snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
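    The ℓ1/LASSO beamforming idea can be sketched in a few lines: model the single snapshot as a sparse combination of steering vectors on a dense angle grid and let the Lasso pick out the grid directions with non-zero amplitude. This is a minimal illustration, not any of the paper's four algorithms; for simplicity it assumes a half-wavelength uniform linear array and real, positive source amplitudes so that scikit-learn's real-valued `Lasso` applies after stacking real and imaginary parts.

    ```python
    # Hedged sketch: single-snapshot DOA via sparse recovery on a DOA grid.
    import numpy as np
    from sklearn.linear_model import Lasso

    M = 16                                    # array elements, d = lambda/2
    grid = np.deg2rad(np.arange(-90, 91))     # 1-degree DOA grid
    # Steering matrix: one column per candidate direction.
    A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))

    rng = np.random.default_rng(0)
    true_doas = np.deg2rad([-20.0, 25.0])
    x = sum(np.exp(1j * np.pi * np.arange(M) * np.sin(t)) for t in true_doas)
    x = x + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

    # Real-valued reformulation: stack real and imaginary parts.
    A_r = np.vstack([A.real, A.imag])
    x_r = np.concatenate([x.real, x.imag])
    fit = Lasso(alpha=0.05, fit_intercept=False, positive=True).fit(A_r, x_r)

    # Grid points with large coefficients are the estimated DOAs.
    peaks = np.rad2deg(grid[fit.coef_ > 0.5 * fit.coef_.max()])
    print("estimated DOAs (deg):", peaks)
    ```

    General complex amplitudes would need a group-sparse penalty tying each column's real and imaginary coefficients together, which this simplified sketch sidesteps.
    
    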

  4. Reducing power consumption while performing collective operations on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.

  5. Justification of parameters and selection of equipment for laboratory researches of a rammer's operating element dynamics in a soil foundation of a tank for oil and oil products storage

    Science.gov (United States)

    Gruzin, A. V.; Gruzin, V. V.; Shalay, V. V.

    2017-08-01

    The development of a technology for directional soil compaction of tank foundations for oil and oil products storage is a relevant problem, whose solution would make it possible to provide the required operational characteristics of a soil foundation while reducing the time and material costs of preparing it. The impact dynamics of rammers' operating elements on the soil foundation are to be determined in the course of laboratory studies. A specialized technique was developed to justify the parameters and select the equipment for the laboratory research. Using this technique, we calculated the dimensions of the models and the test bench as well as the specifications of the recording equipment and the lighting system. The necessary equipment for the laboratory studies was selected, preliminary laboratory tests were carried out, and the accuracy of the planned laboratory studies was estimated.

  6. Optimal population prediction of sandhill crane recruitment based on climate-mediated habitat limitations

    Science.gov (United States)

    Gerber, Brian D.; Kendall, William L.; Hooten, Mevin B.; Dubovsky, James A.; Drewien, Roderick C.

    2015-01-01

    Prediction is fundamental to scientific enquiry and application; however, ecologists tend to favour explanatory modelling. We discuss a predictive modelling framework to evaluate ecological hypotheses and to explore novel/unobserved environmental scenarios to assist conservation and management decision-makers. We apply this framework to develop an optimal predictive model for juvenile sandhill crane recruitment, evaluating hypotheses of how drought across multiple time-scales and spring/summer weather affect recruitment. Our predictive modelling framework focuses on developing a single model that includes all relevant predictor variables, regardless of collinearity. This model is then optimized for prediction by controlling model complexity using a data-driven approach that marginalizes or removes irrelevant predictors from the model. Specifically, we highlight two approaches of statistical regularization, Bayesian least absolute shrinkage and selection operator (LASSO) and ridge regression. Our optimal predictive Bayesian LASSO and ridge regression models were similar and on average 37% superior in predictive accuracy to an explanatory modelling approach. Our predictive models confirmed a priori hypotheses that drought and cold summers negatively affect juvenile recruitment in the RMP. The effects of long-term drought can be alleviated by short-term wet spring–summer months; however, the alleviation of long-term drought has a much greater positive effect on juvenile recruitment. The number of freezing days and snowpack during the summer months can also negatively affect recruitment, while spring snowpack has a positive effect. Breeding habitat, mediated through climate, is a limiting factor on population growth of sandhill cranes in the RMP, which could become more limiting with a changing climate (i.e. increased drought). These effects are likely not unique to cranes. The alteration of hydrological patterns and water levels by drought may impact many migratory, wetland-nesting birds in the Rocky Mountains and beyond.
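    The regularization step described above — controlling model complexity so that irrelevant predictors are shrunk or removed, with the penalty tuned for predictive accuracy — can be illustrated as follows. This is not the paper's Bayesian implementation; it uses scikit-learn's cross-validated `LassoCV` and `RidgeCV` on synthetic data as a frequentist stand-in.

    ```python
    # Hedged sketch: prediction-optimized Lasso vs. ridge regularization,
    # with the penalty strength chosen by cross-validation.
    import numpy as np
    from sklearn.linear_model import LassoCV, RidgeCV

    rng = np.random.default_rng(0)
    n, p = 120, 12                     # e.g. observations x candidate climate predictors
    X = rng.standard_normal((n, p))
    # Only the first two predictors truly drive the response.
    y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(n)

    lasso = LassoCV(cv=5).fit(X, y)
    ridge = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)

    # The Lasso removes irrelevant predictors (exact zeros); ridge only
    # shrinks them toward zero.
    print("Lasso nonzero coefficients:", int(np.sum(lasso.coef_ != 0)))
    print("ridge max |coef| among irrelevant predictors:",
          round(float(np.abs(ridge.coef_[2:]).max()), 3))
    ```
    
    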

  7. Relative aggregation operator in database fuzzy querying

    Directory of Open Access Journals (Sweden)

    Luminita DUMITRIU

    2005-12-01

    Full Text Available Fuzzy selection criteria for querying relational databases include vague terms; these usually refer to linguistic values from the attributes' linguistic domains, defined as fuzzy sets. Generally, when a vague query is processed, the definitions of the vague terms must already exist in a knowledge base. But there are also cases when vague terms must be defined dynamically, when a particular operation is used to aggregate simple criteria into a complex selection. The paper presents a new aggregation operator and the corresponding algorithm to evaluate the fuzzy query.

  8. Identifying the white matter impairments among ART-naive HIV patients: a multivariate pattern analysis of DTI data

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Zhenchao [Shandong University, School of Mechanical, Electrical and Information Engineering, Weihai, Shandong Province (China); Institute of Automation, CAS Key Laboratory of Molecular Imaging, Beijing (China); Liu, Zhenyu; Yang, Xin; Wang, Shuo; Yu, Dongdong [Institute of Automation, CAS Key Laboratory of Molecular Imaging, Beijing (China); Li, Ruili; Li, Hongjun [Beijing YouAn Hospital, Capital Medical University, Department of Radiology, Beijing (China); Cui, Xingwei [Zhengzhou University, Cooperative Innovation Center of Internet Healthcare, Zhengzhou (China); Dong, Enqing [Shandong University, School of Mechanical, Electrical and Information Engineering, Weihai, Shandong Province (China); Tian, Jie [Institute of Automation, CAS Key Laboratory of Molecular Imaging, Beijing (China); University of Chinese Academy of Sciences, Beijing (China)

    2017-10-15

    To identify the white matter (WM) impairments of antiretroviral therapy (ART)-naive HIV patients by conducting a multivariate pattern analysis (MVPA) of Diffusion Tensor Imaging (DTI) data, we enrolled 33 ART-naive HIV patients and 32 normal controls in the current study. Firstly, the DTI metrics in whole-brain WM tracts were extracted for each subject and fed into a Least Absolute Shrinkage and Selection Operator (LASSO)-Logistic regression model to identify the impaired WM tracts. Then, a Support Vector Machine (SVM) model was constructed based on the DTI metrics in the impaired WM tracts to classify the HIV and control groups. Pearson correlations between the WM impairments and HIV clinical statistics were also investigated. Extensive HIV-related impairments were observed in the WM tracts associated with motor function, the corpus callosum (CC) and the frontal WM. With leave-one-out cross-validation, an accuracy of 83.08% (P=0.002) and an area under the Receiver Operating Characteristic curve of 0.9110 were obtained with the SVM classification model. The impairments of the CC were significantly correlated with the HIV clinical statistics. The MVPA was sensitive in detecting the HIV-related WM changes. Our findings indicate that the MVPA has considerable potential for exploring the HIV-related WM impairments. (orig.)
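    The two-stage analysis described above — L1-penalized ("LASSO") logistic regression to select discriminative tracts, then an SVM with a Gaussian (RBF) kernel evaluated by leave-one-out cross-validation — can be sketched as below. The DTI metrics are not available here, so a synthetic 65-subject, 40-feature dataset stands in for them, and the whole pipeline is assembled with scikit-learn so that feature selection is refit inside every cross-validation fold.

    ```python
    # Hedged sketch: L1-logistic feature selection + RBF SVM under
    # leave-one-out cross-validation, on synthetic stand-in data.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # 65 "subjects" x 40 "tract metrics" (synthetic; counts mirror the study).
    X, y = make_classification(n_samples=65, n_features=40, n_informative=6,
                               random_state=0)

    pipe = make_pipeline(
        StandardScaler(),
        # L1 penalty zeroes out uninformative tract metrics; SelectFromModel
        # keeps the metrics with non-zero coefficients.
        SelectFromModel(LogisticRegression(penalty="l1", C=0.5,
                                           solver="liblinear")),
        SVC(kernel="rbf"),
    )
    acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
    print(f"leave-one-out accuracy: {acc:.3f}")
    ```

    Wrapping selection inside the pipeline matters: selecting features on the full dataset before cross-validating would leak information and inflate the reported accuracy.
    
    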


  10. The use of UAS in disaster response operations

    Science.gov (United States)

    Gkotsis, I.; Eftychidis, G.; Kolios, P.

    2017-09-01

    The use of UAS by the emergency services has been received with great interest, since UAS provide both informant and helper support in a flexible, effective and efficient manner. This is because UAS can strengthen the operational capabilities related to prevention (e.g., patrolling of large and hard-to-reach areas), early detection (e.g., mapping of vulnerable elements), disaster preparedness (e.g., incident inspection) and response (mapping damage, search and rescue, providing an ad hoc communication network, monitoring evacuation, etc.). Through PREDICATE, a project concerning the civilian use of drones, the necessary methodologies to guide the selection and operational use of UAS in emergencies are developed. To guide UAS selection, the project performed a detailed needs assessment in cooperation with civil protection and law enforcement agencies. As a result of this assessment, currently available technologies and market solutions were reviewed, leading to the development of a user-friendly online tool to support the selection of UAS based on operational requirements. To guide the use of UAS, PREDICATE developed an intelligent path planning toolkit to automate the operation of UAS and ease their use for the various civil protection operations. By employing the aforementioned tools, emergency services will be able to better understand how to select and make use of UAS for watch-keeping and patrolling of their own disaster-prone Regions of Interest. The research, innovation and applicability behind both of these tools are detailed in this work.

  11. {sup 18}F-Fluorodeoxyglucose Positron Emission Tomography Can Quantify and Predict Esophageal Injury During Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Niedzielski, Joshua S., E-mail: jsniedzielski@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Yang, Jinzhong [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States); Liao, Zhongxing; Gomez, Daniel R. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Stingo, Francesco [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Mohan, Radhe; Martel, Mary K.; Briere, Tina M.; Court, Laurence E. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); University of Texas Houston Graduate School of Biomedical Science, Houston, Texas (United States)

    2016-11-01

    Purpose: We sought to investigate the ability of mid-treatment {sup 18}F-fluorodeoxyglucose positron emission tomography (PET) studies to objectively and spatially quantify esophageal injury in vivo from radiation therapy for non-small cell lung cancer. Methods and Materials: This retrospective study was approved by the local institutional review board, with written informed consent obtained before enrollment. We normalized {sup 18}F-fluorodeoxyglucose PET uptake to each patient's low-irradiated region (<5 Gy) of the esophagus, as a radiation response measure. Spatially localized metrics of normalized uptake (normalized standard uptake value [nSUV]) were derived for 79 patients undergoing concurrent chemoradiation therapy for non-small cell lung cancer. We used nSUV metrics to classify esophagitis grade at the time of the PET study, as well as maximum severity by treatment completion, according to National Cancer Institute Common Terminology Criteria for Adverse Events, using multivariate least absolute shrinkage and selection operator (LASSO) logistic regression and repeated 3-fold cross validation (training, validation, and test folds). This 3-fold cross-validation LASSO model procedure was used to predict toxicity progression from 43 asymptomatic patients during the PET study. Dose-volume metrics were also tested in both the multivariate classification and the symptom progression prediction analyses. Classification performance was quantified with the area under the curve (AUC) from receiver operating characteristic analysis on the test set from the 3-fold analyses. Results: Statistical analysis showed increasing nSUV is related to esophagitis severity. Axial-averaged maximum nSUV for 1 esophageal slice and esophageal length with at least 40% of axial-averaged nSUV both had AUCs of 0.85 for classifying grade 2 or higher esophagitis at the time of the PET study and AUCs of 0.91 and 0.92, respectively, for maximum grade 2 or higher by treatment completion
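The repeated 3-fold cross-validated LASSO classification with ROC analysis described above can be sketched as below. The data, feature definitions, and model settings are synthetic placeholders, not the study's implementation.

```python
# Sketch: L1-penalized logistic model scored by ROC AUC under
# repeated stratified 3-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(79, 12))          # 79 patients x 12 nSUV/dose metrics (illustrative)
y = (X[:, 0] + X[:, 1] + rng.normal(scale=1.0, size=79) > 0).astype(int)

model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
cv = RepeatedStratifiedKFold(n_splits=3, n_repeats=10, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
print(f"mean ROC AUC over repeated 3-fold CV = {auc:.2f}")
```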

  12. Selection, training, qualification and licensing of Three Mile Island reactor operating personnel

    International Nuclear Information System (INIS)

    Eytchison, R.M.

    1980-01-01

    The various programs which were intended to staff Three Mile Island with competent, trained operators and supervisors are reviewed. The analysis includes a review of the regulations concerning operator training and licensing, and describes how the requirements were implemented by the NRC, Metropolitan Edison Company, and Babcock and Wilcox Company. Finally the programs conducted by these three organisations are evaluated. (U.K.)

  13. Increase net plant output through selective operation of the heat-rejection system

    International Nuclear Information System (INIS)

    Ostrowski, E.T.; Queenan, P.T.

    1987-01-01

    Depending on unit load and ambient meteorological conditions, a net increase of 800 to 5500 kW in plant output is possible for many generating units through optimized operation of the major motor-driven equipment in the heat-rejection system - the circulating water pumps and mechanical-draft cooling tower fans. This can be realised when the resulting decrease in auxiliary-power demand is greater than the decrease in gross electric generation caused by operating fewer pumps and/or fans. No capital expenditures are incurred and only operating procedures are involved so that the performance gains are achieved at no cost. The paper considers the application of this technique to nuclear power plants, pump optimization and the superimposition of fan and cooling tower performance curves
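The underlying arithmetic is simple: shutting down a pump or fan pays off whenever the auxiliary-power saving exceeds the resulting loss in gross generation. A toy illustration with purely hypothetical numbers:

```python
# Hypothetical figures only, to illustrate the trade-off described above.
fan_motor_kw = 200.0      # auxiliary demand of one cooling-tower fan
gross_loss_kw = 120.0     # drop in gross output from warmer circulating water

net_gain_kw = fan_motor_kw - gross_loss_kw
# A positive net gain means shutting the fan down increases net plant output.
print(f"net change in plant output: {net_gain_kw:+.0f} kW")
```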

  14. Operation and design selection of high temperature superconducting magnetic bearings

    International Nuclear Information System (INIS)

    Werfel, F N; Floegel-Delor, U; Riedel, T; Rothfeld, R; Wippich, D; Goebel, B

    2004-01-01

    Axial and radial high temperature superconducting (HTS) magnetic bearings are evaluated by their parameters. Journal bearings possess advantages over thrust bearings. High magnetic gradients in a multi-pole permanent magnet (PM) configuration, the surrounding melt-textured YBCO stator and adequate designs are the key features for increasing the overall bearing stiffness. The gap distance between rotor and stator determines the specific forces and has a strong impact on the PM rotor design. We report on the design, construction and measurement of a 200 mm prototype 100 kg HTS bearing with an encapsulated and thermally insulated melt-textured YBCO ring stator. The encapsulation requires a magnetically large-gap (4-5 mm) operation but reduces the cryogenic effort substantially. The bearing requires 3 l of LN₂ for cool-down, and about 0.2 l LN₂ h⁻¹ under operation. This is a dramatic improvement in the efficiency and practical usage of HTS magnetic bearings

  15. Policies and practices pertaining to the selection, qualification requirements, and training programs for nuclear-reactor operating personnel at the Oak Ridge National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Culbert, W.H.

    1985-10-01

    This document describes the policies and practices of the Oak Ridge National Laboratory (ORNL) regarding the selection of and training requirements for reactor operating personnel at the Laboratory's nuclear-reactor facilities. The training programs, both for initial certification and for requalification, are described and provide the guidelines for ensuring that ORNL's research reactors are operated in a safe and reliable manner by qualified personnel. This document gives an overview of the reactor facilities and addresses the various qualifications, training, testing, and requalification requirements stipulated in DOE Order 5480.1A, Chapter VI (Safety of DOE-Owned Reactors); it is intended to be in compliance with this DOE Order, as applicable to ORNL facilities. Examples of the documentation maintained for audit purposes are also included.

  16. Substantial criteria of decision of the communities in the selection of a network operator in energy-related concession agreements; Materiellrechtliche Entscheidungskriterien der Gemeinden bei der Auswahl des Netzbetreibers in energiewirtschaftlichen Konzessionsvertraegen

    Energy Technology Data Exchange (ETDEWEB)

    Buedenbender, Ulrich [Technische Univ. Dresden (Germany). Lehrstuhl fuer Buergerliches Recht, Energiewirtschaft und Arbeitsrecht

    2011-07-01

    By virtue of their authority over local roads, the communities decide which partner takes over the operation of the local distribution network for electricity and gas. The question of the selection criteria has been largely neglected. The author of the book under consideration analyzes the relevant issues in energy law and antitrust law, including European legal and constitutional issues. Several aspects to be considered in the lawful selection of the concession contract partner are shown. The legal consequences of disregarding the legally prescribed selection criteria are discussed.

  17. Radiomics analysis of DWI data to identify the rectal cancer patients qualified for local excision after neoadjuvant chemoradiotherapy

    Science.gov (United States)

    Tang, Zhenchao; Liu, Zhenyu; Zhang, Xiaoyan; Shi, Yanjie; Wang, Shou; Fang, Mengjie; Sun, Yingshi; Dong, Enqing; Tian, Jie

    2018-02-01

    Locally advanced rectal cancer (LARC) patients are routinely treated with neoadjuvant chemoradiotherapy (CRT) first and receive total excision afterwards. However, LARC patients may regress to T1N0M0/T0N0M0 stage after CRT, which would qualify them for local excision. Accurate pathological TNM stage, though, can only be obtained by pathological examination after surgery. We aimed to conduct a Radiomics analysis of Diffusion Weighted Imaging (DWI) data to identify the patients in T1N0M0/T0N0M0 stages before surgery, in hope of providing clinical surgery decision support. 223 routinely treated LARC patients in Beijing Cancer Hospital were enrolled in the current study. DWI data and clinical characteristics were collected after CRT. According to the pathological TNM stage, the patients of T1N0M0 and T0N0M0 stages were labelled as 1 and the other patients were labelled as 0. The first 123 patients in chronological order were used as the training set, and the remaining patients as the validation set. 563 image features extracted from the DWI data, together with the clinical characteristics, were used as features. A two-sample t-test was conducted to pre-select the top 50% discriminating features. A Least Absolute Shrinkage and Selection Operator (Lasso)-Logistic regression model was conducted to further select features and construct the classification model. Based on the 14 selected image features, an area under the Receiver Operating Characteristic (ROC) curve (AUC) of 0.8781 and a classification accuracy (ACC) of 0.8432 were achieved in the training set. In the validation set, an AUC of 0.8707 and an ACC of 0.84 were observed.
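The screening scheme described above (rank features by a two-sample t-test, keep the top 50%, then let a LASSO-penalized logistic model pick the final subset) can be sketched as follows. Everything here is synthetic and illustrative; it is not the study's code or data.

```python
# Sketch of t-test screening followed by LASSO-logistic selection.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(123, 60))        # training set: 123 patients x 60 features (illustrative)
y = rng.integers(0, 2, size=123)      # 1 = T0/T1N0M0, 0 = otherwise
X[y == 1, :6] += 0.8                  # make 6 features genuinely discriminative

# Step 1: keep the top 50% of features ranked by two-sample t-statistic
t, _ = ttest_ind(X[y == 1], X[y == 0], axis=0)
keep = np.argsort(-np.abs(t))[: X.shape[1] // 2]

# Step 2: LASSO-logistic on the screened features selects the final subset
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
lasso.fit(X[:, keep], y)
final = keep[np.flatnonzero(lasso.coef_[0])]
print(f"{len(final)} features survive screening + LASSO")
```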

  18. An information offering system for operation support based on plant functional structure

    International Nuclear Information System (INIS)

    Ohga, Yukiharu; Seki, Hiroshi

    1995-01-01

    A plant information offering system was proposed to support operators in their selection and confirmation of the required information for plant operation under transient conditions in nuclear power plants. The system features include an automatic selection method for information and a dialog input method. The former selects plant information in response to plant status changes and operators' demands. The selection is performed based on the knowledge and data as structured by the plant functional structure; i.e. a means-ends abstraction hierarchy model. In the latter, both speech and CRT touch inputs are transformed into words in Japanese to realize an arbitrary input mode combination. The words are analyzed as a sentence before transforming them into a demand for related programs. A prototype system was evaluated using a BWR simulator, assuming abnormal transients such as loss of feedwater. The contents of the offered information were checked based on emergency operation guidelines. It was confirmed that appropriate information items are automatically selected in real time. Answers are generated in reply to the operators' demands. They include information added to reflect the plant conditions. As for dialog, simple and quick input is realized by combining speech and CRT touch according to the operating situation. (author)

  19. Research about reactor operator's personality characteristics and performance

    International Nuclear Information System (INIS)

    Wei Li; He Xuhong; Zhao Bingquan

    2003-01-01

    Predicting and evaluating reactor operators' performance from their personality characteristics is an important part of reactor operator safety assessment. Using relevant psychological theory, combined with the realities of Chinese operators and considering the effect of environmental factors on personality analysis, this paper investigates the relationships between reactor operators' performance and personality characteristics, and offers a reference for future operator selection, assignment and performance evaluation. (author)

  20. Non-operative management of abdominal gunshot injuries: Is it safe in all cases?

    Science.gov (United States)

    İflazoğlu, Nidal; Üreyen, Orhan; Öner, Osman Zekai; Meral, Ulvi Mehmet; Yülüklü, Murat

    2018-01-01

    In line with advances in diagnostic methods and expectation of a decrease in the number of negative laparotomies, selective non-operative management of abdominal gunshot wounds has been increasingly used over the last three decades. We aim to detect the possibility of treatment without surgery and present our experience in selected cases referred from Syria to a hospital at the Turkish-Syrian border. Between February 2012 and June 2014, patients admitted with abdominal gunshot wounds were analyzed. Computed tomography was performed for all patients on admission. Patients who were hemodynamically stable and did not have symptoms of peritonitis at the time of presentation were included in the study. The primary outcome parameters were mortality and morbidity. Successful selective non-operative management (Group 1) and unsuccessful selective non-operative management (Group 2) groups were compared in terms of complications, blood transfusion, injury site, injury severity score (ISS), and hospital stay. Of 158 truncal injury patients, 18 were considered feasible for selective non-operative management. Of these, 14 (78%) patients were treated without surgery. The other four patients were operated on owing to progressively increasing abdominal pain and tenderness during follow-up. On diagnostic exploration, all of these cases had intestinal perforations. No mortality was observed in selective non-operative management. There was no statistically significant difference between Group 1 and Group 2 in terms of length of hospital stay (96 and 127 h, respectively). Also, there was no difference between groups in terms of blood transfusion necessity, injury site, complication rate, and injury severity score (p>0.05). Decision making on patient selection for selective non-operative management is critical to ensure favorable outcomes. It is not possible to predict the success of selective non-operative management in advance. Cautious clinical examination and close monitoring of these

  1. Disease Definition for Schizophrenia by Functional Connectivity Using Radiomics Strategy.

    Science.gov (United States)

    Cui, Long-Biao; Liu, Lin; Wang, Hua-Ning; Wang, Liu-Xian; Guo, Fan; Xi, Yi-Bin; Liu, Ting-Ting; Li, Chen; Tian, Ping; Liu, Kang; Wu, Wen-Jun; Chen, Yi-Huan; Qin, Wei; Yin, Hong

    2018-02-17

    A specific biomarker reflecting the neurobiological substrates of schizophrenia (SZ) is required for its diagnosis and treatment selection. Evidence from neuroimaging has implicated disrupted functional connectivity in the pathophysiology. We aimed to develop and validate a method of disease definition for SZ by resting-state functional connectivity using a radiomics strategy. This study included 2 data sets collected with different scanners. A total of 108 first-episode SZ patients and 121 healthy controls (HCs) participated in the current study, among whom 80% of patients and HCs (n = 183) were selected for training and 20% (n = 46) for testing in intra-data set validation, while 1 of the 2 data sets was selected for training and the other for testing in inter-data set validation. Functional connectivity was calculated for both groups, features were selected by the Least Absolute Shrinkage and Selection Operator (LASSO) method, and the clinical utility of the features and the generalizability of effects across samples were assessed using machine learning by training and validating multivariate classifiers in the independent samples. We found that the accuracy of intra-data set training was 87.09% for diagnosing SZ patients by applying functional connectivity features, with validation in the independent replication data set (accuracy = 82.61%). The inter-data set validation further confirmed the disease definition by functional connectivity features (accuracy = 83.15% for training and 80.07% for testing). Our findings demonstrate a valid radiomics approach using functional connectivity to diagnose SZ, which should help facilitate objective, individualized diagnosis of SZ using a quantitative and specific functional connectivity biomarker.

  2. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    International Nuclear Information System (INIS)

    Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-01-01

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study
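The concordance index (C-index) used above to score the hazard models can be computed directly from pairwise comparisons. A minimal sketch for right-censored data follows; the times, events, and risk scores are invented for illustration.

```python
# Minimal C-index for right-censored survival data. A pair is comparable
# when the earlier time corresponds to an observed event; it is concordant
# when the model assigns that subject the higher risk.
import numpy as np

def c_index(time, event, risk):
    conc = ties = total = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # comparable pair: subject i has the earlier time and an observed event
            if time[i] < time[j] and event[i]:
                total += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / total

time  = np.array([5.0, 8.0, 12.0, 20.0])
event = np.array([1, 1, 0, 1])          # 0 = censored
risk  = np.array([0.9, 0.7, 0.4, 0.1])  # higher = predicted higher hazard
print(c_index(time, event, risk))       # perfect ranking -> 1.0
```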

  3. CRAN - Package msgl (Version:2.0.125.0)

    DEFF Research Database (Denmark)

    2014-01-01

    Sparse group lasso multiclass classification, suitable for high-dimensional problems with many classes. Fast algorithm for solving the multinomial sparse group lasso convex optimization problem. This package applies template metaprogramming techniques; therefore, when compiling the package from source, a high level of optimization is needed to gain full speed (e.g. for the GCC compiler use -O3). Use of multiple processors for cross validation and subsampling is supported through OpenMP. The Armadillo C++ library is used as the primary linear algebra engine.

  4. Techniques on semiautomatic segmentation using the Adobe Photoshop

    Science.gov (United States)

    Park, Jin Seo; Chung, Min Suk; Hwang, Sung Bae

    2005-04-01

    The purpose of this research is to enable anybody to semiautomatically segment the anatomical structures in MRIs, CTs, and other medical images on a personal computer. The segmented images are used for making three-dimensional images, which are helpful in medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was MR scanned to make 557 MRIs, which were transferred to a personal computer. On Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using the MAGNETIC LASSO TOOL and successively corrected manually using either the LASSO TOOL or the DIRECT SELECTION TOOL to make 557 segmented images. Likewise, 11 anatomical structures in the 8,500 anatomical images were segmented, as were 12 brain and 10 heart anatomical structures in anatomical images. Proper segmentation was verified by making and examining the coronal, sagittal, and three-dimensional images from the segmented images. During semiautomatic segmentation on Adobe Photoshop, a suitable algorithm could be used, the extent of automation could be regulated, a convenient user interface could be used, and software bugs rarely occurred. The techniques of semiautomatic segmentation using Adobe Photoshop are expected to be widely used for segmentation of the anatomical structures in various medical images.

  5. Optimization of Cognitive Radio Secondary Information Gathering Station Positioning and Operating Channel Selection for IoT Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jinyi Wen

    2018-01-01

    The Internet of Things (IoT) is the interconnection of different objects through the internet using different communication technologies. The objects are equipped with sensors and communications modules. The cognitive radio network is a key technique for the IoT and can effectively address spectrum-related issues for IoT applications. In our paper, a novel method for IoT sensor networks is proposed to obtain the optimal positions of secondary information gathering stations (SIGSs) and to select the optimal operating channel. Our objective is to maximize secondary system capacity while protecting the primary system. In addition, we propose an appearance probability matrix for secondary IoT devices (SIDs) to maximize the supportable number of SIDs that can be installed in a car, in wearable devices, or for other monitoring devices, based on optimal deployment and probability. We derive fitness functions based on the above objectives and also consider signal to interference-plus-noise ratio (SINR) and position constraints. The particle swarm optimization (PSO) technique is used to find the best position and operating channel for the SIGSs. In a simulation study, the performance of the proposed method is evaluated and compared with a random resource allocation algorithm (parts of this paper were presented at the ICTC2017 conference (Wen et al., 2017)).
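A minimal PSO loop of the kind the paper employs can be sketched as below. Here it only minimizes a toy 2-D quadratic standing in for the capacity/interference fitness function; the swarm size, inertia, and acceleration coefficients are illustrative choices, not the paper's settings.

```python
# Minimal particle swarm optimization (PSO) sketch on a toy objective.
import numpy as np

rng = np.random.default_rng(3)

def fitness(p):
    # stand-in objective: squared distance to the (unknown) optimum (3, -2)
    return np.sum((p - np.array([3.0, -2.0])) ** 2, axis=-1)

n, dim, iters = 20, 2, 200
pos = rng.uniform(-10, 10, size=(n, dim))   # candidate station positions
vel = np.zeros((n, dim))
pbest = pos.copy()                          # each particle's personal best
pbest_val = fitness(pbest)

for _ in range(iters):
    gbest = pbest[np.argmin(pbest_val)]     # swarm's global best
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = fitness(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]

print("best position ~", np.round(pbest[np.argmin(pbest_val)], 2))
```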

  6. Standardized comparison of the relative impacts of HIV-1 reverse transcriptase (RT) mutations on nucleoside RT inhibitor susceptibility.

    Science.gov (United States)

    Melikian, George L; Rhee, Soo-Yon; Taylor, Jonathan; Fessel, W Jeffrey; Kaufman, David; Towner, William; Troia-Cancio, Paolo V; Zolopa, Andrew; Robbins, Gregory K; Kagan, Ron; Israelski, Dennis; Shafer, Robert W

    2012-05-01

    Determining the phenotypic impacts of reverse transcriptase (RT) mutations on individual nucleoside RT inhibitors (NRTIs) has remained a statistical challenge because clinical NRTI-resistant HIV-1 isolates usually contain multiple mutations, often in complex patterns, complicating the task of determining the relative contribution of each mutation to HIV drug resistance. Furthermore, the NRTIs have highly variable dynamic susceptibility ranges, making it difficult to determine the relative effect of an RT mutation on susceptibility to different NRTIs. In this study, we analyzed 1,273 genotyped HIV-1 isolates for which phenotypic results were obtained using the PhenoSense assay (Monogram, South San Francisco, CA). We used a parsimonious feature selection algorithm, LASSO, to assess the possible contributions of 177 mutations that occurred in 10 or more isolates in our data set. We then used least-squares regression to quantify the impact of each LASSO-selected mutation on each NRTI. Our study provides a comprehensive view of the most common NRTI resistance mutations. Because our results were standardized, the study provides the first analysis that quantifies the relative phenotypic effects of NRTI resistance mutations on each of the NRTIs. In addition, the study contains new findings on the relative impacts of thymidine analog mutations (TAMs) on susceptibility to abacavir and tenofovir; the impacts of several known but incompletely characterized mutations, including E40F, V75T, Y115F, and K219R; and a tentative role in reduced NRTI susceptibility for K64H, a novel NRTI resistance mutation.
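The two-stage analysis described above (LASSO to screen candidate mutations, then least-squares regression to quantify each selected mutation's effect) can be sketched as follows. The 0/1 mutation matrix, mutation indices, and effect sizes are synthetic illustrations, not the study's data.

```python
# Sketch: LASSO screening followed by an ordinary-least-squares refit.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(300, 50)).astype(float)  # isolates x mutation indicators
true = np.zeros(50)
true[[3, 7, 19]] = [1.5, 0.8, -0.6]                   # invented per-mutation effects
y = X @ true + rng.normal(scale=0.2, size=300)        # e.g. log fold resistance

sel = np.flatnonzero(Lasso(alpha=0.05).fit(X, y).coef_)   # stage 1: screen
effects = LinearRegression().fit(X[:, sel], y).coef_      # stage 2: quantify
for idx, eff in zip(sel, effects):
    print(f"mutation {idx}: effect {eff:+.2f}")
```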

  7. An independent system operator's perspective on operational ramp forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Porter, G. [New Brunswick System Operator, Fredericton, NB (Canada)

    2010-07-01

    One of the principal roles of the power system operator is to select the most economical resources to reliably supply electric system power needs. Operational wind power production forecasts are required by system operators in order to understand the impact of ramp event forecasting on dispatch functions. A centralized dispatch approach can contribute to a more efficient use of resources than traditional economic dispatch methods. Wind ramping events can have a significant impact on system reliability. Power systems can have constrained or robust transmission systems, and may also be islanded or have large connections to neighbouring systems. Power resources can include both flexible and inflexible generation resources. Wind integration tools must be used by system operators to improve communications and connections with wind power plants. Improved wind forecasting techniques are also needed. Sensitivity to forecast errors is dependent on current system conditions. System operators require basic production forecasts, probabilistic forecasts, and event forecasts. Forecasting errors were presented as well as charts outlining the implications of various forecasts. tabs., figs.

  8. Real-time x-ray fluoroscopy-based catheter detection and tracking for cardiac electrophysiology interventions

    Energy Technology Data Exchange (ETDEWEB)

    Ma Yingliang; Housden, R. James; Razavi, Reza; Rhode, Kawal S. [Division of Imaging Sciences and Biomedical Engineering, King's College London, London SE1 7EH (United Kingdom); Gogin, Nicolas; Cathier, Pascal [Medisys Research Group, Philips Healthcare, Paris 92156 (France); Gijsbers, Geert [Interventional X-ray, Philips Healthcare, Best 5680 DA (Netherlands); Cooklin, Michael; O'Neill, Mark; Gill, Jaswinder; Rinaldi, C. Aldo [Department of Cardiology, Guy's and St. Thomas' Hospitals NHS Foundation Trust, London SE1 7EH (United Kingdom)

    2013-07-15

    Purpose: X-ray fluoroscopically guided cardiac electrophysiology (EP) procedures are commonly carried out to treat patients with arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of a three-dimensional (3D) roadmap derived from preprocedural volumetric images can be used to add anatomical information. It is useful to know the position of the catheter electrodes relative to the cardiac anatomy, for example, to record ablation therapy locations during atrial fibrillation therapy. Also, the electrode positions of the coronary sinus (CS) catheter or lasso catheter can be used for road map motion correction.Methods: In this paper, the authors present a novel unified computational framework for image-based catheter detection and tracking without any user interaction. The proposed framework includes fast blob detection, shape-constrained searching and model-based detection. In addition, catheter tracking methods were designed based on the customized catheter models input from the detection method. Three real-time detection and tracking methods are derived from the computational framework to detect or track the three most common types of catheters in EP procedures: the ablation catheter, the CS catheter, and the lasso catheter. Since the proposed methods use the same blob detection method to extract key information from x-ray images, the ablation, CS, and lasso catheters can be detected and tracked simultaneously in real-time.Results: The catheter detection methods were tested on 105 different clinical fluoroscopy sequences taken from 31 clinical procedures. Two-dimensional (2D) detection errors of 0.50 {+-} 0.29, 0.92 {+-} 0.61, and 0.63 {+-} 0.45 mm as well as success rates of 99.4%, 97.2%, and 88.9% were achieved for the CS catheter, ablation catheter, and lasso catheter, respectively. 
With the tracking method, accuracies were increased to 0.45 ± 0.28, 0.64 ± 0.37, and 0.53 ± 0.38 mm and success rates increased to 100%, 99

  9. Safety assessment for TA-48 radiochemical operations

    International Nuclear Information System (INIS)

    1994-08-01

    The purpose of this report is to document an assessment performed to evaluate the safety of the radiochemical operations conducted at the Los Alamos National Laboratory operations area designated as TA-48. This Safety Assessment for the TA-48 radiochemical operations was prepared to fulfill the requirements of US Department of Energy (DOE) Order 5481.1B, "Safety Analysis and Review System." The area designated as TA-48 is operated by the Chemical Science and Technology (CST) Division and is involved with radiochemical operations associated with nuclear weapons testing, evaluation of samples collected from a variety of environmental sources, and nuclear medicine activities. This report documents a systematic evaluation of the hazards associated with the radiochemical operations that are conducted at TA-48. The accident analyses are limited to evaluation of the expected consequences associated with a few bounding accident scenarios that are selected as part of the hazard analysis. Section 2 of this report presents an executive summary and conclusions, Section 3 presents pertinent information concerning the TA-48 site and surrounding area, Section 4 presents a description of the TA-48 radiochemical operations, and Section 5 presents a description of the individual facilities. Section 6 of the report presents an evaluation of the hazards that are associated with the TA-48 operations and Section 7 presents a detailed analysis of selected accident scenarios

  10. The Establishment of Object Selection Criteria for Effect Analysis of Electromagnetic Pulse (EMP) in Operating Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Ye, Song Hae; Ryu, Hosun; Kim, Minyi; Lee, Euijong [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The electromagnetic pulse (EMP) can be used as a strategic weapon by inducing damaging voltages and currents that electrical circuits are not designed to withstand. EMPs are lethal to electronic systems. All EMP events have three common components: a source, a coupling path, and a receptor. An EMP can also travel across power grids, destroying electronics as it passes, in less than a second. There have been no research studies on the effect analysis for EMP in domestic nuclear power plants and power grids. To ensure the safety of operating nuclear power plants in this environment, an analysis of EMP effects and safety measures against EMPs is needed. In practice, it is difficult and inefficient to conduct the effect analysis of EMP on all the equipment and systems in nuclear power plants (NPPs). Therefore, this paper presents the results of establishing the object selection criteria for the effect analysis of EMP in operating nuclear power plants through reviewing previous research in the US and the safety related design concepts in domestic NPPs. It is not necessary to ensure the continued operation of the plant in intense multiple EMP environments. The most probable effect of EMP on a modern nuclear power plant is an unscheduled shutdown. EMP may also cause an extended shutdown by the unnecessary activation of some safety related systems. In general, EMP can be considered a nuisance to nuclear plants, but it is not considered a serious threat to plant safety. The results of EMP effect analysis show less possibility of failure in the tested individual equipment. It was also confirmed that there is no possibility of simultaneous failure for devices in charge of the safety shutdown in the NPP.

  11. Monitoring selected arthropods

    Science.gov (United States)

    R. Chris Stanton; David J. Horn; Foster F. Purrington; John W. Peacock; Eric H. Metzler

    2003-01-01

    Arthropod populations were sampled in four study areas in southern Ohio in 1995 to document patterns of arthropod diversity and establish a baseline dataset for long-term monitoring in mixed-oak forests. Pitfall, Malaise, and blacklight traps were operated in 12 treatment units from May through September. Several insect groups were selected for detailed study due to...

  12. Modifications of center-surround, spot detection and dot-pattern selective operators

    NARCIS (Netherlands)

    Petkov, Nicolai; Visser, Wicher T.

    2005-01-01

    This paper describes modifications of the models of center-surround and dot-pattern selective cells proposed previously. These modifications concern mainly the normalization of the difference of Gaussians (DoG) function used to model center-surround receptive fields, the normalization of

  13. Influence of Mining Operation on Selected Factors of Environment in the Area of Nižná Slaná

    Directory of Open Access Journals (Sweden)

    Erika Fedorová

    2004-12-01

    Full Text Available The area of Nižná Slaná, in which the only iron-ore mining plant in Slovakia, Siderit, Ltd., is operating, has long been known for its mining activities. In the past, interest focused mainly on gold, copper, and mercury and, from the beginning of the 20th century, also on iron. Thermal technologies are applied in the production of Fe-concentrates and, finally, of blast-furnace pellets suitable for metallurgical processing. The operation of such technologies is often connected with air pollution and, through this factor, also with the contamination of other components of the environment. The emission situation is observed by suitable monitoring systems, but little information on immissions has been available in recent years. An improvement came from the cooperation between the Siderit plant and the Institute of Geotechnics SAV. The Institute carries out observations of the immission load on selected environmental factors from the viewpoint of solid pollutants, dustiness, SO2, As and Hg. Recently, the dustfall has been monitored at 17 sampling points in the surroundings of the plant. On the basis of the obtained monitoring results, it can be stated that the immission load has gradually decreased over the last observed period.

  14. Operations planning simulation: Model study

    Science.gov (United States)

    1974-01-01

    The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternate system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation which have potential advantages to the system user and the operator. These advantages are measures of system efficiency: (1) the ability to meet specific schedules for operations, mission or mission-readiness requirements, or performance standards, and (2) the ability to accomplish the objectives within cost-effective limits.

  15. Diffusion Indexes with Sparse Loadings

    DEFF Research Database (Denmark)

    Kristensen, Johannes Tang

    The use of large-dimensional factor models in forecasting has received much attention in the literature with the consensus being that improvements on forecasts can be achieved when comparing with standard models. However, recent contributions in the literature have demonstrated that care needs...... to the problem by using the LASSO as a variable selection method to choose between the possible variables and thus obtain sparse loadings from which factors or diffusion indexes can be formed. This allows us to build a more parsimonious factor model which is better suited for forecasting compared...... it to be an important alternative to PC....

  16. Lassoing the Determinants of Retirement

    DEFF Research Database (Denmark)

    Kallestrup-Lamb, Malene; Kock, Anders Bredahl; Kristensen, Johannes Tang

    2016-01-01

    This article uses Danish register data to explain the retirement decision of workers in 1990 and 1998. Many variables might be conjectured to influence this decision such as demographic, socioeconomic, financial, and health related variables as well as all the same factors for the spouse in case ...... that this is the case for core variables such as age, income, wealth, and general health. We also point out the most important differences between these groups and explain why these might be present.......This article uses Danish register data to explain the retirement decision of workers in 1990 and 1998. Many variables might be conjectured to influence this decision such as demographic, socioeconomic, financial, and health related variables as well as all the same factors for the spouse in case...

  17. Lassoing the Determinants of Retirement

    DEFF Research Database (Denmark)

    Kallestrup-Lamb, Malene; Kock, Anders Bredahl; Kristensen, Johannes Tang

    This paper uses Danish register data to explain the retirement decision of workers in 1990 and 1998. Many variables might be conjectured to influence this decision, such as demographic, socio-economic, financial, and health-related variables, as well as all the same factors for the spouse in case t...... such as age, income, wealth and general health. We also point out the most important differences between these groups and explain why these might be present.......This paper uses Danish register data to explain the retirement decision of workers in 1990 and 1998. Many variables might be conjectured to influence this decision, such as demographic, socio-economic, financial, and health-related variables, as well as all the same factors for the spouse in case...

  18. A Practical Review of Studies on Operator's Supervisory Monitoring Behavior

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2009-01-01

    Correct situation awareness (SA) has been considered a crucial key to improving performance and reducing error in NPPs. There are a lot of information sources that should be monitored in NPPs, but operators have only limited capacity of attention and memory. Operators in NPPs selectively attend to important information sources to effectively develop SA when an abnormal or accidental situation occurs. Selective attention to important information sources is continued while maintaining SA as well. In this work, various models of operator's visual sampling behavior are reviewed for the use in human factors studies in NPPs

  19. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    Science.gov (United States)

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimal solution rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM and often predicted better than Elastic Net SVM in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM
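The record above combines the SCAD and ridge penalties. As a minimal illustration of what that combination looks like (a sketch, not the authors' implementation), the SCAD penalty is a piecewise function of the coefficient magnitude, LASSO-like near zero and flat beyond a·λ, with the conventional a = 3.7; the function names below are ours:

```python
def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty (Fan and Li) for a single coefficient.

    Linear (LASSO-like) for |beta| <= lam, quadratically flattening
    for lam < |beta| <= a*lam, and constant beyond a*lam, so large
    coefficients are not shrunk.  Requires a > 2; a = 3.7 is the
    usual default.
    """
    b = abs(beta)
    if b <= lam:
        return lam * b
    if b <= a * lam:
        return -(b ** 2 - 2 * a * lam * b + lam ** 2) / (2 * (a - 1))
    return (a + 1) * lam ** 2 / 2


def elastic_scad_penalty(beta, lam1, lam2, a=3.7):
    """Sketch of the record's Elastic SCAD idea: SCAD plus a ridge term."""
    return scad_penalty(beta, lam1, a) + lam2 * beta ** 2
```

Note the piecewise definition is continuous: at |β| = λ both branches give λ², and at |β| = aλ both give (a+1)λ²/2, which is why the penalty can transition from shrinkage to no shrinkage without a jump.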

  20. Inverted Nipple Correction with Selective Dissection of Lactiferous Ducts Using an Operative Microscope and a Traction Technique.

    Science.gov (United States)

    Sowa, Yoshihiro; Itsukage, Sizu; Morita, Daiki; Numajiri, Toshiaki

    2017-10-01

    An inverted nipple is a common congenital condition in young women that may cause breastfeeding difficulty, psychological distress, repeated inflammation, and loss of sensation. Various surgical techniques have been reported for correction of inverted nipples, and all have advantages and disadvantages. Here, we report a new technique for correction of an inverted nipple using an operative microscope and traction that results in low recurrence and preserves lactation function and sensation. Between January 2010 and January 2013, we treated eight inverted nipples in seven patients with selective lactiferous duct dissection using an operative microscope. An opposite Z-plasty was added at the junction of the nipple and areola. Postoperatively, traction was applied through an apparatus made from a rubber gasket attached to a sterile syringe. Patients were followed up for 15-48 months. Adequate projection was achieved in all patients, and there was no wound dehiscence or complications such as infection. Three patients had successful pregnancies and subsequent breastfeeding that was not adversely affected by the treatment. There was no loss of sensation in any patient during the postoperative period. Our technique for treating an inverted nipple is effective and preserves lactation function and nipple sensation. The method maintains traction for a longer period, which we believe increases the success rate of the surgery for correction of severely inverted nipples. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  1. Use of selective serotonin reuptake inhibitors and risk of re-operation due to post-surgical bleeding in breast cancer patients: a Danish population-based cohort study

    Directory of Open Access Journals (Sweden)

    Lash Timothy L

    2010-01-01

    Full Text Available Abstract Background Selective serotonin reuptake inhibitors (SSRI) decrease platelet function, which suggests that SSRI use may increase the risk of post-surgical bleeding. Few studies have investigated this potential association. Methods We conducted a population-based study of the risk of re-operation due to post-surgical bleeding within two weeks of primary surgery among Danish women with primary breast cancer. Patients were categorised according to their use of SSRI: never users, current users (SSRI prescription within 30 days of initial breast cancer surgery), and former users (SSRI prescription more than 30 days before initial breast cancer surgery). We calculated the risk of re-operation due to post-surgical bleeding within 14 days of initial surgery, and the relative risk (RR) of re-operation comparing SSRI users with never users of SSRI, adjusting for potential confounders. Results 389 of 14,464 women (2.7%) were re-operated. 1592 (11%) had a history of SSRI use. Risk of re-operation was 2.6% among never users, 7.0% among current SSRI users, and 2.7% among former users. Current users thus had an increased risk of re-operation due to post-operative bleeding (adjusted relative risk = 2.3; 95% confidence interval (CI) = 1.4, 3.9) compared with never users. There was no increased risk of re-operation associated with former use of SSRI (RR = 0.93, 95% CI = 0.66, 1.3). Conclusions Current use of SSRI is associated with an increased risk of re-operation due to bleeding after surgery for breast cancer.

  2. Development and application of nuclear power operation database

    International Nuclear Information System (INIS)

    Shao Juying; Fang Zhaoxia

    1996-01-01

    The article describes the development of the Nuclear Power Operation Database which include Domestic and Overseas Nuclear Event Scale Database, Overseas Nuclear Power Operation Abnormal Event Database, Overseas Nuclear Power Operation General Reliability Database and Qinshan Nuclear Power Operation Abnormal Event Database. The development includes data collection and analysis, database construction and code design, database management system selection. The application of the database to provide support to the safety analysis of the NPPs which have been in commercial operation is also introduced

  3. Nuclear thermal rocket engine operation and control

    International Nuclear Information System (INIS)

    Gunn, S.V.; Savoie, M.T.; Hundal, R.

    1993-06-01

    The operation of a typical Rover/Nerva-derived nuclear thermal rocket (NTR) engine is characterized and the control requirements of the NTR are defined. A rationale for the selection of a candidate diverse redundant NTR engine control system is presented and the projected component operating requirements are related to the state of the art of candidate components and subsystems. The projected operational capabilities of the candidate system are delineated for the startup, full-thrust, shutdown, and decay heat removal phases of the engine operation. 9 refs

  4. Selecting appropriate cases when tracing causal mechanisms

    DEFF Research Database (Denmark)

    Beach, Derek; Pedersen, Rasmus Brun

    2016-01-01

    The last decade has witnessed resurgence in the interest in studying the causal mechanisms linking causes and outcomes in the social sciences. This article explores the overlooked implications for case selection when tracing mechanisms using in-depth case studies. Our argument is that existing case...... selection guidelines are appropriate for research aimed at making cross-case claims about causal relationships, where case selection is primarily used to control for other causes. However, existing guidelines are not in alignment with case-based research that aims to trace mechanisms, where the goal...... is to unpack the causal mechanism between X and Y, enabling causal inferences to be made because empirical evidence is provided for how the mechanism actually operated in a particular case. The in-depth, within-case tracing of how mechanisms operate in particular cases produces what can be termed mechanistic...

  5. A Two-Pass Exact Algorithm for Selection on Parallel Disk Systems.

    Science.gov (United States)

    Mi, Tian; Rajasekaran, Sanguthevar

    2013-07-01

    Numerous OLAP queries process selection operations such as "top N", median, and "top 5%" in data warehousing applications. Selection is a well-studied problem that has numerous applications in the management of data and databases since, typically, any complex data query can be reduced to a series of basic operations such as sorting and selection. Parallel selection has also become an important fundamental operation, especially after parallel databases were introduced. In this paper, we present a deterministic algorithm, Recursive Sampling Selection (RSS), to solve the exact out-of-core selection problem, which we show needs no more than (2 + ε) passes (ε being a very small fraction). We have compared our RSS algorithm with two other algorithms in the literature, namely, Deterministic Sampling Selection (DSS) and QuickSelect on Parallel Disk Systems. Our analysis shows that DSS is a (2 + ε)-pass algorithm when the total number of input elements N is a polynomial in the memory size M (i.e., N = M^c for some constant c). In contrast, our proposed algorithm RSS runs in (2 + ε) passes without any such assumption. Experimental results indicate that both RSS and DSS outperform QuickSelect on Parallel Disk Systems. In particular, the proposed algorithm RSS is more scalable and robust in handling big data when the input size is far greater than the core memory size, including the case of N ≫ M^c.
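The abstract does not give the algorithm itself, but the general two-pass, sampling-based selection idea this family of algorithms builds on can be sketched in a few lines. In pass one a random sample brackets the target rank with two pivots; in pass two only elements inside the bracket are kept and the exact answer is found among them. All names, the 1-based rank convention, and the slack heuristic below are illustrative assumptions, not taken from the paper:

```python
import random

def select_kth(stream_factory, n, k, sample_rate=0.05, seed=0):
    """Find the k-th smallest (1-based) of n elements in two passes.

    stream_factory: callable returning a fresh iterable over the data,
    standing in for re-reading the data from disk on each pass.
    """
    rng = random.Random(seed)
    # Pass 1: draw a sample and pick pivots bracketing rank k.
    sample = sorted(x for x in stream_factory() if rng.random() < sample_rate)
    m = len(sample)
    target = (k / n) * m
    slack = 3 * m ** 0.5          # safety margin in sample-index units
    lo_i = max(0, int(target - slack))
    hi_i = min(m - 1, int(target + slack))
    lo = float("-inf") if lo_i == 0 else sample[lo_i]
    hi = float("inf") if hi_i == m - 1 else sample[hi_i]
    # Pass 2: count elements below the bracket, keep those inside it.
    below = 0
    candidates = []
    for x in stream_factory():
        if x < lo:
            below += 1
        elif x <= hi:
            candidates.append(x)
    candidates.sort()
    idx = k - 1 - below
    if not 0 <= idx < len(candidates):
        raise RuntimeError("bracket missed the target rank; widen the slack")
    return candidates[idx]
```

With high probability the true k-th element lies between the two pivots, so only a small candidate set ever has to fit in memory; the "(2 + ε)" in the abstract reflects the rare retry when the bracket misses.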

  6. Analysis of remote operating systems for space-based servicing operations. Volume 2: Study results

    Science.gov (United States)

    1985-01-01

    The developments in automation and robotics have increased the importance of applications for space-based servicing using remotely operated systems. A study on three basic remote operating systems (teleoperation, telepresence and robotics) was performed in two phases. In phase one, requirements development, which consisted of one three-month task, a group of ten missions was selected. These included the servicing of user equipment on the station and the servicing of the station itself. In phase two, concepts development, which consisted of three tasks, overall system concepts were developed for the selected missions. These concepts, which include worksite servicing equipment, a carrier system, and payload handling equipment, were evaluated relative to the configurations of the overall worksite. It is found that the robotic/teleoperator concepts are appropriate for relatively simple, structured tasks, while the telepresence/teleoperator concepts are applicable for missions involving complex, unstructured tasks.

  7. Model validity and frequency band selection in operational modal analysis

    Science.gov (United States)

    Au, Siu-Kui

    2016-12-01

    Experimental modal analysis aims at identifying the modal properties (e.g., natural frequencies, damping ratios, mode shapes) of a structure using vibration measurements. Two basic questions are encountered when operating in the frequency domain: Is there a mode near a particular frequency? If so, how much spectral data near the frequency can be included for modal identification without incurring significant modeling error? For data with high signal-to-noise (s/n) ratios these questions can be addressed using empirical tools such as singular value spectrum. Otherwise they are generally open and can be challenging, e.g., for modes with low s/n ratios or close modes. In this work these questions are addressed using a Bayesian approach. The focus is on operational modal analysis, i.e., with 'output-only' ambient data, where identification uncertainty and modeling error can be significant and their control is most demanding. The approach leads to 'evidence ratios' quantifying the relative plausibility of competing sets of modeling assumptions. The latter involves modeling the 'what-if-not' situation, which is non-trivial but is resolved by systematic consideration of alternative models and using maximum entropy principle. Synthetic and field data are considered to investigate the behavior of evidence ratios and how they should be interpreted in practical applications.

  8. Considerations in Physiological Metric Selection for Online Detection of Operator State: A Case Study

    Science.gov (United States)

    2016-07-17

    multiple unmanned aerial vehicles (UAVs) to decrease demand for operators, safeguard human lives, increase efficiency of operations, and increase...often referred to as the "vigilance decrement" and can occur as a result of monotony or sustained periods of high task-load. The vigilance decrement ... decrements resulting from fatigue may occur even before an operator is aware of them [15] and thus performance measures can be more useful than subjective

  9. Selected Hanford reactor and separations operating data for 1960--1964

    Energy Technology Data Exchange (ETDEWEB)

    Gydesen, S.P.

    1992-09-01

    The purpose of this letter report is to reconstruct from available information the data that can be used to develop a daily reactor operating history for 1960--1964. The information needed for source term calculations (as determined by the Source Terms Task Leader) was extracted and included in this report. The data on the amount of uranium dissolved by the separations plants (expressed both as tons and as MW) are also included in this compilation.

  10. Selected Hanford reactor and separations operating data for 1960--1964

    International Nuclear Information System (INIS)

    Gydesen, S.P.

    1992-09-01

    The purpose of this letter report is to reconstruct from available information the data that can be used to develop a daily reactor operating history for 1960--1964. The information needed for source term calculations (as determined by the Source Terms Task Leader) was extracted and included in this report. The data on the amount of uranium dissolved by the separations plants (expressed both as tons and as MW) are also included in this compilation

  11. Mobile Geospatial Information Systems for Land Force Operations: Analysis of Operational Needs and Research Opportunities

    Science.gov (United States)

    2010-03-01

    road barriers (e.g., dragon's teeth) and searches vehicles for weapons and proper license plates. The rifleman also escorts VIPs after conveying the...Are there automated systems that know that in X scenario, operator Y would want to see Z, or is there an exhaustive list of options that the operator...directional) or a trackball (moves in any direction). Selection is made by depressing the wheel/ball. • Keyboard – The size of the keyboard can

  12. Variable importance analysis based on rank aggregation with applications in metabolomics for biomarker discovery.

    Science.gov (United States)

    Yun, Yong-Huan; Deng, Bai-Chuan; Cao, Dong-Sheng; Wang, Wei-Ting; Liang, Yi-Zeng

    2016-03-10

    Biomarker discovery is one important goal in metabolomics, which is typically modeled as selecting the most discriminating metabolites for classification and often referred to as variable importance analysis or variable selection. Until now, a number of variable importance analysis methods to discover biomarkers in metabolomics studies have been proposed. However, different methods are most likely to generate different variable ranking results due to their different principles. Each method generates a variable ranking list, just as an expert presents an opinion. The problem of inconsistency between different variable ranking methods is often ignored. To address this problem, a simple and ideal solution is that every ranking should be taken into account. In this study, a strategy called rank aggregation was employed. It is an indispensable tool for merging individual ranking lists into a single "super"-list reflective of the overall preference or importance within the population. This "super"-list is regarded as the final ranking for biomarker discovery. Finally, it was used for biomarker discovery and for selecting the best variable subset with the highest predictive classification accuracy. Nine methods were used, including three univariate filtering and six multivariate methods. When applied to two metabolic datasets (Childhood overweight dataset and Tubulointerstitial lesions dataset), the results show that rank aggregation performs well, with higher prediction accuracy than using all variables. Moreover, it is also better than the penalized method, the least absolute shrinkage and selection operator (LASSO), achieving higher prediction accuracy or selecting fewer variables which are more interpretable. Copyright © 2016 Elsevier B.V. All rights reserved.
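The rank-aggregation idea in the record above — merging several expert ranking lists into one "super"-list — can be illustrated with Borda count, one of the simplest aggregation schemes. The study does not say which aggregation rule it actually uses, so this is a generic sketch:

```python
def aggregate_ranks(ranking_lists):
    """Merge several variable-ranking lists into one 'super'-list.

    Borda count: in a list of length n, the variable at position p
    (0-based) earns n - p points; variables are re-ranked by total
    score across all lists, highest first.
    """
    scores = {}
    for ranking in ranking_lists:
        n = len(ranking)
        for pos, var in enumerate(ranking):
            scores[var] = scores.get(var, 0) + (n - pos)
    return sorted(scores, key=lambda v: scores[v], reverse=True)

# Three "experts" (e.g., three variable-importance methods) disagree
# on the ordering of metabolites a, b, c:
super_list = aggregate_ranks([["a", "b", "c"],
                              ["b", "a", "c"],
                              ["a", "c", "b"]])
```

Here `a` scores 3+2+3 = 8 points, `b` scores 6, and `c` scores 4, so the aggregated ranking is `["a", "b", "c"]` even though no two input lists agree.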

  13. Multivariate pattern analysis strategies in detection of remitted major depressive disorder using resting state functional connectivity

    Directory of Open Access Journals (Sweden)

    Runa Bhaumik

    2017-01-01

    Full Text Available Understanding abnormal resting-state functional connectivity of distributed brain networks may aid in probing and targeting mechanisms involved in major depressive disorder (MDD). To date, few studies have used resting state functional magnetic resonance imaging (rs-fMRI) to attempt to discriminate individuals with MDD from individuals without MDD, and to our knowledge no investigations have examined a remitted (r) population. In this study, we examined the efficiency of a support vector machine (SVM) classifier to successfully discriminate rMDD individuals from healthy controls (HCs) in a narrow early-adult age range. We empirically evaluated four feature selection methods, including the multivariate Least Absolute Shrinkage and Selection Operator (LASSO) and Elastic Net feature selection algorithms. Our results showed that SVM classification with Elastic Net feature selection achieved the highest classification accuracy of 76.1% (sensitivity of 81.5% and specificity of 68.9%) by leave-one-out cross-validation across subjects from a dataset consisting of 38 rMDD individuals and 29 healthy controls. The highest discriminating functional connections were between the left amygdala, left posterior cingulate cortex, bilateral dorso-lateral prefrontal cortex, and right ventral striatum. These appear to be key nodes in the etiopathophysiology of MDD, within and between default mode, salience and cognitive control networks. This technique demonstrates early promise for using rs-fMRI connectivity as a putative neurobiological marker capable of distinguishing between individuals with and without rMDD. These methods may be extended to periods of risk prior to illness onset, thereby allowing for earlier diagnosis, prevention, and intervention.
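The LASSO and Elastic Net feature selection used in this record rely on L1 shrinkage to drive uninformative coefficients exactly to zero. A minimal pure-Python sketch of the LASSO mechanism — cyclic coordinate descent with the soft-thresholding operator — is shown below; it is illustrative only (the study applied these penalties inside an SVM pipeline on fMRI connectivity data), and assumes standardized feature columns:

```python
def soft_threshold(z, t):
    """LASSO proximal operator: shrink z toward zero by t."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for
        min_b  (1/2n) * ||y - X b||^2 + lam * ||b||_1.
    X is a list of rows with standardized columns.
    """
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            rho, zj = 0.0, 0.0
            for i in range(n):
                # residual with feature j's contribution removed
                r_ij = y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                rho += X[i][j] * r_ij
                zj += X[i][j] ** 2
            beta[j] = soft_threshold(rho / n, lam) / (zj / n)
    return beta

# Toy data: y depends only on the first feature (y = 2 * x1), so the
# second coefficient should be shrunk exactly to zero.
X = [[1, 1], [-1, 1], [1, -1], [-1, -1]]
y = [2, -2, 2, -2]
beta = lasso_cd(X, y, lam=0.1)
```

With orthogonal standardized columns the solution is just the soft-thresholded correlation: the first coefficient lands at 2 − 0.1 = 1.9 and the second at exactly 0, which is the "automatic feature selection" property the record exploits.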

  14. Multivariate pattern analysis strategies in detection of remitted major depressive disorder using resting state functional connectivity.

    Science.gov (United States)

    Bhaumik, Runa; Jenkins, Lisanne M; Gowins, Jennifer R; Jacobs, Rachel H; Barba, Alyssa; Bhaumik, Dulal K; Langenecker, Scott A

    2017-01-01

    Understanding abnormal resting-state functional connectivity of distributed brain networks may aid in probing and targeting mechanisms involved in major depressive disorder (MDD). To date, few studies have used resting state functional magnetic resonance imaging (rs-fMRI) to attempt to discriminate individuals with MDD from individuals without MDD, and to our knowledge no investigations have examined a remitted (r) population. In this study, we examined the efficiency of support vector machine (SVM) classifier to successfully discriminate rMDD individuals from healthy controls (HCs) in a narrow early-adult age range. We empirically evaluated four feature selection methods including multivariate Least Absolute Shrinkage and Selection Operator (LASSO) and Elastic Net feature selection algorithms. Our results showed that SVM classification with Elastic Net feature selection achieved the highest classification accuracy of 76.1% (sensitivity of 81.5% and specificity of 68.9%) by leave-one-out cross-validation across subjects from a dataset consisting of 38 rMDD individuals and 29 healthy controls. The highest discriminating functional connections were between the left amygdala, left posterior cingulate cortex, bilateral dorso-lateral prefrontal cortex, and right ventral striatum. These appear to be key nodes in the etiopathophysiology of MDD, within and between default mode, salience and cognitive control networks. This technique demonstrates early promise for using rs-fMRI connectivity as a putative neurobiological marker capable of distinguishing between individuals with and without rMDD. These methods may be extended to periods of risk prior to illness onset, thereby allowing for earlier diagnosis, prevention, and intervention.

  15. Identification of Urinary Polyphenol Metabolite Patterns Associated with Polyphenol-Rich Food Intake in Adults from Four European Countries

    Directory of Open Access Journals (Sweden)

    Hwayoung Noh

    2017-07-01

We identified urinary polyphenol metabolite patterns by a novel algorithm that combines dimension reduction and variable selection methods to explain polyphenol-rich food intake, and compared their performance with that of single biomarkers in the European Prospective Investigation into Cancer and Nutrition (EPIC) study. The study included 475 adults from four European countries (Germany, France, Italy, and Greece). Dietary intakes were assessed with 24-h dietary recalls (24-HDR) and dietary questionnaires (DQ). Thirty-four polyphenols were measured by ultra-performance liquid chromatography–electrospray ionization-tandem mass spectrometry (UPLC-ESI-MS-MS) in 24-h urine. Reduced rank regression-based variable importance in projection (RRR-VIP) and least absolute shrinkage and selection operator (LASSO) methods were used to select polyphenol metabolites. Reduced rank regression (RRR) was then used to identify patterns in these metabolites, maximizing the explained variability in intake of pre-selected polyphenol-rich foods. The performance of RRR models was evaluated using internal cross-validation to control for over-optimistic findings from over-fitting. High performance was observed for explaining recent intake (24-HDR) of red wine (r = 0.65; AUC = 89.1%), coffee (r = 0.51; AUC = 89.1%), and olives (r = 0.35; AUC = 82.2%). These metabolite patterns performed better than or as well as single polyphenol biomarkers. Neither metabolite patterns nor single biomarkers performed well in explaining habitual intake (as reported in the DQ) of polyphenol-rich foods. This proposed strategy of biomarker pattern identification has the potential of expanding the currently still limited list of available dietary intake biomarkers.

  16. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  17. [Selective neck dissection for treating recurrent branchial anomalies].

    Science.gov (United States)

    Chen, Liangsi; Song, Xinhan; Zhang, Siyi; Han, Zhijuan; Luo, Xiaoning; Chen, Shaohua; Zhan, Jiandong

    2011-01-01

To evaluate the role of selective neck dissection in the treatment of recurrent branchial anomalies, the clinical data of 18 patients with recurrent branchial anomalies were retrospectively analyzed. In accordance with the embryologic and anatomic features of branchial anomalies, different types of selective neck dissection were applied. With dissection and protection of important vessels, nerves, and other structures, en bloc resection principles were applied to extirpate branchial lesions, scarring, and inflammatory granulomas during the operation. Of all 18 patients, 16 achieved primary healing; 2 cases with local incision infection healed after dressing changes. A temporary facial nerve paralysis occurred postoperatively in 1 case with a recurrent first branchial cleft fistula and completely recovered 2 months after the operation. A postoperative temporary vocal cord paralysis occurred in 1 case with a recurrent fourth branchial cleft fistula and fully recovered 1 month after the operation. No recurrences were found in the 18 cases over a follow-up period of 12-78 months (average 35 months). Selective neck dissection is a safe and effective surgical procedure for the radical treatment of recurrent branchial anomalies.

  18. Pretreatment 18F-FDG PET Textural Features in Locally Advanced Non-Small Cell Lung Cancer: Secondary Analysis of ACRIN 6668/RTOG 0235.

    Science.gov (United States)

    Ohri, Nitin; Duan, Fenghai; Snyder, Bradley S; Wei, Bo; Machtay, Mitchell; Alavi, Abass; Siegel, Barry A; Johnson, Douglas W; Bradley, Jeffrey D; DeNittis, Albert; Werner-Wasik, Maria; El Naqa, Issam

    2016-06-01

In a secondary analysis of American College of Radiology Imaging Network (ACRIN) 6668/RTOG 0235, high pretreatment metabolic tumor volume (MTV) on (18)F-FDG PET was found to be a poor prognostic factor for patients treated with chemoradiotherapy for locally advanced non-small cell lung cancer (NSCLC). Here we utilize the same dataset to explore whether heterogeneity metrics based on PET textural features can provide additional prognostic information. Patients with locally advanced NSCLC underwent (18)F-FDG PET prior to treatment. A gradient-based segmentation tool was used to contour each patient's primary tumor. MTV, maximum SUV, and 43 textural features were extracted for each tumor. To address overfitting and high collinearity among PET features, the least absolute shrinkage and selection operator (LASSO) method was applied to identify features that were independent predictors of overall survival (OS) after adjusting for MTV. Recursive binary partitioning in a conditional inference framework was utilized to identify optimal thresholds. Kaplan-Meier curves and log-rank testing were used to compare outcomes among patient groups. Two hundred one patients met inclusion criteria. The LASSO procedure identified 1 textural feature (SumMean) as an independent predictor of OS. The optimal cutpoint for MTV was 93.3 cm(3), and the optimal SumMean cutpoint for tumors above 93.3 cm(3) was 0.018. This grouped patients into three categories: low tumor MTV (n = 155; median OS, 22.6 mo), high tumor MTV and high SumMean (n = 23; median OS, 20.0 mo), and high tumor MTV and low SumMean (n = 23; median OS, 6.2 mo; log-rank P < 0.001). We have demonstrated an approach for evaluating textural PET features in the context of established prognostic factors. We have also identified a promising feature that may have prognostic value in locally advanced NSCLC patients with large tumors who are treated with chemoradiotherapy. Validation studies are warranted. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
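The cutpoint step in this record (recursive binary partitioning on a single continuous predictor) can be illustrated with a depth-one decision tree. The abstract's conditional inference framework is an R facility and is not in scikit-learn; a CART stump shows the same idea of searching for an optimal threshold. The data below are synthetic placeholders, not the study's.

```python
# Sketch: find an "optimal cutpoint" on a continuous predictor (here a
# stand-in for MTV) with a depth-one decision tree, i.e. a single split.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
mtv = rng.uniform(5, 300, size=201).reshape(-1, 1)   # tumor volumes, cm^3
# synthetic outcome: an adverse event made more likely by large tumors
event = (mtv.ravel() + rng.normal(0, 60, size=201) > 150).astype(int)

stump = DecisionTreeClassifier(max_depth=1).fit(mtv, event)
cutpoint = stump.tree_.threshold[0]   # the single split the tree found
print(f"optimal cutpoint: {cutpoint:.1f} cm^3")
```

The split that best separates outcomes plays the role of the dichotomizing threshold; the study then compares the resulting patient groups with Kaplan-Meier curves and log-rank tests.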

  20. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    International Nuclear Information System (INIS)

    Krafft, S; Briere, T; Court, L; Martel, M

    2015-01-01

Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data were collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade ≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing the area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC ∼ 0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC = 0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC = 0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP modeling are warranted.
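The model comparison in this record — a single-predictor baseline versus a LASSO logistic regression over a larger feature set, scored by AUC — can be sketched as below. This is an illustrative reconstruction with simulated data; the feature counts and the "MLD" column are placeholders, not the study's variables.

```python
# Sketch: compare a one-feature baseline against L1-penalized (LASSO)
# logistic regression on a wider feature set, using cross-validated AUC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=198, n_features=60, n_informative=8,
                           random_state=1)
mld = X[:, [0]]            # stand-in for the mean-lung-dose-only baseline

def cv_auc(features):
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(penalty="l1", solver="liblinear"))
    scores = cross_val_predict(model, features, y, cv=5,
                               method="predict_proba")[:, 1]
    return roc_auc_score(y, scores)

auc_mld, auc_all = cv_auc(mld), cv_auc(X)
print(f"baseline AUC: {auc_mld:.3f}; full feature set AUC: {auc_all:.3f}")
```

The L1 penalty drives most coefficients to exactly zero, which is what makes LASSO usable when candidate predictors (3888 in the study) vastly outnumber patients.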

  1. Selectivity of the gas sensor based on the 50%In2O3-50%Ga2O3 thin film in dynamic mode of operation

    Science.gov (United States)

    Demin, I. E.; Kozlov, A. G.

    2018-01-01

The article considers a gas sensor with a sensitive layer based on the 50%In2O3-50%Ga2O3 thin film. The temperature and concentration dependences of the gas-induced resistance response of this sensor, and the dynamic dependences of its resistance response on the test gases in air, are investigated. The test gases were ethanol, acetone, ammonia, and liquefied petroleum gas. The information parameters of the sensor in the dynamic mode of operation were considered as a means to improve its selectivity. The presented results show that the selectivity of the sensor in this mode may be improved by using the following information parameters: the gas-induced resistance response in steady state, the activation energy of the response, and the pre-exponential factor of the temperature dependence of the response time constant.

  2. Digital computer operation of a nuclear reactor

    International Nuclear Information System (INIS)

    Colley, R.W.

    1984-01-01

A method is described for the safe operation of a complex system, such as a nuclear reactor, using a digital computer. The computer is supplied with a database containing a list of the safe states of the reactor and a list of operating instructions for achieving a safe state. When the actual state of the reactor does not correspond to a listed safe state, the computer selects operating instructions to return the reactor to a safe state.
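The lookup scheme this patent abstract describes — compare the measured state against a table of safe states and, on a mismatch, fetch stored instructions — can be sketched as follows. The state names and instructions are invented placeholders for illustration only.

```python
# Minimal sketch of a safe-state lookup: a set of known safe states and a
# table mapping unsafe states to the operating instructions that restore safety.
SAFE_STATES = {
    ("rods_inserted", "coolant_flow_normal"),
    ("rods_inserted", "coolant_flow_high"),
}
INSTRUCTIONS = {
    ("rods_withdrawn", "coolant_flow_normal"): ["insert control rods"],
    ("rods_inserted", "coolant_flow_low"): ["start auxiliary coolant pump"],
}

def advise(state):
    """Return the operating instructions needed to reach a safe state."""
    if state in SAFE_STATES:
        return []                                    # already safe: no action
    return INSTRUCTIONS.get(state, ["shut down reactor"])  # default action

print(advise(("rods_inserted", "coolant_flow_low")))
```

The essential property is that every reachable unsafe state maps to some instruction list, with a conservative default for states not in the table.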

  3. The site selection process

    International Nuclear Information System (INIS)

    Kittel, J.H.

    1989-01-01

    One of the most arduous tasks associated with the management of radioactive wastes is the siting of new disposal facilities. Experience has shown that the performance of the disposal facility during and after disposal operations is critically dependent on the characteristics of the site itself. The site selection process consists of defining needs and objectives, identifying geographic regions of interest, screening and selecting candidate sites, collecting data on the candidate sites, and finally selecting the preferred site. Before the site selection procedures can be implemented, however, a formal legal system must be in place that defines broad objectives and, most importantly, clearly establishes responsibilities and accompanying authorities for the decision-making steps in the procedure. Site selection authorities should make every effort to develop trust and credibility with the public, local officials, and the news media. The responsibilities of supporting agencies must also be spelled out. Finally, a stable funding arrangement must be established so that activities such as data collection can proceed without interruption. Several examples, both international and within the US, are given

  4. Environmental impact of ongoing operation

    International Nuclear Information System (INIS)

    Henry, L.C.

    1980-07-01

    Present technology in the management of uranium mine and mill wastes, coupled with appropriate site selection, quality construction and good operating procedures, can ensure that impacts on health, safety and the environment will be acceptably low over the period of operation. The methods of chemical and physical stabilization of the tailings and retention structures are also compatible with close-out procedures and will ensure that any releases to the environment will continue to be within the requirements, assuming the continued availability of surveillance

  5. Selecting an oxygen plant for a copper smelter modernization

    Science.gov (United States)

    Larson, Kenneth H.; Hutchison, Robert L.

    1994-10-01

The selection of an oxygen plant for the Cyprus Miami smelter modernization project began with a good definition of the use requirements and the smelter process variables that can affect oxygen demand. To achieve a reliable supply of oxygen with a reasonable amount of capital, critical equipment items were reviewed, and reliability was added through the use of installed spares, the purchase of insurance spare parts, or the installation of equipment designed for 50 percent of the production capacity, such that the plant could operate with one unit while the other unit is being maintained. The operating range of the plant was selected to cover variability in smelter oxygen demand, and it was recognized that the broader operating range sacrificed about two to three percent in plant power consumption. Careful consideration of the plant "design point" was important to both the capital and operating costs of the plant, and a design point was specified that allowed a broad range of operation for maximum flexibility.

  6. Quantifying predictive capability of electronic health records for the most harmful breast cancer

    Science.gov (United States)

    Wu, Yirong; Fan, Jun; Peissig, Peggy; Berg, Richard; Tafti, Ahmad Pahlavan; Yin, Jie; Yuan, Ming; Page, David; Cox, Jennifer; Burnside, Elizabeth S.

    2018-03-01

Improved prediction of the "most harmful" breast cancers that cause the most substantive morbidity and mortality would enable physicians to target more intense screening and preventive measures at those women who have the highest risk; however, such prediction models for the "most harmful" breast cancers have rarely been developed. Electronic health records (EHRs) represent an underused data source that has great research and clinical potential. Our goal was to quantify the value of EHR variables in "most harmful" breast cancer risk prediction. We identified 794 subjects who had breast cancer with primary non-benign tumors with their earliest diagnosis on or after 1/1/2004 from an existing personalized medicine data repository, including 395 "most harmful" breast cancer cases and 399 "least harmful" breast cancer cases. For these subjects, we collected EHR data comprising 6 components: demographics, diagnoses, symptoms, procedures, medications, and laboratory results. We developed two regularized prediction models, Ridge Logistic Regression (Ridge-LR) and Lasso Logistic Regression (Lasso-LR), to predict the "most harmful" breast cancer one year in advance. The area under the ROC curve (AUC) was used to assess model performance. We observed that the AUCs of the Ridge-LR and Lasso-LR models were 0.818 and 0.839, respectively. For both the Ridge-LR and Lasso-LR models, the predictive performance of the whole set of EHR variables was significantly higher than that of each individual component (P < .05), demonstrating the value of EHR data in predicting the "most harmful" breast cancer and providing the possibility to personalize care for those women at the highest risk in clinical practice.
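The two regularized models this record names, Ridge-LR (L2 penalty) and Lasso-LR (L1 penalty), differ only in how coefficients are shrunk; a minimal side-by-side sketch follows. The EHR features are simulated, and the split and penalty strengths are assumptions, not the study's setup.

```python
# Sketch: ridge (L2) vs. lasso (L1) logistic regression compared by AUC
# on a held-out set, mirroring the record's Ridge-LR / Lasso-LR comparison.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=794, n_features=120, n_informative=15,
                           random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

aucs = {}
for name, penalty, solver in [("Ridge-LR", "l2", "lbfgs"),
                              ("Lasso-LR", "l1", "liblinear")]:
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(penalty=penalty, solver=solver))
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(aucs)
```

With many correlated EHR-style features, ridge spreads weight across the correlated group while lasso tends to keep one representative and zero the rest, which is why the two can score differently on the same data.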

  7. Bacterial selection during the formation of early-stage aerobic granules in wastewater treatment systems operated under wash-out dynamics

    Directory of Open Access Journals (Sweden)

    David Gregory Weissbrodt

    2012-09-01

Aerobic granular sludge is attractive for high-rate biological wastewater treatment. Biomass wash-out conditions stimulate the formation of aerobic granules. Deteriorated performances in biomass settling and nutrient removal during start-up have, however, often been reported. The effect of wash-out dynamics on bacterial selection, biomass settling behavior, and metabolic activities was investigated during the formation of early-stage granules from activated sludge of two wastewater treatment plants (WWTP) over start-up periods of at most 60 days. Five bubble-column sequencing batch reactors were operated with feast-famine regimes consisting of rapid pulse or slow anaerobic feeding followed by aerobic starvation. Slow-settling fluffy granules were formed when an insufficient superficial air velocity (SAV; 1.8 cm s-1) was applied, when the inoculation sludge was taken from a WWTP removing organic matter only, or when reactors were operated at 30°C. Fast-settling dense granules were obtained with 4.0 cm s-1 SAV, or when the inoculation sludge was taken from a WWTP removing all nutrients biologically. However, only carbon was aerobically removed during start-up. Fluffy granules and dense granules displayed distinct predominant phylotypes, namely filamentous Burkholderiales affiliates and Zoogloea relatives, respectively. The latter were predominant in dense granules independently of the feeding regime. A combination of insufficient solid retention time and leakage of acetate into the aeration phase during intensive biomass wash-out caused the proliferation of Zoogloea spp. in dense granules and the deterioration of BNR performances. It is, however, not certain that Zoogloea-like organisms are essential in granule formation. Optimal operating conditions should be elucidated for maintaining a balance between organisms with granulation propensity and nutrient-removing organisms, in order to form granules with BNR activities within short start-up periods.

  9. Exploring Large Scale Data Analysis and Visualization for ARM Data Discovery Using NoSQL Technologies

    Science.gov (United States)

    Krishna, B.; Gustafson, W. I., Jr.; Vogelmann, A. M.; Toto, T.; Devarakonda, R.; Palanisamy, G.

    2016-12-01

This paper presents a new way of providing ARM data discovery through data analysis and visualization services. The Atmospheric Radiation Measurement (ARM) Program was created to study cloud formation processes and their influence on radiative transfer, and it also includes additional measurements of aerosol and precipitation at various highly instrumented ground and mobile stations. The total volume of ARM data is roughly 900 TB. The current search for ARM data is performed using its metadata, such as the site name, instrument name, date, etc. NoSQL technologies were explored to improve the capabilities of data searching, not only by metadata, but also by measurement values. Two technologies currently being implemented for testing are Apache Cassandra (a NoSQL database) and Apache Spark (a NoSQL-based analytics framework). Both of these technologies were developed to work in a distributed environment and hence can handle large data volumes for storage and analytics. D3.js is a JavaScript library that can generate interactive data visualizations in web browsers by making use of the commonly used SVG, HTML5, and CSS standards. To test the performance of NoSQL for ARM data, we will use ARM's popular measurements to locate the data based on their values. Recently, NoSQL technology has been applied to a pilot project called LASSO, which stands for LES ARM Symbiotic Simulation and Observation Workflow. LASSO will be packaging LES output and observations in "data bundles", and analyses will require the ability for users to analyze both observations and LES model output, either individually or together, across multiple time periods. The LASSO implementation strategy suggests that enormous data storage is required to store the above-mentioned quantities. Thus NoSQL was used to provide a powerful means to store portions of the data, giving users search capabilities on each simulation's traits through a web application based on the user's selection.

  10. Comparison and selection of off-grid PV systems

    Science.gov (United States)

    Izmailov, Andrey Yu.; Lobachevsky, Yakov P.; Shepovalova, Olga V.

    2018-05-01

This work deals with the comparison, evaluation, and selection of PV systems of the same type based on their technical parameters, either indicated in their technical specifications or calculated. Stand-alone and grid-backed-up photoelectric systems have been considered. General requirements for photoelectric system selection and evaluation have been presented that ensure system operability and the required efficiency under operating conditions. Generic principles and definitions of photoelectric system characteristics have been considered. The described method is mainly targeted at PV engineering personnel and private customers purchasing PV systems. It can also be applied in the course of project contests, tenders, etc.

  11. A content analysis of the papers published in the Journal of School of Business Administration: Operations Research and Operations Management (1972 -2007)

    OpenAIRE

    Akçay Kasapoğlu, Özlem

    2012-01-01

In this study, operations research and operations management papers published between 1972 and 2007 in the Journal of the School of Business Administration, Istanbul University, are assessed. It is aimed to reach general conclusions on the qualitative and quantitative characteristics of the operations research and operations management papers published in the journal. Additionally, a content analysis of some selected papers is done. During the research, 161 articles are in...

  12. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  13. Science Operations Management

    Science.gov (United States)

    Squibb, Gael F.

    1984-10-01

    The operation teams for the Infrared Astronomical Satellite (IRAS) included scientists from the IRAS International Science Team. The scientific decisions on an hour-to-hour basis, as well as the long-term strategic decisions, were made by science team members. The IRAS scientists were involved in the analysis of the instrument performance, the analysis of the quality of the data, the decision to reacquire data that was contaminated by radiation effects, the strategy for acquiring the survey data, and the process for using the telescope for additional observations, as well as the processing decisions required to ensure the publication of the final scientific products by end of flight operations plus one year. Early in the project, two science team members were selected to be responsible for the scientific operational decisions. One, located at the operations control center in England, was responsible for the scientific aspects of the satellite operations; the other, located at the scientific processing center in Pasadena, was responsible for the scientific aspects of the processing. These science team members were then responsible for approving the design and test of the tools to support their responsibilities and then, after launch, for using these tools in making their decisions. The ability of the project to generate the final science data products one year after the end of flight operations is due in a large measure to the active participation of the science team members in the operations. This paper presents a summary of the operational experiences gained from this scientific involvement.

  14. Training device for nuclear power plant operators

    International Nuclear Information System (INIS)

    Schoessow, G. J.

    1985-01-01

A simulated nuclear energy power plant system with visible internal working components comprising a reactor adapted to contain a liquid with heating elements submerged in the liquid and capable of heating the liquid to an elevated temperature; a steam generator containing water and a heat exchanger means to receive the liquid at an elevated temperature, transform the water to steam, and return the spent liquid to the reactor; a steam turbine receiving high energy steam to drive the turbine and discharging low energy steam to a condenser where the low energy steam is condensed to water which is returned to the steam generator; an electric generator driven by the turbine; indicating means to identify the physical status of the reactor and its contents; and manual and automatic controls to selectively establish normal or abnormal operating conditions in the reactor, steam generator, pressurizer, turbine, electric generator, condenser, and pumps, and to be selectively adjusted to bring the reactor to an acceptable operating condition after being placed in an abnormal operation. This device is particularly useful as an educational device in demonstrating nuclear reactor operations and in training operating personnel for nuclear reactor systems, and also as a device for conducting research on various safety systems to improve the safety of nuclear power plants.

  15. Individual-Tree Diameter Growth Models for Mixed Nothofagus Second Growth Forests in Southern Chile

    Directory of Open Access Journals (Sweden)

    Paulo C. Moreno

    2017-12-01

Second growth forests of Nothofagus obliqua (roble), N. alpina (raulí), and N. dombeyi (coihue), known locally as RORACO, are among the most important native mixed forests in Chile. To improve the sustainable management of these forests, managers need adequate information and models regarding not only existing forest conditions, but also their future states under alternative silvicultural activities. In this study, an individual-tree diameter growth model was developed for the full geographical distribution of the RORACO forest type. This was achieved by fitting a complete model and comparing two variable selection procedures: cross-validation (CV) and least absolute shrinkage and selection operator (LASSO) regression. A small set of predictors successfully explained a large portion of the annual increment in diameter at breast height (DBH) growth, particularly variables associated with competition at both the tree and stand level. Goodness-of-fit statistics for the final model showed an empirical coefficient of correlation (R2emp) of 0.56, a relative root mean square error of 44.49%, and a relative bias of −1.96% for annual DBH growth predictions, and R2emp of 0.98 and 0.97 for DBH projection at 6 and 12 years, respectively. This model constitutes a simple and useful tool to support management plans for these forest ecosystems.
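The LASSO variable-selection step this record describes — pruning a large pool of candidate predictors of annual DBH growth down to a small set — can be sketched with `LassoCV`, which chooses the penalty by cross-validation. The predictors below are simulated stand-ins for the tree- and stand-level variables, not the study's data.

```python
# Sketch: LassoCV picks the regularization strength by cross-validation and
# zeroes out weak predictors, leaving a sparse regression model.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 500, 30
X = rng.normal(size=(n, p))               # candidate predictors
beta = np.zeros(p)
beta[:4] = [1.5, -1.0, 0.8, 0.5]          # only the first 4 truly matter
y = X @ beta + rng.normal(0, 1, n)        # response: annual DBH increment

lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
selected = np.flatnonzero(lasso.coef_)    # indices with nonzero coefficients
print(f"selected {selected.size} of {p} predictors: {selected}")
```

Because the L1 penalty sets coefficients exactly to zero, the surviving indices are the selected variables; CV-based selection (the record's other procedure) instead scores candidate subsets directly by out-of-sample error.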

  16. Operating experience with snubbers

    International Nuclear Information System (INIS)

    Levin, H.; Cudlin, R.

    1978-06-01

    Recent operating experience with hydraulic and mechanical snubbers has indicated that there is a need to evaluate current practice in the industry associated with snubber qualification testing programs, design and analysis procedures, selection and specification criteria, and the preservice inspection and inservice surveillance programs. The report provides a summary of operational experiences that represent problems that are generic throughout the industry. Generic Task A-13 is part of the NRC Program for the Resolution of Generic Issues Related to Nuclear Power Plants described in NUREG-0410. The report is based upon a rather large amount of data that have become available in the past four years. These data have been evaluated by the Division of Operating Reactors to develop a data base for use in connection with several NRC activities including Category A, Technical Activity A-13 (Snubbers); the Standard Review Plan; future Regulatory Guides; ASME Code Provisions; and various technical specifications of operating nuclear power plants

  17. Estimating Influenza Outbreaks Using Both Search Engine Query Data and Social Media Data in South Korea.

    Science.gov (United States)

    Woo, Hyekyung; Cho, Youngtae; Shim, Eunyoung; Lee, Jong-Koo; Lee, Chang-Gun; Kim, Seong Hwan

    2016-07-04

As suggested as early as 2006, logs of queries submitted to search engines seeking information could be a source for detection of emerging influenza epidemics if changes in the volume of search queries are monitored (infodemiology). However, selecting queries that are most likely to be associated with influenza epidemics is a particular challenge when it comes to generating better predictions. In this study, we describe a methodological extension for detecting influenza outbreaks using search query data; we provide a new approach for query selection through the exploration of contextual information gleaned from social media data. Additionally, we evaluate whether it is possible to use these queries for monitoring and predicting influenza epidemics in South Korea. Our study was based on freely available weekly influenza incidence data and query data originating from the search engine on the Korean website Daum between April 3, 2011 and April 5, 2014. To select queries related to influenza epidemics, several approaches were applied: (1) exploring influenza-related words in social media data, (2) identifying the chief concerns related to influenza, and (3) using Web query recommendations. Optimal feature selection by least absolute shrinkage and selection operator (Lasso) and support vector machine for regression (SVR) were used to construct a model predicting influenza epidemics. In total, 146 queries related to influenza were generated through our initial query selection approach. A considerable proportion of optimal features for final models were derived from queries with reference to the social media data. The SVR model performed well: the prediction values were highly correlated with the recent observed influenza-like illness (r=.956), indicating that it is possible to use search queries to enhance influenza surveillance in South Korea. In addition, an approach for query selection using social media data seems ideal for supporting influenza surveillance based on search query data.
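The two-stage pipeline described (Lasso screening of candidate queries, then SVR prediction) can be sketched as follows, assuming scikit-learn. The data are synthetic stand-ins for weekly query volumes and incidence; the penalty and kernel settings are illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic weekly data: 150 weeks of 40 candidate query-volume series;
# only the first three track the simulated influenza signal.
weeks, n_queries = 150, 40
X = rng.normal(size=(weeks, n_queries))
y = X[:, 0] + 0.7 * X[:, 1] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=weeks)

Xs = StandardScaler().fit_transform(X)

# Stage 1: Lasso screens the query set down to a sparse subset.
keep = np.abs(Lasso(alpha=0.05).fit(Xs, y).coef_) > 1e-6

# Stage 2: an SVR fitted on the surviving queries predicts incidence;
# the last 50 weeks serve as a holdout period.
svr = SVR(kernel="rbf", C=10.0).fit(Xs[:100, keep], y[:100])
r = np.corrcoef(svr.predict(Xs[100:, keep]), y[100:])[0, 1]
print(round(r, 3))
```

The correlation r between predicted and held-out incidence plays the role of the r=.956 reported in the abstract, here on simulated data.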

  18. Reactor operation feed-back in France

    International Nuclear Information System (INIS)

    Feltin, C.; Fourest, B.; Libmann, J.

    1982-09-01

The Nuclear Safety Department (DSN), technical support of the French Safety Authorities, is in particular in charge of the analysis of reactor operation and of the measures taken in response to incidents. It proposed the criteria used to select significant incidents, and it analyzes such incidents. DSN also analyzes the operating experience of each plant several years after startup. It examines foreign incidents to assess to what extent the lessons learned can be applied to French reactors. The examples presented show that, to improve the safety of unit operation, experience feedback leads to arrangements or modifications concerning not only circuits or materials but often procedures. Moreover, they show the importance of procedures covering the operations carried out during reactor shutdown.

  19. Guide to the selection, training, and licensing or certification of reprocessing plant operators. Volume I

    International Nuclear Information System (INIS)

    1976-06-01

    The Code of Federal Regulations, Title 10, Part 55, establishes procedures and criteria for the licensing of operators, including senior operators, in ''Production and Utilization Facilities'', which includes plants for reprocessing irradiated fuel. A training guide is presented which will facilitate the licensing of operators for nuclear reprocessing plants by offering generalized descriptions of the basic principles (theory) and the unit operations (mechanics) employed in reprocessing spent fuels. In the present volume, details about the portions of a training program that are of major interest to management are presented

  20. Guide to the selection, training, and licensing or certification of reprocessing plant operators. Volume I

    Energy Technology Data Exchange (ETDEWEB)

    None

    1976-06-01

    The Code of Federal Regulations, Title 10, Part 55, establishes procedures and criteria for the licensing of operators, including senior operators, in ''Production and Utilization Facilities'', which includes plants for reprocessing irradiated fuel. A training guide is presented which will facilitate the licensing of operators for nuclear reprocessing plants by offering generalized descriptions of the basic principles (theory) and the unit operations (mechanics) employed in reprocessing spent fuels. In the present volume, details about the portions of a training program that are of major interest to management are presented. (JSR)

  1. Quality Control Of Selected Pesticides With GC

    Energy Technology Data Exchange (ETDEWEB)

    Karasali, H. [Benaki Phytopathological Institute Laboratory of Physical and Chemical Analysis of Pesticides, Ekalis (Greece)

    2009-07-15

    The practical quality control of selected pesticides with GC is treated. Detailed descriptions are given on materials and methods used, including sample preparation and GC operating conditions. The systematic validation of multi methods is described, comprising performance characteristics in routine analysis, like selectivity, specificity etc. This is illustrated by chromatograms, calibration curves and tables derived from real laboratory data. (author)

  2. Real-time x-ray fluoroscopy-based catheter detection and tracking for cardiac electrophysiology interventions

    International Nuclear Information System (INIS)

    Ma Yingliang; Housden, R. James; Razavi, Reza; Rhode, Kawal S.; Gogin, Nicolas; Cathier, Pascal; Gijsbers, Geert; Cooklin, Michael; O'Neill, Mark; Gill, Jaswinder; Rinaldi, C. Aldo

    2013-01-01

Purpose: X-ray fluoroscopically guided cardiac electrophysiology (EP) procedures are commonly carried out to treat patients with arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of a three-dimensional (3D) roadmap derived from preprocedural volumetric images can be used to add anatomical information. It is useful to know the position of the catheter electrodes relative to the cardiac anatomy, for example, to record ablation therapy locations during atrial fibrillation therapy. Also, the electrode positions of the coronary sinus (CS) catheter or lasso catheter can be used for road map motion correction. Methods: In this paper, the authors present a novel unified computational framework for image-based catheter detection and tracking without any user interaction. The proposed framework includes fast blob detection, shape-constrained searching and model-based detection. In addition, catheter tracking methods were designed based on the customized catheter models input from the detection method. Three real-time detection and tracking methods are derived from the computational framework to detect or track the three most common types of catheters in EP procedures: the ablation catheter, the CS catheter, and the lasso catheter. Since the proposed methods use the same blob detection method to extract key information from x-ray images, the ablation, CS, and lasso catheters can be detected and tracked simultaneously in real-time. Results: The catheter detection methods were tested on 105 different clinical fluoroscopy sequences taken from 31 clinical procedures. Two-dimensional (2D) detection errors of 0.50 ± 0.29, 0.92 ± 0.61, and 0.63 ± 0.45 mm as well as success rates of 99.4%, 97.2%, and 88.9% were achieved for the CS catheter, ablation catheter, and lasso catheter, respectively. With the tracking method, accuracies were increased to 0.45 ± 0.28, 0.64 ± 0.37, and 0.53 ± 0.38 mm and success rates increased to 100%, 99.2%, and 96

  3. TU-AB-BRA-10: Prognostic Value of Intra-Radiation Treatment FDG-PET and CT Imaging Features in Locally Advanced Head and Neck Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Song, J; Pollom, E; Durkee, B; Aggarwal, S; Bui, T; Le, Q; Loo, B; Hara, W [Stanford University, Palo Alto, CA (United States); Cui, Y [Hokkaido University, Global Institute for Collaborative Research and Educat, Sapporo, Hokkaido (Japan); Li, R [Stanford University, Palo Alto, CA (United States); Hokkaido University, Global Institute for Collaborative Research and Educat, Sapporo, Hokkaido (Japan)

    2015-06-15

Purpose: To predict response to radiation treatment using computational FDG-PET and CT images in locally advanced head and neck cancer (HNC). Methods: 68 patients with Stage III-IVB HNC treated with chemoradiation were included in this retrospective study. For each patient, we analyzed primary tumor and lymph nodes on PET and CT scans acquired both prior to and during radiation treatment, which led to 8 combinations of image datasets. From each image set, we extracted high-throughput, radiomic features of the following types: statistical, morphological, textural, histogram, and wavelet, resulting in a total of 437 features. We then performed unsupervised redundancy removal and stability tests on these features. To avoid over-fitting, we trained a logistic regression model with simultaneous feature selection based on the least absolute shrinkage and selection operator (LASSO). To objectively evaluate the prediction ability, we performed 5-fold cross validation (CV) with 50 random repeats of stratified bootstrapping. Feature selection and model training were conducted solely on the training set and independently validated on the holdout test set. A receiver operating characteristic (ROC) curve of the pooled results was generated, and the area under the ROC curve (AUC) was calculated as the figure of merit. Results: For predicting local-regional recurrence, our model built on pre-treatment PET of lymph nodes achieved the best performance (AUC=0.762) on 5-fold CV, which compared favorably with node volume and SUVmax (AUC=0.704 and 0.449, p<0.001). Wavelet coefficients turned out to be the most predictive features. Prediction of distant recurrence showed a similar trend, in which pre-treatment PET features of lymph nodes had the highest AUC of 0.705. Conclusion: The radiomics approach identified novel imaging features that are predictive of radiation treatment response. If prospectively validated in larger cohorts, they could aid in risk-adaptive treatment of HNC.
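The modeling step above, logistic regression with simultaneous LASSO feature selection evaluated by stratified cross-validation, can be sketched like this. The radiomic features are synthetic, and scikit-learn's L1-penalized LogisticRegression stands in for the authors' exact implementation; the regularization strength is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic cohort: 68 patients, 437 radiomic-style features, of which
# only the first two are informative (invented data, not the study's).
n, p = 68, 437
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)

# L1-penalized logistic regression performs feature selection and model
# fitting jointly; putting the scaler inside the pipeline keeps each
# cross-validation fold free of information leakage.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
)
auc = cross_val_score(
    model, X, y,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="roc_auc",
).mean()
print(round(auc, 3))
```

Despite p being much larger than n, the sparsity-inducing penalty keeps the model from over-fitting, which is exactly why LASSO is attractive in radiomics settings.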

  4. Advanced operation strategy for feed-and-bleed operation in an OPR1000

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Yoon, Ho Joon; Kim, Jaewhan; Kang, Hyun Gook

    2016-01-01

Highlights: • Advanced operating strategy covers all necessary conditions for F&B operation. • Advanced operating strategy identifies the urgency of F&B operation. • An advanced operating strategy for F&B operation is developed using a decision tree. • Human error probability is re-estimated based on a thermohydraulic analysis and K-HRA method. • An advanced operation strategy provides indications under various plant situations. - Abstract: When the secondary side is unavailable in a pressurized water reactor (PWR), heat from the core will accumulate in the primary side, causing core damage. In this situation, a heat removal mechanism called feed-and-bleed operation (F&B operation) must be used, which is a process of directly cooling the primary reactor cooling system (RCS). However, the conventional operation strategy in emergency operating procedures (EOPs) does not cover all possible conditions to initiate F&B operation. If the EOP indicates the urgency of F&B operation, operators will be able to make clearer decisions regarding F&B operation initiation. In order to cover all possible scenarios for F&B operation and systematically indicate its urgency, an advanced operating strategy using a decision tree is developed in this study. The plant condition can be classified according to failure of the secondary side, RCS pressure condition, injectable inventory to the RCS, and remaining core inventory. RCS pressure, core level, and RCS temperature are representative indicators which provide information regarding the initiation of F&B operation. Indicators can be selected based on their detectability and quantification, and a decision tree is developed according to combinations of indicators. To estimate the effects of the advanced operation strategy, the human error probability (HEP) of F&B operation is re-estimated based on a thermohydraulic analysis. The available time for operators to initiate F&B operation is also re-estimated to obtain more realistic data.

  5. Professional adaptability of nuclear power plant operators

    International Nuclear Information System (INIS)

    He Xuhong; Huang Xiangrui

    2006-01-01

The paper presents the results of an analysis of the nuclear power plant (NPP) operator job and of human errors related to NPP accidents. Based on ergonomic principles, a full psychological selection system for the professional adaptability of NPP operators, covering cognitive ability, personality, and psychological health, was established. The application and importance of professional adaptability research are discussed. (authors)

  6. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    Science.gov (United States)

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  7. Entropy and Selection: Life as an Adaptation for Universe Replication

    Directory of Open Access Journals (Sweden)

    Michael E. Price

    2017-01-01

Full Text Available Natural selection is the strongest known antientropic process in the universe when operating at the biological level and may also operate at the cosmological level. Consideration of how biological natural selection creates adaptations may illuminate the consequences and significance of cosmological natural selection. An organismal trait is more likely to constitute an adaptation if characterized by more improbable complex order, and such order is the hallmark of biological selection. If the same is true of traits created by selection in general, then the more improbably ordered something is (i.e., the lower its entropy), the more likely it is to be a biological or cosmological adaptation. By this logic, intelligent life (as the least-entropic known entity) is more likely than black holes or anything else to be an adaptation designed by cosmological natural selection. This view contrasts with Smolin’s suggestion that black holes are an adaptation designed by cosmological natural selection and that life is the by-product of selection for black holes. Selection may be the main or only ultimate antientropic process in the universe/multiverse; that is, much or all observed order may ultimately be the product or by-product of biological and cosmological selection.

  8. Experience in startup and operation of fast flux facility

    International Nuclear Information System (INIS)

    Moffitt, W.C.

    1980-01-01

The testing program was structured to perform all testing under formal testing procedures with a test engineer as the test director and the plant operators operating the systems and equipment. This provided excellent training and experience for the operators in preparation for eventual reactor operation. Operations preparations for the testing and operation activities have consisted of academic training; formal on-the-job training including systems operation and examinations by persons with expert knowledge of that portion of the plant; training at EBR-II and the High Temperature Sodium Facility for selected senior operators; operating procedure preparation; training on an FFTF Control Room operator training simulator; and formal written, oral and operating examinations.

  9. Accident selection methodology for TA-55 FSAR

    International Nuclear Information System (INIS)

    Letellier, B.C.; Pan, P.Y.; Sasser, M.K.

    1995-01-01

    In the past, the selection of representative accidents for refined analysis from the numerous scenarios identified in hazards analyses (HAs) has involved significant judgment and has been difficult to defend. As part of upgrading the Final Safety Analysis Report (FSAR) for the TA-55 plutonium facility at the Los Alamos National Laboratory, an accident selection process was developed that is mostly mechanical and reproducible in nature and fulfills the requirements of the Department of Energy (DOE) Standard 3009 and DOE Order 5480.23. Among the objectives specified by this guidance are the requirements that accident screening (1) consider accidents during normal and abnormal operating conditions, (2) consider both design basis and beyond design basis accidents, (3) characterize accidents by category (operational, natural phenomena, etc.) and by type (spill, explosion, fire, etc.), and (4) identify accidents that bound all foreseeable accident types. The accident selection process described here in the context of the TA-55 FSAR is applicable to all types of DOE facilities

  10. Sensors based on mesoporous SnO{sub 2}-CuWO{sub 4} with high selective sensitivity to H{sub 2}S at low operating temperature

    Energy Technology Data Exchange (ETDEWEB)

    Stanoiu, Adelina; Simion, Cristian E. [National Institute of Materials Physics, Atomistilor 405A, P.O. Box MG-7, 077125 Bucharest, Măgurele (Romania); Calderon-Moreno, Jose Maria; Osiceanu, Petre [“Ilie Murgulescu” Institute of Physical Chemistry, Romanian Academy, Surface Chemistry and Catalysis Laboratory, Spl. Independentei 202, 060021, Bucharest (Romania); Florea, Mihaela [University of Bucharest, Faculty of Chemistry, Department of Organic Chemistry, Biochemistry and Catalysis, B-dul Regina Elisabeta 4-12, Bucharest (Romania); National Institute of Materials Physics, Atomistilor 405A, P.O. Box MG-7, 077125 Bucharest, Măgurele (Romania); Teodorescu, Valentin S. [National Institute of Materials Physics, Atomistilor 405A, P.O. Box MG-7, 077125 Bucharest, Măgurele (Romania); Somacescu, Simona, E-mail: somacescu.simona@gmail.com [“Ilie Murgulescu” Institute of Physical Chemistry, Romanian Academy, Surface Chemistry and Catalysis Laboratory, Spl. Independentei 202, 060021, Bucharest (Romania)

    2017-06-05

Highlights: • Mesoporous SnO{sub 2}-CuWO{sub 4} obtained by an inexpensive synthesis route. • Powders characterization performed by a variety of complementary techniques. • SnO{sub 2}-CuWO{sub 4} layers with high selective sensitivity to H{sub 2}S. • Low operating temperature and relative humidity influences. - Abstract: Development of new sensitive materials by different synthesis routes in order to enhance the sensing properties for hazardous H{sub 2}S detection is one of the current challenges in the field of gas sensors. In this study, we obtained mesoporous SnO{sub 2}-CuWO{sub 4} with selective sensitivity to H{sub 2}S by an inexpensive synthesis route with a low environmental pollution level, using tripropylamine (TPA) as template and polyvinylpyrrolidone (PVP) as dispersant/stabilizer. To gain insight into the intrinsic properties, the powders were characterized by means of a variety of complementary techniques: X-Ray Diffraction, XRD; Transmission Electron Microscopy, TEM; High Resolution TEM, HRTEM; Raman Spectroscopy, RS; Porosity Analysis by N{sub 2} adsorption/desorption, BET; Scanning Electron Microscopy, SEM; and X-ray Photoelectron Spectroscopy, XPS. The sensors were fabricated by powder deposition via a screen-printing technique onto planar commercial Al{sub 2}O{sub 3} substrates. The sensor signals towards H{sub 2}S exposure at low operating temperature (100 °C) reach values from 10{sup 5} (for SnWCu600) to 10{sup 6} (for SnWCu800) over the full range of concentrations (5–30 ppm). The recovery processes were induced by a short temperature trigger of 500 °C. The selective sensitivity to H{sub 2}S was demonstrated relative to other potential pollutants and to relative humidity (10–70% RH).

  11. Interval-Valued Neutrosophic Bonferroni Mean Operators and the Application in the Selection of Renewable Energy

    OpenAIRE

    Pu Ji; Peng-fei Cheng; Hong-yu Zhang; Jian-qiang Wang

    2018-01-01

Renewable energy selection, which is a multi-criteria decision-making (MCDM) problem, is crucial for the sustainable development of the economy. Criteria are interdependent in the renewable energy selection problem.

  12. Development of JOYO operational guidance system for emergency condition

    International Nuclear Information System (INIS)

    Takatsuto, Hiroshi; Owada, Toshio; Morimoto, Makoto; Aoki, Hiroshi; Tokita, Mitsuhiko; Terunuma, Seiichi

    1989-01-01

An operational guidance system for JOYO has been developed for safe and stable plant operation and improved operational reliability. JOYCAT (JOYO Consulting and Analysing Tool), one of the JOYO operational guidance systems, supports the plant operator by presenting the causal alarm and selecting the suitable guidance manual in anomaly situations, using artificial intelligence technology. A verification test of JOYCAT was performed using a JOYO operator-training simulator, and on-line operation was started by partially linking to the actual plant in May 1988. As a result, the proper diagnosis function was confirmed in the actual plant. (author)

  13. Priority issues affecting operators' and suppliers' liens: the Alberta perspective

    International Nuclear Information System (INIS)

    Corbett, W.T.

    1996-01-01

    Selected aspects of priority issues in contractual obligations in the petroleum industry were discussed, focusing on the priority issues claimed by suppliers and operators with respect to Alberta properties. Discussions touched upon suppliers' lien rights in Alberta, operators' set-off rights, and on some of the priority issues involving operators' liens

  14. Safety Standard for Hydrogen and Hydrogen Systems: Guidelines for Hydrogen System Design, Materials Selection, Operations, Storage and Transportation. Revision

    Science.gov (United States)

    1997-01-01

    The NASA Safety Standard, which establishes a uniform process for hydrogen system design, materials selection, operation, storage, and transportation, is presented. The guidelines include suggestions for safely storing, handling, and using hydrogen in gaseous (GH2), liquid (LH2), or slush (SLH2) form whether used as a propellant or non-propellant. The handbook contains 9 chapters detailing properties and hazards, facility design, design of components, materials compatibility, detection, and transportation. Chapter 10 serves as a reference and the appendices contained therein include: assessment examples; scaling laws, explosions, blast effects, and fragmentation; codes, standards, and NASA directives; and relief devices along with a list of tables and figures, abbreviations, a glossary and an index for ease of use. The intent of the handbook is to provide enough information that it can be used alone, but at the same time, reference data sources that can provide much more detail if required.

  15. Magnetospheric Multiscale Instrument Suite Operations and Data System

    Science.gov (United States)

    Baker, D. N.; Riesberg, L.; Pankratz, C. K.; Panneton, R. S.; Giles, B. L.; Wilder, F. D.; Ergun, R. E.

    2016-03-01

The four Magnetospheric Multiscale (MMS) spacecraft will collect a combined volume of ~100 gigabits per day of particle and field data. On average, only 4 gigabits of that volume can be transmitted to the ground. To maximize the scientific value of each transmitted data segment, MMS has developed the Science Operations Center (SOC) to manage science operations, instrument operations, and selection, downlink, distribution, and archiving of MMS science data sets. The SOC is managed by the Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado and serves as the primary point of contact for community participation in the mission. MMS instrument teams conduct their operations through the SOC, and utilize the SOC's Science Data Center (SDC) for data management and distribution. The SOC provides a single mission data archive for the housekeeping and science data, calibration data, ephemerides, attitude and other ancillary data needed to support the scientific use and interpretation. All levels of data products will reside at and be publicly disseminated from the SDC. Documentation and metadata describing data products, algorithms, instrument calibrations, validation, and data quality will be provided. Arguably, the most important innovation developed by the SOC is the MMS burst data management and selection system. With nested automation and "Scientist-in-the-Loop" (SITL) processes, these systems are designed to maximize the value of the burst data by prioritizing the data segments selected for transmission to the ground. This paper describes the MMS science operations approach, processes and data systems, including the burst system and the SITL concept.

  16. Operating experience and TPA: the Italian perspective

    International Nuclear Information System (INIS)

    Grimaldi, G.

    1990-01-01

Collection and analysis of operating experience from the Italian plants, and the application of foreign data to plants both in operation and under construction, are presented. Some results are also reported, aimed at highlighting the role of international cooperation in the safe operation of nuclear plants. The approach to trend and pattern analyses is described as well, together with the use of computerized analysis techniques on personal computers. Finally, ongoing activities are introduced: the application of operating experience from plants in operation to small-sized reactors and to reactors with more intrinsic safety characteristics; a review of the reporting system for future application; and a comparative analysis of the different realizations of selected safety systems.

  17. Intra-operative removal of chest tube in video-assisted thoracoscopic procedures

    Directory of Open Access Journals (Sweden)

    Moustafa M. El-Badry

    2017-12-01

Conclusions: Intra-operative removal of the chest tube during VATS procedures was a safe technique in well-selected patients with a successful intra-operative air-leak test and radiological and clinical follow-up. This technique provided less post-operative pain and a shorter hospital stay.

  18. Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.

    Science.gov (United States)

    Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup

    2017-05-31

Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment) while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our model performed well at predicting responders, but not non-responders (i.e. many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
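A hedged sketch of the evaluation protocol: an L1- (LASSO-) penalized logistic model fitted on a training split and scored by sensitivity and specificity on the held-out split. The data are simulated; only the 80/40 split sizes mirror the abstract, and the feature count and effect sizes are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Simulated stand-in for the cohort: 120 subjects, 30 baseline serum
# metabolite levels; response 1 = complete abstinence, 0 = any drinking.
n, p = 120, 30
X = rng.normal(size=(n, p))
y = (X[:, 0] - 0.8 * X[:, 1] + 0.8 * rng.normal(size=n) > 0).astype(int)

# Mirror the abstract's 80/40 split; fit the L1-penalized logistic
# model on the training set only.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=40, stratify=y, random_state=0)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Sensitivity = recall on responders; specificity = recall on
# non-responders.
tp = int(np.sum((pred == 1) & (y_te == 1)))
fn = int(np.sum((pred == 0) & (y_te == 1)))
tn = int(np.sum((pred == 0) & (y_te == 0)))
fp = int(np.sum((pred == 1) & (y_te == 0)))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(round(sensitivity, 2), round(specificity, 2))
```

Reporting both quantities separately, as the abstract does, exposes the asymmetry a single accuracy figure would hide.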

  19. An Optimal DEM Reconstruction Method for Linear Array Synthetic Aperture Radar Based on Variational Model

    Directory of Open Access Journals (Sweden)

    Shi Jun

    2015-02-01

Full Text Available Downward-looking Linear Array Synthetic Aperture Radar (LASAR) has many potential applications in topographic mapping, disaster monitoring and reconnaissance, especially in mountainous areas. However, limited by the sizes of platforms, its resolution in the linear array direction is always far lower than those in the range and azimuth directions. This disadvantage leads to the blurring of Three-Dimensional (3D) images in the linear array direction, and restricts the application of LASAR. To date, the research on 3D SAR image enhancement has focused on the sparse recovery technique. In this case, the one-to-one mapping of the Digital Elevation Model (DEM) breaks down. To overcome this, an optimal DEM reconstruction method for LASAR based on the variational model is discussed in an effort to optimize the DEM and the associated scattering coefficient map, and to minimize the Mean Square Error (MSE). Using simulation experiments, it is found that the variational model is more suitable for DEM enhancement across all kinds of terrains compared with the Orthogonal Matching Pursuit (OMP) and Least Absolute Shrinkage and Selection Operator (LASSO) methods.
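The OMP and LASSO baselines mentioned above are both sparse-recovery solvers for underdetermined linear systems. A minimal illustration on a generic synthetic system (not LASAR data; scikit-learn assumed, and the sparsity level, noise level and penalty are invented for the example):

```python
import numpy as np
from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

rng = np.random.default_rng(4)

# Underdetermined system b = A x + noise: 60 measurements, 200 unknowns,
# a 5-sparse ground truth -- a generic stand-in for sparse recovery in
# the poorly resolved linear-array direction.
m, p, k = 60, 200, 5
A = rng.normal(size=(m, p))
x_true = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)
b = A @ x_true + 0.01 * rng.normal(size=m)

# OMP greedily selects atoms up to the known sparsity level; the Lasso
# instead solves a convex L1-penalized least-squares problem.
x_omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, b).coef_
x_l1 = Lasso(alpha=0.01, max_iter=50000).fit(A, b).coef_

err_omp = np.linalg.norm(x_omp - x_true) / np.linalg.norm(x_true)
err_l1 = np.linalg.norm(x_l1 - x_true) / np.linalg.norm(x_true)
print(round(err_omp, 4), round(err_l1, 4))
```

With enough measurements relative to the sparsity, both recover the unknown vector accurately; the paper's point is that a variational model can do better when that one-to-one sparse mapping breaks down.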

  20. Rating of intra-operative neuro-monitoring results in operative correction of the spinal deformities

    Directory of Open Access Journals (Sweden)

    A. A. Skripnikov

    2015-01-01

    Full Text Available Purpose of the work was to catalogue the electrophysiological phenomena observed during intra-operative neuromonitoring, followed by development of a rating scale for the results of intra-operative neurophysiological testing of the pyramidal tract. Materials and methods. The evaluation included data from 147 protocols of intra-operative neuromonitoring in 135 patients (53 males, 82 females) aged from 1 year 5 months to 52 years (14.1±0.7 years) with spinal deformities of different etiology who underwent instrumented spinal correction followed by fixation of thoracic/thoracolumbar spine segments using various variants of internal transpedicular fixation systems. Intra-operative neuromonitoring was performed using the «ISIS IOM» system (Inomed Medizintechnik GmbH, Germany). The changes of motor evoked potentials were evaluated according to the developed scale. Results. Five types of pyramidal system reaction to operative invasion were revealed. According to neurophysiological criteria, three grades of risk of neurological disorders developing during operative spinal deformity correction, and correspondingly three levels of anxiety for the surgeon, were defined. Conclusion. Intra-operative neurophysiological monitoring is an effective high-technology instrument for preventing neurological disorders in spinal deformity surgery. The proposed rating scale for the risk of neurological complications distinguishes three levels of anxiety during operative invasion.

  1. Two-stage clustering (TSC: a pipeline for selecting operational taxonomic units for the high-throughput sequencing of PCR amplicons.

    Directory of Open Access Journals (Sweden)

    Xiao-Tao Jiang

    Full Text Available Clustering 16S/18S rRNA amplicon sequences into operational taxonomic units (OTUs) is a critical step for the bioinformatic analysis of microbial diversity. Here, we report a pipeline for selecting OTUs with a relatively low computational demand and a high degree of accuracy. This pipeline is referred to as two-stage clustering (TSC) because it divides tags into two groups according to their abundance and clusters them sequentially. The more abundant group is clustered using a hierarchical algorithm similar to that in ESPRIT, which has a high degree of accuracy but is computationally costly for large datasets. The rarer group, which includes the majority of tags, is then heuristically clustered to improve efficiency. To further improve the computational efficiency and accuracy, two preclustering steps are implemented. To maintain clustering accuracy, all tags are grouped into an OTU depending on their pairwise Needleman-Wunsch distance. This method not only improved the computational efficiency but also mitigated the spurious OTU estimation from 'noise' sequences. In addition, OTUs clustered using TSC showed comparable or improved performance in beta-diversity comparisons compared to existing OTU selection methods. This study suggests that the distribution of sequencing datasets is a useful property for improving the computational efficiency and increasing the clustering accuracy of the high-throughput sequencing of PCR amplicons. The software and user guide are freely available at http://hwzhoulab.smu.edu.cn/paperdata/.
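The two-stage idea can be sketched roughly as follows. This toy version uses Python's `difflib` similarity as a stand-in for the Needleman-Wunsch alignment distance used by TSC, a greedy nearest-OTU assignment for the rare group, and made-up abundance and distance cutoffs:

```python
# Illustrative sketch of two-stage clustering: split tags by abundance,
# cluster the abundant group exhaustively against OTU seeds, then assign
# rare tags greedily to the nearest existing OTU. A plain edit-distance
# ratio stands in for Needleman-Wunsch; cutoffs are arbitrary.
from difflib import SequenceMatcher

def dist(a, b):
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def two_stage_cluster(tag_counts, cutoff=0.03, abundant_min=5):
    abundant = [t for t, c in tag_counts.items() if c >= abundant_min]
    rare = [t for t, c in tag_counts.items() if c < abundant_min]
    otus = []                                # each OTU keeps its seed first
    for tag in sorted(abundant, key=lambda t: -tag_counts[t]):
        for otu in otus:                     # stage 1: exhaustive clustering
            if dist(tag, otu[0]) <= cutoff:
                otu.append(tag)
                break
        else:
            otus.append([tag])
    for tag in rare:                         # stage 2: heuristic assignment
        best = min(otus, key=lambda o: dist(tag, o[0]), default=None)
        if best is not None and dist(tag, best[0]) <= cutoff:
            best.append(tag)
        else:
            otus.append([tag])
    return otus

counts = {"ACGTACGT": 9, "ACGTACGA": 1, "TTTTGGGG": 6}
print(len(two_stage_cluster(counts, cutoff=0.15)))
```

Because rare tags are only compared against existing seeds rather than against each other, the quadratic cost of all-pairs clustering is paid only for the small abundant group, which is the source of TSC's efficiency gain.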

  2. Domain-Specific Control of Selective Attention

    Science.gov (United States)

    Lin, Szu-Hung; Yeh, Yei-Yu

    2014-01-01

    Previous research has shown that loading information on working memory affects selective attention. However, whether the load effect on selective attention is domain-general or domain-specific remains unresolved. The domain-general effect refers to the findings that load in one content (e.g. phonological) domain in working memory influences processing in another content (e.g., visuospatial) domain. Attentional control supervises selection regardless of information domain. The domain-specific effect refers to the constraint of influence only when maintenance and processing operate in the same domain. Selective attention operates in a specific content domain. This study is designed to resolve this controversy. Across three experiments, we manipulated the type of representation maintained in working memory and the type of representation upon which the participants must exert control to resolve conflict and select a target into the focus of attention. In Experiments 1a and 1b, participants maintained digits and nonverbalized objects, respectively, in working memory while selecting a target in a letter array. In Experiment 2, we presented auditory digits with a letter flanker task to exclude the involvement of resource competition within the same input modality. In Experiments 3a and 3b, we replaced the letter flanker task with an object flanker task while manipulating the memory load on object and digit representation, respectively. The results consistently showed that memory load modulated distractibility only when the stimuli of the two tasks were represented in the same domain. The magnitude of distractor interference was larger under high load than under low load, reflecting a lower efficacy of information prioritization. When the stimuli of the two tasks were represented in different domains, memory load did not modulate distractibility. Control of processing priority in selective attention demands domain-specific resources. PMID:24866977

  3. Whipple procedure: patient selection and special considerations

    Directory of Open Access Journals (Sweden)

    Tan-Tam C

    2016-07-01

    Full Text Available Clara Tan-Tam,1 Maja Segedi,2 Stephen W Chung2 1Department of Surgery, Bassett Healthcare, Columbia University, Cooperstown, New York, NY, USA; 2Department of Hepatobiliary and Pancreatic Surgery and Liver Transplant, Vancouver General Hospital, University of British Columbia, Vancouver, BC, Canada Abstract: At the inception of pancreatic surgery by Dr Whipple in the 1930s, the mortality and morbidity risk was more than 20%. With further understanding of disease processes and improvements in pancreas resection techniques, the mortality risk has decreased to less than 5%. Age and chronic illness are no longer contraindications to surgical treatment. Life expectancy and quality of life at a later age have improved, making older patients more likely to receive pancreatic surgery, thereby also putting emphasis on operative patient selection to minimize complications. This review summarizes the benign and malignant illnesses that are treated with pancreas operations, describes innovations and improvements in pancreatic surgery and perioperative care, and outlines the careful selection process for patients who would benefit from an operation. These indications are not limited to the Whipple operation, but apply to pancreatectomies as well. Keywords: pancreaticoduodenectomy, mortality, morbidity, cancer, trauma, pancreatitis

  4. Autonomous component carrier selection

    DEFF Research Database (Denmark)

    Garcia, Luis Guilherme Uzeda; Pedersen, Klaus; Mogensen, Preben

    2009-01-01

    management and efficient system operation. Due to the expected large number of user-deployed cells, centralized network planning becomes unpractical and new scalable alternatives must be sought. In this article, we propose a fully distributed and scalable solution to the interference management problem...... in local areas, basing our study case on LTE-Advanced. We present extensive network simulation results to demonstrate that a simple and robust interference management scheme, called autonomous component carrier selection allows each cell to select the most attractive frequency configuration; improving...... the experience of all users and not just the few best ones; while overall cell capacity is not compromised....

  5. A selective study of Information technologies to improve operations efficiency in construction

    Directory of Open Access Journals (Sweden)

    Konikov Alexandr

    2018-01-01

    Full Text Available Today, information technologies (IT) are used in almost every production industry. While the aspects of IT are well studied and discussed in relevant monographs, articles, web sources, etc., this paper reviews options for improving performance in the construction industry by leveraging IT. From a wide range of information technologies the author has picked the solutions he considers most relevant, based on several considerations, the most important being the lack of adequate attention these technologies have received in the construction industry specifically. The paper covers the following technologies: Big Data (a smart technology for high-speed processing of huge and diverse data arrays); situation centers (SCs) for construction and operations projects (SCs are successfully used in other industries for operating control of sophisticated facilities); data warehouses (DWs) for the construction industry (DWs are viewed as a standalone project rather than a supplement to Data Mining or Big Data); operational and dispatch radio communication service (radio communication can ensure instant connectivity between several subscribers); and VSAT (a satellite technology for prompt connection of a distant construction site with the 'outer world' when no alternatives are available). The paper briefly presents the essence of each technology, describes the pre-requisites for its use in construction, outlines the key advantages, limits and shortcomings, and lists construction projects where it would be worthwhile to use each specific technology.

  7. Towards Automatic Decentralized Control Structure Selection

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2000-01-01

    for decentralized control is determined automatically, and the resulting decentralized control structure is automatically tuned using standard techniques. Dynamic simulation of the resulting process system gives immediate feedback to the process design engineer regarding practical operability of the process......A subtask in integration of design and control of chemical processes is the selection of a control structure. Automating the selection of the control structure enables sequential integration of process and control design. As soon as the process is specified or computed, a structure....... The control structure selection problem is formulated as a special MILP employing cost coefficients which are computed using Parseval's theorem combined with RGA and IMC concepts. This approach enables selection and tuning of large-scale plant-wide decentralized controllers through efficient combination...

  8. Selective Leaching of Gray Cast Iron: Electrochemical Aspects

    International Nuclear Information System (INIS)

    Na, Kyung Hwan; Yun, Eun Sub; Park, Young Sheop

    2010-01-01

    Currently, to keep pace with increases in energy consumption, much attention has been paid to the construction of new nuclear power plants (NPPs) and to the continued operation of existing NPPs. For continued operation, the selective leaching of materials should be evaluated by visual inspections and hardness measurements as a part of the One-Time Inspection Program, according to the requirements of the guidelines for continued operation of pressurized water reactors (PWRs) in Korea and license renewals in the United States, entitled the 'Generic Aging Lessons Learned (GALL) report.' However, acceptance criteria for hardness have yet to be provided. Recently, the USNRC released a new draft of the GALL report for comment and plans to publish its formal version by the end of 2010. In the new draft, quantitative acceptance criteria for hardness are given at last: no more than a 20 percent decrease in hardness for gray cast iron and for brass containing more than 15 percent zinc. Selective leaching is the preferential removal of one of the alloying elements from a solid alloy by corrosion processes, leaving behind a weakened spongy or porous residual structure. The materials susceptible to selective leaching include gray cast iron and brass, which are mainly used as pump casings and valve bodies in the fire protection systems of NPPs. Since selective leaching proceeds slowly over a long period of time and causes a decrease in strength without changing the overall dimensions of the original material, it is difficult to detect. In the present work, the selective leaching of gray cast iron is investigated in terms of its electrochemical aspects as part of an ongoing research project to study the changes in metal properties caused by selective leaching.

  9. Personality, personnel selection, and job performance

    OpenAIRE

    Linden, Dimitri; Pelt, Dirk; Dunkel, Curtis; Born, Marise

    2017-01-01

    Job Performance: The term job performance can either refer to the objective or subjective outcomes one achieves in a specific job (e.g., the profit of a sales person, the number of publications of a scientist, the number of successful operations of a surgeon) or to work-related activities (e.g., writing an article, conducting specific surgical acts). In the majority of research on this topic, job performance as an outcome is used. Personnel selection: Personnel selection refe...

  10. Evaluation of methods and marker Systems in Genomic Selection of oil palm (Elaeis guineensis Jacq.).

    Science.gov (United States)

    Kwong, Qi Bin; Teh, Chee Keng; Ong, Ai Ling; Chew, Fook Tim; Mayes, Sean; Kulaveerasingam, Harikrishna; Tammi, Martti; Yeoh, Suat Hui; Appleton, David Ross; Harikrishna, Jennifer Ann

    2017-12-11

    Genomic selection (GS) uses genome-wide markers as an attempt to accelerate genetic gain in breeding programs of both animals and plants. This approach is particularly useful for perennial crops such as oil palm, which have long breeding cycles, and for which the optimal method for GS is still under debate. In this study, we evaluated the effect of different marker systems and modeling methods for implementing GS in an introgressed dura family derived from a Deli dura x Nigerian dura (Deli x Nigerian) with 112 individuals. This family is an important breeding source for developing new mother palms for superior oil yield and bunch characters. The traits of interest selected for this study were fruit-to-bunch (F/B), shell-to-fruit (S/F), kernel-to-fruit (K/F), mesocarp-to-fruit (M/F), oil per palm (O/P) and oil-to-dry mesocarp (O/DM). The marker systems evaluated were simple sequence repeats (SSRs) and single nucleotide polymorphisms (SNPs). RR-BLUP, Bayesian A, B, Cπ, LASSO, Ridge Regression and two machine learning methods (SVM and Random Forest) were used to evaluate GS accuracy of the traits. The kinship coefficient between individuals in this family ranged from 0.35 to 0.62. S/F and O/DM had the highest genomic heritability, whereas F/B and O/P had the lowest. The accuracies obtained using 135 SSRs were low, around 0.20 across traits. The average accuracy of the machine learning methods was 0.24, compared to 0.20 achieved by the other methods. The trait with the highest mean accuracy was F/B (0.28), while the lowest were both M/F and O/P (0.18). By using whole-genome SNPs, the accuracies for all traits, especially O/DM (0.43), S/F (0.39) and M/F (0.30), were improved. The average accuracy of the machine learning methods was 0.32, compared to 0.31 achieved by the other methods. Due to high genomic resolution, the use of whole-genome SNPs improved the efficiency of GS dramatically for oil palm and is recommended for dura breeding programs. Machine learning methods gave slightly higher accuracies than the other methods.
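A minimal sketch of this kind of model comparison, assuming synthetic 0/1/2 SNP genotypes and a sparse polygenic trait (not the oil palm data): ridge regression stands in for RR-BLUP, and accuracy is measured in the usual GS sense as the cross-validated correlation between predicted and observed phenotypes.

```python
# Hedged sketch: compare three GS models by cross-validated prediction
# accuracy (correlation of predicted vs. observed phenotype) on a
# synthetic SNP panel. Ridge regression is a stand-in for RR-BLUP.
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n, m = 112, 500                         # 112 palms, 500 SNPs (illustrative)
X = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 genotypes
effects = rng.normal(size=m) * (rng.random(m) < 0.05)  # sparse QTL effects
y = X @ effects + rng.normal(size=n)    # phenotype = genetics + noise

models = {
    "RR (ridge)": Ridge(alpha=10.0),
    "LASSO": Lasso(alpha=0.1),
    "RandomForest": RandomForestRegressor(n_estimators=100, random_state=0),
}
acc = {}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)  # 5-fold out-of-sample predictions
    acc[name] = float(np.corrcoef(pred, y)[0, 1])
    print(name, round(acc[name], 2))
```

The alpha values and the 5% QTL fraction are arbitrary illustrative choices; in practice they would be tuned within the training folds.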

  11. The continuation training of operators and feedback of operational experience in the Royal Navy's nuclear submarine programme

    International Nuclear Information System (INIS)

    Manson, R.P.

    1983-01-01

    Naval continuation training has relied heavily on the use of realistic simulators for over ten years, and this has been proved to be a cost-effective and efficient method of training. The type of simulator used, the selection and qualification of simulator instructors, and the method of training experienced operators is described. Also, the assessment of operator performance, the use of simulators during the final stages of operator qualification, and their use for training operators on plant operation whilst shut-down are covered. The Navy also pays great attention to the feedback of operating experience from sea into both continuation and basic training. This is accomplished using Incident Reports, which are rendered whenever the plant is operated outside the approved Operating Documentation, or when any other unusual circumstance arises. Each Report is individually assessed and replied to by a qualified operator, and those incidents of more general interest are published in a wider circulation document available to all plant operators. In addition, each crew is given an annual lecture on recent operating experiences. Important lessons are fed forward into new plant design, and the incident reports are also used as a source of information for plant reliability data. (author)

  12. Personality Factors and Nuclear Power Plant Operators: Initial License Success

    Science.gov (United States)

    DeVita-Cochrane, Cynthia

    Commercial nuclear power utilities are under pressure to effectively recruit and retain licensed reactor operators in light of poor candidate training completion rates and recent candidate failures on the Nuclear Regulatory Commission (NRC) license exam. One candidate failure can cost a utility over $400,000, making the successful licensing of new operators a critical path to operational excellence. This study was designed to discover if the NEO-PI-3, a 5-factor measure of personality, could improve selection in nuclear utilities by identifying personality factors that predict license candidate success. Two large U.S. commercial nuclear power corporations provided potential participant contact information and candidate results on the 2014 NRC exam from their nuclear power units nation-wide. License candidates who participated (n = 75) completed the NEO-PI-3 personality test and results were compared to 3 outcomes on the NRC exam: written exam, simulated operating exam, and overall exam result. Significant correlations were found between several personality factors and both written and operating exam outcomes on the NRC exam. Further, a regression analysis indicated that personality factors, particularly Conscientiousness, predicted simulated operating exam scores. The results of this study may be used to support the use of the NEO-PI-3 to improve operator selection as an addition to the current selection protocol. Positive social change implications from this study include support for the use of a personality measure by utilities to improve their return-on-investment in candidates and by individual candidates to avoid career failures. The results of this study may also positively impact the public by supporting the safe and reliable operation of commercial nuclear power utilities in the United States.

  13. Improved productivity justifies world record underbalanced perforating operation

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, A. M.; Bakker, E. R. [NAM B.V. (Netherlands); Hungerford, K.

    1998-12-31

    To achieve vertical connectivity with all the layers, and thus long-term sustained productivity in a highly stratified reservoir, a one-run underbalanced perforating operation was considered necessary. Due to coiled tubing limitations in this deep, high-pressure well (5136 m along hole, 3700 m true vertical depth, with a maximum deviation of 89 degrees), a hydraulic workover unit (HWU) was selected to deploy and retrieve the guns. The operation is considered a world record since this is the longest section of guns (a total gross interval of 1026 m perforated) conveyed, fired underbalanced and deployed out of a live well. It is concluded that the improved productivity more than justified the additional time, effort and expenditure; considering the full life cycle of the well, it is readily apparent that the operation was an economic and technical success. Details of the considerations leading to the selection of the perforating technique, the planning and execution of the operation, and the validation of the technique in terms of productivity gains are provided. 13 refs., 7 figs.

  14. Optimizing winter/snow removal operations in MoDOT St. Louis district : includes outcome based evaluation of operations.

    Science.gov (United States)

    2011-10-01

    The objective of this project was to develop fleet location, route decision, material selection, and treatment procedures for winter snow removal operations to improve MoDOT's services and lower costs. This work uses a systematic, heuristic-based o...

  15. The development of reactor operator license examination question bank

    International Nuclear Information System (INIS)

    Kim, In Hwan; Woo, S. M.; Kam, S. C.; Nam, K. J.; Lim, H. P.

    2001-12-01

    The number of NPPs keeps increasing, and with it the need for reactor operators. This trend requires greater efficiency in managing the license examination. A question bank system will help develop good-quality examination materials and retain them for reuse. The ultimate purpose of the bank system is to select qualified reactor operators, who are primarily responsible for the safety of reactor operation in NPPs.

  16. Testing the efficiency of rover science protocols for robotic sample selection: A GeoHeuristic Operational Strategies Test

    Science.gov (United States)

    Yingst, R. A.; Bartley, J. K.; Chidsey, T. C.; Cohen, B. A.; Gilleaudeau, G. J.; Hynek, B. M.; Kah, L. C.; Minitti, M. E.; Williams, R. M. E.; Black, S.; Gemperline, J.; Schaufler, R.; Thomas, R. J.

    2018-05-01

    The GHOST field tests are designed to isolate and test science-driven rover operations protocols, to determine best practices. During a recent field test at a potential Mars 2020 landing site analog, we tested two Mars Science Laboratory data-acquisition and decision-making methods to assess resulting science return and sample quality: a linear method, where sites of interest are studied in the order encountered, and a "walkabout-first" method, where sites of interest are examined remotely before down-selecting to a subset of sites that are interrogated with more resource-intensive instruments. The walkabout method cost less time and fewer resources, while increasing confidence in interpretations. Contextual data critical to evaluating site geology was acquired earlier than for the linear method, and given a higher priority, which resulted in development of more mature hypotheses earlier in the analysis process. Combined, this saved time and energy in the collection of data with more limited spatial coverage. Based on these results, we suggest that the walkabout method be used where doing so would provide early context and time for the science team to develop hypotheses-critical tests; and that in gathering context, coverage may be more important than higher resolution.

  17. International Workshop on Operator Theory and Applications

    CERN Document Server

    Jacob, Birgit; Ran, André; Zwart, Hans

    2016-01-01

    This volume collects a selected number of papers presented at the International Workshop on Operator Theory and its Applications (IWOTA) held in July 2014 at Vrije Universiteit in Amsterdam. Main developments in the broad area of operator theory are covered, with special emphasis on applications to science and engineering. The volume also presents papers dedicated to the eightieth birthday of Damir Arov and to the sixty-fifth birthday of Leiba Rodman, both leading figures in the area of operator theory and its applications, in particular, to systems theory.

  18. Filtration behavior of casein glycomacropeptide (CGMP) in an enzymatic membrane reactor: fouling control by membrane selection and threshold flux operation

    DEFF Research Database (Denmark)

    Luo, Jianquan; Morthensen, Sofie Thage; Meyer, Anne S.

    2014-01-01

    . In this study, the filtration performance and fouling behavior during ultrafiltration (UF) of CGMP for the enzymatic production of 3′-sialyllactose were investigated. A 5kDa regenerated cellulose membrane with high anti-fouling performance, could retain CGMP well, permeate 3′-sialyllactose, and was found...... to be the most suitable membrane for this application. Low pH increased CGMP retention but produced more fouling. Higher agitation and lower CGMP concentration induced larger permeate flux and higher CGMP retention. Adsorption fouling and pore blocking by CGMP in/on membranes could be controlled by selecting...... a highly hydrophilic membrane with appropriate pore size. Operating under threshold flux could minimize the concentration polarization and cake/gel/scaling layers, but might not avoid irreversible fouling caused by adsorption and pore blocking. The effects of membrane properties, pH, agitation and CGMP...

  19. ExplorOcean H2O SOS: Help Heal the Ocean-Student Operated Solutions: Operation Climate Change

    Science.gov (United States)

    Weiss, N.; Wood, J. H.

    2016-12-01

    The ExplorOcean H2O SOS: Help Heal the Ocean—Student Operated Solutions: Operation Climate Change program teaches middle and high school students about ocean threats related to climate change through hands-on activities and learning experiences in the field. During each session (in-class or after-school as a club), students build an understanding of how climate change impacts our oceans using resources provided by ExplorOcean (hands-on activities, presentations, multi-media). Through a student leadership model, students present lessons to each other, interweaving a deep learning of science, 21st century technology, communication skills, and leadership. Students participate in learning experiences and activities related to 6 key climate change concepts: 1) Introduction to climate change, 2) Increased sea temperatures, 3) Ocean acidification, 4) Sea level rise, 5) Feedback mechanisms, and 6) Innovative solutions. H2O SOS: Operation Climate Change participants then select one focus issue and use it to design a multi-pronged campaign to increase awareness about this issue in their local community. The campaign includes social media, an interactive activity, and a visual component. All participating clubs that meet participation and action goals earn a field trip to ExplorOcean where they dive deeper into their selected issue through hands-on activities, real-world investigations, and interviews or presentations with experts. In addition to self-selected opportunities to showcase their focus issue, teams participate in one of several key events identified by ExplorOcean, including ExplorOcean's annual World Oceans Day Expo.

  20. Research on reactor operators' personality characteristics and performance

    Energy Technology Data Exchange (ETDEWEB)

    Wei Li; He Xuhong; Zhao Bingquan [Tsinghua Univ., Institute of Nuclear Energy Technology, Beijing (China)

    2003-03-01

    Predicting and evaluating reactor operators' performance from personality characteristics is an important part of reactor operator safety assessment. Using related psychological theory, combined with the realities of Chinese operators' work and considering the effect of environmental factors on personality, this paper investigates the relationships between reactor operators' performance and their personality characteristics, and offers a reference for future operator selection, assignment and performance evaluation. (author)

  1. Operating experience with high beta superconducting RF cavities

    International Nuclear Information System (INIS)

    Dylla, H.F.; Doolittle, L.R.; Benesch, J.F.

    1993-01-01

    The number of installed and operational β=1 superconducting rf cavities has grown significantly over the last two years in accelerator laboratories in Europe, Japan and the U.S. The total installed acceleration capability as of mid-1993 is approximately 1 GeV at nominal gradients. Major installations at CERN, DESY, KEK and CEBAF have provided large increments to the installed base and valuable operational experience. A selection of test data and operational experience gathered to date is reviewed

  2. Operating experience with high beta superconducting rf cavities

    International Nuclear Information System (INIS)

    Dylla, H.F.; Doolittle, L.R.; Benesch, J.F.

    1993-06-01

    The number of installed and operational β = 1 superconducting rf cavities has grown significantly over the last two years in accelerator laboratories in Europe, Japan and the US. The total installed acceleration capability as of mid-1993 is approximately 1 GeV at nominal gradients. Major installations at CERN, DESY, KEK and CEBAF have provided large increments to the installed base and valuable operational experience. A selection of test data and operational experience gathered to date is reviewed

  3. Method for evaluating operator inputs to digital controllers

    International Nuclear Information System (INIS)

    Venhuizen, J.R.

    1983-01-01

    Most industrial processes employ operator-interactive control systems. The performance of these control systems is influenced by the choice of control station (device through which operator enters control commands). While the importance of proper control-station selection is widely accepted, standard and simple selection methods are not available for the control station using color-graphics terminals. This paper describes a unique facility for evaluating the effectiveness of various control stations. In the facility, a process is simulated on a hybrid computer, color-graphics display terminals provide information to the operator, and different control stations accept input commands to control the simulation. Tests are being conducted to evaluate a keyboard, a graphics tablet, and a CRT touch panel for use as control stations on a nuclear power plant. Preliminary results indicate that our facility can be used to determine those situations where each type of station is advantageous

  4. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems.

    Science.gov (United States)

    Indiveri, Giacomo

    2008-09-03

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs suppressing non salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals, and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features, to multi-chip systems implementing saliency-map based models of selective attention.

  5. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    Directory of Open Access Journals (Sweden)

    Giacomo Indiveri

    2008-09-01

    Full Text Available Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs, suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features, to multi-chip systems implementing saliency-map-based models of selective attention.

  6. Parameter Selection for Ant Colony Algorithm Based on Bacterial Foraging Algorithm

    Directory of Open Access Journals (Sweden)

    Peng Li

    2016-01-01

    Full Text Available The optimal performance of the ant colony algorithm (ACA) mainly depends on suitable parameters; therefore, parameter selection for ACA is important. We propose a parameter selection method for ACA based on the bacterial foraging algorithm (BFA), considering the effects of coupling between different parameters. Firstly, parameters for ACA are mapped into a multidimensional space, using a chemotactic operator to ensure that each parameter group approaches the optimal value, speeding up the convergence for each parameter set. Secondly, the operation speed for optimizing the entire parameter set is accelerated using a reproduction operator. Finally, the elimination-dispersal operator is used to strengthen the global optimization of the parameters, which avoids falling into a local optimal solution. In order to validate the effectiveness of this method, the results were compared with those using a genetic algorithm (GA) and particle swarm optimization (PSO), and simulations were conducted using different grid maps for robot path planning. The results indicated that parameter selection for ACA based on BFA was the superior method, able to determine the best parameter combination rapidly, accurately, and effectively.
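    As an illustration of the chemotactic operator mentioned above, the following minimal Python sketch performs one tumble-and-swim move over a parameter vector. The step size and swim limit are hypothetical defaults, not values from the paper:

```python
import random

def sphere(v):
    """Toy cost function: sum of squares, minimized at the origin."""
    return sum(x * x for x in v)

def chemotaxis_step(params, cost, step=0.1, max_swims=4):
    """One BFA chemotactic move: tumble to a random unit direction, then
    keep swimming in that direction while the cost keeps improving."""
    direction = [random.gauss(0.0, 1.0) for _ in params]
    norm = sum(x * x for x in direction) ** 0.5
    direction = [x / norm for x in direction]
    best = list(params)
    for _ in range(max_swims):
        candidate = [p + step * u for p, u in zip(best, direction)]
        if cost(candidate) < cost(best):
            best = candidate  # swim: the move paid off, keep going
        else:
            break  # stop swimming; the next call tumbles a new direction
    return best

point = chemotaxis_step([1.0, 1.0], sphere)  # never worse than the start
```

    The reproduction and elimination-dispersal operators would then duplicate the best parameter sets and randomly reseed the worst ones, respectively.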

  7. Minimization over randomly selected lines

    Directory of Open Access Journals (Sweden)

    Ismet Sahin

    2013-07-01

    Full Text Available This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator of this method selects randomly oriented lines in the cost function domain, constructs quadratic functions interpolating the cost function at three different points over each line, and uses the extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in the population, instead of one point as is usual in other evolutionary algorithms. The stopping criterion of this method depends on the number of almost degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
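    The mutated point described above is the extremum of the quadratic interpolating the cost function at three points along a line. A minimal Python sketch of that interpolation step follows (the sampling of random lines and the crossover operator are omitted):

```python
def parabola_extremum(t, f):
    """Return the abscissa of the extremum of the quadratic that
    interpolates the three samples (t[i], f[i]) taken along a line."""
    (t1, t2, t3), (f1, f2, f3) = t, f
    num = (t2 - t1) ** 2 * (f2 - f3) - (t2 - t3) ** 2 * (f2 - f1)
    den = (t2 - t1) * (f2 - f3) - (t2 - t3) * (f2 - f1)
    return t2 - 0.5 * num / den

# cost(x) = (x - 3)^2 sampled at three points along a (1-D) line:
ts = [0.0, 1.0, 4.0]
fs = [(x - 3.0) ** 2 for x in ts]
print(parabola_extremum(ts, fs))  # 3.0, the exact minimizer
```

    Because the toy cost is itself quadratic, the extremum lands exactly on the minimizer; for a general cost it serves as the mutated candidate point.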

  8. An Analysis of the Network Selection Problem for Heterogeneous Environments with User-Operator Joint Satisfaction and Multi-RAT Transmission

    Directory of Open Access Journals (Sweden)

    J. J. Escudero-Garzás

    2017-01-01

    Full Text Available The trend in wireless networks is that several wireless radio access technologies (RATs) coexist in the same area, forming heterogeneous networks in which the users may connect to any of the available RATs. The problem of associating a user with the most suitable RAT, known as the network selection problem (NSP), is of central importance for the satisfaction of the users in these emerging environments. However, the operator's satisfaction is also important in this scenario. In this work, we propose that a connection may be served by more than one RAT by using multi-RAT terminals. We formulate the NSP with multiple RAT association based on utility functions that take into consideration both the user's satisfaction and the provider's satisfaction. As users are characterized according to their expected quality of service, our results exhaustively analyze the influence of the user's profile, along with the network topology and the type of applications served.

  9. Feature selection toolbox software package

    Czech Academy of Sciences Publication Activity Database

    Pudil, Pavel; Novovičová, Jana; Somol, Petr

    2002-01-01

    Roč. 23, č. 4 (2002), s. 487-492 ISSN 0167-8655 R&D Projects: GA ČR GA402/01/0981 Institutional research plan: CEZ:AV0Z1075907 Keywords: pattern recognition * feature selection * floating search algorithms Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.409, year: 2002

  10. Selectivity in a trawl codend during haul-back operation: An overlooked phenomenon

    DEFF Research Database (Denmark)

    Madsen, Niels; Skeide, R.; Breen, M.

    2008-01-01

    ), whiting (Merlangius merlangus) and Norway lobster (Nephrops norvegicus), respectively, the mean percentages escaping at the surface were 16, 12 and 38% of the total escape, while 17, 8 and 28% escaped during the haul-up phase. Compared to towing, the escape rate (no./min) increased for haddock by a factor of 2.7 during haul-up and by a factor of 1.7 at the surface, whereas the escape rates of whiting were similar for the three phases. The escape rate of Norway lobster increased by a factor of approximately 7 for both the haul-up and surface phases, compared to the towing phase. The selectivity parameters L50 (50% retention length) and SR (selection range = L75 - L25) were estimated and compared for the three different phases and for the whole haul for haddock, whiting and Norway lobster. For all three species there was no significant (P > 0.05) difference in L50 between the three phases of the haul...
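    The parameters L50 and SR estimated above define a logistic selection (retention) curve in the parameterization commonly used for trawl gear; the following Python sketch uses hypothetical lengths in cm, not values from the study:

```python
import math

def retention(length, L50, SR):
    """Logistic retention curve parameterized by L50 (50% retention
    length) and the selection range SR = L75 - L25."""
    return 1.0 / (1.0 + math.exp(-2.0 * math.log(3.0) * (length - L50) / SR))

# Hypothetical parameters: L50 = 30 cm, SR = 6 cm.
print(retention(30.0, 30.0, 6.0))  # 0.5 exactly at L50
print(retention(33.0, 30.0, 6.0))  # ~0.75 at L50 + SR/2, i.e. at L75
```

    The factor 2 ln 3 / SR makes the curve pass through 0.25 and 0.75 exactly at L25 and L75, which is what the definition SR = L75 - L25 requires.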

  11. Laboratory Information Systems Management and Operations.

    Science.gov (United States)

    Cucoranu, Ioan C

    2015-06-01

    The main mission of a laboratory information system (LIS) is to manage workflow and deliver accurate results for clinical management. Successful selection and implementation of an anatomic pathology LIS is not complete unless it is complemented by specialized information technology support and maintenance. LIS is required to remain continuously operational with minimal or no downtime and the LIS team has to ensure that all operations are compliant with the mandated rules and regulations. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. The effects of predictor method factors on selection outcomes: A modular approach to personnel selection procedures.

    Science.gov (United States)

    Lievens, Filip; Sackett, Paul R

    2017-01-01

    Past reviews and meta-analyses typically conceptualized and examined selection procedures as holistic entities. We draw on the product design literature to propose a modular approach as a complementary perspective to conceptualizing selection procedures. A modular approach means that a product is broken down into its key underlying components. Therefore, we start by presenting a modular framework that identifies the important measurement components of selection procedures. Next, we adopt this modular lens for reviewing the available evidence regarding each of these components in terms of affecting validity, subgroup differences, and applicant perceptions, as well as for identifying new research directions. As a complement to the historical focus on holistic selection procedures, we posit that the theoretical contributions of a modular approach include improved insight into the isolated workings of the different components underlying selection procedures and greater theoretical connectivity among different selection procedures and their literatures. We also outline how organizations can put a modular approach into operation to increase the variety in selection procedures and to enhance the flexibility in designing them. Overall, we believe that a modular perspective on selection procedures will provide the impetus for programmatic and theory-driven research on the different measurement components of selection procedures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Selection, qualification and training of personnel for nuclear power plants

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    This standard provides criteria for the selection, qualification and training of personnel for stationary nuclear power plants. Qualifications, responsibilities, and training of personnel in operating and support organizations appropriate for the safe and efficient operation of nuclear power plants are addressed

  14. Exploring factors contributing to voluntarily withdrawal by candidates during South African operational forces selection

    CSIR Research Space (South Africa)

    Van Heerden, A

    2016-11-01

    Full Text Available ... suitability and PT tests. Approximately 41% of candidates invited to attend the pre-selection preparation phase are lost. The Pre-selection Preparation phase entails medical and physical as well as psychological (personality and cognitive) measurements...

  15. Operations management for construction

    CERN Document Server

    March, Chris

    2009-01-01

    Students studying construction management and related subjects need to have a broad understanding of the major aspects of controlling the building processes. Operations Management for Construction is one of three textbooks (Business Organisation, Operations Management and Finance Control) written to systematically cover the field. Focusing on construction sites and operations which are challenging to run, Chris March explores issues such as the setting up of the site, the deciding of the methodology of construction, and the sequence of work and resourcing. As changing and increasing regulations affect the way sites are managed, he also considers the issues and methods of successful administering, safety, quality and environment. Finally, the contractor's responsibility to the environment, including relationships with third parties, selection of materials, waste management and sustainability is discussed. Chris March has a wealth of practical experience in the construction industry, as well as considerable exp...

  16. ITER operating limit definition criteria

    International Nuclear Information System (INIS)

    Ciattaglia, S.; Barabaschi, P.; Carretero, J.A.; Chiocchio, S.; Hureau, D.; Girard, J.Ph.; Gordon, C.; Portone, A.; Rodrigo, L. Rodriguez; Roldan, C.; Saibene, G.; Uzan-Elbez, J.

    2009-01-01

    The operating limits and conditions (OLCs) are operating parameters and conditions, chosen among all system/components, which, together, define the domain of the safe operation of ITER in all foreseen ITER states (operation, maintenance, commissioning). At the same time they are selected to guarantee the required operation flexibility which is a critical factor for the success of an experimental machine such as ITER. System and components that are important for personnel or public safety (safety important class, SIC) are identified considering their functional importance in the overall plant safety analysis. SIC classification has to be presented already in the preliminary safety analysis report and approved by the licensing authority before manufacturing and construction. OLCs comprise the safety limits that, if exceeded, could result in a potential safety hazard, the relevant settings that determine the intervention of SIC systems, and the operational limits on equipment which warn against or stop a functional deviation from a planned operational status that could challenge equipment and functions. Some operational conditions, e.g. in-Vacuum Vessel (VV) radioactive inventories, will be controlled through procedures. Operating experience from present tokamaks, in particular JET, and from nuclear plants, is considered to the maximum possible extent. This paper presents the guidelines for the development of the ITER OLCs with particular reference to safety limits.

  17. Operational radiation protection: A guide to optimization

    International Nuclear Information System (INIS)

    1990-01-01

    The purpose of this publication is to provide practical guidance on the application of the dose limitation system contained in the Basic Safety Standards for Radiation Protection to operational situations both in large nuclear installations and in much smaller facilities. It is anticipated that this Guide will be useful to both the management and radiation protection staff of operations in which there is a potential for occupational radiation exposures and to the competent authorities with responsibilities for providing a programme of regulatory control. Contents: Dose limitation system; Optimization and its practical application to operational radiation protection; Major elements of an effective operational radiation protection programme; Review of selected parts of the basic safety standards with special reference to operational radiation protection; Optimization of radiation protection; Techniques for the systematic appraisal of operational radiation protection programmes. Refs and figs

  18. Modeling Operating Modes for the Monju Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Yoshikawa, H.; Jørgensen, Sten Bay

    2012-01-01

    of the process plant, its function and its structural elements. The paper explains how the means-end concepts of MFM can be used to provide formalized definitions of plant operation modes. The paper will introduce the mode types defined by MFM and show how selected operation modes can be represented...

  19. Multi-criteria selection of offshore wind farms: Case study for the Baltic States

    International Nuclear Information System (INIS)

    Chaouachi, Aymen; Covrig, Catalin Felix; Ardelean, Mircea

    2017-01-01

    This paper presents a multi-criteria selection approach for offshore wind sites assessment. The proposed site selection framework takes into consideration the electricity network’s operating security aspects, economic investment, operation costs and capacity performances relative to each potential site. The selection decision is made through Analytic Hierarchy Process (AHP), with an inherited flexibility that aims to allow end users to adjust the expected benefits accordingly to their respective and global priorities. The proposed site selection framework is implemented as an interactive case study for three Baltic States in the 2020 time horizon, based on real data and exhaustive power network models, taking into consideration the foreseen upgrades and network reinforcements. For each country the optimal offshore wind sites are assessed under multiple weight contribution scenarios, reflecting the characteristics of market design, regulatory aspects or renewable integration targets. - Highlights: • We use a multi-criteria selection approach for offshore wind sites assessment. • Security aspects, economic investment, operation costs and capacity performances are included. • The selection decision is made through an Analytic Hierarchy Process (AHP). • We implement the methodology as a case study for three Baltic States in the 2020 time horizon.
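    The AHP step described above derives criterion weights from a pairwise comparison matrix. A minimal Python sketch using the row geometric-mean approximation to the principal eigenvector follows; the comparison values are hypothetical, not taken from the study:

```python
from math import prod

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix via the
    row geometric-mean approximation to the principal eigenvector."""
    n = len(pairwise)
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical comparisons for three criteria: grid security vs.
# investment/operation cost vs. capacity performance.
M = [[1.0, 3.0, 2.0],
     [1.0 / 3.0, 1.0, 0.5],
     [0.5, 2.0, 1.0]]
w = ahp_weights(M)  # security weighted highest, cost lowest
```

    Each candidate offshore site would then be scored against these criteria, and the weighted scores combined into an overall ranking.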

  20. Selection decisions among reindeer herders in Finland

    Directory of Open Access Journals (Sweden)

    Kirsi Muuttoranta

    2011-04-01

    Full Text Available Selection of breeding animals is a tool to improve the revenues in animal production. Information about selection practices and criteria is essential in assessing the possibilities for systematic selection schemes. Attitudes of reindeer herders towards the use of selection in improving production were investigated by means of interviews. We interviewed the managers of reindeer herding cooperatives concerning their selection decisions. Forty-five out of 56 managers answered the semi-structured questionnaire. Among herding operations, selection of breeding animals was regarded by managers as critical for a calf’s autumn weight and survival. The main selection criteria were calf’s health, vigour, body size and muscularity, dam or dam line, and maternal care. Hair quality and hair length were important as well, while such often quoted traits as antler characteristics, e.g. early shedding of antler velvet and thick antler bases, were unimportant. The results show that reindeer herders (i) acknowledge the importance and effects of selective breeding, and (ii) have empirical knowledge to list the most important selection criteria.

  1. Beyond Safe Operating Space: Finding Chemical Footprinting Feasible

    DEFF Research Database (Denmark)

    Posthuma, Leo; Bjørn, Anders; Zijp, Michiel C.

    2014-01-01

    undefined boundary in their selection of planetary boundaries delineating the “safe operating space for humanity”. Can we use the well-known concept of “ecological footprints” to express a chemical pollution boundary aimed at preventing the overshoot of the Earth’s capacity to assimilate environmental... scenarios that allow us to avoid “chemical overshoot” beyond the Earth’s safe operating space...

  2. Automated validation of a computer operating system

    Science.gov (United States)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  3. Economic/operational advantages of top drive installations

    Energy Technology Data Exchange (ETDEWEB)

    Brouse, M. [Tesco Drilling Technology, Houston, TX (United States)

    1996-10-01

    This article addresses specific types of drilling operations, procedures and techniques associated with top drive drilling that create selected and/or specific economic opportunities and justifications for using a rental top drive. Types of drilling operations and/or wells that commonly justify top drive drilling described here include: drilling through sloughing or swelling formations; drilling extended reach, high angle and/or horizontal wells; drilling underbalanced through normal and subnormal formations; and wells with high daily operating costs, time constraints, i.e., days vs. depth, and/or safety and environmental concerns. Top drives, of course, are not justified for every drilling operation. Here, the discussion indicates some situations where portable systems may be applicable.

  4. High-power, cladding-pumped all-fiber laser with selective transverse mode generation property.

    Science.gov (United States)

    Li, Lei; Wang, Meng; Liu, Tong; Leng, Jinyong; Zhou, Pu; Chen, Jinbao

    2017-06-10

    We demonstrate, to the best of our knowledge, the first cladding-pumped all-fiber oscillator configuration with selective transverse mode generation, based on a mode-selective fiber Bragg grating pair. Operating in the second-order (LP11) mode, the laser delivers a maximum output power of 4.2 W with a slope efficiency of about 38%. This is the highest reported output power for single higher-order transverse mode generation in an all-fiber configuration. The intensity distribution profile and spectral evolution are also investigated in this paper. Our work suggests the potential of realizing higher power with selective transverse mode operation based on a mode-selective fiber Bragg grating pair.

  5. Can a nuclear reactor operate for 100 years?

    International Nuclear Information System (INIS)

    Hertel, O.

    2010-01-01

    The TWR (Travelling Wave Reactor) concept was invented in the fifties, then forgotten; it reappeared in 2001 but was considered too immature to be selected for the fourth generation of nuclear reactors. Now the American company Terrapower proposes one, whose design is given in the article. This TWR operates with depleted uranium; only the lower part of the fuel rod contains uranium with a civil enrichment ratio (less than 20%). The lower part of the fuel ignites the fission reaction and enriches the fuel just above it through neutron absorption. The burning part of the fuel thus moves up progressively. The main advantage of this reactor is that it can operate for decades without maintenance or refuelling. The principle is sound on paper but requires substantial technological work to select materials and systems able to withstand decades of operation in harsh conditions. (A.C.)

  6. Prediction of the time-dependent failure rate for normally operating components taking into account the operational history

    International Nuclear Information System (INIS)

    Vrbanic, I.; Simic, Z.; Sljivac, D.

    2008-01-01

    The prediction of the time-dependent failure rate, taking into account the operational history of a component, has been studied for applications such as system modeling in probabilistic safety analysis, in order to evaluate the impact of equipment aging and maintenance strategies on the risk measures considered. We have selected a time-dependent model for the failure rate which is based on the Weibull distribution and the principle of proportional age reduction by equipment overhauls. Estimation of the parameters that determine the failure rate is considered, including the definition of the operational history model and the likelihood function for the Bayesian analysis of parameters of normally operating repairable components. The operational history is provided as a time axis with defined times of overhauls and failures. A demonstration example is described, with prediction of the future behavior for seven different operational histories. (orig.)
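    A minimal Python sketch of a Weibull failure rate combined with one common reading of proportional age reduction by overhauls follows; the parameters are hypothetical, and the paper's model and its Bayesian estimation are more elaborate:

```python
def effective_age(t, overhauls, eps):
    """Proportional age reduction: each overhaul rescales the effective
    age accumulated so far by (1 - eps), with 0 <= eps <= 1."""
    age, last = 0.0, 0.0
    for T in sorted(overhauls):
        if T > t:
            break
        age = (age + (T - last)) * (1.0 - eps)
        last = T
    return age + (t - last)

def weibull_hazard(t_eff, beta, eta):
    """Weibull hazard rate (beta/eta) * (t/eta)**(beta - 1),
    evaluated at the effective age."""
    return (beta / eta) * (t_eff / eta) ** (beta - 1.0)

# Hypothetical parameters: beta > 1 models an aging component; an
# overhaul at t = 8 with eps = 0.6 partially "rejuvenates" it.
beta, eta, eps = 2.0, 10.0, 0.6
rate = weibull_hazard(effective_age(12.0, [8.0], eps), beta, eta)
```

    With these numbers the effective age at t = 12 is 7.2 rather than 12, so the predicted failure rate is correspondingly lower than for a never-overhauled component.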

  7. SNCR technology for NO sub x reduction in the cement industry. [Selective non-catalytic reduction

    Energy Technology Data Exchange (ETDEWEB)

    Kupper, D; Brentrup, L [Krupp Polysius AG, Beckum (Germany)

    1992-03-01

    This article discusses the selective non-catalytic reduction (SNCR) process for reducing nitrogen oxides in exhaust gases from cement plants. Topics covered include operating experience, injection of additives, selection of the additive, operating costs, reduction efficiency of SNCR, capital expenditure, secondary emissions and ammonium cycles. (UK).

  8. Operator models for delivering municipal solid waste management services in developing countries: Part B: Decision support.

    Science.gov (United States)

    Soós, Reka; Whiteman, Andrew D; Wilson, David C; Briciu, Cosmin; Nürnberger, Sofia; Oelz, Barbara; Gunsilius, Ellen; Schwehn, Ekkehard

    2017-08-01

    This is the second of two papers reporting the results of a major study considering 'operator models' for municipal solid waste management (MSWM) in emerging and developing countries. Part A documents the evidence base, while Part B presents a four-step decision support system for selecting an appropriate operator model in a particular local situation. Step 1 focuses on understanding local problems and framework conditions; Step 2 on formulating and prioritising local objectives; and Step 3 on assessing capacities and conditions, and thus identifying strengths and weaknesses, which underpin selection of the operator model. Step 4A addresses three generic questions, including public versus private operation, inter-municipal co-operation and integration of services. For steps 1-4A, checklists have been developed as decision support tools. Step 4B helps choose locally appropriate models from an evidence-based set of 42 common operator models (coms); decision support tools here are a detailed catalogue of the coms, setting out advantages and disadvantages of each, and a decision-making flowchart. The decision-making process is iterative, repeating steps 2-4 as required. The advantages of a more formal process include avoiding pre-selection of a particular com known to and favoured by one decision maker, and also its assistance in identifying the possible weaknesses and aspects to consider in the selection and design of operator models. To make the best of whichever operator models are selected, key issues which need to be addressed include the capacity of the public authority as 'client', management in general and financial management in particular.

  9. Operations Research for Freight Train Routing and Scheduling

    DEFF Research Database (Denmark)

    Harrod, Steven; Gorman, Michael F.

    2011-01-01

    This article describes the service design activities that plan and implement the rail freight operating plan. Elements of strategic service design include the setting of train frequency, the routing of cars among trains, and the consolidation of cars, called blocking. At the operational level, trains are dispatched either according to train paths configured in advance, called timetables, or according to priority rules. We describe the North American and European practice along with selected modeling and problem-solving methodologies appropriate for each of the operating conditions described...

  10. 1st International Symposium and 10th Balkan Conference on Operational Research

    CERN Document Server

    Sifaleras, Angelo; Georgiadis, Christos; Papathanasiou, Jason; Stiakakis, Emmanuil

    2013-01-01

    Over the past two decades, the Balkan Conference on Operational Research (BALCOR) has facilitated the exchange of scientific and technical information on the subject of Operations Research and related fields such as Mathematical Programming, Game Theory, Multiple Criteria Decision Analysis, Information Systems, Data Mining, and more, in order to promote international scientific cooperation. The contributed papers contained in this volume consist of 25 selected research papers based on results presented at the 10th Balkan Conference & 1st International Symposium on Operational Research in Thessaloniki, Greece. Subjects include, but are not restricted to, the development of theory and mathematical models for Operations Research, theory and applications of Combinatorial Optimization, Supply Chain Optimization, and Military Operations Research. These carefully selected papers present important recent developments and modern applications, and will serve as excellent reference for students, researchers, and pr...

  11. OPERATIONAL DISTRIBUTION OF THE TRAIN TRAFFIC VOLUME ON THE SECTIONS OF RAILWAY OPERATING DOMAIN

    Directory of Open Access Journals (Sweden)

    G. Ya. Моzolevich

    2013-10-01

    Full Text Available Purpose. The operational distribution of train traffic volume across the sections of an operating domain is an optimization task, solved under operational conditions by the dispatch station. The article formalizes this problem and seeks new ways to solve it. Methodology. A new approach was proposed for the operational distribution of train traffic volume across the sections of the rail network, with a choice of routes for all train flows. Findings. A study of possible routes for handling train traffic on the operating domain used for mass freight transportation between Krivyi Rih and Donbas was carried out. The proposed method yielded a rational distribution of trains across the rail network sections. Originality. The method of distributing train traffic volume in the network under operational conditions was improved; unlike the current method, it allows the route for handling individual train flows to be selected according to the criterion of the weighted average cost per ton of cargo. Practical value. The proposed technology for the operational distribution of train traffic volume will increase the efficiency of the railways in general and ensure the competitiveness of rail transportation. Implementing the methodology involves equipping dispatch stations with automated workplaces and appropriate informational support.

  12. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  13. Wrapper-based selection of genetic features in genome-wide association studies through fast matrix operations

    Science.gov (United States)

    2012-01-01

    Background: Through the wealth of information contained within them, genome-wide association studies (GWAS) have the potential to provide researchers with a systematic means of associating genetic variants with a wide variety of disease phenotypes. Due to the limitations of approaches that have analyzed single variants one at a time, it has been proposed that the genetic basis of these disorders could be determined through detailed analysis of the genetic variants themselves and in conjunction with one another. The construction of models that account for these subsets of variants requires methodologies that generate predictions based on the total risk of a particular group of polymorphisms. However, due to the excessive number of variants, constructing these types of models has so far been computationally infeasible. Results: We have implemented an algorithm, known as greedy RLS, which we use to perform the first known wrapper-based feature selection on the genome-wide level. The running time of greedy RLS grows linearly in the number of training examples, the number of features in the original data set, and the number of selected features. This speed is achieved through computational short-cuts based on matrix calculus. Since memory consumption in present-day computers can form an even tighter bottleneck than running time, we also developed a space-efficient variation of greedy RLS which trades running time for memory. These approaches are then compared to traditional wrapper-based feature selection implementations based on support vector machines (SVM) to reveal the relative speed-up and to assess the feasibility of the new algorithm. As a proof of concept, we apply greedy RLS to the Hypertension – UK National Blood Service WTCCC dataset and select the most predictive variants using 3-fold external cross-validation in less than 26 minutes on a high-end desktop. On this dataset, we also show that greedy RLS has a better classification performance on independent...
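    Greedy RLS itself relies on matrix-calculus short-cuts and cross-validation-based selection; the toy Python sketch below shows only the generic forward-wrapper idea it accelerates, using plain ridge training error on hypothetical data:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ridge_sse(X, y, cols, lam=1e-6):
    """Train ridge regression on the chosen columns; return training SSE."""
    Z = [[row[c] for c in cols] for row in X]
    d, m = len(cols), len(Z)
    G = [[sum(Z[i][a] * Z[i][b] for i in range(m)) + (lam if a == b else 0.0)
          for b in range(d)] for a in range(d)]
    r = [sum(Z[i][a] * y[i] for i in range(m)) for a in range(d)]
    w = solve(G, r)
    return sum((y[i] - sum(w[a] * Z[i][a] for a in range(d))) ** 2
               for i in range(m))

def greedy_wrapper(X, y, k):
    """Forward wrapper selection: repeatedly add the feature whose
    inclusion gives the lowest error of the retrained model."""
    selected, remaining = [], list(range(len(X[0])))
    for _ in range(k):
        best = min(remaining, key=lambda f: ridge_sse(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: y = 2*feature0 + 3*feature2; feature1 is noise.
X = [[1.0, 0.5, 1.0], [2.0, -0.2, 0.0], [3.0, 0.3, 1.0], [4.0, 0.1, 0.0]]
y = [5.0, 4.0, 9.0, 8.0]
print(greedy_wrapper(X, y, 2))  # [0, 2]
```

    This naive version retrains the model from scratch for every candidate feature; the point of greedy RLS is to make each such evaluation cheap enough, via matrix-calculus updates, to be feasible at the genome-wide scale.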

  14. Development of an operator's mental model acquisition system. 1. Estimation of a physical mental model acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Ikeda, Mitsuru; Mizoguchi, Riichirou [Inst. of Scientific and Industrial Research, Osaka Univ., Ibaraki (Japan)]; Yoshikawa, Shinji; Ozawa, Kenji

    1997-03-01

    This report describes a technical survey of methods for acquiring an operator's understanding of the functions and structures of the target nuclear plant. This method is to play a key role in an information-processing framework that supports operators in training as they form their knowledge of the nuclear plant. The framework aims at enhancing a human operator's ability to cope with anomalous plant situations that are difficult to anticipate from preceding experience or engineering surveillance. In such cases, cause identification and the selection of responding operations should be made not only empirically but also based on reasoning about the phenomena that may take place within the nuclear plant. This report focuses on a particular element technique, defined as 'explanation-based knowledge acquisition', as the candidate technique to be extended to meet the requirement above, and discusses its applicability to the learning support system and the improvements it requires, in order to identify future technical developments. (author)

  15. Volunteerism and socioemotional selectivity in later life.

    Science.gov (United States)

    Hendricks, Jon; Cutler, Stephen J

    2004-09-01

    The goal of this work was to assess the applicability of socioemotional selectivity theory to the realm of volunteerism by analyzing data drawn from the September 2002 Current Population Survey Volunteer Supplement. Total number of organizations volunteered for and total number of hours engaged in volunteer activities were utilized to obtain measures of volunteer hours per organization and volunteer hours in the main organization to determine whether a selective process could be observed. Descriptive statistics on age patterns were followed by a series of curve estimations to identify the best-fitting curves. Logistic age patterns of slowly increasing then relatively stable volunteer activity suggest that socioemotional selectivity processes are operative in the realm of voluntary activities. Socioemotional selectivity theory is applicable to voluntary activities.

  16. Clinical application of an improved utero-operator in the interventional treatment of infertility

    International Nuclear Information System (INIS)

    Huang Yaoming; Zhang Guangfu; Li Detai

    2002-01-01

    Objective: To evaluate the effectiveness of an improved utero-operator in the interventional treatment of infertility due to tubal obstruction, and to compare it with other methods. Methods: One hundred infertile women with tubal obstruction were divided into 3 groups and treated under TV fluoroscopy with 3 different methods, with follow-up examination for up to 24 months. Of the 100 cases, 60 were treated with the improved utero-operator (109 tubes), 20 with the Cook cupped coaxial catheter (36 tubes), and 20 with an emulsoid double-cavity tube (20 tubes). Results: In the improved utero-operator, Cook cupped coaxial catheter, and emulsoid double-cavity tube groups, the success rate of selective catheterization was 92.7%, 80.6% and 80.0%, respectively; the success rate of recanalization was 72.3%, 72.4% and 71.4%, respectively; and the pregnancy rate was 36.4%, 35.7% and 36.4%, respectively. The improved utero-operator had the highest success rate of selective catheterization (χ² = 4.275, P < 0.05). Conclusion: The improved utero-operator achieves a high success rate of selective catheterization in selective salpingography and in the treatment of infertility due to tubal obstruction; it is a simple and stable method that takes less time and requires less X-ray exposure, making it an ideal treatment at present.

  17. Remote operation and maintenance demonstration facility at ORNL

    International Nuclear Information System (INIS)

    Harvey, H.W.; Floyd, S.D.; Kuban, D.P.; Singletary, B.H.; Stradley, J.G.

    1978-01-01

    The Remote Operation and Maintenance Facility is a versatile facility arranged to mock up various hot cell configurations. Modular units of simulated shielding and viewing windows were built to provide flexibility in arrangement. The facility is fully equipped with hoists, manipulators, television, and other basic equipment and services necessary to provide capability for both remote operation and maintenance of several selected functional process equipment groups

  18. Method of operating a nuclear reactor

    International Nuclear Information System (INIS)

    Spurgin, A.J.; Schaefer, W.F.

    1978-01-01

    A method of controlling a nuclear power generating station in the event of a malfunction of particular operating components is described. Upon identification of a malfunction, preselected groups of control rods are fully inserted in sequence until a predetermined power level is approached. Additional control rods are then selectively inserted to quickly bring the reactor to a second given power level compatible with safe operation of the system with the malfunctioning component. While the thermal power output of the reactor is being reduced, the turbine is operated at a rate consistent with the output of the reactor. In the event of a malfunction, the power generating system is thus operated in a turbine-following-reactor mode, with the reactor power rapidly reduced, in a controlled manner, to a safe level compatible with the type of malfunction experienced.
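The two-stage power-reduction sequence in this abstract can be illustrated with a small sketch. The `MockReactor`/`MockTurbine` classes, method names, and all numeric values are invented for illustration only; the patent abstract describes a procedure, not an interface, and real rod-worth dynamics are far more complex.

```python
from dataclasses import dataclass, field

@dataclass
class MockReactor:
    power: float
    group_worth: float = 10.0   # assumed power drop per rod-group insertion
    rod_worth: float = 2.0      # assumed finer drop per single rod
    preselected_rod_groups: list = field(
        default_factory=lambda: ["A", "B", "C", "D"])

    def insert_rod_group(self, group):
        self.power -= self.group_worth

    def insert_rod(self):
        self.power -= self.rod_worth

@dataclass
class MockTurbine:
    load: float = 0.0
    def set_load(self, power):
        self.load = power

def reduce_power_on_malfunction(reactor, turbine, approach_level, safe_level):
    # Stage 1: fully insert preselected rod groups in sequence until
    # the predetermined power level is approached.
    for group in reactor.preselected_rod_groups:
        if reactor.power <= approach_level:
            break
        reactor.insert_rod_group(group)
    # Stage 2: selectively insert additional rods to bring the reactor
    # to the second power level, compatible with safe operation.
    while reactor.power > safe_level:
        reactor.insert_rod()
    # Turbine-following-reactor mode: turbine output tracks the reactor.
    turbine.set_load(reactor.power)
```

The sketch only captures the control-flow structure of the claim: coarse sequential insertion first, then fine selective insertion, with the turbine following the reactor's output.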

  19. Solvent selection methodology for pharmaceutical processes: Solvent swap

    DEFF Research Database (Denmark)

    Papadakis, Emmanouil; Kumar Tula, Anjan; Gani, Rafiqul

    2016-01-01

    A method for the selection of appropriate solvents for the solvent swap task in pharmaceutical processes has been developed. This solvent swap method is based on the solvent selection method of Gani et al. (2006) and considers additional selection criteria such as boiling point difference...... in pharmaceutical processes as well as new solvent swap alternatives. The method takes into account process considerations such as batch distillation and crystallization to achieve the swap task. Rigorous model based simulations of the swap operation are performed to evaluate and compare the performance...

  20. System of selective disemination of information at ININ

    International Nuclear Information System (INIS)

    Martinez G, M.A.

    1981-01-01

    A study of systems of selective dissemination of information (SDI) is presented: the concepts behind such systems are outlined, their development in advanced countries is traced, and their forms of operation and implications for special libraries are discussed. The operation of INIS at the CIDN (Centro de Informacion y Documentacion Nuclear) is also presented, together with recommendations and conclusions for improving the development of this service in Mexico. (author)