WorldWideScience

Sample records for selection method based

  1. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interface (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
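
    The following is a minimal sketch of such a pipeline in scikit-learn, with synthetic data standing in for extracted EEG features; the component counts and subset size are illustrative assumptions, not the authors' settings. PCA extracts components, a decision tree's importances select the useful ones, and an SVM classifies on the reduced subset.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      # Synthetic stand-in for feature vectors extracted from EEG trials.
      X, y = make_classification(n_samples=300, n_features=64, n_informative=10, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      pca = PCA(n_components=20).fit(X_tr)                      # feature extraction step
      Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

      tree = DecisionTreeClassifier(random_state=0).fit(Z_tr, y_tr)
      keep = np.argsort(tree.feature_importances_)[::-1][:8]   # components the tree found useful

      svm = SVC(kernel="rbf").fit(Z_tr[:, keep], y_tr)         # classify on the selected subset
      print("test accuracy:", round(svm.score(Z_te[:, keep], y_te), 3))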

  2. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and on comparison with the valuation that the company considers the best in each competence (the ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
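
    As a small numeric illustration of the Hamming-distance core of such a ranking (the competences, weights and scores below are invented; the Matching Level Index and the StaffDesigner internals are not reproduced):

      import numpy as np

      # Hypothetical competence profiles on a 0-1 scale.
      ideal      = np.array([0.9, 0.8, 1.0, 0.7])          # company's ideal candidate
      candidates = {"A": np.array([0.8, 0.9, 0.7, 0.6]),
                    "B": np.array([0.9, 0.6, 0.9, 0.8])}
      weights    = np.array([0.4, 0.2, 0.3, 0.1])          # importance of each competence

      # Weighted Hamming distance: smaller = closer to the ideal profile.
      dist = lambda c: float(weights @ np.abs(candidates[c] - ideal))
      for name in sorted(candidates, key=dist):
          print(name, round(dist(name), 3))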

  3. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    Full Text Available This paper describes a case study of supplier selection based on the multi-criteria Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can bring an “unprejudiced” conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
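
    A sketch of the standard AHP weighting step (the pairwise comparison values are hypothetical, and the aggregation over alternatives is omitted): priorities come from the principal eigenvector of the comparison matrix, and the consistency ratio checks the judgements.

      import numpy as np

      # Hypothetical pairwise comparison matrix for three criteria (Saaty 1-9 scale).
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = int(np.argmax(eigvals.real))                # principal eigenvalue
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                                    # normalised priority weights

      CI = (eigvals[k].real - len(A)) / (len(A) - 1)  # consistency index
      CR = CI / 0.58                                  # random index for n = 3 is 0.58
      print("weights:", w.round(3), "CR:", round(CR, 3))  # CR < 0.1 => acceptably consistent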

  4. Personnel Selection Method Based on Personnel-Job Matching

    OpenAIRE

    Li Wang; Xilin Hou; Lili Zhang

    2013-01-01

    The existing personnel selection decisions in practice are based on the evaluation of a job seeker's human capital, which may make it difficult to achieve personnel-job matching that satisfies each party. Therefore, this paper puts forward a new personnel selection method that takes bilateral matching into consideration. Starting from the employment notion of “satisfaction”, a satisfaction evaluation indicator system for each party is constructed. The multi-objective optimization model is given according to ...

  5. A fuzzy logic based PROMETHEE method for material selection problems

    Directory of Open Access Journals (Sweden)

    Muhammet Gul

    2018-03-01

    Full Text Available Material selection is a complex problem in the design and development of products for diverse engineering applications. This paper presents a fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) method based on trapezoidal fuzzy interval numbers that can be applied to the selection of materials for an automotive instrument panel. It also makes a significant contribution to the literature in terms of applying a fuzzy decision-making approach to material selection problems. The method is illustrated, validated, and compared against three different fuzzy MCDM methods (fuzzy VIKOR, fuzzy TOPSIS, and fuzzy ELECTRE) in terms of its ranking performance. The relationships between the compared methods and the proposed scenarios for fuzzy PROMETHEE are also evaluated via Spearman's correlation coefficient. Styrene Maleic Anhydride and Polypropylene are identified as suitable candidate materials for the automotive instrument panel case. We propose a generic fuzzy MCDM methodology that can be practically implemented for material selection problems. Its main advantages are the handling of vagueness, uncertainty, and fuzziness in the decision-making environment.
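
    The trapezoidal fuzzy arithmetic is beyond a short sketch, but the crisp PROMETHEE II core that the paper extends can be shown compactly (the decision matrix, weights and the "usual" preference function below are illustrative assumptions):

      import numpy as np

      # Hypothetical decision matrix: rows = materials, columns = criteria (all maximised here).
      X = np.array([[7.0, 0.30, 5.0],
                    [6.0, 0.45, 4.0],
                    [8.0, 0.20, 3.0]])
      w = np.array([0.5, 0.3, 0.2])                # criterion weights, summing to 1

      n = len(X)
      phi = np.zeros(n)                            # net outranking flows
      for a in range(n):
          for b in range(n):
              if a == b:
                  continue
              pref = (X[a] > X[b]).astype(float)   # "usual" preference function
              pi_ab = w @ pref                     # aggregated preference index pi(a, b)
              phi[a] += pi_ab / (n - 1)            # contributes to a's positive flow
              phi[b] -= pi_ab / (n - 1)            # ... and to b's negative flow
      print("net flows:", phi.round(3))            # higher net flow = better rank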

  6. FEATURE SELECTION METHODS BASED ON MUTUAL INFORMATION FOR CLASSIFYING HETEROGENEOUS FEATURES

    Directory of Open Access Journals (Sweden)

    Ratri Enggar Pawening

    2016-06-01

    Full Text Available Datasets with heterogeneous features can yield inappropriate feature selection results, because it is difficult to evaluate heterogeneous features concurrently. Feature transformation (FT) is one way to handle heterogeneous feature subset selection, but transforming non-numerical features into numerical ones may introduce redundancy with the original numerical features. In this paper, we propose a method to select a feature subset based on mutual information (MI) for classifying heterogeneous features. We use an unsupervised feature transformation (UFT) method and the joint mutual information maximisation (JMIM) method. UFT is used to transform non-numerical features into numerical features, and JMIM is used to select the feature subset with consideration of the class label. The transformed and the original features are combined, the feature subset is then determined using JMIM, and classification is performed with the support vector machine (SVM) algorithm. The classification accuracy is measured for each number of selected features and compared between the UFT-JMIM method and a Dummy-JMIM method. The average classification accuracy over all experiments in this study is about 84.47% for UFT-JMIM and about 84.24% for Dummy-JMIM. This result shows that UFT-JMIM can minimize information loss between transformed and original features and select a feature subset that avoids redundant and irrelevant features.
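
    A compact sketch of the JMIM criterion on discrete toy data (feature values and the two-feature target are invented; the UFT transformation step is not reproduced): each new feature maximises the minimum joint mutual information with the already-selected features and the class.

      import numpy as np
      from sklearn.metrics import mutual_info_score

      def joint_mi(xa, xb, y):
          """I((Xa, Xb); Y) for discrete variables, encoding the pair as one label."""
          pair = [f"{u}|{v}" for u, v in zip(xa, xb)]
          return mutual_info_score(pair, y)

      rng = np.random.default_rng(0)
      X = rng.integers(0, 3, size=(500, 6))        # toy discrete features
      y = 3 * X[:, 0] + X[:, 2]                    # class depends on features 0 and 2

      selected = [int(np.argmax([mutual_info_score(X[:, j], y) for j in range(X.shape[1])]))]
      while len(selected) < 2:
          rest = [j for j in range(X.shape[1]) if j not in selected]
          scores = [min(joint_mi(X[:, j], X[:, s], y) for s in selected) for j in rest]
          selected.append(rest[int(np.argmax(scores))])
      print("selected features:", selected)        # -> [0, 2]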

  7. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
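
    A toy sketch of one of these pieces, min-max vector selection feeding an AAKR estimate (data, bandwidth and sizes are invented; in practice min-max is usually combined with the other sampling schemes to enlarge the prototype set):

      import numpy as np

      rng = np.random.default_rng(1)
      train = rng.normal(size=(1000, 4))           # historical observation vectors

      # Min-max selection: keep every vector holding the minimum or maximum of
      # some signal, so the prototypes bound the observed operating range.
      idx = sorted(set(np.argmin(train, axis=0)) | set(np.argmax(train, axis=0)))
      prototypes = train[idx]

      def aakr(query, proto, h=1.0):
          """Auto-associative kernel regression: Gaussian-weighted prototype average."""
          d2 = ((proto - query) ** 2).sum(axis=1)
          w = np.exp(-d2 / (2 * h ** 2))
          return (w[:, None] * proto).sum(axis=0) / w.sum()

      print(aakr(train[0], prototypes).round(3))   # "corrected" estimate of the query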

  8. A Robust Service Selection Method Based on Uncertain QoS

    Directory of Open Access Journals (Sweden)

    Yanping Chen

    2016-01-01

    Full Text Available Nowadays, the number of Web services on the Internet is quickly increasing, and different service providers offer numerous services with similar functions. Quality of Service (QoS) has become an important factor used to select the most appropriate service for users. The most prominent QoS-based service selection models take only deterministic attributes into account, which is an idealized assumption; in the real world there are a large number of uncertain factors, and in particular, at runtime, QoS may become very poor or unacceptable. In order to solve this problem, a global service selection model based on uncertain QoS was proposed, including the corresponding normalization and aggregation functions, and a robust optimization model was then adopted to transform it. Experiment results show that the proposed method can effectively select services with high robustness and optimality.

  9. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.

    Science.gov (United States)

    Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu

    2016-09-22

    Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI) environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods and the comparison with the similar strategies, based on the F-score, was given. Strategies to evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a contribution to the
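
    A simplified two-class Relief weight update conveys the idea behind ReliefF channel scoring (toy data; ReliefF itself averages over k neighbours and handles multiple classes, and here whole features stand in for EEG channels):

      import numpy as np

      def relief(X, y, passes=200, seed=0):
          """Reward features differing at the nearest miss, penalise those
          differing at the nearest hit (ReliefF generalises this)."""
          rng = np.random.default_rng(seed)
          w = np.zeros(X.shape[1])
          for i in rng.integers(0, len(X), size=passes):
              dist = np.abs(X - X[i]).sum(axis=1)
              dist[i] = np.inf                     # never match the sample itself
              hit  = np.argmin(np.where(y == y[i], dist, np.inf))
              miss = np.argmin(np.where(y != y[i], dist, np.inf))
              w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
          return w / passes

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 5))
      y = np.repeat([0, 1], 100)
      X[:, 2] += 2 * y                             # only "channel" 2 carries class signal
      print(relief(X, y).round(2))                 # weight of feature 2 dominates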

  10. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Jianhai Zhang

    2016-09-01

    Full Text Available Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI) environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods and the comparison with the similar strategies, based on the F-score, was given. Strategies to evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels’ weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a

  11. A Dynamic and Adaptive Selection Radar Tracking Method Based on Information Entropy

    Directory of Open Access Journals (Sweden)

    Ge Jianjun

    2017-12-01

    Full Text Available Nowadays, the battlefield environment has become much more complex and variable. Based on the principle of information entropy, this paper presents a quantitative measure, with a lower bound, of the amount of target information acquired from multiple radar observations, so that battlefield detection resources can be organized adaptively and dynamically. Furthermore, to minimize this lower bound on the information entropy of the target measurement at every moment, a method is proposed that dynamically and adaptively selects the radars carrying the largest amount of information for target tracking. The simulation results indicate that the proposed method achieves higher tracking accuracy than tracking without entropy-based adaptive radar selection.
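
    The selection rule can be sketched with Gaussian measurement models (the radar noise covariances below are invented): the radar whose measurement distribution has the lowest differential entropy contributes the most information to the track at that instant.

      import numpy as np

      # Hypothetical measurement-noise covariances (smaller = more informative).
      radars = {"R1": np.diag([4.0, 4.0]), "R2": np.diag([1.0, 9.0]), "R3": np.diag([2.0, 2.0])}

      def gaussian_entropy(cov):
          """Differential entropy of a k-dim Gaussian: 0.5*ln((2*pi*e)^k * |cov|)."""
          k = cov.shape[0]
          return 0.5 * np.log(((2 * np.pi * np.e) ** k) * np.linalg.det(cov))

      best = min(radars, key=lambda r: gaussian_entropy(radars[r]))
      print(best)                                  # -> R3, the lowest-entropy radar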

  12. A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2016-02-01

    Full Text Available Currently, with data scales in network traffic classification increasing rapidly, how to select traffic features efficiently is becoming a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, their execution time remains unsatisfactory because of the many iterative computations involved. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed to generate candidate subsets. The optimal feature subset is then selected using the iterative computations of the Spark framework. The implementation demonstrates that, while preserving classification accuracy, our method reduces the time cost of modeling and classification and significantly improves the execution efficiency of feature selection.
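
    The Fisher-score preprocessing step is easy to show in plain NumPy (the Spark parallelisation and the forward search are omitted; data are synthetic):

      import numpy as np

      def fisher_score(X, y):
          """Per-feature ratio of between-class to within-class variance."""
          mu = X.mean(axis=0)
          num = np.zeros(X.shape[1]); den = np.zeros(X.shape[1])
          for c in np.unique(y):
              Xc = X[y == c]
              num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
              den += len(Xc) * Xc.var(axis=0)
          return num / den

      rng = np.random.default_rng(0)
      X = rng.normal(size=(400, 6))
      y = rng.integers(0, 2, 400)
      X[:, 1] += 3 * y                             # make feature 1 class-dependent
      print(np.argsort(fisher_score(X, y))[::-1])  # feature 1 ranks first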

  13. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Full Text Available Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form; for such data sets, an exhaustive search over all combinations of features is a prerequisite for finding the optimal feature subsets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
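
    The XOR toy case below illustrates both claims on synthetic data (not the paper's datasets): each parent feature alone has mutual information near zero with the class, yet the joint mutual information of the pair reaches H(Y), making the pair a Markov blanket.

      import numpy as np
      from sklearn.metrics import mutual_info_score

      def entropy(labels):
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log(p)).sum())

      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, size=(2000, 4))
      y = X[:, 0] ^ X[:, 1]                        # XOR of features 0 and 1

      def joint(cols):
          return ["".join(str(v) for v in row) for row in X[:, cols]]

      print(round(mutual_info_score(joint([0]), y), 3))     # ~0: single feature looks useless
      print(round(mutual_info_score(joint([0, 1]), y), 3),  # equals H(y) = ln 2 ~ 0.693,
            round(entropy(y), 3))                           # so {X0, X1} is a Markov blanket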

  14. Absolute cosine-based SVM-RFE feature selection method for prostate histopathological grading.

    Science.gov (United States)

    Sahran, Shahnorbanun; Albashish, Dheeb; Abdullah, Azizi; Shukor, Nordashima Abd; Hayati Md Pauzi, Suria

    2018-04-18

    Feature selection (FS) methods are widely used in grading and diagnosing prostate histopathological images. In this context, FS is based on the texture features obtained from the lumen, nuclei, cytoplasm and stroma, all of which are important tissue components. However, it is difficult to represent the high-dimensional textures of these tissue components. To solve this problem, we propose a new FS method that enables the selection of features with minimal redundancy in the tissue components. We categorise tissue images based on the texture of individual tissue components via the construction of a single classifier and also construct an ensemble learning model by merging the values obtained by each classifier. Another issue that arises is overfitting due to the high-dimensional texture of individual tissue components. We propose a new FS method, SVM-RFE(AC), that integrates a Support Vector Machine-Recursive Feature Elimination (SVM-RFE) embedded procedure with an absolute cosine (AC) filter method to prevent redundancy in the features selected by the SVM-RFE and an unoptimised classifier in the AC. We conducted experiments on H&E histopathological prostate and colon cancer images with respect to three prostate classifications, namely benign vs. grade 3, benign vs. grade 4 and grade 3 vs. grade 4. The colon benchmark dataset requires a distinction between grades 1 and 2, which are the most difficult cases to distinguish in the colon domain. The results obtained by both the single and ensemble classification models (which use the product rule as the merging method) confirm that the proposed SVM-RFE(AC) is superior to the other SVM and SVM-RFE-based methods. We developed an FS method based on SVM-RFE and AC and successfully showed that its use enabled the identification of the most crucial texture feature of each tissue component. Thus, it makes possible the distinction between multiple Gleason grades (e.g. grade 3 vs. grade 4) and its performance is far superior to
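
    A rough sketch of the two ingredients with scikit-learn (synthetic data; the threshold, subset sizes and the ensemble construction are illustrative assumptions, not the authors' settings): an absolute-cosine filter removes mutually redundant features, then SVM-RFE recursively eliminates low-weight ones.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import RFE
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=30, random_state=0)

      # Absolute-cosine filter: greedily keep features whose |cosine similarity|
      # to every already-kept feature stays below a redundancy threshold.
      Xc = X - X.mean(axis=0)
      U = Xc / np.linalg.norm(Xc, axis=0)
      keep = []
      for j in range(X.shape[1]):
          if all(abs(U[:, j] @ U[:, k]) < 0.8 for k in keep):
              keep.append(j)

      # SVM-RFE on the non-redundant features.
      rfe = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X[:, keep], y)
      print([keep[i] for i in np.where(rfe.support_)[0]])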

  15. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combined uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature...... and considering all CV groups, the methods selected 36% of the original features available. The diagnosis evaluation reached a generalization area-under-the-ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis.

  16. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    International Nuclear Information System (INIS)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    2008-01-01

    There are many speech emotion features. If all of these features are employed to recognize emotions, redundant features may exist; furthermore, the recognition results may be unsatisfactory and the cost of feature extraction high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected by the contribution analysis algorithm of the NN from the 95 extracted features. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction.

  17. A kernel-based multivariate feature selection method for microarray data classification.

    Directory of Open Access Journals (Sweden)

    Shiquan Sun

    Full Text Available High dimensionality and small sample sizes, and their inherent risk of overfitting, pose great challenges for constructing efficient classifiers in microarray data classification. Therefore, a feature selection technique should be applied prior to data classification to enhance prediction performance. In general, filter methods can be considered as principal or auxiliary selection mechanisms because of their simplicity, scalability, and low computational complexity. However, a series of trivial examples show that filter methods result in less accurate performance because they ignore the dependencies of features. Although a few publications have devoted their attention to revealing the relationships of features by multivariate-based methods, these methods describe relationships among features only in linear ways, and simple linear combination relationships restrict the improvement in performance. In this paper, we used a kernel method to discover inherent nonlinear correlations among features as well as between feature and target. Moreover, the number of orthogonal components was determined by kernel Fisher's linear discriminant analysis (FLDA) in a self-adaptive manner rather than by manual parameter settings. In order to reveal the effectiveness of our method we performed several experiments and compared the results between our method and other competitive multivariate-based feature selectors. In our comparison, we used two classifiers (support vector machine and k-nearest neighbor) on two groups of datasets, namely two-class and multi-class datasets. Experimental results demonstrate that the performance of our method is better than the others, especially on three hard-to-classify datasets, namely Wang's Breast Cancer, Gordon's Lung Adenocarcinoma and Pomeroy's Medulloblastoma.

  18. A Method for the Seamlines Network Automatic Selection Based on Building Vector

    Science.gov (United States)

    Li, P.; Dong, Y.; Hu, Y.; Li, X.; Tan, P.

    2018-04-01

    In order to improve the efficiency of large-scale urban orthophoto production, this paper presents a method for the automatic selection of the seamlines network in a large-scale orthophoto based on building vectors. Firstly, a simple model of each building is built by combining the building's vector, its height and the DEM, and the imaging area of the building on a single DOM is obtained. Then, the initial Voronoi network of the measurement area is automatically generated based on the bottom positions of all images. Finally, the final seamlines network is obtained by automatically optimizing all nodes and seamlines in the network based on the imaging areas of the buildings. The experimental results show that the proposed method not only routes the seamlines network around buildings quickly, but also retains the Voronoi network's minimum-projection-distortion property, which effectively solves the problem of automatic seamlines network selection in image mosaicking.

  19. Systematic differences in the response of genetic variation to pedigree and genome-based selection methods

    NARCIS (Netherlands)

    Heidaritabar, M.; Vereijken, A.; Muir, W.M.; Meuwissen, T.H.E.; Cheng, H.; Megens, H.J.W.C.; Groenen, M.; Bastiaansen, J.W.M.

    2014-01-01

    Genomic selection (GS) is a DNA-based method of selecting for quantitative traits in animal and plant breeding, and offers a potentially superior alternative to traditional breeding methods that rely on pedigree and phenotype information. Using a 60 K SNP chip with markers spaced throughout the entire chicken genome, we compared the impact of GS and traditional BLUP (best linear unbiased prediction) selection methods applied side-by-side in three different lines of egg-laying chickens.

  20. A Feature Selection Method Based on Fisher's Discriminant Ratio for Text Sentiment Classification

    Science.gov (United States)

    Wang, Suge; Li, Deyu; Wei, Yingjie; Li, Hongxia

    With the rapid growth of e-commerce, product reviews on the Web have become an important information source for customers' decision making when they intend to buy a product. As the reviews are often too many for customers to go through, how to automatically classify them into different sentiment orientation categories (i.e. positive/negative) has become a research problem. In this paper, an effective feature selection method based on Fisher's discriminant ratio is proposed for product review text sentiment classification. In order to validate the proposed method, we compared it with methods based on information gain and mutual information, respectively, with a support vector machine adopted as the classifier. Six subexperiments are conducted by combining the different feature selection methods with two kinds of candidate feature sets. On 1006 car review documents, the experimental results indicate that Fisher's discriminant ratio based on word frequency estimation performs best, with an F value of 83.3%, when the candidate features are the words that appear in both positive and negative texts.
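
    Fisher's discriminant ratio for a single term feature is one line of arithmetic; the toy scoring below uses invented term-frequency data rather than the car-review corpus:

      import numpy as np

      def fdr(x_pos, x_neg):
          """(difference of class means)^2 / (sum of class variances)."""
          return (x_pos.mean() - x_neg.mean()) ** 2 / (x_pos.var() + x_neg.var())

      rng = np.random.default_rng(0)
      # Toy term frequencies: 100 positive and 100 negative reviews, 4 candidate terms.
      pos = rng.poisson([3.0, 1.0, 1.0, 2.0], size=(100, 4)).astype(float)
      neg = rng.poisson([1.0, 1.0, 3.0, 2.0], size=(100, 4)).astype(float)

      scores = [fdr(pos[:, j], neg[:, j]) for j in range(4)]
      print(np.argsort(scores)[::-1])              # terms 0 and 2 discriminate best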

  1. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values rather than simply deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listing methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing models. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
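
    The three foci map onto a short scikit-learn pipeline; the sketch below substitutes median imputation for the five methods compared and Random-Forest importances for the factor-analysis step, on synthetic data:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.impute import SimpleImputer

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 8))                          # stand-in atmospheric variables
      y = 2 * X[:, 0] + X[:, 3] + rng.normal(0, 0.1, 500)    # stand-in water level
      X[rng.random(X.shape) < 0.05] = np.nan                 # 5% missing values

      X_imp = SimpleImputer(strategy="median").fit_transform(X)       # focus 1: imputation
      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_imp, y)
      keep = np.argsort(rf.feature_importances_)[::-1][:3]            # focus 2: selection
      rf2 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_imp[:, keep], y)
      print("selected:", keep, "R^2:", round(rf2.score(X_imp[:, keep], y), 3))  # focus 3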

  2. Selection of boron based tribological hard coatings using multi-criteria decision making methods

    International Nuclear Information System (INIS)

    Çalışkan, Halil

    2013-01-01

    Highlights: • Boron based coating selection problem for cutting tools was solved. • EXPROM2, TOPSIS and VIKOR methods were used for ranking the alternative materials. • The best coatings for cutting tool were selected as TiBN and TiSiBN. • The ranking results are in good agreement with cutting test results in literature. - Abstract: Mechanical and tribological properties of hard coatings can be enhanced using boron as an alloying element. Therefore, multicomponent nanostructured boron-based hard coatings are deposited on cutting tools by different methods at different parameters. Different mechanical and tribological properties are obtained after deposition, and it is a difficult task to select the best coating material. In this paper, therefore, a systematic evaluation model is proposed to tackle the difficulty of selecting the material with specific properties among a set of available alternatives. The alternatives consist of multicomponent nanostructured TiBN, TiCrBN, TiSiBN and TiAlSiBN coatings deposited by magnetron sputtering and ion-implantation-assisted magnetron sputtering at different parameters. The alternative coating materials were ranked by using three multi-criteria decision-making (MCDM) methods, i.e. EXPROM2 (extended preference ranking organization method for enrichment evaluation), TOPSIS (technique for order performance by similarity to ideal solution) and VIKOR (VIšekriterijumsko KOmpromisno Rangiranje), in order to determine the best coating material for cutting tools. Hardness (H), Young's modulus (E), elastic recovery, friction coefficient, critical load, H/E and H^3/E^2 ratios were considered as material selection criteria. In order to determine the importance weights of the evaluation criteria, a compromise weighting method composed of the analytic hierarchy process and entropy methods was used. The ranking results showed that TiBN and TiSiBN coatings deposited at the given parameters are the best coatings for cutting tools
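
    Of the three rankers, TOPSIS is the most compact to sketch (the decision matrix, weights and benefit/cost split below are invented, not the paper's measured coating data):

      import numpy as np

      # Hypothetical decision matrix: rows = coatings, columns = criteria.
      X = np.array([[30.0, 400.0, 0.45],
                    [25.0, 380.0, 0.60],
                    [28.0, 350.0, 0.50]])
      w = np.array([0.5, 0.3, 0.2])                  # criterion weights
      benefit = np.array([True, True, False])        # e.g. hardness up, friction down

      V = w * X / np.linalg.norm(X, axis=0)          # weighted normalised matrix
      ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
      worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
      d_pos = np.linalg.norm(V - ideal, axis=1)
      d_neg = np.linalg.norm(V - worst, axis=1)
      closeness = d_neg / (d_pos + d_neg)            # closer to 1 = better alternative
      print(np.argsort(closeness)[::-1])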

  3. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Full Text Available Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional, due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of the variables which contribute to group classification is a crucial step, especially in metabolomics studies focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (RH and PH study). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA) without and with multiple testing correction, as well as the least absolute shrinkage and selection operator (LASSO), were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on the VIP criterion using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on the VIP criterion using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant in terms of Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on the VIP criterion in terms of Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of results, were demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables with reproducibility between 99.3% and 100%. However, despite the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings
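
    The LASSO-with-bootstrap idea can be sketched in a few lines of scikit-learn (synthetic data; the regularisation strength and replicate count are illustrative): selection frequency across bootstrap replicates measures reproducibility.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.utils import resample

      rng = np.random.default_rng(0)
      X = rng.normal(size=(80, 20))                  # toy metabolomics-style matrix
      y = X[:, 0] - 2 * X[:, 5] + rng.normal(0, 0.5, 80)

      B, counts = 200, np.zeros(X.shape[1])
      for b in range(B):
          Xb, yb = resample(X, y, random_state=b)    # bootstrap replicate
          counts += Lasso(alpha=0.1).fit(Xb, yb).coef_ != 0
      freq = counts / B                              # per-variable selection frequency
      print(np.where(freq > 0.9)[0])                 # stable variables: 0 and 5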

  4. The MCDM Model for Personnel Selection Based on SWARA and ARAS Methods

    Directory of Open Access Journals (Sweden)

    Darjan Karabasevic

    2015-05-01

    Full Text Available Competent employees are the key resource in an organization for achieving success and, therefore, competitiveness on the market. The aim of the recruitment and selection process is to acquire personnel with the competencies required for a particular position within the company. Bearing in mind the fact that decision-makers often underuse formal decision-making methods, this paper aims to establish an MCDM model for the evaluation and selection of candidates in the process of the recruitment and selection of personnel based on the SWARA and ARAS methods. Apart from providing an MCDM model, the paper additionally provides a set of evaluation criteria for the position of a sales manager (middle management) in the telecommunication industry, which is also used in the numerical example. As the numerical example shows, the proposed MCDM model can be successfully used for selecting candidates in the process of employment.

  5. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir’s water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values rather than simply deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir’s water level, which is compared with the listing methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing models. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.

  6. A method for selection of spent nuclear fuel (SNF) transportation route considering socioeconomic cost based on contingent valuation method (CVM)

    International Nuclear Information System (INIS)

    Kim, Young Sik

    2008-02-01

    Transportation of SNF may cause additional radiation exposure to human beings, which means that the radiological risk should be estimated and managed quantitatively for the public living near the shipment route. In the existing method, the route for transporting SNF is generally selected based only on the radiological risk estimated with the RADTRAN code. However, the radiological health risk is not the only impact of transportation; there are also socioeconomic impacts related to cost. In this study, a new method, with its numerical formula, is proposed for route selection based on cost estimation, since transporting SNF incurs several kinds of cost. The total cost consists of the radiological health cost, the transportation cost, and the socioeconomic cost. Each cost is defined according to the characteristics of SNF transportation, and the many coefficients and variables describing each cost are obtained or estimated through surveys. In particular, the socioeconomic cost is estimated using the contingent valuation method (CVM) with a questionnaire. The socioeconomic cost estimation is the most important part of the total cost originated from transporting SNF because it dominates the total cost. The proposed method can reasonably support route selection for SNF transportation, and unnecessary or exhausting controversies about the shipments could be avoided

  7. An Entropy-based gene selection method for cancer classification using microarray data

    Directory of Open Access Journals (Sweden)

    Krishnan Arun

    2005-03-01

    Full Text Available Abstract Background Accurate diagnosis of cancer subtypes remains a challenging problem. Building classifiers based on gene expression data is a promising approach; yet the selection of non-redundant but relevant genes is difficult. The selected gene set should be small enough to allow diagnosis even in regular clinical laboratories and ideally identify genes involved in cancer-specific regulatory pathways. Here an entropy-based method is proposed that selects genes related to the different cancer classes while at the same time reducing the redundancy among the genes. Results The present study identifies a subset of features by maximizing the relevance and minimizing the redundancy of the selected genes. A merit called normalized mutual information is employed to measure the relevance and the redundancy of the genes. In order to find a more representative subset of features, an iterative procedure is adopted that incorporates an initial clustering followed by data partitioning and the application of the algorithm to each of the partitions. A leave-one-out approach then selects the most commonly selected genes across all the different runs and the gene selection algorithm is applied again to pare down the list of selected genes until a minimal subset is obtained that gives a satisfactory accuracy of classification. The algorithm was applied to three different data sets and the results obtained were compared to work done by others using the same data sets. Conclusion This study presents an entropy-based iterative algorithm for selecting genes from microarray data that are able to classify various cancer sub-types with high accuracy. In addition, the feature set obtained is very compact, that is, the redundancy between genes is reduced to a large extent. This implies that classifiers can be built with a smaller subset of genes.

  8. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    Science.gov (United States)

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the “-server” parameter de/activated, altogether 12,800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used that allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.

  9. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    Science.gov (United States)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.

  10. The multi-objective decision making methods based on MULTIMOORA and MOOSRA for the laptop selection problem

    Science.gov (United States)

    Aytaç Adalı, Esra; Tuş Işık, Ayşegül

    2017-06-01

    A decision-making process requires the values of conflicting objectives for the alternatives and the selection of the best alternative according to the needs of decision makers. Multi-objective optimization methods may provide a solution for this selection. This paper presents the laptop selection problem based on MULTIMOORA (MOORA plus the full multiplicative form) and MOOSRA (multi-objective optimization on the basis of simple ratio analysis), which are relatively new multi-objective optimization methods. The novelty of this paper is solving this problem with the MULTIMOORA and MOOSRA methods for the first time.

  11. A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Aiqian Zhang

    2012-05-01

    Full Text Available A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones, and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross-validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for the regulatory acceptability of QSARs were fully considered: an unambiguous multiple-linear regression (MLR) algorithm was used to build the model, several validation methods were used to assess its performance, the applicability domain was defined, and the outliers were analysed with the results of molecular docking. The performance of the QSAR model indicates that the VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from large sets of molecular descriptors.

  12. Hybrid Multicriteria Group Decision Making Method for Information System Project Selection Based on Intuitionistic Fuzzy Theory

    Directory of Open Access Journals (Sweden)

    Jian Guo

    2013-01-01

    Full Text Available Information system (IS) project selection is of critical importance to every organization in a dynamic competing environment. The aim of this paper is to develop a hybrid multicriteria group decision-making approach based on intuitionistic fuzzy theory for IS project selection. The decision makers’ assessment information can be expressed in the form of real numbers, interval-valued numbers, linguistic variables, and intuitionistic fuzzy numbers (IFNs), and all of these pieces of evaluation information can be transformed into IFNs. The intuitionistic fuzzy weighted averaging (IFWA) operator is utilized to aggregate the individual opinions of decision makers into a group opinion, and intuitionistic fuzzy entropy is used to obtain the entropy weights of the criteria. A TOPSIS method combined with intuitionistic fuzzy sets is proposed to select the appropriate IS project in a group decision-making environment. Finally, a numerical example of information system project selection is given to illustrate the application of the hybrid multi-criteria group decision-making (MCGDM) method based on intuitionistic fuzzy theory and TOPSIS.
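
    The IFWA aggregation itself is a short formula; the sketch below applies it to invented opinions (the entropy weighting and TOPSIS ranking steps are not reproduced):

      import numpy as np

      def ifwa(ifns, w):
          """IFWA: mu = 1 - prod((1 - mu_i)^w_i), nu = prod(nu_i^w_i)."""
          mu = 1 - np.prod([(1 - m) ** wi for (m, _), wi in zip(ifns, w)])
          nu = np.prod([n ** wi for (_, n), wi in zip(ifns, w)])
          return mu, nu

      # Three decision makers' opinions on one project, as (membership, non-membership).
      opinions = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
      weights = [0.4, 0.35, 0.25]                    # decision makers' weights
      print(tuple(round(v, 3) for v in ifwa(opinions, weights)))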

  13. Analysis of a wavelength selectable cascaded DFB laser based on the transfer matrix method

    International Nuclear Information System (INIS)

    Xie Hongyun; Chen Liang; Shen Pei; Sun Botao; Wang Renqing; Xiao Ying; You Yunxia; Zhang Wanrong

    2010-01-01

    A novel cascaded DFB laser, which consists of two serial gratings to provide selectable wavelengths, is presented and analyzed by the transfer matrix method. In this method, the effective facet reflectivity is derived from the transfer matrix built for each serial section and is then used to simulate the performance of the novel cascaded DFB laser by self-consistently solving the gain equation, the coupled wave equation and the current continuity equations. The simulations prove the feasibility of this kind of wavelength-selectable laser, and a corresponding device with two selectable wavelengths of 1.51 μm and 1.53 μm was realized experimentally on an InP-based multiple-quantum-well structure. (semiconductor devices)

  14. Feature selection for splice site prediction: A new method using EDA-based feature ranking

    Directory of Open Access Journals (Sweden)

    Rouzé Pierre

    2004-05-01

    Full Text Available Abstract Background The identification of relevant biological features in large and complex datasets is an important step towards gaining insight into the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and a faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do), this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features.
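
    A minimal UMDA-style version of this idea (synthetic data; the population sizes, classifier and smoothing constant are arbitrary choices, not the paper's settings): feature masks are sampled from independent Bernoulli marginals, the distribution is re-estimated from the fittest masks, and the learned marginals give the feature ranking.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=150, n_features=12, n_informative=4, random_state=0)
      rng = np.random.default_rng(0)

      p = np.full(X.shape[1], 0.5)                   # one Bernoulli marginal per feature
      for generation in range(10):
          masks = rng.random((20, X.shape[1])) < p   # sample candidate feature subsets
          masks[masks.sum(axis=1) == 0, 0] = True    # guard against empty subsets
          fitness = [cross_val_score(LogisticRegression(max_iter=1000),
                                     X[:, m], y, cv=3).mean() for m in masks]
          elite = masks[np.argsort(fitness)[-7:]]    # keep the fittest subsets
          p = 0.9 * p + 0.1 * elite.mean(axis=0)     # re-estimate the distribution

      print(np.argsort(p)[::-1])                     # ranking from estimated marginals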

  15. TU-AB-202-10: How Effective Are Current Atlas Selection Methods for Atlas-Based Auto-Contouring in Radiotherapy Planning?

    Energy Technology Data Exchange (ETDEWEB)

    Peressutti, D; Schipaanboord, B; Kadir, T; Gooding, M [Mirada Medical Limited, Science and Medical Technology, Oxford (United Kingdom); Soest, J van; Lustberg, T; Elmpt, W van; Dekker, A [Maastricht University Medical Centre, Department of Radiation Oncology MAASTRO - GROW School for Oncology Developmental Biology, Maastricht (Netherlands)

    2016-06-15

    Purpose: To investigate the effectiveness of atlas selection methods for improving atlas-based auto-contouring in radiotherapy planning. Methods: 275 H&N clinically delineated cases were employed as an atlas database from which atlases would be selected. A further 40 previously contoured cases were used as test patients against which atlas selection could be performed and evaluated. 26 variations of selection methods proposed in the literature and used in commercial systems were investigated. Atlas selection methods comprised either global or local image similarity measures, computed after rigid or deformable registration, combined with direct atlas search or with an intermediate template image. Workflow Box (Mirada-Medical, Oxford, UK) was used for all auto-contouring. Results on brain, brainstem, parotids and spinal cord were compared to random selection, a fixed set of 10 “good” atlases, and optimal selection by an “oracle” with knowledge of the ground truth. The Dice score and the average ranking with respect to the “oracle” were employed to assess the performance of the top 10 atlases selected by each method. Results: The fixed set of “good” atlases outperformed all of the atlas-patient image similarity-based selection methods (mean Dice 0.715 c.f. 0.603 to 0.677). In general, methods based on exhaustive comparison of local similarity measures showed better average Dice scores (0.658 to 0.677) than the use of either a template image (0.655 to 0.672) or global similarity measures (0.603 to 0.666). The performance of image-based selection methods was found to be only slightly better than random selection (0.645). The Dice scores given relate to the left parotid, but similar result patterns were observed for all organs. Conclusion: Intuitively, atlas selection based on the patient CT is expected to improve auto-contouring performance. However, it was found that published approaches performed only marginally better than random and use of a fixed set of

  16. Research on filter’s parameter selection based on PROMETHEE method

    Science.gov (United States)

    Zhu, Hui-min; Wang, Hang-yu; Sun, Shi-yan

    2018-03-01

    The selection of filter parameters in target recognition was studied in this paper. The PROMETHEE method was applied to the optimization problem of Gabor filter parameter decisions, and a correspondence model of the elemental relations between the two was established. Taking the identification of a military target as an example, the filter parameter decision problem was simulated and calculated with PROMETHEE. The results showed that using the PROMETHEE method for the selection of filter parameters is more rigorous, avoiding the subjective bias introduced by expert judgement and purely empirical methods. The method can serve as a reference for deciding the filter's parameter configuration scheme.

  17. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    Science.gov (United States)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques are considered the comprehensive forms of MADM approaches, and target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values, and the values of the decision matrix and of the target-based attributes can be provided as intervals in some of these problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers, and we employ various concepts of interval computations to reduce the degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the distance between interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of the degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding the biomaterials selection of hip and knee prostheses are discussed. Preference degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.
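
    The preference degree of one interval over another is the pivotal primitive; a common possibility-degree formula (not necessarily the authors' exact one) can be sketched as follows, with invented biomaterial property ranges:

      def preference_degree(a, b):
          """Degree to which interval a = (a1, a2) is preferred to b = (b1, b2);
          0.5 means indifference. A common possibility-degree formula."""
          (a1, a2), (b1, b2) = a, b
          width = (a2 - a1) + (b2 - b1)
          if width == 0:                             # both intervals are points
              return 0.5 if a1 == b1 else float(a1 > b1)
          return min(1.0, max(0.0, (a2 - b1) / width))

      # Hypothetical elastic-modulus ranges (GPa) of two candidate biomaterials.
      A, B = (100.0, 120.0), (95.0, 115.0)
      print(preference_degree(A, B))                 # 0.625 > 0.5, so A is preferred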

  18. Supplier Portfolio Selection and Optimum Volume Allocation: A Knowledge Based Method

    NARCIS (Netherlands)

    Aziz, Romana; Aziz, R.; van Hillegersberg, Jos; Kersten, W.; Blecker, T.; Luthje, C.

    2010-01-01

    Selection of suppliers and allocation of optimum volumes to suppliers is a strategic business decision. This paper presents a decision support method for supplier selection and the optimal allocation of volumes in a supplier portfolio. The requirements for the method were gathered during a case

  19. A ROC-based feature selection method for computer-aided detection and diagnosis

    Science.gov (United States)

    Wang, Songyuan; Zhang, Guopeng; Liao, Qimei; Zhang, Junying; Jiao, Chun; Lu, Hongbing

    2014-03-01

    Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples available in one class (those with a disease) than in the other, resulting in the classifier's suboptimal performance. Identifying the most characterizing features of the observed data for lesion detection is critical to improving the sensitivity and minimizing the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with the selection metric FAST (feature assessment by sliding thresholds), based on the area under a ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the space of candidate features selected by mR-FAST is more characterizing for lesion detection, with higher AUC, enabling a compact subset of superior features to be found at low cost.
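
    The FAST ingredient, scoring each feature by the AUC of thresholding it alone, is easy to sketch (synthetic data; the mRMR redundancy term and the linear discriminants are not reproduced):

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 6))
      y = rng.integers(0, 2, 300)
      X[:, 4] += 1.5 * y                             # feature 4 separates the classes

      # Per-feature AUC; fold values below 0.5 onto the informative side before ranking.
      auc = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
      print(np.argsort(np.abs(auc - 0.5))[::-1])     # feature 4 ranks first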

  20. FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

    OpenAIRE

    Lu Si; Jie Yu; Shasha Li; Jun Ma; Lei Luo; Qingbo Wu; Yongqi Ma; Zhengji Liu

    2017-01-01

    Instance selection (IS) techniques are used to reduce the data size in order to improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rul...
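
    The classic condensed nearest neighbour rule that FCNN accelerates can be written in a few lines (toy two-cluster data; FCNN's speed-ups and the paper's partitioning scheme are not reproduced):

      import numpy as np

      def condensed_1nn(X, y, seed=0):
          """Keep only instances that 1-NN over the kept set misclassifies."""
          rng = np.random.default_rng(seed)
          keep, changed = [int(rng.integers(len(X)))], True
          while changed:
              changed = False
              for i in range(len(X)):
                  d2 = ((X[keep] - X[i]) ** 2).sum(axis=1)
                  if y[keep[int(np.argmin(d2))]] != y[i]:
                      keep.append(i)                 # absorbed wrongly -> add it
                      changed = True
          return np.array(sorted(set(keep)))

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
      y = np.repeat([0, 1], 100)
      print(len(condensed_1nn(X, y)), "of", len(X), "instances kept")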

  1. Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection

    Directory of Open Access Journals (Sweden)

    Jaesung Lee

    2016-11-01

    Full Text Available Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases according to the number of labels, the algorithm may suffer from a degradation in performance when processing very large datasets. In this study, we propose an efficient multi-label feature selection method based on an information-theoretic label selection strategy. By identifying a subset of labels that significantly influence the importance of features, the proposed method efficiently outputs a feature subset. Experimental results demonstrate that the proposed method can identify a feature subset much faster than conventional multi-label feature selection methods for large multi-label datasets.

  2. Systematic differences in the response of genetic variation to pedigree and genome-based selection methods.

    Science.gov (United States)

    Heidaritabar, M; Vereijken, A; Muir, W M; Meuwissen, T; Cheng, H; Megens, H-J; Groenen, M A M; Bastiaansen, J W M

    2014-12-01

    Genomic selection (GS) is a DNA-based method of selecting for quantitative traits in animal and plant breeding, and offers a potentially superior alternative to traditional breeding methods that rely on pedigree and phenotype information. Using a 60 K SNP chip with markers spaced throughout the entire chicken genome, we compared the impact of GS and traditional BLUP (best linear unbiased prediction) selection methods applied side-by-side in three different lines of egg-laying chickens. Differences were demonstrated between the methods, both in the magnitude and in the genomic distribution of allele frequency changes. In all three lines, the average allele frequency changes were larger with GS (0.056, 0.064 and 0.066) than with BLUP (0.044, 0.045 and 0.036) for lines B1, B2 and W1, respectively. With BLUP, 35 regions exceeding the empirical significance threshold for selection were identified. Empirical thresholds for local allele frequency changes were determined from gene dropping, and differed considerably between GS (0.167-0.198) and BLUP (0.105-0.126). Between lines, the genomic regions with large changes in allele frequencies showed limited overlap. Our results show that GS applies selection pressure much more locally than BLUP, resulting in larger allele frequency changes. With these results, novel insights into the nature of selection on quantitative traits have been gained, and important questions regarding the long-term impact of GS are raised. The rapid changes to one part of the genetic architecture, while another part may not be selected, at least in the short term, require careful consideration, especially when selection occurs before phenotypes are observed.
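
    Gene dropping, the source of the empirical thresholds mentioned above, is simple to sketch: simulate allele transmission without selection and read off a quantile of the resulting allele frequency changes. The toy below replaces a real pedigree with repeated binomial sampling through a fixed number of parents per generation; all parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def gene_drop_threshold(p0=0.5, n_parents=50, generations=5,
                            n_replicates=10000, quantile=0.95, seed=0):
        """Empirical null for allele frequency change under pure drift (no selection):
        drop alleles through a pedigree-like bottleneck and record |delta p|."""
        rng = np.random.default_rng(seed)
        p = np.full(n_replicates, p0)
        for _ in range(generations):
            # Each generation, 2 * n_parents allele copies are sampled binomially.
            p = rng.binomial(2 * n_parents, p) / (2 * n_parents)
        return np.quantile(np.abs(p - p0), quantile)

    # Changes larger than this are unlikely to be explained by drift alone.
    print(round(gene_drop_threshold(), 3))
    ```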

  3. A novel selection method of seismic attributes based on gray relational degree and support vector machine.

    Directory of Open Access Journals (Sweden)

    Yaping Huang

    Full Text Available The selection of seismic attributes is a key process in reservoir prediction because the prediction accuracy relies on the reliability and credibility of the seismic attributes. However, an effective method for selecting useful seismic attributes is still a challenge. This paper presents a novel selection method of seismic attributes for reservoir prediction based on the gray relational degree (GRD) and support vector machine (SVM). The proposed method has a two-level hierarchical structure. In the first level, the primary selection of seismic attributes is achieved by calculating the GRD between seismic attributes and reservoir parameters, and the GRD between the seismic attributes themselves. The principle of the primary selection is that seismic attributes with a higher GRD to the reservoir parameters will have a smaller GRD between themselves than those with a lower GRD to the reservoir parameters. The SVM is then employed in the second level to perform an interactive error verification using training samples in order to determine the final seismic attributes. A real-world case study was conducted to evaluate the proposed GRD-SVM method: reliable seismic attributes were selected to predict the coalbed methane (CBM) content in the southern Qinshui basin, China. In the analysis, the instantaneous amplitude, instantaneous bandwidth, instantaneous frequency, and minimum negative curvature were selected, and the predicted CBM content was fundamentally consistent with the measured CBM content. This real-world case study demonstrates that the proposed method is able to effectively select seismic attributes and improve the prediction accuracy. Thus, the proposed GRD-SVM method can be used for the selection of seismic attributes in practice.
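
    The gray relational degree at the heart of the first selection level has a standard closed form. The sketch below computes it between a reference series (a reservoir parameter) and a candidate series (a seismic attribute); the min-max normalization and the resolution coefficient rho = 0.5 are conventional choices, assumed rather than taken from the paper.

    ```python
    import numpy as np

    def gray_relational_degree(reference, candidate, rho=0.5):
        """Gray relational degree between a reference series and a candidate series."""
        def norm(x):
            # Normalize to [0, 1] so the two series are on comparable scales.
            return (x - x.min()) / (x.max() - x.min())
        delta = np.abs(norm(reference) - norm(candidate))
        # Gray relational coefficient at each sample; rho is the resolution coefficient.
        coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        return coeff.mean()

    rng = np.random.default_rng(2)
    cbm = rng.normal(size=100)                          # stand-in reservoir parameter
    attr_good = cbm + rng.normal(scale=0.3, size=100)   # related attribute
    attr_poor = rng.normal(size=100)                    # unrelated attribute
    print(gray_relational_degree(cbm, attr_good), gray_relational_degree(cbm, attr_poor))
    ```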

  4. Research on Methods for Discovering and Selecting Cloud Infrastructure Services Based on Feature Modeling

    Directory of Open Access Journals (Sweden)

    Huamin Zhu

    2016-01-01

    Full Text Available Nowadays more and more cloud infrastructure service providers are offering large numbers of service instances which are combinations of diversified resources, such as computing, storage, and network. However, for cloud infrastructure services, the lack of a description standard and the inadequate research on systematic discovery and selection methods have made it difficult for users to discover and choose services. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on this description, a systematic discovery and selection method for cloud infrastructure services is proposed. The automatic analysis techniques of the feature model are introduced to verify the model’s validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where the subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.

  5. Selecting the patients for morning report sessions: case-based vs. conventional method.

    Science.gov (United States)

    Rabiei, Mehdi; Saeidi, Masumeh; Kiani, Mohammad Ali; Amin, Sakineh Mohebi; Ahanchian, Hamid; Jafari, Seyed Ali; Kianifar, Hamidreza

    2015-08-01

    One of the most important issues in morning report sessions is the number of patients. The aim of this study was to investigate and compare the number of cases reported in morning report sessions under case-based and conventional methods, from the perspective of pediatric residents of Mashhad University of Medical Sciences. The present study was conducted on 24 pediatric residents of Mashhad University of Medical Sciences in the academic year 2014-2015. In this survey, the residents replied to a 20-question researcher-made questionnaire that had been designed to measure the views of residents regarding the number of patients in morning report sessions using case-based and conventional methods. The validity of the questionnaire was confirmed by experts' views and its reliability by calculating Cronbach's alpha coefficients. Data were analyzed by t-test. The mean age of the residents was 30.852 ± 2.506 years, and 66.6% of them were female. The results showed no significant relationship among the variables of academic year, gender, and the residents' perspective on choosing the number of patients in the morning report sessions (P > 0.05). The t-test analysis showed a significant difference in the average scores of residents in favor of the case-based method over the conventional method (P < 0.05); that is, the case-based morning report was preferred to the conventional method. This method makes residents pay more attention to the details of patients' issues and therefore helps them to better plan how to address patient problems and to improve their differential diagnosis skills.

  6. A Novel Fault Line Selection Method Based on Improved Oscillator System of Power Distribution Network

    Directory of Open Access Journals (Sweden)

    Xiaowei Wang

    2014-01-01

    Full Text Available A novel method of fault line selection based on an improved oscillator system (IOS) is presented. Firstly, the IOS is established from a mathematical model in which the TZSC signal replaces the built-in signal of a Duffing chaotic oscillator, with appropriate parameters selected. Then, each line’s TZSC is decomposed by a db10 wavelet packet to obtain the CFB according to the maximum-energy principle, and the CFB is fed into the IOS. Finally, the maximum chaotic distance and the average chaotic distance on the phase trajectory are used to identify the faulty line. Simulation results show that the proposed method can accurately distinguish the faulty line from healthy lines in a strongly noisy background. In addition, the non-detection zones of the proposed method are elaborated.
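
    The wavelet-packet step is the most readily reproduced part of the pipeline: decompose a transient signal with db10 and keep the frequency-band node carrying the maximum energy. A sketch using PyWavelets on a synthetic decaying oscillation follows; the sampling rate, decomposition level, and test signal are assumptions for illustration.

    ```python
    import numpy as np
    import pywt

    def max_energy_band(signal, wavelet="db10", level=3):
        """Decompose with a wavelet packet and return the frequency-band node
        whose coefficients carry the maximum energy."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="freq")   # nodes ordered by frequency band
        energies = [np.sum(np.square(n.data)) for n in nodes]
        best = nodes[int(np.argmax(energies))]
        return best.path, best.data

    # Synthetic transient: a decaying 150 Hz oscillation in noise, sampled at 2 kHz.
    t = np.arange(0, 0.5, 1 / 2000)
    rng = np.random.default_rng(3)
    sig = np.exp(-8 * t) * np.sin(2 * np.pi * 150 * t) + 0.1 * rng.normal(size=t.size)
    path, band = max_energy_band(sig)
    print(path, band.shape)
    ```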

  7. METHOD FOR SELECTION OF PROJECT MANAGEMENT APPROACH BASED ON FUZZY CONCEPTS

    Directory of Open Access Journals (Sweden)

    Igor V. KONONENKO

    2017-03-01

    Full Text Available An analysis of the literature devoted to research on selecting a project management approach, and to the development of effective methods for solving this problem, is given. A mathematical model and a method for selecting a project management approach with fuzzy concepts of the applicability of existing approaches are proposed. The selection is made among approaches such as the PMBOK Guide, the ISO 21500 standard, the PRINCE2 methodology, the SWEBOK Guide, and the agile methodologies Scrum, XP, and Kanban. The project parameters that have the greatest impact on the result of the selection, and the measure of their impact, are determined. The project parameters relate to information about the project, the team, communication, and critical project risks. They include the number of people involved in the project, the customer's experience with this project team, the project team's experience in this field, the project team's understanding of the requirements, adaptability, initiative, and others. The suggested method is illustrated by an example of its application to the selection of a project management approach for a software development project.

  8. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    Science.gov (United States)

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial one. The raw spectrum consists of the signal from the sample together with scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
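
    Wavelength selection can be cast as a binary optimization that DE handles well: each gene encodes whether a wavelength enters a linear calibration model, and the objective is the cross-validated prediction error. The sketch below uses SciPy's differential_evolution on synthetic spectra, thresholding the continuous genes into a selection mask; the data, penalty weight, and DE settings are assumptions, not the paper's experimental setup.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_wl = 15                                    # number of candidate wavelengths
    X = rng.normal(size=(120, n_wl))             # stand-in absorption spectra
    y = X[:, [3, 8, 12]].sum(axis=1) + rng.normal(scale=0.2, size=120)  # concentrations

    def objective(w):
        mask = w > 0.5                           # continuous genes -> binary wavelength mask
        if mask.sum() == 0:
            return 1e6                           # penalize empty selections
        mse = -cross_val_score(LinearRegression(), X[:, mask], y,
                               scoring="neg_mean_squared_error", cv=3).mean()
        return mse + 0.01 * mask.sum()           # small penalty favors compact subsets

    result = differential_evolution(objective, bounds=[(0, 1)] * n_wl,
                                    maxiter=20, popsize=8, seed=0, polish=False)
    print(np.flatnonzero(result.x > 0.5))        # selected wavelength indices
    ```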

  9. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    Full Text Available The prediction accuracy of short-term load forecast (STLF depends on prediction model choice and feature selection result. In this paper, a novel random forest (RF-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of RF trained by the optimal forecasting feature subset was higher than that of the original model and comparative models based on support vector regression and artificial neural network.
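
    Both stages described above (permutation importance scoring, then a backward search over the resulting ranking) can be sketched with scikit-learn. The stand-in regression data and the simplified one-pass backward elimination below are assumptions; the paper's 243 load features and exact search procedure are not reproduced.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    X = rng.normal(size=(400, 10))
    y = 3 * X[:, 0] + 2 * X[:, 3] + rng.normal(scale=0.5, size=400)  # toy load target

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

    # PI value: drop in held-out score when each feature is randomly permuted.
    pi = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
    ranking = np.argsort(pi.importances_mean)    # least important first

    # Simplified sequential backward search: drop the least important features
    # one at a time and keep the subset with the best held-out score.
    best_score, best_subset = -np.inf, list(range(X.shape[1]))
    subset = list(range(X.shape[1]))
    for j in ranking[:-1]:
        subset = [f for f in subset if f != j]
        score = RandomForestRegressor(n_estimators=100, random_state=0) \
            .fit(X_tr[:, subset], y_tr).score(X_te[:, subset], y_te)
        if score > best_score:
            best_score, best_subset = score, subset.copy()
    print(best_subset, round(best_score, 3))
    ```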

  10. A Method Based on Intuitionistic Fuzzy Dependent Aggregation Operators for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Fen Wang

    2013-01-01

    Full Text Available Recently, resolving the decision-making problem of evaluating and ranking potential suppliers has become a key strategic factor for business firms. In this paper, two new intuitionistic fuzzy aggregation operators are developed: the dependent intuitionistic fuzzy ordered weighted averaging (DIFOWA) operator and the dependent intuitionistic fuzzy hybrid weighted aggregation (DIFHWA) operator. Some of their main properties are studied. A method based on the DIFHWA operator for intuitionistic fuzzy multiple-attribute decision making is presented. Finally, an illustrative example concerning supplier selection is given.

  11. NetProt: Complex-based Feature Selection.

    Science.gov (United States)

    Goh, Wilson Wen Bin; Wong, Limsoon

    2017-08-04

    Protein complex-based feature selection (PCBFS) provides unparalleled reproducibility with high phenotypic relevance on proteomics data. Currently, there are five PCBFS paradigms, but not all representative methods have been implemented or made readily available. To allow general users to take advantage of these methods, we developed the R-package NetProt, which provides implementations of representative feature-selection methods. NetProt also provides methods for generating simulated differential data and generating pseudocomplexes for complex-based performance benchmarking. The NetProt open source R package is available for download from https://github.com/gohwils/NetProt/releases/, and online documentation is available at http://rpubs.com/gohwils/204259.

  12. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    The empirical mode decomposition (EMD) approach is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose an EMD selective thresholding method based on multiple iterations, which essentially acts as a development of EMD interval thresholding (EMD-IT): it randomly alters the samples of the noisy parts of all corrupted intrinsic mode functions so that iteration yields a better denoising effect. Simulations on both synthetic signals and real-world LIDAR signals support this method.

  13. A New Decision-Making Method for Stock Portfolio Selection Based on Computing with Linguistic Assessment

    Directory of Open Access Journals (Sweden)

    Chen-Tung Chen

    2009-01-01

    Full Text Available The purpose of stock portfolio selection is to allocate capital among a large number of stocks so as to bring the most profitable return for investors. In most of the past literature, experts considered the portfolio selection problem based only on crisp or quantitative historical data. However, many qualitative as well as quantitative factors influence stock portfolio selection in real investment situations. It is very important for experts or decision-makers to use their experience and knowledge to predict the performance of each stock and to compose a stock portfolio. Because the knowledge, experience, and background of each expert are different and vague, different types of 2-tuple linguistic variables are suitable for expressing experts' opinions on the performance evaluation of each stock with respect to the criteria. According to the linguistic evaluations of the experts, the linguistic TOPSIS and linguistic ELECTRE methods are combined to present a new decision-making method for dealing with stock selection problems in this paper. Once the investment set has been determined, the risk preferences of the investor are considered to calculate the investment ratio of each stock in the investment set. Finally, an example is implemented to demonstrate the practicability of the proposed method.
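
    Reproducing the 2-tuple linguistic machinery is beyond a short sketch, but the crisp TOPSIS core that underlies the combined method is compact. The following is a minimal crisp TOPSIS ranking on made-up stock scores; the criteria weights and the benefit/cost flags are assumptions.

    ```python
    import numpy as np

    def topsis(matrix, weights, benefit):
        """Crisp TOPSIS: rank alternatives by closeness to the ideal solution.
        matrix: alternatives x criteria; benefit[j] is True if larger is better."""
        norm = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
        v = norm * weights
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.linalg.norm(v - ideal, axis=1)
        d_neg = np.linalg.norm(v - anti, axis=1)
        return d_neg / (d_pos + d_neg)                      # higher = better

    scores = topsis(np.array([[7., 9., 9.], [8., 7., 8.], [9., 6., 8.], [6., 7., 8.]]),
                    weights=np.array([0.5, 0.3, 0.2]),
                    benefit=np.array([True, True, False]))  # third criterion is a cost
    print(np.argsort(scores)[::-1])                         # stock ranking, best first
    ```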

  14. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    Science.gov (United States)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and the optical characteristics, respectively. Firstly, the three-dimensional structure characteristics can be analyzed by 3D-Zernike descriptors (3DZD). However, different parameters of the 3DZD describe three-dimensional structures of different complexity, and they need to be optimally selected for the various objects on the ground. Secondly, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features would carry a great deal of redundant information, and this redundancy may not improve the classification accuracy and can even cause adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve the optimization problem. Experimental results show that the proposed method can effectively improve the computational efficiency and the classification accuracy.

  15. Influence of sand base preparation on properties of chromite moulding sands with sodium silicate hardened with selected methods

    Directory of Open Access Journals (Sweden)

    Stachowicz M.

    2017-03-01

    Full Text Available The paper presents research on the relation between the thermal preparation of the chromite sand base of moulding sands containing sodium silicate, hardened with selected physical and chemical methods, and the structure of the created bonding bridges. Test specimens were prepared of chromite sand - fresh or baked at 950°C for 10 or 24 hours - mixed with 0.5 wt.% of the selected non-modified inorganic binder and, after forming, were hardened with CO2 or liquid esters, dried traditionally or heated with microwaves at 2.45 GHz. It was shown on the grounds of SEM observations that the baking time of the base sand and the hardening method significantly affect the structure of the bonding bridges and are correlated with the mechanical properties of the moulding sands. It was found that hardening chromite-based moulding mixtures with physical methods is much more favourable than hardening with chemical methods, guaranteeing more than ten times higher mechanical properties.

  16. TEHRAN AIR POLLUTANTS PREDICTION BASED ON RANDOM FOREST FEATURE SELECTION METHOD

    Directory of Open Access Journals (Sweden)

    A. Shamsoddini

    2017-09-01

    Full Text Available Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that artificial neural networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide emissions was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  17. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    Science.gov (United States)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that artificial neural networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide emissions was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  18. A Selection Method That Succeeds!

    Science.gov (United States)

    Weitman, Catheryn J.

    Provided a structural selection method is carried out, it is possible to find quality early childhood personnel. The hiring process involves five definite steps, each of which establishes a base for the next. A needs assessment formulating basic minimal qualifications is the first step. The second step involves review of current job descriptions…

  19. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
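
    Selecting rows of a random projection matrix by the Fisher criterion can be sketched compactly: project genuine and impostor samples, score each projection direction by between-class separation over within-class spread, and keep the best rows. Everything below (dimensions, data, the stabilizing epsilon) is an illustrative assumption, not the authors' exact scheme.

    ```python
    import numpy as np

    def select_projections(P, genuine, impostor, k=16):
        """Keep the k rows of random projection matrix P that best separate one
        user's face features from everyone else's, by the Fisher criterion."""
        g, i = P @ genuine.T, P @ impostor.T      # projected samples, one row per direction
        fisher = (g.mean(1) - i.mean(1)) ** 2 / (g.var(1) + i.var(1) + 1e-12)
        return P[np.argsort(fisher)[::-1][:k]]

    rng = np.random.default_rng(6)
    P = rng.normal(size=(64, 128))                # random projection matrix
    genuine = rng.normal(loc=0.5, size=(20, 128))  # stand-in user feature vectors
    impostor = rng.normal(size=(200, 128))
    P_user = select_projections(P, genuine, impostor)
    print(P_user.shape)                           # user-dependent projection (16, 128)
    ```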

  20. Band selection method based on spectrum difference in targets of interest in hyperspectral imagery

    Science.gov (United States)

    Zhang, Xiaohan; Yang, Guang; Yang, Yongbo; Huang, Junhua

    2016-10-01

    While hyperspectral data contains rich spectral information, it has a large number of bands with high correlation coefficients, causing great data redundancy. A reasonable band selection is therefore important for subsequent processing: bands with a large amount of information and low correlation should be selected. On this basis, according to the needs of target detection applications, the spectral characteristics of the objects of interest are taken into consideration in this paper, and a new method based on spectrum difference is proposed. Firstly, according to the spectrum differences of the targets of interest, a difference matrix representing the different spectral reflectance of different targets in different bands is constructed. By setting a threshold, the bands satisfying the condition are retained, constituting a subset of bands. Then, the correlation coefficients between bands are calculated and the correlation matrix is obtained; according to the size of the correlation coefficients, the bands are divided into several groups. At last, the normalized variance is used to represent the information content of each band, and the bands are sorted by the value of their normalized variance. Given the required number of bands, the optimal band combination can be obtained by these three steps. This method retains the greatest degree of difference between the targets of interest and is easy to implement automatically on a computer. Besides, a false-color image synthesis experiment is carried out using the bands selected by this method, as well as by three other methods, to show the performance of the method proposed in this paper.
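
    The three steps (spectrum-difference screening, correlation grouping, normalized-variance ranking) chain together naturally. The sketch below runs them on a toy hyperspectral cube with two synthetic target masks; the thresholds and the greedy grouping rule are illustrative assumptions.

    ```python
    import numpy as np

    def select_bands(cube, target_masks, n_bands=2, diff_thresh=0.1, corr_thresh=0.9):
        """Three-step band selection: (1) keep bands where the targets' mean spectra
        differ, (2) group highly correlated bands, (3) rank by normalized variance."""
        pixels = cube.reshape(-1, cube.shape[-1])
        means = np.stack([cube[m].mean(axis=0) for m in target_masks])
        keep = np.flatnonzero(np.abs(means[0] - means[1]) > diff_thresh)   # step 1
        corr = np.abs(np.corrcoef(pixels[:, keep], rowvar=False))          # step 2
        groups, assigned = [], set()
        for a in range(len(keep)):
            if a not in assigned:
                group = [b for b in range(len(keep))
                         if b not in assigned and corr[a, b] > corr_thresh]
                assigned.update(group)
                groups.append(group)
        # Step 3: normalized variance (variance / mean) as information content.
        nv = pixels[:, keep].var(axis=0) / (pixels[:, keep].mean(axis=0) + 1e-12)
        best = [max(g, key=lambda b: nv[b]) for g in groups]
        best.sort(key=lambda b: -nv[b])
        return [int(keep[b]) for b in best[:n_bands]]

    rng = np.random.default_rng(7)
    cube = rng.random((32, 32, 20))                  # toy hyperspectral cube (H, W, bands)
    m1 = np.zeros((32, 32), dtype=bool); m1[:16] = True
    m2 = ~m1                                         # two stand-in targets of interest
    cube[m1, 5] += 0.5; cube[m2, 12] += 0.5          # make two bands discriminative
    print(select_bands(cube, [m1, m2]))
    ```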

  1. Data Visualization and Feature Selection Methods in Gel-based Proteomics

    DEFF Research Database (Denmark)

    Silva, Tomé Santos; Richard, Nadege; Dias, Jorge P.

    2014-01-01

    This work reviews data visualization and feature selection methods in gel-based proteomics, summarizing the current state of research within this field. Particular focus is given to discussing the usefulness of available multivariate analysis tools, both for data visualization and for feature selection purposes. Visual examples are given using a real gel-based proteomic dataset as a basis.

  2. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)]

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step in the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
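
    The method's name suggests its gist: rank descriptors by mutual information, but refuse any candidate that is collinear with one already chosen and move on to the next best instead. Below is a simplified reading of that idea with scikit-learn on synthetic data; the correlation threshold and all data are assumptions, not the authors' exact algorithm.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    def mi_select_replace_collinear(X, y, k=5, corr_thresh=0.8):
        """Rank descriptors by mutual information with the property, replacing any
        candidate collinear with an already selected descriptor by the next best."""
        mi = mutual_info_regression(X, y, random_state=0)
        corr = np.abs(np.corrcoef(X, rowvar=False))
        selected = []
        for j in np.argsort(mi)[::-1]:
            if all(corr[j, s] < corr_thresh for s in selected):
                selected.append(int(j))
            if len(selected) == k:
                break
        return selected

    rng = np.random.default_rng(8)
    X = rng.normal(size=(100, 10))
    X[:, 1] = X[:, 0] + rng.normal(scale=0.05, size=100)   # collinear pair
    y = X[:, 0] + X[:, 4] + rng.normal(scale=0.3, size=100)
    print(mi_select_replace_collinear(X, y, k=3))          # picks 0 or 1, never both
    ```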

  3. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step in the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  4. Using MACBETH method for supplier selection in manufacturing environment

    Directory of Open Access Journals (Sweden)

    Prasad Karande

    2013-04-01

    Full Text Available Supplier selection is always found to be a complex decision-making problem in a manufacturing environment. The presence of several independent and conflicting evaluation criteria, whether qualitative or quantitative, makes the supplier selection problem a candidate for being solved by multi-criteria decision-making (MCDM) methods. Even though several MCDM methods have already been proposed for solving supplier selection problems, the need for an efficient method that can deal with qualitative judgments related to supplier selection still persists. In this paper, the applicability and usefulness of measuring attractiveness by a categorical-based evaluation technique (MACBETH) is demonstrated as a decision support tool for solving two real-world supplier selection problems with qualitative performance measures. The ability of the MACBETH method to quantify qualitative performance measures helps to provide a numerical judgment scale for ranking the alternative suppliers and selecting the best one. The results obtained from the MACBETH method exactly corroborate those derived by past researchers employing different mathematical approaches.

  5. The criteria for selecting a method for unfolding neutron spectra based on the information entropy theory

    International Nuclear Information System (INIS)

    Zhu, Qingjun; Song, Fengquan; Ren, Jie; Chen, Xueyong; Zhou, Bin

    2014-01-01

    To further expand the application of artificial neural networks in the field of neutron spectrometry, criteria for choosing between an artificial neural network and the maximum entropy method for unfolding neutron spectra were presented. The counts of the Bonner spheres for the IAEA neutron spectra were used as a database, and the artificial neural network and the maximum entropy method were used to unfold the neutron spectra; the mean squares of the spectra were defined as the differences between the desired and unfolded spectra. After the information entropy of each spectrum was calculated using information entropy theory, the relationship between the mean squares of the spectra and the information entropy was obtained. Useful information from the information entropy guided the selection of unfolding methods. Because of the importance of the information entropy, a method for predicting the information entropy from the Bonner spheres' counts was established. The criteria based on information entropy theory can be used to choose between the artificial neural network and the maximum entropy unfolding methods, expanding the application of artificial neural networks to the unfolding of neutron spectra. - Highlights: • Two neutron spectra unfolding methods, ANN and MEM, were compared. • The spectrum's entropy offers useful information for selecting unfolding methods. • For spectra with low entropy, the ANN was generally better than MEM. • The spectrum's entropy was predicted based on the Bonner spheres' counts
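
    The information entropy of a spectrum, the quantity these criteria hinge on, is just the Shannon entropy of the normalized flux distribution. A minimal sketch with two made-up spectra shows that broad spectra score high and peaked spectra score low; the example spectra are assumptions, not IAEA data.

    ```python
    import numpy as np

    def spectrum_entropy(phi):
        """Shannon information entropy of a spectrum, treating the normalized
        flux per energy bin as a probability distribution."""
        p = np.asarray(phi, dtype=float)
        p = p / p.sum()
        p = p[p > 0]                  # 0 * log(0) is taken as 0
        return float(-(p * np.log(p)).sum())

    flat = np.ones(60)                                          # broad spectrum
    peaked = np.exp(-0.5 * ((np.arange(60) - 30) / 2.0) ** 2)   # narrow spectrum
    print(spectrum_entropy(flat), spectrum_entropy(peaked))     # high vs. low entropy
    ```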

  6. Optimal Site Selection of Electric Vehicle Charging Stations Based on a Cloud Model and the PROMETHEE Method

    Directory of Open Access Journals (Sweden)

    Yunna Wu

    2016-03-01

    Full Text Available The task of site selection for electric vehicle charging stations (EVCS) is hugely important from the perspective of harmonious and sustainable development. However, flaws and inadequacies in the currently used multi-criteria decision-making methods can result in inaccurate and irrational decision results. First of all, the uncertainty of the information cannot be described integrally in the evaluation of EVCS site selection. Secondly, rigorous consideration of the mutual influence between the various criteria is lacking, which shows up in two ways: the correlations are ignored, or they are measured unreasonably. Last but not least, the ranking method adopted in previous studies is not very appropriate for evaluating the EVCS site selection problem. As a result of the above analysis, a Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE)-based decision system combined with the cloud model is proposed in this paper for EVCS site selection. Firstly, the use of the PROMETHEE method can bolster the confidence and visibility of the results for decision makers. Secondly, the cloud model is recommended to describe the fuzziness and randomness of linguistic terms integrally and accurately. Finally, the Analytical Network Process (ANP) method is adopted to measure the correlation of the indicators, with a greatly simplified calculation of the parameters and the steps required.
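
    PROMETHEE's core, pairwise preference degrees aggregated into net outranking flows, fits in a few lines. The sketch below implements crisp PROMETHEE II with the simple "usual" preference function on invented site scores; the paper's cloud-model extension and ANP weighting are not reproduced, and the weights are assumptions.

    ```python
    import numpy as np

    def promethee_ii(matrix, weights):
        """PROMETHEE II net outranking flows with the usual (strict) preference
        function: a is preferred to b on criterion j if it scores higher."""
        n = matrix.shape[0]
        phi = np.zeros(n)
        for a in range(n):
            for b in range(n):
                if a == b:
                    continue
                pref_ab = (weights * (matrix[a] > matrix[b])).sum()  # pi(a, b)
                pref_ba = (weights * (matrix[b] > matrix[a])).sum()  # pi(b, a)
                phi[a] += (pref_ab - pref_ba) / (n - 1)
        return phi  # higher net flow = better candidate site

    sites = np.array([[0.7, 0.4, 0.9],    # stand-in criteria scores per EVCS site
                      [0.6, 0.8, 0.5],
                      [0.9, 0.3, 0.6]])
    print(np.argsort(promethee_ii(sites, np.array([0.5, 0.3, 0.2])))[::-1])
    ```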

  7. A Lightweight Structure Redesign Method Based on Selective Laser Melting

    Directory of Open Access Journals (Sweden)

    Li Tang

    2016-11-01

    Full Text Available The purpose of this paper is to present a new design method for lightweight parts fabricated by selective laser melting (SLM), based on the "skin-frame" concept, and to explore the influence of fabrication defects on SLM parts of different sizes. Some standard lattice parts were designed according to the Chinese GB/T 1452-2005 standard and manufactured by SLM. These samples were then tested in an MTS Insight 30 compression testing machine to study the trends of the yield process for different structure sizes. A set of standard cylinder samples was also designed according to the Chinese GB/T 228-2010 standard. These samples, made of the iron-nickel alloy IN718, were also processed by SLM and then tested in an INSTRON 1346 universal material testing machine to obtain their tensile strength. Furthermore, a lightweight redesign method was investigated, and common parts such as a stopper and a connecting plate were redesigned using it. The redesigned parts were fabricated, and application tests have already been performed. The compression test results show that when the minimum structure size is larger than 1.5 mm, the mechanical characteristics are hardly affected by process defects. The cylinder parts fractured at about 1069.6 MPa in the universal material testing machine. The redesigned parts worked well in the application tests, with both the weight and the fabrication time reduced by more than 20%.

  8. Supplier selection an MCDA-based approach

    CERN Document Server

    Mukherjee, Krishnendu

    2017-01-01

    The purpose of this book is to present a comprehensive review of the latest research and development trends at the international level in modeling and optimization of the supplier selection process for different industrial sectors. It is targeted to serve two audiences: the MBA or PhD student interested in procurement, and the practitioner who wishes to gain a deeper understanding of procurement analysis with multi-criteria-based decision tools, in order to avoid upstream risks and get better supply chain visibility. The book is expected to serve as a ready reference on supplier selection criteria and on the various multi-criteria-based methods for evaluating suppliers in forward, reverse and mass-customized supply chains. It covers several criteria and methods for supplier selection in a systematic way, based on an extensive literature review from 1998 to 2012. It provides several case studies and some useful links which can serve as a starting point for interested researchers. In the appendix several computer code wri...

  9. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available The support vector regression algorithm is widely used in the fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined as the state vector, so that the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function's weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
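
    A mixed kernel of the kind described, a convex combination of an RBF and a polynomial kernel, can be passed to scikit-learn's SVR as a callable. In the sketch below the fusion coefficient and kernel parameters are fixed by hand rather than estimated with a cubature Kalman filter as in the paper; the toy data stand in for bearing-fault features.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

    def make_mixed_kernel(alpha=0.7, gamma=0.5, degree=2):
        """Convex combination of an RBF and a polynomial kernel; alpha is the
        fusion coefficient (fixed here, adaptively estimated in the paper)."""
        def kernel(A, B):
            return alpha * rbf_kernel(A, B, gamma=gamma) + \
                   (1 - alpha) * polynomial_kernel(A, B, degree=degree)
        return kernel

    rng = np.random.default_rng(9)
    X = rng.uniform(-2, 2, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)   # stand-in vibration feature

    svr = SVR(kernel=make_mixed_kernel(), C=10.0).fit(X, y)
    print(round(svr.score(X, y), 3))
    ```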

  10. Alternative microbial methods: An overview and selection criteria.

    Science.gov (United States)

    Jasson, Vicky; Jacxsens, Liesbeth; Luning, Pieternel; Rajkovic, Andreja; Uyttendaele, Mieke

    2010-09-01

    This study provides an overview of, and selection criteria for, methods other than the reference method for the microbial analysis of foods. In the first part, an overview of the general characteristics of the available rapid methods, both for enumeration and for detection, is given with references to the relevant bibliography. Perspectives on future development and the potential of rapid methods for routine application in food diagnostics are discussed. As various alternative "rapid" methods in different formats are available on the market, it can be very difficult for a food business operator or a control authority to select the method most appropriate for its purpose. Validation of a method by a third party, according to an internationally accepted protocol based upon ISO 16140, may increase confidence in the performance of a method. A list of currently validated methods for the enumeration of both utility indicators (aerobic plate count) and hygiene indicators (Enterobacteriaceae, Escherichia coli, coagulase-positive Staphylococcus), as well as for the detection of the four major pathogens (Salmonella spp., Listeria monocytogenes, E. coli O157 and Campylobacter spp.), is included with references to the relevant websites to check for updates. In the second part of this study, selection criteria are introduced to underpin the choice of the appropriate method(s) for a defined application. The selection criteria link the definition of the context in which the user of the method functions - and thus the prospective use of the microbial test results - with the technical information on the method and its operational requirements and sustainability. The selection criteria can help the end user of the method to obtain a systematic insight into all relevant factors to be taken into account when selecting a method for microbial analysis.

  11. Reference satellite selection method for GNSS high-precision relative positioning

    Directory of Open Access Journals (Sweden)

    Xiao Gao

    2017-03-01

    Full Text Available Selecting the optimal reference satellite is an important component of high-precision relative positioning because the reference satellite directly influences the strength of the normal equation. Reference satellite selection methods based on elevation and on the positional dilution of precision (PDOP) value were compared. The results show that neither of these methods always selects the optimal reference satellite. We therefore introduce the condition number of the design matrix into the reference satellite selection method to improve the structure of the normal equation, because the condition number indicates the ill-conditioning of the normal equation. The experimental results show that the new method can improve positioning accuracy and reliability in precise relative positioning.
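
    The proposal is easy to demonstrate: for each candidate reference satellite, form the differenced design matrix and compute its condition number, then keep the candidate that minimizes it. The toy below does this with random line-of-sight unit vectors; the matrix construction is a simplified stand-in for a full double-difference observation model.

    ```python
    import numpy as np

    def differenced_design(unit_vectors, ref):
        """Simplified differenced design matrix: line-of-sight unit vectors of all
        satellites minus that of the chosen reference satellite."""
        rows = [unit_vectors[i] - unit_vectors[ref]
                for i in range(len(unit_vectors)) if i != ref]
        return np.array(rows)

    rng = np.random.default_rng(10)
    los = rng.normal(size=(8, 3))
    los /= np.linalg.norm(los, axis=1, keepdims=True)   # stand-in line-of-sight vectors

    # Pick the reference satellite that minimizes the condition number.
    conds = [np.linalg.cond(differenced_design(los, r)) for r in range(len(los))]
    print(int(np.argmin(conds)), [round(c, 1) for c in conds])
    ```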

  12. A consistency-based feature selection method allied with linear SVMs for HIV-1 protease cleavage site prediction.

    Directory of Open Access Journals (Sweden)

    Orkun Oztürk

    Full Text Available BACKGROUND: Predicting the type-1 Human Immunodeficiency Virus (HIV-1) protease cleavage site in protein molecules and determining its specificity is an important task which has attracted considerable attention in the research community. Achievements in this area are expected to result in effective drug design (especially of HIV-1 protease inhibitors) against this life-threatening virus. However, some drawbacks (like the shortage of available training data and the high dimensionality of the feature space) turn this task into a difficult classification problem. Thus, various machine learning techniques, and specifically several classification methods, have been proposed in order to increase the accuracy of the classification model. In addition, for several classification problems characterized by few samples and many features, selecting the most relevant features is a major factor for increasing classification accuracy. RESULTS: We propose for HIV-1 data a consistency-based feature selection approach in conjunction with recursive feature elimination of support vector machines (SVMs). We used various classifiers for evaluating the results obtained from the feature selection process. We further demonstrate the effectiveness of our proposed method by comparing it with a state-of-the-art feature selection method applied to HIV-1 data, and we evaluate the reported results based on attributes which have been selected from different combinations. CONCLUSION: Applying feature selection on training data before carrying out the classification task seems to be a reasonable data-mining process when working with types of data similar to HIV-1. On HIV-1 data, some feature selection or extraction operations in conjunction with different classifiers have been tested and noteworthy outcomes have been reported. These facts motivate the work presented in this paper. SOFTWARE AVAILABILITY: The software is available at http
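
    The SVM side of the approach, recursive feature elimination with a linear SVM, is available directly in scikit-learn. A minimal sketch on synthetic stand-in data (not HIV-1 cleavage data) follows; the consistency-based pre-selection step is not reproduced, and all sizes are assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.svm import LinearSVC

    # Stand-in for encoded octamer peptide features around a candidate cleavage site.
    X, y = make_classification(n_samples=300, n_features=160, n_informative=12,
                               random_state=0)

    # Recursive feature elimination with a linear SVM: repeatedly drop the
    # features with the smallest |weight| until 20 remain.
    rfe = RFE(LinearSVC(C=0.1, dual=False), n_features_to_select=20, step=10)
    rfe.fit(X, y)
    print(np.flatnonzero(rfe.support_))   # indices of retained features
    ```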

  13. Methods for selective functionalization and separation of carbon nanotubes

    Science.gov (United States)

    Strano, Michael S. (Inventor); Usrey, Monica (Inventor); Barone, Paul (Inventor); Dyke, Christopher A. (Inventor); Tour, James M. (Inventor); Kittrell, W. Carter (Inventor); Hauge, Robert H (Inventor); Smalley, Richard E. (Inventor); Marek, legal representative, Irene Marie (Inventor)

    2011-01-01

    The present invention is directed toward methods of selectively functionalizing carbon nanotubes of a specific type or range of types, based on their electronic properties, using diazonium chemistry. The present invention is also directed toward methods of separating carbon nanotubes into populations of specific types or range(s) of types via selective functionalization and electrophoresis, and also to the novel compositions generated by such separations.

  14. An Integrated DEMATEL-VIKOR Method-Based Approach for Cotton Fibre Selection and Evaluation

    Science.gov (United States)

    Chakraborty, Shankar; Chatterjee, Prasenjit; Prasad, Kanika

    2018-01-01

    Selection of the most appropriate cotton fibre type for yarn manufacturing is often treated as a multi-criteria decision-making (MCDM) problem, as the optimal selection decision needs to be taken in the presence of several conflicting fibre properties. In this paper, two popular MCDM methods, decision making trial and evaluation laboratory (DEMATEL) and VIse Kriterijumska Optimizacija kompromisno Resenje (VIKOR), are integrated to aid the cotton fibre selection decision. The DEMATEL method addresses the interrelationships between the various physical properties of cotton fibres while segregating them into cause and effect groups, whereas the VIKOR method helps in ranking all 17 considered cotton fibres from best to worst. The derived ranking of cotton fibre alternatives closely matches that obtained by past researchers. This model can assist spinning industry personnel in making accurate fibre selection decisions during the blending process, when the cotton fibre properties are numerous and interrelated.

  15. [Research on K-means clustering segmentation method for MRI brain image based on selecting multi-peaks in gray histogram].

    Science.gov (United States)

    Chen, Zhaoxue; Yu, Haizhong; Chen, Hao

    2013-12-01

    To solve the problem that the initial clustering centers in traditional K-means clustering are selected randomly, we proposed a new K-means segmentation algorithm based on robustly selecting the 'peaks' standing for white matter, gray matter and cerebrospinal fluid in the multi-peak gray histogram of an MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means clustering centers and can segment the MRI brain image into the three tissue classes more effectively, accurately, steadily and successfully. Extensive experiments have proved that the proposed algorithm can overcome many shortcomings of the traditional K-means clustering method, such as low efficiency, poor accuracy, weak robustness and long running time. The histogram-peak selection idea of the proposed segmentation method has broad applicability.
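
    The key idea, seeding K-means with histogram peaks instead of random centers, can be sketched directly: find the most prominent peaks of the gray histogram and pass their gray values as the initial cluster centers. The toy 'image' below mixes three Gaussian intensity populations; the peak-finding parameters are assumptions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks
    from sklearn.cluster import KMeans

    def histogram_peak_kmeans(image, n_tissues=3):
        """Initialize K-means with the gray values of the largest histogram peaks
        instead of random centers."""
        hist, edges = np.histogram(image.ravel(), bins=256)
        peaks, props = find_peaks(hist, distance=10, height=1)
        # Keep the n_tissues most prominent peaks as initial cluster centers.
        top = peaks[np.argsort(props["peak_heights"])[::-1][:n_tissues]]
        centers = ((edges[top] + edges[top + 1]) / 2).reshape(-1, 1)
        km = KMeans(n_clusters=n_tissues, init=np.sort(centers, axis=0), n_init=1)
        return km.fit_predict(image.reshape(-1, 1)).reshape(image.shape)

    # Toy 'brain image': three Gaussian intensity populations.
    rng = np.random.default_rng(11)
    img = np.concatenate([rng.normal(60, 8, 4000), rng.normal(120, 8, 4000),
                          rng.normal(200, 8, 4000)]).reshape(120, 100)
    labels = histogram_peak_kmeans(img)
    print(np.unique(labels, return_counts=True))
    ```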

  16. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    Science.gov (United States)

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

    In this study, paper-based microfluidic devices (μPADs) capable of K+ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K+ in the presence of Na+, Li+, and Mg2+ ions. Successful addition of a suspended lipophilic phase to a wax-printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K+ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K+. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K+ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out with the naked eye.

  17. Computational intelligence-based polymerase chain reaction primer selection based on a novel teaching-learning-based optimisation.

    Science.gov (United States)

    Cheng, Yu-Huei

    2014-12-01

    Specific primers play an important role in polymerase chain reaction (PCR) experiments, and therefore it is essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, teaching-learning-based optimisation (TLBO), to select specific and feasible primers. Experiments were performed with specified PCR product lengths of 150-300 bp and 500-800 bp and with three melting temperature formulae: Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula. The authors calculate the optimal frequency to estimate the quality of primer selection, based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then fairly compared with the genetic algorithm (GA) and the memetic algorithm (MA) for primer selection reported in the literature. The results show that the method easily found suitable primers satisfying the set primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the common tool Primer3 in terms of method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In the future, the usage of the primers in the wet lab needs to be validated carefully to increase the reliability of the method.
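
    Of the three melting temperature formulae listed above, Wallace's rule is the simplest: Tm = 2(A+T) + 4(G+C). The sketch below computes it and checks a primer against two common constraints; the acceptable Tm and GC-content ranges are illustrative assumptions, not the paper's constraint set.

    ```python
    def wallace_tm(primer):
        """Wallace rule: Tm = 2(A+T) + 4(G+C), in degrees Celsius."""
        p = primer.upper()
        return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

    def primer_ok(primer, tm_range=(52, 62), gc_range=(0.4, 0.6)):
        """Check two common PCR constraints: melting temperature and GC content."""
        gc = (primer.upper().count("G") + primer.upper().count("C")) / len(primer)
        return tm_range[0] <= wallace_tm(primer) <= tm_range[1] and \
               gc_range[0] <= gc <= gc_range[1]

    primer = "AGCGTACCTGATCGTAAGC"
    print(wallace_tm(primer), primer_ok(primer))
    ```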

  18. Determining Selection across Heterogeneous Landscapes: A Perturbation-Based Method and Its Application to Modeling Evolution in Space.

    Science.gov (United States)

    Wickman, Jonas; Diehl, Sebastian; Blasius, Bernd; Klausmeier, Christopher A; Ryabov, Alexey B; Brännström, Åke

    2017-04-01

    Spatial structure can decisively influence the way evolutionary processes unfold. To date, several methods have been used to study evolution in spatial systems, including population genetics, quantitative genetics, moment-closure approximations, and individual-based models. Here we extend the study of spatial evolutionary dynamics to eco-evolutionary models based on reaction-diffusion equations and adaptive dynamics. Specifically, we derive expressions for the strength of directional and stabilizing/disruptive selection that apply both in continuous space and to metacommunities with symmetrical dispersal between patches. For directional selection on a quantitative trait, this yields a way to integrate local directional selection across space and determine whether the trait value will increase or decrease. The robustness of this prediction is validated against quantitative genetics. For stabilizing/disruptive selection, we show that spatial heterogeneity always contributes to disruptive selection and hence always promotes evolutionary branching. The expression for directional selection is numerically very efficient and hence lends itself to simulation studies of evolutionary community assembly. We illustrate the application and utility of the expressions for this purpose with two examples of the evolution of resource utilization. Finally, we outline the domain of applicability of reaction-diffusion equations as a modeling framework and discuss their limitations.

  19. Supplier Selection Using Weighted Utility Additive Method

    Science.gov (United States)

    Karande, Prasad; Chakraborty, Shankar

    2015-10-01

    Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria in order to choose the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing costs, increase downstream customer satisfaction and improve competitive ability. Past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve the active participation of the decision makers in the decision-making process. This paper deals with the application of the weighted utility additive (WUTA) method to supplier selection problems. The WUTA method, an extension of the utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts the preference disaggregation principle and addresses decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives, or of a subset of these alternatives, present in the process. The preferential preorder provided by the decision maker is used as a restriction of an LP problem, which has its own objective function: minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that the WUTA method, having a sound mathematical background, can provide an accurate ranking of the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real-world examples are illustrated to prove its applicability.

  20. A review of methods supporting supplier selection

    NARCIS (Netherlands)

    de Boer, L.; Labro, Eva; Morlacchi, Pierangela

    2001-01-01

    In this paper we present a review of the decision methods reported in the literature for supporting the supplier selection process. The review is based on an extensive search in the academic literature. We position the contributions in a framework that takes the diversity of procurement situations in terms…

  1. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert knowledge remains the method of choice for determining how many peaks among thousands of candidates should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate peak selection as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods WaVPeak [1] and PICKY [2]. Compared with the traditional fixed-number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and can be straightforwardly applied to other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method are available at http://sfb.kaust.edu.sa/pages/software.aspx.
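
    The B-H procedure itself is a few lines of NumPy: sort the p-values, find the largest k with p_(k) <= (k/m)·alpha, and accept the k smallest. The sketch below applies it to invented peak intensities whose p-values come from an empirical noise model; the null model and all sizes are assumptions, not the paper's conversion scheme.

    ```python
    import numpy as np

    def benjamini_hochberg_select(pvalues, alpha=0.05):
        """Return indices of predictions accepted by the Benjamini-Hochberg
        procedure at false discovery rate alpha."""
        p = np.asarray(pvalues)
        order = np.argsort(p)
        m = len(p)
        # Largest k with p_(k) <= (k / m) * alpha; accept the k smallest p-values.
        below = p[order] <= (np.arange(1, m + 1) / m) * alpha
        if not below.any():
            return np.array([], dtype=int)
        k = np.max(np.flatnonzero(below)) + 1
        return order[:k]

    # Stand-in peak scores: convert intensities to p-values under a noise model
    # (here, the empirical survival function of a noise sample).
    rng = np.random.default_rng(12)
    noise = rng.normal(size=5000)
    intensities = np.concatenate([rng.normal(size=200), rng.normal(4, 1, size=30)])
    pvals = np.array([(noise >= x).mean() for x in intensities])
    print(len(benjamini_hochberg_select(pvals)))   # roughly the number of true peaks
    ```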

  2. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed

    2013-01-07

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert knowledge remains the method of choice for determining how many peaks among thousands of candidates should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate peak selection as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods WaVPeak [1] and PICKY [2]. Compared with the traditional fixed-number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and can be straightforwardly applied to other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method are available at http://sfb.kaust.edu.sa/pages/software.aspx.

  3. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Zhehuang Huang

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, for example on its global exploration ability and convergence speed. How to construct and select the behaviors of the fishes is therefore an important task. To address this problem, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. The work makes three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, several new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  4. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, for example on its global exploration ability and convergence speed. How to construct and select the behaviors of the fishes is therefore an important task. To address this problem, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. The work makes three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, several new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
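
    A log-linear model over candidate behaviors amounts to a softmax choice rule: each behavior receives a score from weighted features, and selection probabilities are proportional to the exponentiated scores. The sketch below is a hedged illustration of that idea rather than the authors' exact formulation; the behavior names, features and weights are hypothetical.

        import numpy as np

        def select_behavior(features, weights, rng=np.random.default_rng()):
            """Pick one fish behavior via a log-linear (softmax) model."""
            scores = features @ weights               # log-linear scores
            probs = np.exp(scores - scores.max())     # numerically stable softmax
            probs /= probs.sum()
            behaviors = ["prey", "swarm", "follow", "move"]
            return rng.choice(behaviors, p=probs), probs

        # Hypothetical per-behavior features: (fitness gain, crowding, diversity)
        features = np.array([[0.8, 0.1, 0.3],
                             [0.4, 0.6, 0.2],
                             [0.5, 0.4, 0.1],
                             [0.1, 0.1, 0.9]])
        weights = np.array([1.5, -0.5, 0.8])          # learned feature weights
        choice, probs = select_behavior(features, weights)
        print(choice, np.round(probs, 3))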

  5. Evaluating the sustainable mining contractor selection problems: An imprecise last aggregation preference selection index method

    Directory of Open Access Journals (Sweden)

    Mohammad Panahi Borujeni

    2017-01-01

    The increasing complexity surrounding decision-making situations has made it inevitable for practitioners to draw on the opinions of a group of experts or decision makers (DMs) rather than individuals. In a large proportion of recent studies, not enough attention has been paid to handling uncertainty in practical ways. In this paper, a hesitant fuzzy preference selection index (HFPSI) method is proposed, based on a new soft computing approach with the risk preferences of DMs, to deal with imprecise multi-criteria decision-making problems. Qualitative assessment criteria are considered in the process of the proposed method to help the DMs by providing suitable expressions of membership degrees for an element under a set. Moreover, the best alternative is selected by considering the concepts of preference relations and hesitant fuzzy sets simultaneously. The DMs' weights are determined according to the proposed hesitant fuzzy compromise solution technique to prevent judgment errors. The proposed method has also been extended to a last aggregation form, aggregating the DMs' opinions in the final stage to avoid data loss. A real case study of the mining contractor selection problem is provided to demonstrate the effectiveness and efficiency of the proposed HFPSI method in practice. A comparative analysis is then performed to show the feasibility of the presented approach. Finally, a sensitivity analysis is carried out to show the effect of the DMs' weights and the last aggregation approach on the dispersion of the alternatives' ranking values.

  6. Mining method selection by integrated AHP and PROMETHEE method.

    Science.gov (United States)

    Bogdanovic, Dejan; Nikolic, Djordje; Ilic, Ivana

    2012-03-01

    Selecting the best mining method among many alternatives is a multicriteria decision-making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The problem involves five possible mining methods and eleven criteria by which to evaluate them. The criteria are chosen to cover the most important parameters that affect mining method selection, such as geological and geotechnical properties, economic parameters and geographical factors. AHP is used to analyze the structure of the mining method selection problem and to determine the weights of the criteria, and the PROMETHEE method is used to obtain the final ranking and to perform a sensitivity analysis by changing the weights. The results show that the proposed integrated method can be successfully used in solving mining engineering problems.
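
    In AHP, criterion weights are commonly derived from the principal eigenvector of a pairwise comparison matrix, with a consistency ratio to flag incoherent judgments. Below is a minimal sketch with a hypothetical 3-criterion comparison matrix (the paper itself uses eleven criteria):

        import numpy as np

        def ahp_weights(A):
            """Criterion weights from a pairwise comparison matrix (Saaty's AHP)."""
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)            # principal eigenvalue
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()                           # normalized weights
            n = A.shape[0]
            ci = (eigvals[k].real - n) / (n - 1)   # consistency index
            cr = ci / 0.58                         # random index RI = 0.58 for n = 3
            return w, cr

        # Hypothetical judgments: criterion 1 is 3x as important as 2, 5x as 3
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        w, cr = ahp_weights(A)
        print(np.round(w, 3), round(cr, 3))  # CR < 0.1 means acceptable consistency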

  7. Method for Selection of Solvents for Promotion of Organic Reactions

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Jiménez-González, Concepción; Constable, David J.C.

    2005-01-01

    A method to select appropriate green solvents for the promotion of a class of organic reactions has been developed. The method combines knowledge from industrial practice and physical insights with computer-aided property estimation tools for the selection/design of solvents. In particular, it employs estimates of thermodynamic properties to generate a knowledge base of reaction, solvent and environment related properties that directly or indirectly influence the rate and/or conversion of a given reaction. Solvents are selected using a rules-based procedure driven by the estimated reaction-solvent properties; the aim is to produce, for a given reaction, a short list of chemicals that could be considered as potential solvents, to evaluate their performance in the reacting system, and, based on this, to rank them according to a scoring system. Several examples of application are given to illustrate the main features and steps of the method.

  8. Proactive AP Selection Method Considering the Radio Interference Environment

    Science.gov (United States)

    Taenaka, Yuzo; Kashihara, Shigeru; Tsukamoto, Kazuya; Yamaguchi, Suguru; Oie, Yuji

    In the near future, wireless local area networks (WLANs) will overlap to provide continuous coverage over a wide area. In such ubiquitous WLANs, a mobile node (MN) moving freely between multiple access points (APs) requires not only permanent access to the Internet but also continuous communication quality during handover. To satisfy these requirements, an MN needs to (1) select an AP with better performance and (2) execute a handover seamlessly. To satisfy requirement (2), we proposed a seamless handover method in a previous study. For requirement (1), the Received Signal Strength Indicator (RSSI) is usually employed to measure wireless link quality in a WLAN system. However, in a real environment, especially if APs are densely situated, it is difficult to always select an AP with better performance based on the RSSI alone, because the RSSI cannot detect the degradation of communication quality due to radio interference. Moreover, it is important that AP selection is completed by the MN alone, because in ubiquitous WLANs various organizations or operators can be assumed to manage the APs, so we cannot modify the APs for AP selection. To overcome these difficulties, in the present paper we propose and implement a proactive AP selection method that considers the wireless link condition based on the number of frame retransmissions in addition to the RSSI. In the evaluation, we show that the proposed AP selection method can appropriately select an AP with good wireless link quality, i.e., high RSSI and low radio interference.
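
    One way to realize such client-side selection is to penalize an AP's RSSI by its observed retransmission rate, so that a strong but interference-laden AP loses to a slightly weaker, cleaner one. The scoring rule and penalty weight below are hypothetical, not the authors' exact formula:

        def ap_score(rssi_dbm, retransmissions, frames_sent, penalty=50.0):
            """Score an AP from client-side measurements only.

            rssi_dbm        : received signal strength (e.g., -45)
            retransmissions : frames the MN had to retransmit
            frames_sent     : total frames sent in the probe window
            """
            retry_rate = retransmissions / max(frames_sent, 1)
            return rssi_dbm - penalty * retry_rate   # interference lowers the score

        # Hypothetical candidates: (RSSI, retransmissions, frames sent)
        candidates = {"AP1": (-42, 120, 400),   # strong signal, heavy interference
                      "AP2": (-55, 10, 400)}    # weaker signal, clean channel
        best = max(candidates, key=lambda ap: ap_score(*candidates[ap]))
        print(best)  # AP2 wins despite the lower RSSI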

  9. The experiments and analysis of several selective video encryption methods

    Science.gov (United States)

    Zhang, Yue; Yang, Cheng; Wang, Lei

    2013-07-01

    This paper presents four methods for selective video encryption based on MPEG-2 video compression, covering the slices, the I-frames, the motion vectors, and the DCT coefficients. We use the AES encryption method in simulation experiments for the four methods on the VS2010 platform, and compare the visual effects and the per-frame processing speed after the video is encrypted. The encryption depth can be selected arbitrarily and is designed using the double-limit counting method, so the accuracy can be increased.
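
    As an illustration of selective encryption, the sketch below encrypts only the payloads of selected units (here, I-frames) with AES in CTR mode, leaving the rest of the stream untouched. It uses the PyCryptodome package and a toy frame representation; real MPEG-2 parsing and the paper's double-limit counting scheme are beyond this sketch.

        from Crypto.Cipher import AES
        from Crypto.Random import get_random_bytes

        def encrypt_selected(frames, key, select=lambda f: f["type"] == "I"):
            """Encrypt only the payloads of frames matched by `select` (AES-CTR)."""
            out = []
            for f in frames:
                if select(f):
                    cipher = AES.new(key, AES.MODE_CTR)
                    f = dict(f, payload=cipher.encrypt(f["payload"]),
                             nonce=cipher.nonce)   # keep nonce for decryption
                out.append(f)
            return out

        # Toy bitstream: only the I-frame payload is encrypted
        key = get_random_bytes(16)
        frames = [{"type": "I", "payload": b"intra-coded data"},
                  {"type": "P", "payload": b"predicted data"}]
        print([("nonce" in f) for f in encrypt_selected(frames, key)])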

  10. A Selection Method for COTS Systems

    DEFF Research Database (Denmark)

    Hedman, Jonas

    This paper presents a method for selecting COTS systems, responding to the need for new skills and methods supporting the process of evaluating and selecting information systems. The method includes the following phases: problem framing, requirements and appraisal, and selection of systems. The idea and distinguishing feature behind the method is that an improved understanding of organizational 'ends' or goals should govern the selection of a COTS system. This can also be expressed as a match or fit between 'ends' (e.g. improved organizational effectiveness) and 'means' (e.g. implementing COTS systems). This way of approaching...

  11. Optimization methods for activities selection problems

    Science.gov (United States)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curricular activities must be joined by every student in Malaysia, and these activities bring many benefits to the students. By joining these activities, the students can learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using optimization methods, namely the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight of each activity based on the 3 chosen criteria, which are soft skills, interesting activities and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO software version 15.0. Two priorities were considered. The first priority, minimizing the budget for the activities, is achieved, since the total budget can be reduced by RM233.00; the total budget to implement the selected activities is therefore RM11,195.00. The second priority, selecting the co-curricular activities, is also achieved: 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activity selection problems.
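
    In zero-one goal programming, each activity is a binary variable and goals (here, a budget target) enter through deviation variables that the objective penalizes by priority. The sketch below uses the open-source PuLP solver with made-up costs and benefit scores; it mirrors the structure, not the paper's actual data.

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

        costs = [900, 1200, 700, 1500, 800]       # hypothetical activity costs (RM)
        scores = [0.9, 0.6, 0.8, 0.7, 0.5]        # hypothetical AHP-derived weights
        budget_goal = 3000                        # hypothetical budget target (RM)

        prob = LpProblem("activity_selection", LpMinimize)
        x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(costs))]
        over = LpVariable("over_budget", lowBound=0)   # positive deviation from goal

        # Goal constraint: total cost may exceed the target only via `over`
        prob += lpSum(c * xi for c, xi in zip(costs, x)) - over <= budget_goal
        prob += lpSum(x) >= 3                          # select at least 3 activities

        # Priority 1 (large weight): minimize budget overshoot;
        # priority 2: prefer high-score activities
        prob += 1000 * over - lpSum(s * xi for s, xi in zip(scores, x))
        prob.solve()
        print([i for i, xi in enumerate(x) if value(xi) == 1], value(over))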

  12. Computational Experiment Study on Selection Mechanism of Project Delivery Method Based on Complex Factors

    Directory of Open Access Journals (Sweden)

    Xiang Ding

    2014-01-01

    Project delivery planning is a key stage used by the project owner (or project investor) to organize the design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model, mainly to show how project complexity, governance strength, and the market environment affect the project owner's decision on the PDM. Experimental results show that project owners usually choose the Design-Build method when the project is very complex, within a certain range. This paper also points out that the Design-Build method will be the preferred choice when the potential contractors develop quickly. The paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.

  13. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    Science.gov (United States)

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data needed to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and predictions were then made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (the last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected by the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to GoF-optimal was the strategy using a 5-year prediction base. The GoF-optimal approach can be used as a selection criterion for finding an adequate base of prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
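
    The underlying idea can be sketched as a small search: for each candidate base length, fit a log-linear trend to the most recent window of counts, score its goodness of fit, and keep the base that scores best before projecting 5 years ahead. The score below uses a plain R² on log counts as a stand-in for the paper's GoF criterion; the data are synthetic.

        import numpy as np

        def gof_optimal_base(years, counts, bases=range(5, 21)):
            """Pick the historical window whose log-linear fit scores best."""
            best = None
            for b in bases:
                y, c = years[-b:], np.log(counts[-b:])
                coef = np.polyfit(y, c, 1)                 # log-linear trend
                resid = c - np.polyval(coef, y)
                r2 = 1 - resid.var() / c.var()             # stand-in GoF score
                if best is None or r2 > best[0]:
                    best = (r2, b, coef)
            return best                                    # (score, base, fit)

        # Synthetic mortality counts with a recent trend change
        years = np.arange(1980, 2008)
        counts = np.concatenate([np.full(18, 1000), 1000 * 1.02 ** np.arange(10)])
        r2, base, coef = gof_optimal_base(years, counts)
        pred = np.exp(np.polyval(coef, np.arange(2008, 2013)))  # 5-year projection
        print(base, np.round(pred).astype(int))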

  14. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Lars; Andersen, Søren; Damkilde, Lars

    2009-01-01

    The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.

  15. Personnel selection using group fuzzy AHP and SAW methods

    Directory of Open Access Journals (Sweden)

    Ali Reza Afshari

    2017-01-01

    Personnel evaluation and selection is a very important activity for enterprises. Different jobs require different abilities, and the criteria by which those abilities can be measured differ as well; a suitable and flexible method is therefore needed to evaluate the performance of each candidate against the requirements of each job on every criterion. The Analytic Hierarchy Process (AHP) is a multi-criteria decision-making method derived from paired comparisons. Simple Additive Weighting (SAW) is the most frequently used multi-attribute decision technique; the method is based on the weighted average. The combination successfully models the ambiguity and imprecision associated with the pairwise comparison process and reduces personal bias. This study analyzes the Analytic Hierarchy Process in order to make the recruitment process more reasonable, based on a fuzzy multiple-criteria decision-making model, to achieve the goal of personnel selection. Finally, an example is implemented to demonstrate the practicability of the proposed method.
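
    SAW itself is a one-line weighted average over a normalized decision matrix. Here is a minimal sketch with hypothetical candidates and criterion weights (benefit criteria only; cost criteria would be normalized as min/x instead):

        import numpy as np

        def saw_rank(decision_matrix, weights):
            """Simple Additive Weighting: normalize columns, then weighted sum."""
            X = np.asarray(decision_matrix, dtype=float)
            norm = X / X.max(axis=0)         # benefit-criterion normalization
            return norm @ weights            # overall score per candidate

        # Hypothetical candidates scored on (experience, test score, interview)
        candidates = ["Ana", "Ben", "Cai"]
        X = [[5, 80, 7],
             [3, 95, 8],
             [8, 70, 6]]
        w = np.array([0.5, 0.3, 0.2])        # e.g., AHP-derived criterion weights
        scores = saw_rank(X, w)
        print(sorted(zip(candidates, np.round(scores, 3)), key=lambda t: -t[1]))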

  16. A simple and rapid method for calixarene-based selective extraction of bioactive molecules from natural products.

    Science.gov (United States)

    Segneanu, Adina-Elena; Damian, Daniel; Hulka, Iosif; Grozescu, Ioan; Salifoglou, Athanasios

    2016-03-01

    Natural products derived from medicinal plants have gained an important role in drug discovery due to their complex and abundant composition of secondary metabolites, whose structurally unique molecular components bear a significant number of stereo-centers exhibiting high specificity linked to biological activity. Usually, the extraction process for natural products involves various techniques targeting the separation of a specific class of compounds from a highly complex matrix. Aiding the process entails the use of well-defined and selective molecular extractants with distinctly configured structural attributes. Calixarenes conceivably belong to that class of molecules. They have been studied intensely over the years in an effort to develop new and highly selective receptors for biomolecules. These macrocycles, which display remarkable structural architectures and properties, could help usher in a new approach to the efficient separation of specific classes of compounds from complex matrices in natural products. Such a simple and rapid extraction method is presented herein, based on host-guest interaction(s) between a calixarene synthetic receptor, 4-tert-butyl-calix[6]arene, and natural biomolecular targets (amino acids and peptides) from Helleborus purpurascens and Viscum album. Advanced physicochemical methods (including GC-MS and chip-based nanoESI-MS analysis) suggest that the molecular structure, and specifically the calixarene cavity size, is closely linked to the nature of the compounds separated. Incorporation of biomolecules and modification of the macrocyclic architecture during separation were probed and confirmed by scanning electron microscopy and atomic force microscopy. The collective results project calixarene as a promising molecular extractant candidate, facilitating the selective separation of amino acids and peptides from natural products.

  17. Oral cancer prognosis based on clinicopathologic and genomic markers using a hybrid of feature selection and machine learning methods

    Science.gov (United States)

    2013-01-01

    Background Machine learning techniques are becoming useful as an alternative approach to conventional medical diagnosis or prognosis, as they are good at handling noisy and incomplete data, and significant results can be attained despite a small sample size. Traditionally, clinicians make prognostic decisions based on clinicopathologic markers. However, it is not easy for even the most skilful clinician to arrive at an accurate prognosis using these markers alone. Thus, there is a need to use genomic markers to improve the accuracy of prognosis. The main aim of this research is to apply a hybrid of feature selection and machine learning methods to oral cancer prognosis based on the correlation of clinicopathologic and genomic markers. Results In the first stage of this research, five feature selection methods were proposed and tested on the oral cancer prognosis dataset. In the second stage, models with the features selected by each feature selection method were tested on the proposed classifiers. Four types of classifiers were chosen, namely ANFIS, artificial neural network, support vector machine and logistic regression. k-fold cross-validation was implemented for all classifiers due to the small sample size. The hybrid model of ReliefF-GA-ANFIS with 3 input features (drink, invasion and p63) achieved the best accuracy (accuracy = 93.81%; AUC = 0.90) for oral cancer prognosis. Conclusions The results revealed that the prognosis is superior when both clinicopathologic and genomic markers are present. The selected features can be investigated further to validate their potential as a significant prognostic signature in oral cancer studies. PMID:23725313
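
    The two-stage design (select features, then validate classifiers with k-fold cross-validation) can be sketched with scikit-learn. Mutual information stands in here for the paper's ReliefF-GA selector and logistic regression for ANFIS; the data are synthetic.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Synthetic stand-in for a small clinicopathologic + genomic dataset
        X, y = make_classification(n_samples=60, n_features=25, n_informative=3,
                                   random_state=0)

        # Stage 1: pick 3 features; stage 2: k-fold CV of the classifier.
        # Putting selection inside the pipeline keeps it inside each CV fold.
        model = make_pipeline(SelectKBest(mutual_info_classif, k=3),
                              LogisticRegression(max_iter=1000))
        acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(acc.mean().round(3))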

  18. Simulation-based investigation of the paired-gear method in cod-end selectivity studies

    DEFF Research Database (Denmark)

    Herrmann, Bent; Frandsen, Rikke; Holst, René

    2007-01-01

    In this paper, the paired-gear and covered cod-end methods for estimating the selectivity of trawl cod-ends are compared. A modified version of the cod-end selectivity simulator PRESEMO is used to simulate the data that would be collected from a paired-gear experiment where the test cod-end also ...

  19. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to dynamically change the mutation probability of chromosomes. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm-based dimensionality selection. The results show an improvement of 3% in true positives and 1.6% in true negatives over conventional dimensionality selection methods.
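
    The key trick is computing each dimension's information gain once, then biasing the GA's per-gene mutation rates with it so that informative dimensions are toggled less aggressively. A minimal sketch with a hypothetical scaling rule (the paper does not publish its exact formula):

        import numpy as np

        def information_gain(X, y):
            """IG of each binary feature for a binary class label."""
            def entropy(p):
                p = p[p > 0]
                return -(p * np.log2(p)).sum()
            h_y = entropy(np.bincount(y) / len(y))
            gains = np.empty(X.shape[1])
            for j in range(X.shape[1]):
                g = 0.0
                for v in (0, 1):
                    mask = X[:, j] == v
                    if mask.any():
                        g += mask.mean() * entropy(np.bincount(y[mask]) / mask.sum())
                gains[j] = h_y - g
            return gains

        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(200, 10))
        y = X[:, 3] ^ (rng.random(200) < 0.1)       # feature 3 is informative

        # Hypothetical rule: high-gain genes mutate less (rates in [0.01, 0.2])
        ig = information_gain(X, y)
        mut_prob = 0.2 - 0.19 * ig / ig.max()
        print(np.round(mut_prob, 3))                # lowest rate at gene 3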

  20. Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-01-01

    By simplifying the tolerance problem and treating faulty voltages on different test points as independent variables, the integer-coded table technique has been proposed to simplify the test point selection process. However, simplifying the tolerance problem may induce a wrong solution, while the independence assumption yields overly conservative results. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated using the ambiguity sets and the faulty voltage distribution determined by component tolerance. Second, the selected optimal test point is used to expand the current graph node by using the dependence relationship between the test point and the graph node. Simulation results indicate that the proposed method finds the optimal set of test points more accurately than other methods; it is therefore a good solution for minimizing the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.

  1. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  2. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  3. Diversified models for portfolio selection based on uncertain semivariance

    Science.gov (United States)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since the financial markets are complex, the future security returns are sometimes represented mainly on the basis of experts' estimations due to the lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given subject to experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of the semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.

  4. Supplier selection in manufacturing innovation chain-oriented public procurement based on improved PSO method

    Directory of Open Access Journals (Sweden)

    Xin Xu

    2014-01-01

    Purpose: In a dynamic innovation market, it is very difficult for an enterprise to accomplish innovation individually; technology innovation is shifting towards a collaborative R&D chain mode. Thus, supplier selection based on the innovation efficiency of an individual enterprise is inapplicable to constructing a collaborative R&D innovation chain. This study seeks to address how to select R&D innovation chain suppliers in the manufacturing industry. Design/methodology/approach: First, the Delphi and AHP methods are applied to establish an index system for evaluating the suppliers of an innovation chain, and each index is weighted by experts with the AHP method. An optimized PSO algorithm is then put forward, based on the optimal efficiency of the innovation chain, to discriminate ideal suppliers meeting realistic conditions. Finally, innovation chain construction in the generator manufacturing industry is taken as an empirical case study to test the improved PSO model. Findings: An innovation chain comprises several enterprises; the innovation performance of a single enterprise is not always positively correlated with that of the innovation chain, and the proposed model is capable of finding the best combination to construct an innovation chain. Research limitations/implications: The relations between these constructs and other variables of interest to the academic field were analyzed using precise and credible data, with a clear and concise description of the supply chain integration measurement scales. Practical implications: The study provides scales that are valid as a diagnostic tool for best practices, as well as a benchmark with which to compare the score of each individual plant against a chain of industrial innovation from machinery. Originality/value: Innovation chain integration is an important factor in explaining the innovation performance of companies. The vast range of results obtained is due to the fact that there is no exactness to

  5. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration

    Directory of Open Access Journals (Sweden)

    Haitao Chang

    2016-06-01

    One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL were measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity.

  6. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method

    Science.gov (United States)

    Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing

    2018-02-01

    In general, sound waves cause the vibration of objects encountered in their traveling path. If a laser beam illuminates the rough surface of such an object, it is scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. This method selects, from a small region of the speckle patterns, the pixels whose gray-value variations over time have large variances. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, our method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified on a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and requires over an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal of 1.876 s duration is recovered from various objects with a time consumption of only 5.38 s.
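
    The selection step itself is a per-pixel variance ranking over the frame stack. Below is a minimal numpy sketch under the assumption that the camera frames are already loaded as a (time, height, width) array:

        import numpy as np

        def recover_sound(frames, top_k=50):
            """Recover a 1-D sound signal from a high-speed speckle video.

            frames : (T, H, W) array of grayscale frames
            top_k  : number of high-variance pixels to keep
            """
            T, H, W = frames.shape
            pixels = frames.reshape(T, H * W).astype(float)
            signals = pixels - pixels.mean(axis=0)          # gray-value variations
            idx = np.argsort(signals.var(axis=0))[-top_k:]  # highest-variance pixels
            sound = signals[:, idx].sum(axis=1)             # simple summation model
            return sound / np.abs(sound).max()              # normalized waveform

        # Synthetic stand-in: a 1 kHz tone modulating a few speckle pixels
        rng = np.random.default_rng(1)
        t = np.arange(2000) / 20000.0                       # 20 kfps camera
        frames = rng.normal(128, 2, size=(2000, 32, 32))
        frames[:, 10:13, 10:13] += 20 * np.sin(2 * np.pi * 1000 * t)[:, None, None]
        print(recover_sound(frames).shape)                  # (2000,) waveform samples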

  7. Quick Link Selection Method by Using Pricing Strategy Based on User Equilibrium for Implementing an Effective Urban Travel Demand Management

    Directory of Open Access Journals (Sweden)

    Shahriar Afandizadeh Zargari

    2016-12-01

    This paper presents a two-stage optimization model as a quick method to choose the best potential links for implementing an urban travel demand management (UTDM) strategy such as road pricing. The model is optimized by minimizing the hidden cost of congestion based on user equilibrium (MHCCUE). It forecasts the exact amounts of flow and toll for links under the user equilibrium condition to determine the hidden cost of each link, and optimizes the link selection based on network congestion priority. The results show that not only is the total cost decreased, but the number of links selected for pricing is also reduced compared with previous toll minimization methods. Moreover, as this model uses only the traffic assignment data for calculation, it can be considered a quick and optimal solution for choosing the potential links.

  8. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    Science.gov (United States)

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

    In order to improve the accuracy of classification with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analysis method that automatically selects the characteristic parameters based on correlation coefficient analysis. For the five sample data of dataset IVa from the 2005 BCI Competition, we utilized the short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimension of the primitive electroencephalogram signals, then introduced feature extraction based on common spatial patterns (CSP) and classified with linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy could be improved by using the correlation coefficient feature selection method compared with not using it. Compared with the support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis can lead to better selection parameters that improve the accuracy of classification.

  9. Reliability-based decision making for selection of ready-mix concrete supply using stochastic superiority and inferiority ranking method

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ongkowijoyo, Citra Satria

    2015-01-01

    Corporate competitiveness is heavily influenced by the information acquired, processed, utilized and transferred by the professional staff involved in the supply chain. This paper develops a decision aid for selecting the on-site ready-mix concrete (RMC) unloading type in decision-making situations involving multiple stakeholders and evaluation criteria. The uncertainty of the criteria weights set by expert judgment can be transformed in random ways based on the probabilistic virtual-scale method within a prioritization matrix. The ranking is performed by grey relational grade systems considering stochastic criteria weights based on individual preference. Application of the decision-aiding model to an actual RMC case confirms that the method provides a robust and effective tool for facilitating decision making under uncertainty. - Highlights: • This study models a decision-aiding method to assess ready-mix concrete unloading type. • Applying Monte Carlo simulation to the virtual-scale method achieves a reliable process. • The individual preference ranking method enhances the quality of global decision making. • Robust stochastic superiority and inferiority ranking obtains reasonable results

  10. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets reveal that the proposed algorithm outperforms the others.

  11. Toward optimal feature selection using ranking methods and classification algorithms

    Directory of Open Access Journals (Sweden)

    Novaković Jasmina

    2011-01-01

    We present a comparison between several feature ranking methods used on two real datasets. We consider six ranking methods that can be divided into two broad categories: statistical and entropy-based. Four supervised learning algorithms are adopted to build models, namely IB1, Naive Bayes, the C4.5 decision tree and the RBF network. We show that the selection of ranking method can be important for classification accuracy. In our experiments, ranking methods paired with different supervised learning algorithms give quite different results for balanced accuracy. Our cases confirm that, in order to be sure that the subset of features giving the highest accuracy has been selected, the use of many different indices is recommended.

  12. Studies on the matched potential method for determining the selectivity coefficients of ion-selective electrodes based on neutral ionophores: experimental and theoretical verification.

    Science.gov (United States)

    Tohda, K; Dragoe, D; Shibata, M; Umezawa, Y

    2001-06-01

    A theory is presented that describes the matched potential method (MPM) for the determination of the potentiometric selectivity coefficients (KA,Bpot) of ion-selective electrodes for two ions of any charge. This MPM theory is based on electrical diffuse layers on both the membrane and the aqueous side of the interface, and is therefore independent of the Nicolsky-Eisenman equation. Instead, the Poisson equation is used and a Boltzmann distribution is assumed with respect to all charged species, including primary, interfering and background electrolyte ions located in the diffuse double layers. In this model, the MPM selectivity coefficients of ions with equal charge (ZA = ZB) are expressed as the ratio of the concentrations of the primary and interfering ions in aqueous solutions at which the same amounts of the primary and interfering ions are permselectively extracted into the membrane surface. For ions with unequal charge (ZA not equal to ZB), the selectivity coefficients are expressed as a function not only of the amounts of the primary and interfering ions permeating into the membrane surface, but also of the primary ion concentration in the initial reference solution and the delta EMF value. Using the measured complexation stability constants and single-ion distribution coefficients for the relevant systems, the corresponding MPM selectivity coefficients can be calculated from the developed MPM theory. It was found that this MPM theory is capable of accurately and precisely predicting the MPM selectivity coefficients for a series of ion-selective electrodes (ISEs) with representative ionophore systems, which are generally in complete agreement with MPM selectivity values determined independently from the potentiometric measurements. These results also confirm that the assumption of a Boltzmann distribution was in fact valid in the theory. The recent critical papers on MPM have pointed out that because the MPM selectivity coefficients are highly concentration

  13. Determination of Selection Method in Genetic Algorithm for Land Suitability

    Directory of Open Access Journals (Sweden)

    Irfianti Asti Dwi

    2016-01-01

    Genetic Algorithm is one alternative solution in the fields of optimization modeling, automatic programming and machine learning. The purpose of this study was to compare several types of selection methods in a Genetic Algorithm for land suitability. The contribution of this research is applying the best method to develop region-based horticultural commodities. The testing is done by comparing three selection methods: Roulette Wheel, Tournament Selection and Stochastic Universal Sampling. The location parameters used in the first test scenario include Temperature = 27°C, Rainfall = 1200 mm, Humidity = 30%, Cluster fruit = 4, Crossover Probability (Pc) = 0.6, Mutation Probability (Pm) = 0.2 and Epoch = 10. The second test scenario includes the location parameters Temperature = 30°C, Rainfall = 2000 mm, Humidity = 35%, Cluster fruit = 5, Crossover Probability (Pc) = 0.7, Mutation Probability (Pm) = 0.3 and Epoch = 10. The conclusion of this study is that Roulette Wheel is the best method because it produces more stable and higher fitness values than the other two methods.
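
    For reference, roulette-wheel selection draws parents with probability proportional to fitness, which is what makes it sensitive to fitness scaling but simple and stable on well-behaved problems. A minimal sketch:

        import numpy as np

        def roulette_wheel(population, fitness, n_parents,
                           rng=np.random.default_rng()):
            """Fitness-proportionate (roulette wheel) parent selection."""
            f = np.asarray(fitness, dtype=float)
            probs = f / f.sum()              # selection probability per individual
            idx = rng.choice(len(population), size=n_parents, p=probs, replace=True)
            return [population[i] for i in idx]

        # Hypothetical land-suitability chromosomes and their fitness values
        population = ["chrom_A", "chrom_B", "chrom_C", "chrom_D"]
        fitness = [0.9, 0.4, 0.7, 0.1]
        print(roulette_wheel(population, fitness, n_parents=2))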

  14. The Choice Method of Selected Material and Its Influence on the Single Evaporation Flash Method

    International Nuclear Information System (INIS)

    Sunaryo, Geni Rina; Sumijanto; Nurul L, Siti

    2000-01-01

    The final objective of this research is to design a mini-scale desalination installation. The project started in 1997/1998 and has been running for three years: a study assessing various desalination systems was carried out in the first year, and a thermodynamic study in the second year. In this third year, a literature study on material resistance to external pressure has been carried out. The pressure in the single-evaporation flash method depends mainly on the temperature applied in the system. In this paper, the configuration stages and the method of selecting materials for the main evaporator vessel, tubes, tube plates, water boxes, pipework, and valves for multistage flash distillation are described. The selection of materials for MSF is based on economic considerations: low cost, high resistance, and ease of maintenance.

  15. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    Science.gov (United States)

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of features in order to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between the context information to take word co-occurrences and phrase chunks around the features into account. Then we introduce the similarity of context information into the importance measure of the features to substitute for document and term frequency. Hence we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can effectively select distinctive features for protein interaction article classification.

  16. Using an Integrated Group Decision Method Based on SVM, TFN-RS-AHP, and TOPSIS-CD for Cloud Service Supplier Selection

    Directory of Open Access Journals (Sweden)

    Lian-hui Li

    2017-01-01

    To solve the cloud service supplier selection problem arising with the emergence of cloud computing, an integrated group decision method is proposed. The cloud service supplier selection index framework is built from the two perspectives of technology and technology management. A support vector machine (SVM)-based classification model is applied for preliminary screening to reduce the number of candidate suppliers. A triangular fuzzy number-rough sets-analytic hierarchy process (TFN-RS-AHP) method is designed to calculate each supplier's index values from the experts' wisdom and experience. The index weights are determined by criteria importance through intercriteria correlation (CRITIC). The suppliers are evaluated by an improved TOPSIS that replaces the Euclidean distance with the connection distance (TOPSIS-CD). A case from an electric power enterprise is given to illustrate the correctness and feasibility of the proposed method.
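
    The final ranking step follows the usual TOPSIS recipe: normalize, weight, measure each alternative's distance to the ideal and anti-ideal solutions, and rank by relative closeness. The sketch below uses the standard Euclidean distance rather than the paper's connection distance, with made-up supplier data:

        import numpy as np

        def topsis(X, w):
            """Standard TOPSIS ranking (benefit criteria, Euclidean distance)."""
            X = np.asarray(X, dtype=float)
            V = X / np.linalg.norm(X, axis=0) * w      # normalized, weighted matrix
            ideal, anti = V.max(axis=0), V.min(axis=0)
            d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
            d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
            return d_neg / (d_pos + d_neg)             # relative closeness in [0, 1]

        # Hypothetical suppliers scored on (reliability, scalability, support)
        suppliers = ["S1", "S2", "S3"]
        X = [[7, 9, 6],
             [8, 7, 8],
             [6, 8, 9]]
        w = np.array([0.5, 0.3, 0.2])                  # e.g., CRITIC-derived weights
        closeness = topsis(X, w)
        print(sorted(zip(suppliers, closeness.round(3)), key=lambda t: -t[1]))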

  17. Selective saturation method for EPR dosimetry with tooth enamel

    International Nuclear Information System (INIS)

    Ignatiev, E.A.; Romanyukha, A.A.; Koshta, A.A.; Wieser, A.

    1996-01-01

    The method of selective saturation is based on the difference in the microwave (mw) power dependence of the background and radiation-induced EPR components of the tooth enamel spectrum. Subtracting the EPR spectrum recorded at low mw power from that recorded at higher mw power provides a considerable reduction of the background component in the spectrum. The resolution of the EPR spectrum could be improved 10-fold; however, the signal-to-noise ratio was simultaneously found to be halved. A detailed comparative study of reference samples with known absorbed doses was performed to demonstrate the advantage of the method. The application of the selective saturation method to EPR dosimetry with tooth enamel reduced the lower detection limit of EPR dosimetry to about 100 mGy. (author)

  18. A new DEA-GAHP method for supplier selection problem

    Directory of Open Access Journals (Sweden)

    Behrooz Ahadian

    2012-10-01

    Supplier selection is one of the most important decisions made in supply chain management, and the supplier evaluation problem has been at the center of supply chain researchers' attention in recent years. Managers regard some of the existing methods as inappropriate because they are simple weighted-scoring methods that generally rest on the subjective opinions and judgments of the decision-making units involved in the supplier evaluation process, yielding imprecise and even unreliable results. This paper proposes a methodology that integrates data envelopment analysis (DEA) and the group analytical hierarchy process (GAHP) for evaluating and selecting the most efficient supplier. The methodology consists of 6 steps, which are introduced one by one; finally, the applicability of the proposed method is demonstrated by assessing 12 suppliers in a numerical example.

  19. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods for the skin of a human foot and face. The full source code of the developed application is provided as an attachment. (Figure: the main window of the program during dynamic analysis of the foot thermal image.) © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    Science.gov (United States)

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them presents distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography-based optimization is proposed to select a good subset of informative genes relevant to classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of gene data. Second, to make biogeography-based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance exploration and exploitation. The resulting discrete biogeography-based optimization, which we call DBBO, integrates the discrete migration and mutation models. Finally, the DBBO method is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer benchmark datasets. Compared with a genetic algorithm, particle swarm optimization, a differential evolution algorithm and hybrid biogeography-based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature in terms of the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Infrared face recognition based on LBP histogram and KW feature selection

    Science.gov (United States)

    Xie, Zhihua

    2014-07-01

    The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract robust local features from infrared face images, LBP is chosen to obtain the composition of micro-patterns in sub-blocks. Based on statistical test theory, the Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns that are suitable for infrared face recognition. The experimental results show that combining LBP with KW feature selection improves the performance of infrared face recognition; the proposed method outperforms traditional methods based on the LBP histogram, the discrete cosine transform (DCT) or principal component analysis (PCA).
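
    Kruskal-Wallis selection here means testing, for each LBP histogram bin, whether its values differ significantly across subjects and keeping the bins with the smallest p-values. A minimal sketch with scipy on synthetic histogram features (real use would compute the histograms from sub-blocks of infrared images):

        import numpy as np
        from scipy.stats import kruskal

        def kw_select(features, labels, n_keep=10):
            """Keep the feature dimensions with the smallest K-W p-values."""
            pvals = []
            for j in range(features.shape[1]):
                groups = [features[labels == c, j] for c in np.unique(labels)]
                pvals.append(kruskal(*groups).pvalue)
            return np.argsort(pvals)[:n_keep]       # most discriminative bins

        # Synthetic LBP-histogram features: 3 subjects, 59 uniform-pattern bins
        rng = np.random.default_rng(0)
        labels = np.repeat([0, 1, 2], 20)
        X = rng.random((60, 59))
        X[:, 5] += labels * 0.5                     # bin 5 separates the subjects
        print(kw_select(X, labels)[:3])             # bin 5 ranks near the top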

  2. Index Fund Selections with GAs and Classifications Based on Turnover

    Science.gov (United States)

    Orito, Yukiko; Motoyama, Takaaki; Yamazaki, Genji

    It is well known that index fund selection is important for hedging investment risk in a stock market. The 'selection' means that, for 'stock index futures', n companies are selected from all those in the market. For index fund selection, Orito et al. (6) proposed a method consisting of the following two steps: Step 1 selects N companies in the market with a heuristic rule based on the coefficient of determination between the return rate of each company in the market and the increasing rate of the stock price index. Step 2 constructs a group of n companies by applying genetic algorithms to the set of N companies. We note that the rule of Step 1 is not unique. The accuracy of the results using their method depends on the length of the time (price) data in the experiments. The main purpose of this paper is to introduce a more effective rule for Step 1. The rule is based on turnover. The method consisting of Step 1 based on turnover and Step 2 is examined with numerical experiments on the 1st Section of the Tokyo Stock Exchange. The results show that with our method it is possible to construct a more effective index fund than in the results of Orito et al. (6). The accuracy of the results using our method depends little on the length of the time (turnover) data. The method works especially well when the increasing rate of the stock price index over a period can be viewed as linear time series data.

  3. Laccase-catalyzed oxidation and intramolecular cyclization of dopamine: A new method for selective determination of dopamine with laccase/carbon nanotube-based electrochemical biosensors

    International Nuclear Information System (INIS)

    Xiang, Ling; Lin, Yuqing; Yu, Ping; Su, Lei; Mao, Lanqun

    2007-01-01

    This study demonstrates a new electrochemical method for the selective determination of dopamine (DA) in the coexistence of ascorbic acid (AA) and 3,4-dihydroxyphenylacetic acid (DOPAC) with laccase/multi-walled carbon nanotube (MWNT)-based biosensors prepared by cross-linking laccase into an MWNT layer confined onto glassy carbon electrodes. The method described here is essentially based on the chemical reaction properties of DA, including oxidation, intramolecular cyclization and disproportionation reactions that finally give 5,6-dihydroxyindoline quinone, and on the use of the two-electron, two-proton reduction of the formed 5,6-dihydroxyindoline quinone to constitute a method for the selective determination of DA at a negative potential that is totally separated from those of the redox processes of AA and DOPAC. Instead of the ECE reactions of DA with the first oxidation of DA being driven electrochemically, laccase is used here as the biocatalyst to drive the first oxidation of DA into its quinone form and thus initialize the sequential reactions of DA finally into 5,6-dihydroxyindoline quinone. In addition, laccase also catalyzes the oxidation of AA and DOPAC into electroinactive species with the concomitant reduction of O2. As a consequence, a combined exploitation of the chemical properties inherent in DA, the multifunctional catalytic properties of laccase, and the excellent electrochemical properties of carbon nanotubes substantially enables the prepared laccase/MWNT-based biosensors to be well competent for the selective determination of DA in the coexistence of physiological levels of AA and DOPAC. This demonstration offers a new method for the selective determination of DA, which could potentially be employed for the determination of DA in biological systems

  4. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader; Pinnau, Ingo; Swaidan, Raja

    2015-01-01

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene- based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  5. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader

    2015-12-30

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene-based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  6. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G.; Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-01-01

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
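
    The core regression step can be sketched as follows; the feature layout, beam scores, and the greedy minimum-angular-separation rule are illustrative stand-ins for the paper's learned interbeam dependency model, not its actual implementation.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Hypothetical training data: one row of anatomical features per
        # candidate beam angle, scored against clinically approved plans.
        X_train = np.random.rand(500, 12)   # e.g., PTV/OAR geometry features
        y_train = np.random.rand(500)       # beam scores from approved plans

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)

        # Score every candidate gantry angle for a new patient, then greedily
        # pick the k highest-scoring angles that stay well separated.
        candidate_angles = np.arange(0, 360, 5)
        X_new = np.random.rand(len(candidate_angles), 12)
        scores = model.predict(X_new)
        k, selected = 5, []
        for idx in np.argsort(scores)[::-1]:
            ang = candidate_angles[idx]
            if all(min(abs(ang - a), 360 - abs(ang - a)) >= 20 for a in selected):
                selected.append(ang)
            if len(selected) == k:
                break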

  7. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on the model with optimal hyperparameters. The computational cost of SVM model selection makes its application to face recognition difficult. In order to overcome this shortcoming, we utilize the advantages of uniform design (space-filling designs and uniform scattering theory) to seek optimal SVM hyperparameters. We then propose a face recognition scheme based on SVM with the optimal model, which is obtained by replacing the grid and gradient-based method with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
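
    A minimal sketch of the idea follows, substituting a simple good-lattice-point construction for a published uniform design table and scikit-learn's digits data for a face database; the run count, search ranges, and lattice multiplier are assumptions.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Good-lattice-point set: 17 runs scattered uniformly over the
        # 2-D (log C, log gamma) search square.
        n_runs, h = 17, 5                     # h coprime with n_runs
        u = np.array([[i % n_runs, (i * h) % n_runs] for i in range(1, n_runs + 1)])
        u = (u + 0.5) / n_runs                # points in (0, 1)^2

        log_C = -2 + 6 * u[:, 0]              # C in [1e-2, 1e4]
        log_g = -5 + 4 * u[:, 1]              # gamma in [1e-5, 1e-1]

        X, y = load_digits(return_X_y=True)
        best = max(
            ((10 ** c, 10 ** g) for c, g in zip(log_C, log_g)),
            key=lambda p: cross_val_score(SVC(C=p[0], gamma=p[1]), X, y, cv=3).mean(),
        )
        print("selected (C, gamma):", best)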

  8. Pyrochemical and Dry Processing Methods Program. A selected bibliography

    Energy Technology Data Exchange (ETDEWEB)

    McDuffie, H.F.; Smith, D.H.; Owen, P.T.

    1979-03-01

    This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles.

  9. Pyrochemical and Dry Processing Methods Program. A selected bibliography

    International Nuclear Information System (INIS)

    McDuffie, H.F.; Smith, D.H.; Owen, P.T.

    1979-03-01

    This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles

  10. A Fourier transform method for the selection of a smoothing interval

    International Nuclear Information System (INIS)

    Kekre, H.B.; Madan, V.K.; Bairi, B.R.

    1989-01-01

    A novel method is proposed for the selection of a smoothing interval for the widely used Savitzky-Golay smoothing filter. Complementary bandwidths for the nuclear spectral data and the smoothing filter are defined. The criterion for the selection of the smoothing interval is based on matching the bandwidth of the spectral data to that of the filter. Using the above method, five real observed spectral peaks of different full width at half maximum, viz. 23.5, 19.5, 17, 8.5 and 6.5 channels, were smoothed and the results are presented. (orig.)
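
    The bandwidth-matching idea can be sketched as below; the energy-fraction criterion and the rough "passband of a length-w window is about 1/w cycles per channel" rule are assumptions standing in for the paper's exact complementary-bandwidth definitions.

        import numpy as np
        from scipy.signal import savgol_filter

        def choose_window(spectrum, energy_fraction=0.99, polyorder=2):
            # Estimate the data bandwidth from the cumulative power spectrum,
            # then pick the shortest odd Savitzky-Golay window whose passband
            # still covers that bandwidth.
            power = np.abs(np.fft.rfft(spectrum - spectrum.mean())) ** 2
            cum = np.cumsum(power) / power.sum()
            f_data = np.searchsorted(cum, energy_fraction) / len(spectrum)
            w = int(round(1.0 / max(f_data, 1.0 / len(spectrum))))
            w = max(polyorder + 3, min(w, len(spectrum) // 2))
            return w if w % 2 == 1 else w + 1

        counts = np.random.poisson(50.0, 512).astype(float)  # toy spectrum
        w = choose_window(counts)
        smoothed = savgol_filter(counts, window_length=w, polyorder=2)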

  11. AMES: Towards an Agile Method for ERP Selection

    OpenAIRE

    Juell-Skielse, Gustaf; Nilsson, Anders G.; Nordqvist, Andreas; Westergren, Mattias

    2012-01-01

    Conventional on-premise installations of ERP are now rapidly being replaced by ERP as service. Although ERP becomes more accessible and no longer requires local infrastructure, current selection methods do not take full advantage of the provided agility. In this paper we present AMES (Agile Method for ERP Selection), a novel method for ERP selection which better utilizes the strengths of service oriented ERP. AMES is designed to shorten lead time for selection, support identification of essen...

  12. Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree

    Science.gov (United States)

    Kim, Jong Kyu; Kim, Nam Soo

    In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. In order to reduce computation while maintaining good performance, a decision tree classifier is adopted, with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not increase the memory requirement significantly. Through an evaluation test on a database covering both speech and music materials, the proposed method is found to achieve much better mode selection accuracy than the open-loop mode selection module in AMR-WB+.
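
    The train-then-prune step can be sketched as follows; the frame features, the two mode labels, and the pruning strength are hypothetical, and scikit-learn's cost-complexity pruning is used as a generic stand-in for the pruning scheme in the letter.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical per-frame features (e.g., short-term energy, spectral
        # flux, ...) labelled with closed-loop mode decisions as targets.
        X = np.random.rand(2000, 8)
        y = np.random.randint(0, 2, 2000)   # 0/1 = two coding modes (assumed)

        # Cost-complexity pruning keeps the tree (and memory footprint) small.
        tree = DecisionTreeClassifier(ccp_alpha=1e-3, random_state=0).fit(X, y)
        print("leaves after pruning:", tree.get_n_leaves())
        mode = tree.predict(X[:1])          # fast open-loop-style decision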

  13. Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.

    Science.gov (United States)

    Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang

    2018-02-24

    This paper proposes a nonlinear correlation-based wavelet scale selection technology to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during the force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on the Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that the scale combination VI is suitable for estimating force from the extensors and the combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experiment results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force estimates derived by the proposed method are better than those obtained by the former methods.

  14. Analysis of selected structures for model-based measuring methods using fuzzy logic

    Energy Technology Data Exchange (ETDEWEB)

    Hampel, R.; Kaestner, W.; Fenske, A.; Vandreier, B.; Schefter, S. [Hochschule fuer Technik, Wirtschaft und Sozialwesen Zittau/Goerlitz (FH), Zittau (DE). Inst. fuer Prozesstechnik, Prozessautomatisierung und Messtechnik e.V. (IPM)

    2000-07-01

    Monitoring and diagnosis of safety-related technical processes in nuclear engineering can be improved with the help of intelligent methods of signal processing such as analytical redundancies. This chapter gives an overview of combined methods in the form of hybrid models using model-based measuring methods (observers) and knowledge-based methods (fuzzy logic). Three variants of hybrid observers (fuzzy-supported observer, hybrid observer with variable gain and hybrid non-linear operating point observer) are explained. As a result of the combination of analytical and fuzzy-based algorithms, a new quality of monitoring and diagnosis is achieved. The results will be demonstrated in summary for the example of water level estimation within pressure vessels (pressurizer, steam generator, and Boiling Water Reactor) with water-steam mixture during the accidental depressurization. (orig.)

  15. Analysis of selected structures for model-based measuring methods using fuzzy logic

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Fenske, A.; Vandreier, B.; Schefter, S.

    2000-01-01

    Monitoring and diagnosis of safety-related technical processes in nuclear engineering can be improved with the help of intelligent methods of signal processing such as analytical redundancies. This chapter gives an overview of combined methods in the form of hybrid models using model-based measuring methods (observers) and knowledge-based methods (fuzzy logic). Three variants of hybrid observers (fuzzy-supported observer, hybrid observer with variable gain and hybrid non-linear operating point observer) are explained. As a result of the combination of analytical and fuzzy-based algorithms, a new quality of monitoring and diagnosis is achieved. The results will be demonstrated in summary for the example of water level estimation within pressure vessels (pressurizer, steam generator, and Boiling Water Reactor) with water-steam mixture during the accidental depressurization. (orig.)

  16. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
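
    As a loose illustration of the progressive-sampling idea, the sketch below evaluates candidate configurations on progressively larger subsamples and discards the weaker half at each round; this successive-halving-style loop and the candidate list are stand-ins for the authors' Bayesian-optimization-driven search, not their algorithm.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Candidate (algorithm, hyper-parameter) configurations -- illustrative.
        configs = [("rf", RandomForestClassifier(n_estimators=n)) for n in (50, 200)]
        configs += [("svc", SVC(C=c)) for c in (0.1, 1.0, 10.0)]
        configs += [("logreg", LogisticRegression(max_iter=5000))]

        X, y = load_breast_cancer(return_X_y=True)
        rng = np.random.default_rng(0)
        survivors, n = list(configs), 64
        while len(survivors) > 1 and n < len(y):
            idx = rng.choice(len(y), size=n, replace=False)  # growing sample
            scored = [(cross_val_score(m, X[idx], y[idx], cv=3).mean(), name, m)
                      for name, m in survivors]
            scored.sort(reverse=True, key=lambda t: t[0])
            survivors = [(name, m) for _, name, m in scored[: max(1, len(scored) // 2)]]
            n *= 2
        print("selected:", survivors[0][0])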

  17. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Method of App Selection for Healthcare Providers Based on Consumer Needs.

    Science.gov (United States)

    Lee, Jisan; Kim, Jeongeun

    2018-01-01

    Mobile device applications can be used to manage health. However, healthcare providers hesitate to use them because selection methods that consider the needs of health consumers and identify the most appropriate application are rare. This study aimed to create an effective method of identifying applications that address user needs. Women experiencing dysmenorrhea and premenstrual syndrome were the targeted users. First, we searched for related applications from two major sources of mobile applications. Brainstorming, mind mapping, and persona and scenario techniques were used to create a checklist of relevant criteria, which was used to rate the applications. Of the 2784 applications found, 369 were analyzed quantitatively. Of those, five of the top candidates were evaluated by three groups: application experts, clinical experts, and potential users. All three groups ranked one application the highest; however, the remaining rankings differed. The results of this study suggest that the method created is useful because it considers not only the needs of various users but also the knowledge of application and clinical experts. This study proposes a method for finding and using the best among existing applications and highlights the need for nurses who can understand and combine opinions of users and application and clinical experts.

  19. New evaluation methods for conceptual design selection using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai [University of Electronic Science and Technology of China, Chengdu (China); Xue, Lihua [Higher Education Press, Beijing (China)

    2013-03-15

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  20. New evaluation methods for conceptual design selection using computational intelligence techniques

    International Nuclear Information System (INIS)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai; Xue, Lihua

    2013-01-01

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  1. A fractured rock geophysical toolbox method selection tool

    Science.gov (United States)

    Day-Lewis, F. D.; Johnson, C.D.; Slater, L.D.; Robinson, J.L.; Williams, J.H.; Boyden, C.L.; Werkema, D.D.; Lane, J.W.

    2016-01-01

    Geophysical technologies have the potential to improve site characterization and monitoring in fractured rock, but the appropriate and effective application of geophysics at a particular site strongly depends on project goals (e.g., identifying discrete fractures) and site characteristics (e.g., lithology). No method works at every site or for every goal. New approaches are needed to identify a set of geophysical methods appropriate to specific project goals and site conditions while considering budget constraints. To this end, we present the Excel-based Fractured-Rock Geophysical Toolbox Method Selection Tool (FRGT-MST). We envision the FRGT-MST (1) equipping remediation professionals with a tool to understand what is likely to be realistic and cost-effective when contracting geophysical services, and (2) reducing applications of geophysics with unrealistic objectives or where methods are likely to fail.

  2. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  3. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
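
    The two-stage control flow described above can be sketched abstractly as follows; the registration and relevance-scoring callables are placeholders for a site's own tools, and the exact metrics are those the paper leaves to the application.

        def two_stage_select(target, atlases, m_augmented, n_fusion,
                             cheap_register, cheap_score,
                             full_register, full_score):
            # Stage 1: rank all atlases with a low-cost registration and a
            # preliminary relevance metric; keep an augmented subset.
            prelim = sorted(
                atlases,
                key=lambda a: cheap_score(cheap_register(a, target), target),
                reverse=True)[:m_augmented]
            # Stage 2: fully register only the augmented subset and keep the
            # n_fusion most relevant atlases for label fusion.
            refined = sorted(
                prelim,
                key=lambda a: full_score(full_register(a, target), target),
                reverse=True)
            return refined[:n_fusion]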

  4. GAIN RATIO BASED FEATURE SELECTION METHOD FOR PRIVACY PRESERVATION

    Directory of Open Access Journals (Sweden)

    R. Praveena Priyadarsini

    2011-04-01

    Full Text Available Privacy preservation is a step in data mining that tries to safeguard sensitive information from unsanctioned disclosure, hence protecting individual data records and their privacy. There are various privacy preservation techniques such as k-anonymity, l-diversity, t-closeness and data perturbation. In this paper, the k-anonymity privacy protection technique is applied to high-dimensional datasets such as the adult and census datasets. Since both datasets are high dimensional, a feature subset selection method based on Gain Ratio is applied: the attributes of the datasets are ranked and low-ranking attributes are filtered out to form new reduced data subsets. The k-anonymization privacy preservation technique is then applied to the reduced datasets. The privacy-preserved reduced datasets and the original datasets are compared for their accuracy on two data mining tasks, namely classification and clustering, using the naïve Bayesian and k-means algorithms, respectively. Experimental results show that classification and clustering accuracy are comparatively the same for the reduced k-anonymized datasets and the original datasets.
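
    The gain-ratio ranking step can be sketched as below; the dataset and the number of retained attributes are hypothetical, and the attributes are assumed to be discretised.

        import numpy as np

        def entropy(labels):
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        def gain_ratio(feature, labels):
            # Information gain of a discrete feature normalised by its split
            # information, as used to rank attributes before filtering.
            values, counts = np.unique(feature, return_counts=True)
            weights = counts / counts.sum()
            cond = sum(w * entropy(labels[feature == v])
                       for v, w in zip(values, weights))
            split_info = -(weights * np.log2(weights)).sum()
            return (entropy(labels) - cond) / split_info if split_info > 0 else 0.0

        # Rank attributes and keep the top-k before k-anonymisation.
        X = np.random.randint(0, 4, size=(200, 10))   # toy discretised data
        y = np.random.randint(0, 2, size=200)
        ranking = sorted(range(X.shape[1]),
                         key=lambda j: gain_ratio(X[:, j], y), reverse=True)
        top_k = ranking[:5]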

  5. Construction Tender Subcontract Selection using Case-based Reasoning

    Directory of Open Access Journals (Sweden)

    Due Luu

    2012-11-01

    Full Text Available Obtaining competitive quotations from suitably qualified subcontractors at tender time can significantly increase the chance of winning a construction project. Amidst a growing trend towards subcontracting in Australia, selecting appropriate subcontractors for a construction project can be a daunting task requiring the analysis of complex and dynamic criteria such as past performance, suitable experience, track record of competitive pricing, financial stability and so on. Subcontractor selection is plagued with uncertainty and vagueness, and these conditions are difficult to represent in generalised sets of rules. Decisions pertaining to the selection of subcontractors at tender time are usually based on the intuition and past experience of construction estimators. Case-based reasoning (CBR) may be an appropriate method of addressing the challenges of selecting subcontractors because CBR is able to harness the experiential knowledge of practitioners. This paper reviews the practicality and suitability of a CBR approach for subcontractor tender selection through the development of a prototype CBR procurement advisory system. In this system, subcontractor selection cases are represented by a set of attributes elicited from experienced construction estimators. The results indicate that CBR can enhance the appropriateness of the selection of subcontractors for construction projects.
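
    A minimal CBR retrieval step might look like the following; the attribute names, weights, and case values are illustrative, not those of the prototype system.

        import numpy as np

        # Past subcontractor cases as attribute vectors elicited from
        # estimators, with scores normalised to [0, 1].
        attributes = ["past_performance", "experience",
                      "price_competitiveness", "financial_stability"]
        weights = np.array([0.3, 0.25, 0.25, 0.2])

        case_base = {
            "SubA": np.array([0.9, 0.8, 0.6, 0.7]),
            "SubB": np.array([0.6, 0.9, 0.9, 0.5]),
        }
        new_case = np.array([0.8, 0.8, 0.7, 0.7])  # profile sought for this tender

        def similarity(a, b):
            # Weighted nearest-neighbour similarity in [0, 1].
            return 1.0 - np.sum(weights * np.abs(a - b))

        best = max(case_base, key=lambda k: similarity(case_base[k], new_case))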

  6. Supplier selection based on improved MOGA and its application in nuclear power equipment procurement

    International Nuclear Information System (INIS)

    Yan Zhaojun; Wang Dezhong; Zhou Lei

    2007-01-01

    Considering the fact that there are few objective and available methods supporting the supplier selection in nuclear power equipment purchasing process, a supplier selection method based on improved multi-objective genetic algorithm (MOGA) is proposed. The simulation results demonstrate the effectiveness and efficiency of this method for the supplier selection in nuclear power equipment procurement process. (authors)

  7. A BAND SELECTION METHOD FOR SUB-PIXEL TARGET DETECTION IN HYPERSPECTRAL IMAGES BASED ON LABORATORY AND FIELD REFLECTANCE SPECTRAL COMPARISON

    Directory of Open Access Journals (Sweden)

    S. Sharifi hashjin

    2016-06-01

    Full Text Available In recent years, the development of target detection algorithms for hyperspectral images has received growing interest. In comparison to the classification field, few studies have been done on dimension reduction or band selection for target detection in hyperspectral images. This study presents a simple method to remove bad bands from the images in a supervised manner for sub-pixel target detection. The proposed method is based on comparing field and laboratory spectra of the target of interest to detect bad bands. For evaluation, the target detection blind test dataset is used in this study. Experimental results show that the proposed method can improve the efficiency of two well-known target detection methods, ACE and CEM.
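
    One plausible reading of the comparison rule is sketched below: flag a band as usable when the local correlation between laboratory and field reflectance spectra of the target exceeds a threshold. The window size and threshold are assumptions; the paper's exact rule may differ.

        import numpy as np

        def select_bands(lab_spectrum, field_spectrum, window=9, threshold=0.8):
            # Keep bands where lab and field spectra agree locally; bands
            # distorted by the atmosphere or sensor fall out.
            n = len(lab_spectrum)
            keep = np.zeros(n, dtype=bool)
            half = window // 2
            for b in range(n):
                lo, hi = max(0, b - half), min(n, b + half + 1)
                r = np.corrcoef(lab_spectrum[lo:hi], field_spectrum[lo:hi])[0, 1]
                keep[b] = r >= threshold
            return np.flatnonzero(keep)

        good_bands = select_bands(np.random.rand(200), np.random.rand(200))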

  8. Inventory of LCIA selection methods for assessing toxic releases. Methods and typology report part B

    DEFF Research Database (Denmark)

    Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky

    This report describes an inventory of Life Cycle Impact Assessment (LCIA) selection methods for assessing toxic releases. It consists of an inventory of current selection methods and other Chemical Ranking and Scoring (CRS) methods assessed to be relevant for the development of (a) new selection method(s) in Work package 8 (WP8) of the OMNIITOX project. The selection methods and the other CRS methods are described in detail, a set of evaluation criteria is developed and the methods are evaluated against these criteria. This report (Deliverable 11B (D11B)) gives the results from tasks 7.1d, 7.1e and 7.1f of WP 7 for selection methods. The other part of D11 (D11A) is reported in another report and deals with characterisation methods. A selection method is a method for prioritising chemical emissions to be included in an LCIA characterisation of toxic releases, i.e. calculating indicator scores.

  9. Ranking and selection of commercial off-the-shelf using fuzzy distance based approach

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2015-06-01

    Full Text Available There is tremendous growth in the use of the component-based software engineering (CBSE) approach for the development of software systems. The selection of the best-suited COTS components fulfilling the necessary requirements for the development of software has become a major challenge for software developers. The complexity of the optimal selection problem increases with an increase in the number of alternative potential COTS components and the corresponding selection criteria. In this research paper, the problem of ranking and selection of Data Base Management System (DBMS) components is modeled as a multi-criteria decision-making problem. A Fuzzy Distance Based Approach (FDBA) method is proposed for the optimal ranking and selection of DBMS COTS components of an e-payment system based on 14 selection criteria grouped under three major categories, i.e. 'Vendor Capabilities', 'Business Issues' and 'Cost'. The results of this method are compared with those of the Analytic Hierarchy Process (AHP), a typical multi-criteria decision-making approach. The proposed methodology is explained with an illustrative example.

  10. Fuzzy decision-making: a new method in model selection via various validity criteria

    International Nuclear Information System (INIS)

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered the first step in scientific investigations. Several alternative models may be candidates for expressing a phenomenon, and scientists use various criteria to select one model from among the competing models. Based on the solution of a fuzzy decision-making problem, this paper proposes a new method for model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper possibility distribution function for each criterion. Finally, minimization of a utility function composed of the possibility distribution functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran.

  11. Inspection methods and their selection

    International Nuclear Information System (INIS)

    Maier, H.J.

    1980-01-01

    First, the nondestructive testing methods used in quality assurance are treated, e.g. ultrasonics, radiography, magnetic particle testing, dye penetrant testing and eddy currents, and their capabilities and limitations are shown. Second, the selection of optimal testing methods is shown from the viewpoint of defect recognition in different materials and components. (orig./RW)

  12. Clustering based gene expression feature selection method: A computational approach to enrich the classifier efficiency of differentially expressed genes

    KAUST Repository

    Abusamra, Heba

    2016-07-20

    The inherent high-dimension, low-sample-size nature of gene expression data makes the classification task more challenging. Therefore, feature (gene) selection becomes an apparent need. Selecting meaningful and relevant genes for the classifier not only decreases the computational time and cost, but also improves classification performance. Among the different approaches to feature selection, however, most suffer from several problems such as lack of robustness, validation issues, etc. Here, we present a new feature selection technique that takes advantage of clustering both samples and genes. Materials and methods: We used a leukemia gene expression dataset [1]. The effectiveness of the selected features was evaluated by four different classification methods: support vector machines, k-nearest neighbor, random forest, and linear discriminant analysis. The method evaluates the importance and relevance of each gene cluster by summing the expression levels of the genes belonging to that cluster. A gene cluster is considered important if it satisfies conditions depending on thresholds and percentages; otherwise it is eliminated. Results: Initial analysis identified 7120 differentially expressed genes of leukemia (Fig. 15a); after applying our feature selection methodology we ended up with 1117 specific genes discriminating the two classes of leukemia (Fig. 15b). Further applying the same method with a more stringent, higher positive and lower negative threshold condition, the number was reduced to 58 genes, which were tested to evaluate the effectiveness of the method (Fig. 15c). The results of the four classification methods are summarized in Table 11. Conclusions: The feature selection method gave good results with minimum classification error. Our heat-map result shows a distinct pattern of refined genes discriminating between the two classes of leukemia.
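
    A rough sketch of the cluster-then-filter idea follows: cluster genes on their expression profiles, score each cluster by its summed class-wise expression contrast, and keep genes from clusters that pass a threshold. The cluster count, class layout, and threshold are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        expr = np.random.randn(500, 72)          # genes x samples (toy data)
        labels = np.array([0] * 40 + [1] * 32)   # two classes (assumed layout)

        gene_clusters = KMeans(n_clusters=40, n_init=10,
                               random_state=0).fit_predict(expr)

        selected = []
        for c in np.unique(gene_clusters):
            members = np.flatnonzero(gene_clusters == c)
            # Per-gene summed-expression contrast between the two classes.
            diff = (expr[members][:, labels == 0].sum()
                    - expr[members][:, labels == 1].sum())
            if abs(diff) / len(members) > 2.0:   # threshold condition (assumed)
                selected.extend(members)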

  13. Selectively Encrypted Pull-Up Based Watermarking of Biometric data

    Science.gov (United States)

    Shinde, S. A.; Patel, Kushal S.

    2012-10-01

    Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. thumb impressions) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking the data are embedded in an image container and are only retrieved if the secret key is available. This container image is encrypted to provide more security against attack. As wireless devices are equipped with batteries as their power supply, they have limited computational capabilities; therefore, to reduce energy consumption we use the method of selective encryption of the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. By using a selective encryption mechanism we expect more time efficiency during both encryption and decryption. A significant reduction in error rate is expected to be achieved by the bit pull-up method.

  14. A Simple and Sensitive Plant-Based Western Corn Rootworm Bioassay Method for Resistance Determination and Event Selection.

    Science.gov (United States)

    Wen, Zhimou; Chen, Jeng Shong

    2018-05-26

    We report here a simple and sensitive plant-based western corn rootworm, Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae), bioassay method that allows for examination of multiple parameters for both plants and insects in a single experimental setup within a short duration. For plants, injury to roots can be visually examined, fresh root weight can be measured, and expression of trait protein in plant roots can be analyzed. For insects, in addition to survival, larval growth and development can be evaluated in several aspects including body weight gain, body length, and head capsule width. We demonstrated using the method that eCry3.1Ab-expressing 5307 corn was very effective against western corn rootworm by eliciting high mortality and significantly inhibiting larval growth and development. We also validated that the method allowed determination of resistance in an eCry3.1Ab-resistant western corn rootworm strain. While data presented in this paper demonstrate the usefulness of the method for selection of events of protein traits and for determination of resistance in laboratory populations, we envision that the method can be applied in much broader applications.

  15. Evidence-based case selection: An innovative knowledge management method to cluster public technical and vocational education and training colleges in South Africa

    Directory of Open Access Journals (Sweden)

    Margaretha M. Visser

    2017-03-01

    Full Text Available Background: Case studies are core constructs used in information management research. A persistent challenge for business, information management and social science researchers is how to select a representative sample of cases among a population with diverse characteristics when convenient or purposive sampling is not considered rigorous enough. The context of the study is post-school education, and it involves an investigation of quantitative methods of clustering the population of public technical and vocational education and training (TVET colleges in South Africa into groups with a similar level of maturity in terms of their information systems. Objectives: The aim of the study was to propose an evidence-based quantitative method for the selection of cases for case study research and to demonstrate the use and usefulness thereof by clustering public TVET colleges. Method: The clustering method was based on the use of a representative characteristic of the context, as a proxy. In this context of management information systems (MISs, website maturity was used as a proxy and website maturity model theory was used in the development of an evaluation questionnaire. The questionnaire was used for capturing data on website characteristics, which was used to determine website maturity. The websites of the 50 public TVET colleges were evaluated by nine evaluators. Multiple statistical techniques were applied to establish inter-rater reliability and to produce clusters of colleges. Results: The analyses revealed three clusters of public TVET colleges based on their website maturity levels. The first cluster includes three colleges with no websites or websites at a low maturity level. The second cluster consists of 30 colleges with websites at an average maturity level. The third cluster contains 17 colleges with websites at a high maturity level. Conclusion: The main contribution to the knowledge domain is an innovative quantitative method employing a

  16. Evaluation and selection of decision-making methods to assess landfill mining projects.

    Science.gov (United States)

    Hermann, Robert; Baumgartner, Rupert J; Vorbach, Stefan; Ragossnig, Arne; Pomberger, Roland

    2015-09-01

    For the first time in Austria, fundamental technological and economic studies on recovering secondary raw materials from large landfills have been carried out, based on the 'LAMIS - Landfill Mining Austria' pilot project. A main focus of the research - and the subject of this article - was to develop an assessment or decision-making procedure that allows landfill owners to thoroughly examine the feasibility of a landfill mining project in advance. Currently there are no standard procedures that sufficiently cover all the multiple-criteria requirements. The basic structure of the multiple attribute decision-making process was used to narrow down the selection, conceptual design and assessment of suitable procedures. Along with a breakdown into preliminary and main assessment, the entire foundation required was created, such as definitions of the requirements for an assessment method, selection and accurate description of the various assessment criteria, and classification of the target system for the present 'landfill mining' vs. 'retaining the landfill in after-care' decision-making problem. Based on these studies, cost-utility analysis and the analytic hierarchy process were selected from the range of multiple attribute decision-making procedures and examined in detail. Overall, both methods have their pros and cons with regard to their use for assessing landfill mining projects. Merging these methods or connecting them with single-criterion decision-making methods (like the net present value method) may turn out to be reasonable and constitute an appropriate assessment method. © The Author(s) 2015.

  17. Development of a computer code system for selecting off-site protective action in radiological accidents based on the multiobjective optimization method

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu; Oyama, Kazuo

    1989-09-01

    This report presents a new method to support the selection of off-site protective actions in nuclear reactor accidents, and provides a user's manual for a computer code system, PRASMA, developed using the method. The PRASMA code system gives several candidate sets of protective action zones for evacuation, sheltering and no action based on the multiobjective optimization method, which requires objective functions and decision variables. We have assigned the population risks of fatality and injury and the cost as the objective functions, and the distances from a nuclear power plant characterizing the above three protective action zones as the decision variables. (author)
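
    The candidate-generation step of such a multiobjective formulation can be illustrated as follows: enumerate zone configurations and keep the Pareto-optimal ones with respect to the three objectives. The risk and cost functions below are crude stand-ins, not PRASMA's models.

        import numpy as np

        def objectives(r_evac, r_shelter):
            # Stand-in models: fatality and injury risks fall as the zones
            # widen, while countermeasure cost rises.
            fatality = 1.0 / (1.0 + r_evac)
            injury = 1.0 / (1.0 + r_evac + 0.5 * r_shelter)
            cost = 2.0 * r_evac ** 2 + 0.5 * r_shelter ** 2
            return np.array([fatality, injury, cost])

        # Candidate (evacuation radius, additional sheltering width) pairs, km.
        zones = [(re, rs) for re in range(1, 11) for rs in range(0, 11)]
        objs = np.array([objectives(re, rs) for re, rs in zones])
        # Keep configurations not dominated by any other (all objectives minimised).
        pareto = [i for i, oi in enumerate(objs)
                  if not any(np.all(oj <= oi) and np.any(oj < oi) for oj in objs)]
        candidates = [zones[i] for i in pareto]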

  18. Location of airports - selected quantitative methods

    Directory of Open Access Journals (Sweden)

    Agnieszka Merkisz-Guranowska

    2016-09-01

    Full Text Available Background: The role of air transport in the economic development of a country and its regions cannot be overestimated. The decision concerning an airport's location must be in line with the expectations of all the stakeholders involved. This article deals with the issues related to the choice of sites where airports should be located. Methods: Two main quantitative approaches related to the issue of airport location are presented in this article, i.e. the question of optimizing such a choice and the issue of selecting the location from a predefined set. The former involves mathematical programming and formulating the problem as an optimization task; the latter involves ranking the possible variants. Due to their various methodological backgrounds, the authors present the advantages and disadvantages of both approaches and point to the one which currently has practical application. Results: Based on real-life examples, the authors present a multi-stage procedure which renders it possible to solve the problem of airport location. Conclusions: Based on the overview of the literature of the subject, the authors point to three types of approach to the issue of airport location which could enable further development of currently applied methods.

  19. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    Science.gov (United States)

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties, and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is made definite, and it is the same as the traditional method. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.

  20. Road Network Selection Based on Road Hierarchical Structure Control

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2015-04-01

    Full Text Available A new road network selection method based on hierarchical structure is studied. Firstly, the road network is built as strokes, which are then classified into hierarchical collections according to the criterion of betweenness centrality value (BC value). Secondly, the hierarchical structure of the strokes is enhanced using a structural characteristic identification technique. Thirdly, an importance calculation model is established according to the relationships among the hierarchical structure of the strokes. Finally, the importance values of the strokes are obtained through the model's hierarchical calculation, and the road network is selected accordingly. Tests are done to verify the advantage of this method by comparing it with other common stroke-oriented methods using three kinds of typical road network data. Comparison of the results shows that this method needs little semantic data and can well eliminate the negative influence of edge strokes caused by the BC-value criterion. It is therefore better at maintaining the global hierarchical structure of the road network, and is suitable for the selection of various kinds of road networks at the same time.
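
    The BC-value step can be sketched with networkx as below; the stroke adjacency graph and the keep-the-top-half rule are hypothetical stand-ins for the paper's hierarchical collections.

        import networkx as nx

        # Nodes are strokes; edges connect strokes that intersect (toy graph).
        G = nx.Graph()
        G.add_edges_from([
            ("s1", "s2"), ("s2", "s3"), ("s2", "s4"), ("s4", "s5"), ("s3", "s5"),
        ])

        # Rank strokes by betweenness centrality and keep the upper levels.
        bc = nx.betweenness_centrality(G)
        cutoff = sorted(bc.values(), reverse=True)[len(bc) // 2]
        selected_strokes = [s for s, v in bc.items() if v >= cutoff]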

  1. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas

  2. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm used for model standardization (also called model transfer) in near infrared (NIR) spectroscopy. NIR data of corn for the analysis of protein content are introduced to evaluate the proposed method. Piecewise direct standardization (PDS) is employed in model transfer, and a comparison of SIMPLISMA-PDS-KPLS and KS-PDS-KPLS is given in this research by discussing the prediction accuracy of protein content and the calculation speed of each algorithm. The conclusions include that SIMPLISMA-KPLS can be utilized as an alternative sample selection method for model transfer. Although it has similar accuracy to Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection program. This means that it ensures analyte information is involved in the analysis, and that the spectra (X) of the selected samples are interrelated with the concentration (y). It can also be used for outlier sample elimination simultaneously, by validation of the calibration. According to the running-time statistics, it is clear that the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.

  3. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Eggen, Berry; van den Broek, Egon; van der Veer, Gerrit C.; Kisters, Peter M.F.; Willems, Rob; Vuurpijl, Louis G.

    2004-01-01

    In Content-Based Image Retrieval (CBIR) two query-methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define his query. Hitherto, research on CBIR

  4. The effect of the synthesis method on the parameters of pore structure and selectivity of ferrocyanide sorbents based on natural minerals

    International Nuclear Information System (INIS)

    Voronina, A.V.; Gorbunova, T.V.; Semenishchev, V.S.

    2017-01-01

    Ferrocyanide sorbents were obtained via thin-layer and surface modification of natural clinoptilolite and marl. The effect of the modification method on the surface characteristics of these sorbents and their selectivity for cesium was studied. It was shown that the modification resulted in an increase of the selectivity of the modified ferrocyanide sorbents to cesium as compared with the natural clinoptilolite in the presence of Na⁺, as well as in an increase of cesium distribution coefficients in the presence of K⁺. The nickel-potassium ferrocyanide based on the clinoptilolite showed the highest selectivity for cesium at sodium concentrations of 10⁻⁴–2 mol L⁻¹: the cesium distribution coefficient was lg Kd = 4.5 ± 0.4 L kg⁻¹ and the cesium/sodium separation factor was α(Cs/Na) = 250. In the presence of NH₄⁺, all modified sorbents showed approximately equal selectivity for ¹³⁷Cs. Probable applications of the sorbents were suggested. (author)

  5. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes

    Directory of Open Access Journals (Sweden)

    Lotz Meredith J

    2008-01-01

    Full Text Available Abstract Background Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrixes and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Conclusion Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity

  6. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes.

    Science.gov (United States)

    Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C

    2008-01-10

    Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrixes and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity
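
    An entropy-style complexity measure of the kind described can be sketched via the singular-value spectrum; the normalisation and cutoff below are assumptions, and the paper's exact measure may differ.

        import numpy as np

        def expression_entropy(matrix):
            # Entropy of the normalised squared-singular-value spectrum: near 0
            # when variance sits in one dimension (easy to map to a low-
            # dimensional subspace), near 1 when it is spread out.
            s = np.linalg.svd(matrix, compute_uv=False)
            p = s ** 2 / np.sum(s ** 2)
            p = p[p > 1e-12]
            return float(-(p * np.log(p)).sum() / np.log(min(matrix.shape)))

        rng = np.random.default_rng(0)
        low = expression_entropy(np.outer(rng.normal(size=100),
                                          rng.normal(size=20)))   # ~rank 1
        high = expression_entropy(rng.normal(size=(100, 20)))     # full rank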

  7. Enhanced individual selection for selecting fast growing fish: the "PROSPER" method, with application on brown trout (Salmo trutta fario)

    Directory of Open Access Journals (Sweden)

    Vandeputte Marc

    2004-11-01

    Full Text Available Abstract Growth rate is the main breeding goal of fish breeders, but individual selection has often shown poor responses in fish species. The PROSPER method was developed to overcome possible factors that may contribute to this low success, using (1) a variable base population and a high number of breeders (Ne > 100), (2) selection within groups with low non-genetic effects and (3) repeated growth challenges. Using calculations, we show that individual selection within groups, with appropriate management of maternal effects, can be superior to mass selection as soon as the maternal effect ratio exceeds 0.15, when heritability is 0.25. Practically, brown trout were selected on length at the age of one year with the PROSPER method. The genetic gain was evaluated against an unselected control line. After four generations, the mean response per generation in length at one year was 6.2% of the control mean, while the mean correlated response in weight was 21.5% of the control mean per generation. At the 4th generation, selected fish also appeared to be leaner than control fish when compared at the same size, and the response on weight was maximal (≈130% of the control mean) between 386 and 470 days post-fertilisation. This high response is promising; however, the key points of the method have to be investigated in more detail.

  8. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method.

    Science.gov (United States)

    Chaharsooghi, S K; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in the research literature. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in terms of social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach, by presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution of the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with sensitivity analysis.

  9. Primitive polynomials selection method for pseudo-random number generator

    Science.gov (United States)

    Anikin, I. V.; Alnajjar, Kh

    2018-01-01

    In this paper we suggest a method for selecting primitive polynomials of a special type. Such polynomials can be efficiently used as characteristic polynomials for linear feedback shift registers in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree and applying primitivity tests to get the primitive ones. Finally, two primitive polynomials found by the proposed method were used in the fuzzy-logic-based pseudo-random number generator (FRNG) suggested previously by the authors. The sequences generated by the new version of the FRNG have low correlation magnitude, high linear complexity, lower power consumption, better balance, and better statistical properties.
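
    As background to the abstract, a brute-force primitivity check for small degrees follows directly from the definition: a degree-n characteristic polynomial is primitive exactly when the LFSR it defines cycles through all 2^n - 1 nonzero states. A sketch (ours, not the authors' minimum-cost search) using a Fibonacci LFSR:

    ```python
    def lfsr_period(taps, degree, seed=1):
        """Period of a Fibonacci LFSR whose characteristic polynomial has the
        given tap exponents (e.g. taps=[4, 1] for x^4 + x + 1)."""
        mask = (1 << degree) - 1
        state, period = seed, 0
        while True:
            fb = 0
            for t in taps:
                fb ^= (state >> (t - 1)) & 1     # XOR of the tapped bits
            state = ((state << 1) | fb) & mask
            period += 1
            if state == seed:
                return period

    # A degree-n polynomial is primitive iff the register runs through all
    # 2^n - 1 nonzero states; x^4 + x + 1 is a classic primitive example.
    assert lfsr_period([4, 1], 4) == 2 ** 4 - 1
    ```

    For cryptographic degrees the exhaustive cycle is infeasible; one would instead factor 2^n - 1 and test the order of x modulo the polynomial for each prime factor.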

  10. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data

    KAUST Repository

    Abusamra, Heba

    2013-05-01

    Microarray technology has enriched the study of gene expression in such a way that scientists are now able to measure the expression levels of thousands of genes in a single experiment. Microarray gene expression data gained great importance in recent years due to its role in disease diagnoses and prognoses, which help to choose the appropriate treatment plan for patients. Although this technology has ushered in a new era of molecular classification, interpreting gene expression data remains a difficult problem and an active research area due to its inherent “high dimensional, low sample size” nature. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed in this case to correctly classify different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This thesis presents a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three different classification methods, including support vector machines, k-nearest neighbor and random forest, and eight different feature selection methods, including information gain, twoing rule, sum minority, max minority, gini index, sum of variances, t-statistics, and one-dimension support vector machine. Five-fold cross validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used for this study. Different experiments have been applied to compare the performance of the classification methods with and without performing feature selection. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in
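
    For illustration, the comparison the thesis describes can be reproduced in outline with scikit-learn; here mutual_info_classif stands in for the information-gain filter (the other seven filters are not built into scikit-learn), and keeping the filter inside the pipeline makes the per-fold selection honest:

    ```python
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def compare_classifiers(X, y, k=50):
        """Five-fold CV accuracy per classifier after a univariate filter.

        X: (samples, genes) expression matrix; k must not exceed X.shape[1]."""
        classifiers = {
            "svm": SVC(),
            "knn": KNeighborsClassifier(),
            "random forest": RandomForestClassifier(),
        }
        return {
            name: cross_val_score(
                make_pipeline(SelectKBest(mutual_info_classif, k=k), clf),
                X, y, cv=5,
            ).mean()
            for name, clf in classifiers.items()
        }
    ```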

  11. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Full Text Available Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can do consistent variable selection. In this paper, we provide an explanation of how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the adaptive lasso performs identically to the lasso.
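
    A minimal sketch of the two-stage adaptive lasso (our own rendering, not the paper's algorithm): compute adaptive weights from an initial fit, absorb them into the design by rescaling columns, run the lasso, and map the coefficients back.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV, LinearRegression

    def adaptive_lasso(X, y, gamma=1.0, eps=1e-8):
        """Two-stage adaptive lasso via column rescaling."""
        beta0 = LinearRegression().fit(X, y).coef_   # initial consistent fit
        w = 1.0 / (np.abs(beta0) ** gamma + eps)     # adaptive penalty weights
        Xs = X / w                                   # absorb weights into design
        # Note: re-normalising the columns of Xs here would cancel the weights,
        # which is exactly the pitfall the paper analyses.
        fit = LassoCV(cv=5).fit(Xs, y)
        return fit.coef_ / w                         # back to the original scale
    ```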

  12. FiGS: a filter-based gene selection workbench for microarray data

    Directory of Open Access Journals (Sweden)

    Yun Taegyun

    2010-01-01

    Full Text Available Abstract Background The selection of genes that discriminate disease classes from microarray data is widely used for the identification of diagnostic biomarkers. Although various gene selection methods are currently available and some of them have shown excellent performance, no single method can retain the best performance for all types of microarray datasets. It is desirable to use a comparative approach to find the best gene selection result after rigorous testing of different methodological strategies for a given microarray dataset. Results FiGS is a web-based workbench that automatically compares various gene selection procedures and provides the optimal gene selection result for an input microarray dataset. FiGS builds up diverse gene selection procedures by aligning different feature selection techniques and classifiers. In addition to the highly reputed techniques, FiGS diversifies the gene selection procedures by incorporating gene clustering options in the feature selection step and different data pre-processing options in the classifier training step. All candidate gene selection procedures are evaluated by their .632+ bootstrap errors and listed with their classification accuracies and selected gene sets. FiGS runs on parallelized computing nodes that can handle heavy computations. FiGS is freely accessible at http://gexp.kaist.ac.kr/figs. Conclusion FiGS is a web-based application that automates an extensive search for the optimized gene selection analysis for a microarray dataset in a parallel computing environment. FiGS will provide both an efficient and comprehensive means of acquiring optimal gene sets that discriminate disease states from microarray datasets.

  13. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
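
    The article's exact criterion combines convergence and covariance information; a simplified covariance-only sketch picks, at each step, the measurement whose Kalman update most reduces the trace of the state covariance (the trace criterion and the matrix shapes are our assumptions, not the paper's formulation):

    ```python
    import numpy as np

    def most_informative(P, H_list, R_list):
        """Index of the candidate measurement whose update shrinks trace(P) most.

        P: state covariance (n x n); H_list/R_list: per-measurement Jacobians
        and measurement-noise covariances."""
        n = P.shape[0]
        best, best_gain = None, -np.inf
        for i, (H, R) in enumerate(zip(H_list, R_list)):
            S = H @ P @ H.T + R                    # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            gain = np.trace(P) - np.trace((np.eye(n) - K @ H) @ P)
            if gain > best_gain:
                best, best_gain = i, gain
        return best
    ```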

  14. Method selection for sustainability assessments: The case of recovery of resources from waste water.

    Science.gov (United States)

    Zijp, M C; Waaijers-van der Loop, S L; Heijungs, R; Broeren, M L M; Peeters, R; Van Nieuwenhuijzen, A; Shen, L; Heugens, E H W; Posthuma, L

    2017-07-15

    Sustainability assessments provide scientific support in decision procedures towards sustainable solutions. However, in order to contribute to identifying and choosing sustainable solutions, the sustainability assessment has to fit the decision context. Two complicating factors exist. First, different stakeholders tend to have different views on what a sustainability assessment should encompass. Second, a plethora of sustainability assessment methods exist, due to the multi-dimensional characteristic of the concept. Different methods provide different representations of sustainability. Based on a literature review, we present a protocol to facilitate method selection together with stakeholders. The protocol guides the exploration of i) the decision context, ii) the different views of stakeholders and iii) the selection of pertinent assessment methods. In addition, we present an online tool for method selection. This tool identifies assessment methods that meet the specifications obtained with the protocol, and currently contains characteristics of 30 sustainability assessment methods. The utility of the protocol and the tool are tested in a case study on the recovery of resources from domestic waste water. In several iterations, a combination of methods was selected, followed by execution of the selected sustainability assessment methods. The assessment results can be used in the first phase of the decision procedure that leads to a strategic choice for sustainable resource recovery from waste water in the Netherlands. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Methodical bases of selection and evaluation of the effectiveness of the projects of the urban territory renovation

    Science.gov (United States)

    Sizova, Evgeniya; Zhutaeva, Evgeniya; Chugunov, Andrei

    2018-03-01

    The article highlights features of urban territory renovation processes from the perspective of a commercial entity participating in the implementation of a project. The requirements that high-rise construction projects impose on the entities that carry them out are considered. The advantages of large enterprises as participants in renovation projects, which contribute to the most efficient implementation, are systematized. The factors which influence the success of renovation projects are presented. A method for selecting projects for implementation, based on criteria grouped by qualitative characteristics and contributing to the most complete and comprehensive evaluation of a project, is suggested. Patterns for prioritizing and harmonizing renovation projects within the multi-project activity of an enterprise are considered.

  16. New decision criteria for selecting delta check methods based on the ratio of the delta difference to the width of the reference range can be generally applicable for each clinical chemistry test item.

    Science.gov (United States)

    Park, Sang Hyuk; Kim, So-Young; Lee, Woochang; Chun, Sail; Min, Won-Ki

    2012-09-01

    Many laboratories use 4 delta check methods: delta difference, delta percent change, rate difference, and rate percent change. However, guidelines regarding decision criteria for selecting delta check methods have not yet been provided. We present new decision criteria for selecting delta check methods for each clinical chemistry test item. We collected 811,920 and 669,750 paired (present and previous) test results for 27 clinical chemistry test items from inpatients and outpatients, respectively. We devised new decision criteria for the selection of delta check methods based on the ratio of the delta difference to the width of the reference range (DD/RR). Delta check methods based on these criteria were compared with those based on the CV% of the absolute delta difference (ADD) as well as those reported in 2 previous studies. The delta check methods suggested by new decision criteria based on the DD/RR ratio corresponded well with those based on the CV% of the ADD except for only 2 items each in inpatients and outpatients. Delta check methods based on the DD/RR ratio also corresponded with those suggested in the 2 previous studies, except for 1 and 7 items in inpatients and outpatients, respectively. The DD/RR method appears to yield more feasible and intuitive selection criteria and can easily explain changes in the results by reflecting both the biological variation of the test item and the clinical characteristics of patients in each laboratory. We suggest this as a measure to determine delta check methods.
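
    Read literally, the four delta check quantities and the proposed decision ratio can be computed as below (a sketch; the variable names and the time unit are our assumptions):

    ```python
    def delta_checks(prev, curr, hours, ref_low, ref_high):
        """Four classic delta check quantities plus the DD/RR decision ratio."""
        dd = curr - prev                         # delta difference
        dpc = 100.0 * dd / prev                  # delta percent change
        rd = dd / hours                          # rate difference
        rpc = dpc / hours                        # rate percent change
        dd_rr = abs(dd) / (ref_high - ref_low)   # delta difference / reference width
        return dd, dpc, rd, rpc, dd_rr
    ```

    A large DD/RR flags a change that is big relative to the biological spread of the analyte, which is what makes the criterion transferable across test items.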

  17. Selection of candidate plus phenotypes of Jatropha curcas L. using method of paired comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, D.K. [Silviculture Division, Arid Forest Research Institute, P.O. Krishi Mandi, New Pali Road, Jodhpur 342005, Rajasthan (India)

    2009-03-15

    Jatropha curcas L. (Euphorbiaceae) is an oil bearing species with multiple uses and considerable potential as a biodiesel crop. The present communication deals with the method of selecting plus phenotypes of J. curcas for exploiting genetic variability for further improvement. Candidate plus tree selection is the first and most important stage in any tree improvement programme. The selection of candidate plus plants (CPPs) is based upon various important attributes associated with the species and their relative ranking. Relative preferences between the various traits and the scoring for each trait have been worked out by using the method of paired comparisons for the selection of CPPs in J. curcas L. The most important ones are seed and oil yields. (author)

  18. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
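
    As a concrete instance of the classical, data-driven baseline that the report contrasts with its decision-theoretic approach, here is BIC-based selection of a polynomial order (our illustration only; the decision-theoretic method would add a use-specific utility on top of the likelihood):

    ```python
    import numpy as np

    def select_poly_order(x, y, max_order=8):
        """Pick a polynomial order by BIC, the classical likelihood-based route."""
        n = len(x)
        best_order, best_bic = 1, np.inf
        for k in range(1, max_order + 1):
            coef = np.polyfit(x, y, k)
            resid = y - np.polyval(coef, x)
            sigma2 = np.mean(resid ** 2)
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            bic = -2 * loglik + (k + 2) * np.log(n)   # k+1 coefficients + sigma^2
            if bic < best_bic:
                best_order, best_bic = k, bic
        return best_order
    ```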

  19. SELECTION OF NON-CONVENTIONAL MACHINING PROCESSES USING THE OCRA METHOD

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-04-01

    Full Text Available Selection of the most suitable nonconventional machining process (NCMP) for a given machining application can be viewed as a multi-criteria decision making (MCDM) problem with many conflicting and diverse criteria. To aid these selection processes, different MCDM methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, i.e. the operational competitiveness ratings analysis (OCRA) method, for solving NCMP selection problems. The applicability, suitability and computational procedure of the OCRA method are demonstrated while solving three case studies dealing with selection of the most suitable NCMP. In each case study the obtained rankings were compared with those derived by past researchers using different MCDM methods. The results obtained using the OCRA method have good correlation with those derived by past researchers, which validates the usefulness of this method for solving complex NCMP selection problems.

  20. Parameter Selection for Ant Colony Algorithm Based on Bacterial Foraging Algorithm

    Directory of Open Access Journals (Sweden)

    Peng Li

    2016-01-01

    Full Text Available The optimal performance of the ant colony algorithm (ACA) mainly depends on suitable parameters; therefore, parameter selection for ACA is important. We propose a parameter selection method for ACA based on the bacterial foraging algorithm (BFA), considering the effects of coupling between different parameters. Firstly, parameters for ACA are mapped into a multidimensional space, using a chemotactic operator to ensure that each parameter group approaches the optimal value, speeding up the convergence for each parameter set. Secondly, the operation speed for optimizing the entire parameter set is accelerated using a reproduction operator. Finally, the elimination-dispersal operator is used to strengthen the global optimization of the parameters, which avoids falling into a local optimal solution. In order to validate the effectiveness of this method, the results were compared with those using a genetic algorithm (GA) and particle swarm optimization (PSO), and simulations were conducted using different grid maps for robot path planning. The results indicated that parameter selection for ACA based on BFA was the superior method, able to determine the best parameter combination rapidly, accurately, and effectively.

  1. SORIOS – A method for evaluating and selecting environmental certificates and labels

    DEFF Research Database (Denmark)

    Kikkenborg Pedersen, Dennis; Dukovska-Popovska, Iskra; Ola Strandhagen, Jan

    2012-01-01

    This paper presents a general method for evaluating and selecting environmental certificates and labels for companies to use on products and services. The method is developed based on a case study using a Grounded Theory approach. The result is a generalized six-step method that features an initial … searching strategy and an evaluation model that weighs the prerequisites, rewards and the organization of a certificate or label against the strategic needs of a company.

  2. Developing a Clustering-Based Empirical Bayes Analysis Method for Hotspot Identification

    Directory of Open Access Journals (Sweden)

    Yajie Zou

    2017-01-01

    Full Text Available Hotspot identification (HSID) is a critical part of network-wide safety evaluations. Typical methods for ranking sites are often rooted in using the Empirical Bayes (EB) method to estimate safety from both observed crash records and predicted crash frequency based on similar sites. The performance of the EB method is highly related to the selection of a reference group of sites (i.e., roadway segments or intersections) similar to the target site, from which the safety performance functions (SPFs) used to predict crash frequency will be developed. As crash data often contain underlying heterogeneity that, in essence, can make them appear to be generated from distinct subpopulations, methods are needed to select similar sites in a principled manner. To overcome this possible heterogeneity problem, EB-based HSID methods that use common clustering methodologies (e.g., mixture models, K-means, and hierarchical clustering) to select “similar” sites for building SPFs are developed. The performance of the clustering-based EB methods is then compared using real crash data. Here, HSID results, when computed on Texas undivided rural highway crash data, suggest that all three clustering-based EB analysis methods are preferred over the conventional statistical methods. Thus, properly classifying the road segments for heterogeneous crash data can further improve HSID accuracy.
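
    A condensed sketch of the clustering-based EB pipeline (ours, not the paper's code; real SPFs would include exposure offsets and an estimated, rather than fixed, dispersion parameter):

    ```python
    import numpy as np
    import statsmodels.api as sm
    from sklearn.cluster import KMeans

    def eb_estimates(X, crashes, n_clusters=3, alpha=1.0):
        """Cluster sites on covariates, fit one negative-binomial SPF per
        cluster, then shrink each observed count toward its SPF prediction."""
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
        est = np.empty(len(crashes), dtype=float)
        for c in np.unique(labels):
            idx = labels == c
            exog = sm.add_constant(X[idx])
            spf = sm.GLM(crashes[idx], exog,
                         family=sm.families.NegativeBinomial(alpha=alpha)).fit()
            mu = spf.predict(exog)
            w = 1.0 / (1.0 + alpha * mu)          # EB weight from NB dispersion
            est[idx] = w * mu + (1.0 - w) * crashes[idx]
        return est
    ```

    Sites are then ranked by the shrunken estimate instead of the raw count, which guards against regression-to-the-mean in the hotspot list.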

  3. Selective Route Based on SNR with Cross-Layer Scheme in Wireless Ad Hoc Network

    Directory of Open Access Journals (Sweden)

    Istikmal

    2017-01-01

    Full Text Available In this study, we developed network and throughput formulation models and proposed a new routing protocol algorithm with a cross-layer scheme based on signal-to-noise ratio (SNR). This method is an enhancement of the ad hoc on-demand distance vector (AODV) routing protocol. The proposed scheme uses a selective route based on an SNR threshold in the reverse route mechanism. We developed AODV SNR-selective route (AODV SNR-SR) as a mechanism better than AODV SNR, that is, the routing protocol that uses the average or sum of path SNR, and also better than AODV, which is hop-count-based. We also used a selective reverse route based on the SNR mechanism, replacing the earlier method, to avoid routing overhead. The simulation results show that AODV SNR-SR outperforms AODV SNR and AODV in terms of throughput, end-to-end delay, and routing overhead. This proposed method is expected to support Device-to-Device (D2D) communications that are concerned with channel-quality awareness in the development of the future Fifth Generation (5G).

  4. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.

    Science.gov (United States)

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as in the construction and development of the smart city. Meanwhile, numerous cloud services appear on cloud-based platforms. Therefore, how to select trustworthy cloud services remains a significant problem on such platforms, one extensively investigated owing to the ever-growing needs of users. However, the trust relationship in social networks has not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability measured based on Jaccard's Coefficient and the numerical distance computed by the Pearson Correlation Coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach is able to obtain optimal results by adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments.
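
    The record does not give the combination rule; the sketch below is one illustrative reading that blends Jaccard overlap with Pearson correlation and then modifies the result by a trust degree in [0, 1] (the blending weight and the 0.5 + 0.5*trust modifier are our assumptions):

    ```python
    import numpy as np

    def trust_enhanced_similarity(qos_u, qos_v, trust_uv, lam=0.5):
        """qos_u, qos_v: float arrays of QoS ratings, NaN where unrated."""
        used_u, used_v = ~np.isnan(qos_u), ~np.isnan(qos_v)
        both = used_u & used_v
        jaccard = both.sum() / max((used_u | used_v).sum(), 1)
        if both.sum() >= 2 and qos_u[both].std() > 0 and qos_v[both].std() > 0:
            pearson = float(np.corrcoef(qos_u[both], qos_v[both])[0, 1])
        else:
            pearson = 0.0
        base = lam * jaccard + (1 - lam) * pearson
        return base * (0.5 + 0.5 * trust_uv)   # trust modifies the basic similarity
    ```

    Neighbors with the highest trust-enhanced similarity would then supply the missing QoS predictions, as in ordinary neighborhood-based collaborative filtering.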

  5. Classification and Target Group Selection Based Upon Frequent Patterns

    NARCIS (Netherlands)

    W.H.L.M. Pijls (Wim); R. Potharst (Rob)

    2000-01-01

    In this technical report, two new algorithms based upon frequent patterns are proposed. One algorithm is a classification method. The other one is an algorithm for target group selection. In both algorithms, first of all, the collection of frequent patterns in the training set is

  6. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting m...

  7. Selection of industrial robots using the Polygons area method

    Directory of Open Access Journals (Sweden)

    Mortaza Honarmande Azimi

    2014-08-01

    Full Text Available Selection of robots from several proposed alternatives is a very important and tedious task. Decision makers are not limited to one method, and several methods have been proposed for solving this problem. This study presents the Polygons Area Method (PAM) as a multi-attribute decision making method for the robot selection problem. In this method, the maximum polygon area obtained from the attributes of an alternative robot on a radar chart is introduced as the decision-making criterion. The results of this method are compared with those of other typical multiple attribute decision-making methods (SAW, WPM, TOPSIS, and VIKOR) by giving two examples. To find the similarity in rankings given by different methods, Spearman's rank correlation coefficients are obtained for different pairs of MADM methods. It was observed that the introduced method is in good agreement with other well-known MADM methods in the robot selection problem.
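
    The polygon-area criterion is simple to state: place the n normalized attribute values on equally spaced radar-chart axes and sum the triangle areas between consecutive axes. A sketch (the attribute ordering, which affects the area, is left to the analyst):

    ```python
    import math

    def radar_polygon_area(scores):
        """Area of the polygon spanned by n >= 3 normalized scores on a radar chart."""
        n = len(scores)
        wedge = 2 * math.pi / n
        return 0.5 * math.sin(wedge) * sum(
            scores[i] * scores[(i + 1) % n] for i in range(n)
        )

    # The alternative whose attributes enclose the largest area ranks first.
    robots = {"R1": [0.9, 0.6, 0.8, 0.7], "R2": [0.7, 0.9, 0.6, 0.8]}
    best = max(robots, key=lambda r: radar_polygon_area(robots[r]))
    ```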

  8. Equipment Selection by using Fuzzy TOPSIS Method

    Science.gov (United States)

    Yavuz, Mahmut

    2016-10-01

    In this study, the Fuzzy TOPSIS method was applied to the selection of an open pit truck, and the optimal solution of the problem was investigated. Data from Turkish Coal Enterprises were used in the application of the method. This paper explains the Fuzzy TOPSIS approach with a group decision-making application in an open pit coal mine in Turkey. An algorithm for multi-person, multi-criteria decision making with a fuzzy set approach was applied to an equipment selection problem. It was found that Fuzzy TOPSIS with group decision making is a method that may help decision-makers in solving different decision-making problems in mining.
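
    A compact sketch of Chen-style fuzzy TOPSIS with triangular fuzzy ratings (crisp criterion weights and element-wise ideal solutions are simplifications; the paper's exact variant may differ):

    ```python
    import numpy as np

    def fuzzy_topsis(ratings, weights, benefit):
        """ratings: (m, n, 3) triangular fuzzy numbers (l, m, u);
        weights: n crisp weights; benefit: n booleans (True = larger is better)."""
        R = np.asarray(ratings, dtype=float)
        N = np.empty_like(R)
        for j in range(R.shape[1]):
            if benefit[j]:
                N[:, j] = R[:, j] / R[:, j, 2].max()          # (l, m, u) / u*
            else:
                N[:, j] = R[:, j, 0].min() / R[:, j, ::-1]    # (l-/u, l-/m, l-/l)
        V = N * np.asarray(weights)[None, :, None]
        fpis, fnis = V.max(axis=0), V.min(axis=0)             # fuzzy ideals
        dist = lambda A, B: np.sqrt(((A - B) ** 2).mean(axis=-1))  # vertex method
        d_plus = dist(V, fpis[None]).sum(axis=1)
        d_minus = dist(V, fnis[None]).sum(axis=1)
        return d_minus / (d_plus + d_minus)   # closeness: higher ranks better
    ```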

  9. AHP-Based Optimal Selection of Garment Sizes for Online Shopping

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Garment online shopping has been accepted by more and more consumers in recent years. In online shopping, a buyer chooses the garment size judged only by his or her own experience, without trying it on, so the selected garment may not be the fittest one for the buyer due to the variety of body figures. Thus, we propose a method for the optimal selection of garment sizes for online shopping based on the Analytic Hierarchy Process (AHP). The hierarchical structure model for optimal selection of garment sizes is structured, and the fittest garment for a buyer is found by calculating the matching degrees between the individual's measurements and the corresponding key-part values of ready-to-wear clothing sizes. In order to demonstrate its feasibility, we provide an example of selecting the fittest sizes of men's bottoms. The result shows that the proposed method is useful in online clothing sales applications.
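
    The AHP core is small enough to show: priorities come from the principal eigenvector of a pairwise comparison matrix, and the consistency ratio guards against incoherent judgments. A generic sketch (not the paper's garment-specific hierarchy):

    ```python
    import numpy as np

    def ahp_weights(P):
        """Principal-eigenvector priorities for a pairwise comparison matrix P."""
        vals, vecs = np.linalg.eig(P)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()                                 # normalized priority weights
        n = P.shape[0]
        ci = (vals.real[k] - n) / (n - 1)            # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n)      # Saaty's random indices (subset)
        return w, (ci / ri if ri else None)          # weights, consistency ratio
    ```

    A consistency ratio below about 0.1 is conventionally taken as acceptable before the weights are used for matching-degree calculations.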

  10. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    Science.gov (United States)

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to quickly obtain the different subregions' weights, and the weighted matching scores of the different subregions are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  11. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-01-01

    Full Text Available In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to quickly obtain the different subregions' weights, and the weighted matching scores of the different subregions are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  12. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    Science.gov (United States)

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select the optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to quickly obtain the different subregions' weights, and the weighted matching scores of the different subregions are then fused to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  13. Evaluation of peptide selection approaches for epitope‐based vaccine design

    DEFF Research Database (Denmark)

    Schubert, B.; Lund, Ole; Nielsen, Morten

    2013-01-01

    A major challenge in epitope-based vaccine (EV) design stems from the vast genomic variation of pathogens and the diversity of the host cellular immune system. Several computational approaches have been published to assist the selection of potential T cell epitopes for EV design. So far, no thorough … in terms of in silico measurements simulating important vaccine properties like the ability of inducing protection against a multivariant pathogen in a population; the predicted immunogenicity; pathogen, allele, and population coverage; as well as the conservation of selected epitopes. Additionally, we evaluate the use of human leukocyte antigen (HLA) supertypes with regards to their applicability for population-spanning vaccine design. The results showed that, in terms of induced protection, methods that simultaneously aim to optimize pathogen and HLA coverage significantly outperform methods focusing…

  14. Method for selecting FBR development strategies in the presence of uncertainty

    International Nuclear Information System (INIS)

    Fraley, D.W.; Burnham, J.B.

    1981-12-01

    This report describes the methods used to probabilistically analyze data related to the uranium supply, the FBR's competitive dates, development strategies' time and costs, and economic benefits. It also describes the econometric methods used to calculate the economic risks of mistiming the development. Seven strategies for developing the FBR are analyzed. The various measures of a strategy's performance - timing, costs, benefits, and risks - are combined into several criteria which are used to evaluate the seven strategies. Methods are described for selecting a strategy based on a number of alternative criteria

  15. An active learning representative subset selection method using net analyte signal

    Science.gov (United States)

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-01

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference in the Euclidean norm of the net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vectors, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying the projection matrix with the spectra of the samples. The scalar value of the NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and the samples with the largest distance are added to the selected set sequentially. Last, the concentration of the analyte is measured so that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.
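
    Once the NAS scalar (the Euclidean norm of each sample's NAS vector) is available, the selection loop itself is a greedy farthest-point search. A sketch assuming the scalars are precomputed and at least one sample is already selected:

    ```python
    import numpy as np

    def select_representatives(nas, selected_idx, k):
        """Greedily add the k candidates whose NAS scalars lie farthest
        from the already selected set. selected_idx must be non-empty."""
        nas = np.asarray(nas, dtype=float)
        selected = list(selected_idx)
        candidates = [i for i in range(len(nas)) if i not in selected]
        for _ in range(k):
            dist = [min(abs(nas[i] - nas[j]) for j in selected)
                    for i in candidates]
            selected.append(candidates.pop(int(np.argmax(dist))))
        return selected
    ```

    Only the samples returned by the loop then need reference concentration measurements, which is where the claimed savings come from.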

  16. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme of mammographic masses. An image dataset including 1600 regions of interest (ROIs), in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions), was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as the features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Among these four methods, SFFS has the highest efficacy: it takes 3%-5% of the computational time of the GA approach and yields the highest performance level, with an area under the receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized
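
    Plain sequential forward selection (the study's fourth method) is available in scikit-learn; floating variants such as SFFS require a third-party package such as mlxtend. A sketch of the SFS setup with an ANN-style classifier (the feature count and network settings are placeholders, not the study's values):

    ```python
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    clf = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))
    sfs = SequentialFeatureSelector(
        clf, n_features_to_select=30, direction="forward",
        scoring="roc_auc", cv=10,
    )
    # X: (n_lesions, 271) feature matrix; y: 1 = malignant mass, 0 = false positive
    # X_selected = sfs.fit_transform(X, y)
    ```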

  17. Evolutionary game theory using agent-based methods.

    Science.gov (United States)

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations. Copyright © 2016 Elsevier B.V. All rights reserved.
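
    A minimal agent-based example in the spirit of the review: simulate the Moran process for a single advantageous mutant and compare the simulated fixation fraction with the closed-form prediction (population size, fitness, and trial count here are arbitrary):

    ```python
    import random

    def moran_fixation(N=50, r=1.1, trials=2000):
        """Fraction of runs in which one mutant of relative fitness r fixes."""
        fixed = 0
        for _ in range(trials):
            m = 1
            while 0 < m < N:
                # Birth proportional to fitness; death uniform over the population
                birth_mutant = random.random() < m * r / (m * r + (N - m))
                death_mutant = random.random() < m / N
                m += birth_mutant - death_mutant
            fixed += (m == N)
        return fixed / trials

    # Analytic check (constant selection): rho = (1 - 1/r) / (1 - 1/r**N)
    ```

    The agreement between simulation and formula is exactly the kind of cross-validation between agent-based and mathematical treatments the authors argue for.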

  18. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for the quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with a genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required a significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, which showed prediction performance comparable to GA and SPA.

  19. An entropy-based improved k-top scoring pairs (TSP) method for ...

    African Journals Online (AJOL)

    An entropy-based improved k-top scoring pairs (TSP) (Ik-TSP) method was presented in this study for the classification and prediction of human cancers based on gene-expression data. We compared Ik-TSP classifiers with 5 different machine learning methods and the k-TSP method based on 3 different feature selection ...

  20. Enhancements to Graph based methods for Multi Document Summarization

    Directory of Open Access Journals (Sweden)

    Rengaramanujam Srinivasan

    2009-01-01

    Full Text Available This paper focuses its attention on extractive summarization using popular graph based approaches. Graph based methods can be broadly classified into two categories: non-PageRank type and PageRank type methods. Of the methods already proposed, the Centrality Degree method belongs to the former category while the LexRank and Continuous LexRank methods belong to the latter category. The paper goes on to suggest two enhancements to both PageRank type and non-PageRank type methods. The first modification is that of recursively discounting the selected sentences, i.e. if a sentence is selected it is removed from further consideration and the next sentence is selected based upon the contributions of the remaining sentences only. Next the paper suggests a method of incorporating position weight into these schemes. In all, 14 methods - six of non-PageRank type and eight of PageRank type - have been investigated. To clearly distinguish between the various schemes, we call the methods incorporating the discounting and position weight enhancements over Lexical Rank schemes as Sentence Rank (SR) methods. Intrinsic evaluation of all 14 graph based methods was done using the conventional Precision metric and metrics earlier proposed by us - Effectiveness1 (E1) and Effectiveness2 (E2). Experimental study brings out that the proposed SR methods are superior to all the other methods.
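
    The two enhancements are easy to express on top of any PageRank-style sentence scorer; a sketch with a plain power iteration, where discounting re-scores the remaining sentences after each pick (position weighting could multiply the scores by per-sentence weights before the argmax):

    ```python
    import numpy as np

    def pagerank_scores(S, d=0.85, tol=1e-6):
        """Power iteration on a row-normalized sentence-similarity matrix S.

        S must be nonnegative with positive row sums (e.g. cosine similarities)."""
        n = S.shape[0]
        P = S / S.sum(axis=1, keepdims=True)
        r = np.full(n, 1.0 / n)
        while True:
            r_next = (1 - d) / n + d * (P.T @ r)
            if np.abs(r_next - r).sum() < tol:
                return r_next
            r = r_next

    def select_with_discounting(S, k):
        """Pick k sentences, re-ranking the remainder after each selection."""
        remaining = list(range(S.shape[0]))
        chosen = []
        for _ in range(k):
            scores = pagerank_scores(S[np.ix_(remaining, remaining)])
            chosen.append(remaining.pop(int(np.argmax(scores))))
        return chosen
    ```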

  1. Relationship of Source Selection Methods to Contract Outcomes: an Analysis of Air Force Source Selection

    Science.gov (United States)

    2015-12-01

    … on some occasions, performance is terminated early; this can occur due to either mutual agreement or a breach of contract by one of the parties (Garrett… "Relationship of Source Selection Methods to Contract Outcomes: an Analysis of Air Force Source Selection", December 2015, Capt Jacques Lamoureux, USAF … on the contract management process, with special emphasis on the source selection methods of tradeoff and lowest price technically acceptable (LPTA)

  2. MCDM based evaluation and ranking of commercial off-the-shelf using fuzzy based matrix method

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2017-04-01

    Full Text Available In today’s scenario, software has become an essential component in all kinds of systems. The size and complexity of software increase with a corresponding increase in functionality, which leads to the development of modular software systems. Software developers emphasize the concept of component based software engineering (CBSE) for the development of modular software systems. The CBSE concept consists of dividing the software into a number of modules; selecting Commercial Off-the-Shelf (COTS) components for each module; and finally integrating the modules to develop the final software system. The selection of COTS components for any module plays a vital role in software development. To address the problem of COTS selection, a framework for the ranking and selection of various COTS components for any software system, based on expert opinion elicitation and a fuzzy-based matrix methodology, is proposed in this research paper. The selection problem is modeled as a multi-criteria decision making (MCDM) problem. The evaluation criteria are identified through extensive literature study, and the COTS components are ranked based on these identified and selected evaluation criteria using the proposed methods, according to the value of a permanent function of their criteria matrices. The methodology is explained through an example and is validated by comparing with an existing method.
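
    The ranking index described here is the permanent of each alternative's criteria matrix; unlike the determinant, the permanent keeps all expansion terms positive, which is why it works as an aggregate score. A direct Ryser-formula sketch (exponential in the matrix size, which is fine for small criteria matrices):

    ```python
    from itertools import combinations

    def permanent(M):
        """Permanent of a square matrix via Ryser's inclusion-exclusion formula."""
        n = len(M)
        total = 0.0
        for k in range(1, n + 1):
            for cols in combinations(range(n), k):
                prod = 1.0
                for row in M:
                    prod *= sum(row[c] for c in cols)
                total += (-1) ** (n - k) * prod
        return total

    # Sanity check on a 2x2 matrix: per([[a, b], [c, d]]) = a*d + b*c
    assert permanent([[1, 2], [3, 4]]) == 1 * 4 + 2 * 3
    ```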

  3. Selection Method for COTS Systems

    DEFF Research Database (Denmark)

    Hedman, Jonas; Andersson, Bo

    2014-01-01

    feature behind the method is that improved understanding of organizational ‘ends’ or goals should govern the selection of a COTS system. This can also be expressed as a match or fit between ‘ends’ (e.g. improved organizational effectiveness) and ‘means’ (e.g. implementing COTS systems). This way...

  4. Diffusion weighted imaging for differentiating benign from malignant orbital tumors: Diagnostic performance of the apparent diffusion coefficient based on region of interest selection method

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Xiao Quan; Hu, Hao Hu; Su, Guo Yi; Liu, Hu; Shi, Hai Bin; Wu, Fei Yun [First Affiliated Hospital of Nanjing Medical University, Nanjing (China)

    2016-09-15

    To evaluate the differences in apparent diffusion coefficient (ADC) measurements based on three different region of interest (ROI) selection methods, and to compare their diagnostic performance in differentiating benign from malignant orbital tumors. Diffusion-weighted imaging data of sixty-four patients with orbital tumors (33 benign and 31 malignant) were retrospectively analyzed. Two readers independently measured the ADC values using three different ROI selection methods: whole-tumor (WT), single-slice (SS), and reader-defined small sample (RDSS). The differences in ADC values (ADC-ROI_WT, ADC-ROI_SS, and ADC-ROI_RDSS) between the benign and malignant groups were compared using the unpaired t test. Receiver operating characteristic curves were used to determine and compare their diagnostic ability. The ADC measurement time was compared using ANOVA analysis, and measurement reproducibility was assessed using the Bland-Altman method and the intra-class correlation coefficient (ICC). The malignant group showed significantly lower ADC-ROI_WT, ADC-ROI_SS, and ADC-ROI_RDSS than the benign group (all p < 0.05). The areas under the curve showed no significant difference when using ADC-ROI_WT, ADC-ROI_SS, and ADC-ROI_RDSS as the differentiating index, respectively (all p > 0.05). The ROI_SS and ROI_RDSS methods required comparable measurement time (p > 0.05), both significantly shorter than ROI_WT (p < 0.05). The ROI_SS method showed the best reproducibility (mean difference ± limits of agreement between two readers, 0.022 [-0.080 to 0.123] × 10^-3 mm^2/s; ICC, 0.997) among the three ROI methods. ADC values based on the three different ROI selection methods can help to differentiate benign from malignant orbital tumors. The results for measurement time, reproducibility and diagnostic ability suggest that the ROI_SS method is potentially useful for clinical practice.

  5. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    Science.gov (United States)

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is popular and relatively new among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), there is a need to choose an adequate method which will make the comparison fair to all compared units. Usually, comparisons using one specific indicator (a parameter which describes safety or unsafety) can end up with totally different rankings of the compared units, which makes it quite complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant since road safety is a complex system where more and more indicators are constantly being developed to describe it. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models and have recently been increasingly used for risk evaluation in road safety, we used the efficiencies (composite indexes) obtained by different models based on DEA and TOPSIS to present the PROMETHEE-RS model for the selection of the optimal method for a composite index. The method for selecting the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. An improved selective sampling method

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Iida, Nobuyuki; Watanabe, Tamaki

    1986-01-01

    The coincidence methods which are currently used for the accurate activity standardisation of radionuclides require dead time and resolving time corrections, which tend to become increasingly uncertain as count rates exceed about 10 K. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method, using a fast multichannel analyser (50 ns ch^-1) for measuring the count rates. It is, in many ways, more convenient and potentially more reliable to replace the MCA with scalers, and a circuit is described employing five scalers, two of them serving to measure the background correction. Comparisons using our new method and the coincidence method for measuring the activity of 60Co sources yielded agreement within statistical uncertainties. (author)

  7. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    Science.gov (United States)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China, and the current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform, and the neglect of the value of decision makers' (DMs') common opinion in evaluating the criteria information. Second, the difference in DMs' utility functions has failed to receive attention. An innovative method is proposed in this article to address these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Second, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Third, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.

  8. An Identification Key for Selecting Methods for Sustainability Assessments

    Directory of Open Access Journals (Sweden)

    Michiel C. Zijp

    2015-03-01

    Full Text Available Sustainability assessments can play an important role in decision making. This role starts with selecting appropriate methods for a given situation. We observed that scientists, consultants, and decision-makers often do not systematically perform a problem analysis that guides the choice of the method, partly related to a lack of systematic, though sufficiently versatile, approaches to do so. Therefore, we developed and propose a new step towards method selection on the basis of question articulation: the Sustainability Assessment Identification Key. The identification key was designed to lead its user through all important choices needed for comprehensive question articulation. Subsequently, methods that fit the resulting specific questions are suggested by the key. The key consists of five domains, of which three determine method selection and two the design or use of the method. Each domain consists of four or more criteria that need specification. For example, in the domain “system boundaries”, amongst others, the spatial and temporal scales are specified. The key was tested (retrospectively) on a set of thirty case studies. Using the key appeared to contribute to improved: (i) transparency in the link between the question and method selection; (ii) consistency between questions asked and answers provided; and (iii) internal consistency in methodological design. There is latitude to develop the current initial key further, not only for selecting methods pertinent to a problem definition, but also as a principle for associated opportunities such as stakeholder identification.

  9. Evolutionary dynamics on graphs: Efficient method for weak selection

    Science.gov (United States)

    Fu, Feng; Wang, Long; Nowak, Martin A.; Hauert, Christoph

    2009-04-01

    Investigating the evolutionary dynamics of game theoretical interactions in populations where individuals are arranged on a graph can be challenging in terms of computation time. Here, we propose an efficient method to study any type of game on arbitrary graph structures for weak selection. In this limit, evolutionary game dynamics represents a first-order correction to neutral evolution. Spatial correlations can be empirically determined under neutral evolution and provide the basis for formulating the game dynamics as a discrete Markov process by incorporating a detailed description of the microscopic dynamics based on the neutral correlations. This framework is then applied to one of the most intriguing questions in evolutionary biology: the evolution of cooperation. We demonstrate that the degree heterogeneity of a graph impedes cooperation and that the success of tit for tat depends not only on the number of rounds but also on the degree of the graph. Moreover, considering the mutation-selection equilibrium shows that the symmetry of the stationary distribution of states under weak selection is skewed in favor of defectors for larger selection strengths. In particular, degree heterogeneity—a prominent feature of scale-free networks—generally results in a more pronounced increase in the critical benefit-to-cost ratio required for evolution to favor cooperation as compared to regular graphs. This conclusion is corroborated by an analysis of the effects of population structures on the fixation probabilities of strategies in general 2×2 games for different types of graphs. Computer simulations confirm the predictive power of our method and illustrate the improved accuracy as compared to previous studies.

  10. Feature Selection based on Machine Learning in MRIs for Hippocampal Segmentation

    Science.gov (United States)

    Tangaro, Sabina; Amoroso, Nicola; Brescia, Massimo; Cavuoti, Stefano; Chincarini, Andrea; Errico, Rosangela; Paolo, Inglese; Longo, Giuseppe; Maglietta, Rosalia; Tateo, Andrea; Riccio, Giuseppe; Bellotti, Roberto

    2015-01-01

    Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic resonance imaging (MRI) scans can show these variations and therefore can be used as a supportive feature for a number of neurodegenerative diseases. The hippocampus has been known to be a biomarker for Alzheimer disease and other neurological and psychiatric diseases. However, its use requires accurate, robust, and reproducible delineation of hippocampal structures. Fully automatic methods usually follow a voxel-based approach, in which a number of local features are calculated for each voxel. In this paper, we compared four different techniques for feature selection from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; two wrapper methods, namely (ii) sequential forward selection and (iii) sequential backward elimination; and (iv) an embedded method based on the Random Forest classifier. The techniques were trained on a set of 10 T1-weighted brain MRIs and tested on an independent set of 25 subjects. The resulting segmentations were compared with manual reference labelling. By using only 23 features per voxel (sequential backward elimination), we obtained performances comparable to the state of the art with respect to the standard tool FreeSurfer.
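
    To make the compared strategies concrete, the following sketch (with synthetic data standing in for the 315 per-voxel MRI features) illustrates two of the four: the Kolmogorov-Smirnov filter and the Random-Forest-embedded ranking. It is a minimal illustration, not the authors' pipeline.

```python
# A minimal sketch (assumed synthetic data): a KS filter and a Random-Forest
# embedded ranking applied to per-voxel feature vectors with binary labels.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 315))          # 1000 voxels x 315 features (synthetic)
y = rng.integers(0, 2, size=1000)         # 1 = hippocampus voxel, 0 = background

# (i) filter method: KS statistic between class-conditional feature distributions
ks = np.array([ks_2samp(X[y == 1, j], X[y == 0, j]).statistic
               for j in range(X.shape[1])])
filter_top = np.argsort(ks)[::-1][:23]

# (iv) embedded method: impurity-based importances from a Random Forest
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
embedded_top = np.argsort(rf.feature_importances_)[::-1][:23]

print("KS filter picks:", filter_top[:5], "RF picks:", embedded_top[:5])
```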

  11. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for updating the target template. In general, the holistic bounding boxes that contain tracked results are selected as the positive samples. However, when the objects are occluded, this simple strategy easily introduces noise into the training data set and the target template, and then leads the tracker to drift seriously away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. The experimental results on the object tracking benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
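
    The representative sample selection step can be illustrated with SciPy's non-negative least squares solver. In the sketch below, the matrix of candidate samples and the target template are synthetic stand-ins; samples receiving non-zero weights are kept as representatives, which is one plausible reading of the selection rule described above.

```python
# A minimal sketch (assumption: candidate patch features stacked as columns of A):
# non-negative least squares reconstructs the current target template from the
# candidates; samples with non-zero weights are kept as representatives.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.random((256, 30))        # 30 candidate samples, 256-dim patch features
template = A[:, :5] @ np.array([0.4, 0.3, 0.15, 0.1, 0.05])  # synthetic target

weights, residual = nnls(A, template)
representative = np.nonzero(weights > 1e-6)[0]
print("representative sample indices:", representative)
```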

  12. A Parameter Selection Method for Wind Turbine Health Management through SCADA Data

    Directory of Open Access Journals (Sweden)

    Mian Du

    2017-02-01

    Full Text Available Wind turbine anomaly or failure detection using machine learning techniques through the supervisory control and data acquisition (SCADA) system is drawing wide attention from academia and industry. While parameter selection is important for modelling a wind turbine’s condition, only a few papers have been published focusing on this issue, and in those papers interconnections among sub-components in a wind turbine are used to address the problem. However, relying merely on these interconnections for decision making is sometimes too general to provide a parameter list that accounts for the differences among SCADA datasets. In this paper, a method is proposed to provide more detailed suggestions on parameter selection based on mutual information. First, the copula is proven to be capable of simplifying the estimation of mutual information. Then an empirical copula-based mutual information estimation method (ECMI) is introduced for application. After that, a real SCADA dataset is adopted to test the method, and the results show the effectiveness of the ECMI in providing parameter selection suggestions when physical knowledge is not accurate enough.
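
    A minimal sketch of the idea behind an empirical copula-based MI estimate follows. It reflects one reading of the abstract (rank-transform each channel to its empirical copula scale, then estimate MI on the uniform margins, since the MI of a pair equals the MI of its copula), not the authors' exact estimator, and the SCADA channels are simulated.

```python
# A minimal sketch of a copula-based MI estimate on simulated SCADA channels.
import numpy as np
from scipy.stats import rankdata

def empirical_copula_mi(x, y, bins=16):
    u = rankdata(x) / len(x)                 # pseudo-observations in (0, 1]
    v = rankdata(y) / len(y)
    h, _, _ = np.histogram2d(u, v, bins=bins, range=[[0, 1], [0, 1]])
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
rotor_speed = rng.normal(size=5000)
power = 2.0 * rotor_speed + rng.normal(scale=0.5, size=5000)  # dependent channel
print(empirical_copula_mi(rotor_speed, power))  # high MI -> keep this parameter
```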

  13. Multi criteria decision making methods for location selection of distribution centers

    Directory of Open Access Journals (Sweden)

    Romita Chakraborty

    2013-10-01

    Full Text Available In recent years, facing major challenges such as increasingly inflexible consumer demands and the need to improve competitive advantage, industrial organizations all over the world have had to focus on strategies that help them achieve cost reduction, continual quality improvement, increased customer satisfaction and on-time delivery performance. As a result, selection of the most suitable and optimal facility location for a new organization or for the expansion of an existing one is one of the most important strategic issues in fulfilling these objectives. In order to survive in the global competitive market of the 21st century, many industrial organizations have begun to concentrate on the proper selection of the plant site or best facility location. The best location is that which results in higher economic benefit through increased productivity and a good distribution network. When a choice is to be made from among several alternative facility locations, it is necessary to compare their performance characteristics in a decisive way. As the facility location selection problem involves multiple conflicting criteria and a finite set of potential candidate alternatives, different multi-criteria decision-making (MCDM) methods can be effectively applied to solve this type of problem. In this paper, four well-known MCDM methods have been applied to a facility location selection problem and their relative ranking performances are compared. Because of disagreement among the ranks obtained by the four different MCDM methods, a final ranking method based on REGIME is proposed by the authors to facilitate the decision-making process.

  14. A Distributed Dynamic Super Peer Selection Method Based on Evolutionary Game for Heterogeneous P2P Streaming Systems

    Directory of Open Access Journals (Sweden)

    Jing Chen

    2013-01-01

    Full Text Available Due to their high efficiency and good scalability, hierarchical hybrid P2P architectures have recently drawn more and more attention in P2P streaming research and application fields. The problem of super peer selection, the key problem in hybrid heterogeneous P2P architectures, is becoming highly challenging because super peers must be selected from a huge and dynamically changing network. A distributed super peer selection (SPS) algorithm for hybrid heterogeneous P2P streaming systems based on evolutionary game theory is proposed in this paper. The super peer selection procedure is first modeled within an evolutionary game framework and its evolutionarily stable strategies (ESSs) are analyzed. Then a distributed Q-learning algorithm (ESS-SPS), based on the mixed strategies obtained from this analysis, is proposed for the peers to converge to the ESSs from their own payoff histories. Compared to the traditional random super peer selection scheme, experimental results show that the proposed ESS-SPS algorithm achieves better performance in terms of social welfare and average upload rate of super peers, and keeps the upload capacity of the P2P streaming system increasing steadily as the number of peers increases.

  15. Method for routine determination of fluoride in urine by selective ion- electrode

    International Nuclear Information System (INIS)

    Pires, M.A.F.; Bellintani, S.A.

    1985-01-01

    A simple, fast and sensitive method is outlined for determining fluoride in the urine of workers who handle fluoride compounds. The determination is based on the measurement of fluoride by ion-selective electrode. Cationic interferents such as Ca²⁺, Mg²⁺, Fe³⁺ and Al³⁺ are complexed by EDTA and citric acid. Common anions present in urine, such as Cl⁻, PO₄³⁻ and SO₄²⁻, do not interfere in the method. (Author)

  16. Development and selection of Asian-specific humeral implants based on statistical atlas: toward planning minimally invasive surgery.

    Science.gov (United States)

    Wu, K; Daruwalla, Z J; Wong, K L; Murphy, D; Ren, H

    2015-08-01

    The commercial humeral implants based on Western populations are currently not entirely compatible with Asian patients, due to differences in bone size, shape and structure. Surgeons may have to compromise or use different implants that are less conforming, which may cause complications as well as inconvenience in implant positioning. The construction of Asian humerus atlases of different clusters has therefore been proposed to eradicate this problem and to facilitate planning minimally invasive surgical procedures [6,31]. According to the features of the atlases, new implants could be designed specifically for different patients. Furthermore, an automatic implant selection algorithm has been proposed in order to reduce the complications caused by implant-bone mismatch. Prior to the design of the implant, data clustering and extraction of the relevant features were carried out on the datasets of each gender. The fuzzy C-means clustering method is explored in this paper. In addition, two new schemes of implant selection procedures, namely the Procrustes analysis-based scheme and the group average distance-based scheme, were proposed to better search the database for matching implants for newly arriving patients. Neither algorithm has previously been used in this area, yet both turn out to perform excellently in implant selection. Additionally, algorithms to calculate the matching scores between various implants and the patient data are proposed in this paper to assist the implant selection procedure. The results obtained have indicated the feasibility of the proposed development and selection scheme. The 16 sets of male data were divided into two clusters with 8 and 8 subjects, respectively, and the 11 female datasets were also divided into two clusters, with 5 and 6 subjects, respectively. Based on the features of each cluster, the implants designed by the proposed algorithm fit very well on their reference humeri and the proposed
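
    The fuzzy C-means step is standard and compact enough to sketch directly. The implementation below is the textbook algorithm with fuzzifier m = 2; synthetic 2-D points stand in for the humeral shape features.

```python
# A minimal numpy sketch of fuzzy C-means (standard algorithm, m = 2);
# synthetic 2-D points stand in for the humeral shape features.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        e = 2 / (m - 1)
        U = 1.0 / (d ** e * (1.0 / d ** e).sum(axis=1, keepdims=True))
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(0, 1, (20, 2)),
               np.random.default_rng(2).normal(5, 1, (20, 2))])
centers, U = fuzzy_c_means(X)
print(centers, U.argmax(axis=1))
```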

  17. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    Science.gov (United States)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many kernel based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages, such as the high time consumption of the traditional cross-validation method, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
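
    The overall loop is easy to illustrate with SciPy's differential evolution driver. In the sketch below, a cross-validated SVM accuracy stands in for the KFKT discrimination objective used in the paper, and the data are synthetic; only the optimizer-around-a-kernel-parameter pattern is the point.

```python
# A minimal sketch: differential evolution tunes an RBF kernel width by
# maximizing cross-validated accuracy (SVM score standing in for the KFKT
# discrimination criterion; synthetic data).
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def objective(params):
    gamma = 10.0 ** params[0]                      # search gamma on a log scale
    score = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=5).mean()
    return -score                                  # DE minimizes

result = differential_evolution(objective, bounds=[(-4, 1)], maxiter=10, seed=0)
print("best gamma:", 10.0 ** result.x[0], "cv accuracy:", -result.fun)
```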

  18. Methods for producing thin film charge selective transport layers

    Science.gov (United States)

    Hammond, Scott Ryan; Olson, Dana C.; van Hest, Marinus Franciscus Antonius Maria

    2018-01-02

    Methods for producing thin film charge selective transport layers are provided. In one embodiment, a method for forming a thin film charge selective transport layer comprises: providing a precursor solution comprising a metal containing reactive precursor material dissolved into a complexing solvent; depositing the precursor solution onto a surface of a substrate to form a film; and forming a charge selective transport layer on the substrate by annealing the film.

  19. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The performance of ground target detection with space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. First, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Second, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance so as to reject contaminated training samples. Third, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and the training samples are calculated. Fourth, the distances are sorted by value and the training samples with larger distances are preferentially selected, realizing dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
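
    For orientation, the GIP baseline analyzed in the first step can be sketched as follows (synthetic snapshots, illustrative sizes): the generalized inner product x^H R^{-1} x flags training snapshots whose statistics deviate from the bulk, and those are excluded.

```python
# A minimal sketch of the GIP baseline that the abstract improves upon.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # space-time degrees of freedom
K = 200                                  # training snapshots
samples = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)
samples[3] += 4.0                        # snapshot contaminated by a target-like signal

R = samples.conj().T @ samples / K       # sample covariance from all snapshots
Rinv = np.linalg.inv(R + 1e-6 * np.eye(N))
gip = np.real(np.einsum("kn,nm,km->k", samples.conj(), Rinv, samples))

keep = np.argsort(np.abs(gip - np.median(gip)))[: int(0.9 * K)]  # drop outliers
print("discarded snapshots:", sorted(set(range(K)) - set(keep)))
```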

  20. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

    Full Text Available The article highlights the important task of project management in the reprofiling of buildings. In construction project management, it is expedient to pay attention to selecting effective engineering solutions that reduce project duration and cost. This article presents a methodology for the selection of efficient organizational and technical solutions for the reconstruction of buildings being reprofiled. The method is based on compiling project variants in the program Microsoft Project and on experimental statistical analysis using the program COMPEX. The introduction of this technique in the reprofiling of buildings allows efficient project models to be chosen, depending on the given constraints. This technique can also be used for various other construction projects.

  1. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    The aim of the metabolomics study was to investigate the metabolic profile of pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS based metabolomic approach. Due to the high number of variables in the data sets (both raw data and data after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related in an integrated approach. Variable selection (or removal of irrelevant variables) is therefore essential. Different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with jack-knifing [2] was applied to the data in order to achieve variable selection prior to modelling. References: 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when Integrating Omics Data.
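
    The jack-knife variant is straightforward to sketch with scikit-learn's PLS implementation: refit the model leaving out one sample at a time and keep the variables whose coefficients are large and stable across refits. The data below are synthetic stand-ins for the LC-MS peaks.

```python
# A minimal sketch of PLSR with jack-knifing for variable selection
# (synthetic data, not the study's LC-MS set).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))                   # 40 samples, 200 metabolite peaks
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=40)

coefs = []
for i in range(len(X)):
    idx = np.delete(np.arange(len(X)), i)        # leave sample i out
    pls = PLSRegression(n_components=3).fit(X[idx], y[idx])
    coefs.append(np.ravel(pls.coef_))
coefs = np.array(coefs)

# keep variables with large mean coefficient relative to jackknife spread
t_like = np.abs(coefs.mean(axis=0)) / (coefs.std(axis=0) + 1e-12)
selected = np.argsort(t_like)[::-1][:10]
print("selected variables:", sorted(selected))
```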

  2. Selecting Measures to Evaluate Complex Sociotechnical Systems: An Empirical Comparison of a Task-based and Constraint-based Method

    Science.gov (United States)

    2013-07-01

    personnel selection, work methods, labour standards and an individual’s motivation to perform work. His work became less relevant as tasks became more...people were employed to do and was able to show that non-physical factors such as job satisfaction and the psychological states of workers contributed...all threats, flight conditions, consequences of their actions (for example, damaging the aircraft during a “hard” landing) and expressed satisfaction

  3. Gene-Based Multiclass Cancer Diagnosis with Class-Selective Rejections

    Science.gov (United States)

    Jrad, Nisrine; Grall-Maës, Edith; Beauseroy, Pierre

    2009-01-01

    Supervised learning of microarray data has received much attention in recent years. Multiclass cancer diagnosis based on selected gene profiles is used as an adjunct to clinical diagnosis. However, supervised diagnosis may hinder patient care, add expense or confound results. To avoid such misleading outcomes, a multiclass cancer diagnosis with class-selective rejection is proposed. It rejects some patients from one, some, or all classes in order to ensure higher reliability while reducing time and expense costs. Moreover, this classifier takes into account asymmetric penalties dependent on each class and on each wrong or partially correct decision. It is based on ν-1-SVM coupled with its regularization path and minimizes a general loss function defined in the class-selective rejection scheme. The state-of-the-art multiclass algorithms can be considered as a particular case of the proposed algorithm in which the number of decisions is given by the classes and the loss function is defined by the Bayesian risk. Two experiments are carried out in the Bayesian and the class-selective rejection frameworks. Five datasets of selected genes are used to assess the performance of the proposed method. Results are discussed and accuracies are compared with those computed by the Naive Bayes, Nearest Neighbor, Linear Perceptron, Multilayer Perceptron, and Support Vector Machines classifiers. PMID:19584932

  4. Research on Big Data Attribute Selection Method in Submarine Optical Fiber Network Fault Diagnosis Database

    Directory of Open Access Journals (Sweden)

    Chen Ganlang

    2017-11-01

    Full Text Available At present, attribute selection for big data in the fault diagnosis database of submarine optical fiber networks is completed by directly detecting the attributes of the data, so the accuracy of the selection cannot be guaranteed. In this paper, a big data attribute selection method based on support vector machines (SVM) for the fault diagnosis database of submarine optical fiber networks is proposed. Big data in the fault diagnosis database are mined and their attribute weights calculated; attribute classification is then completed according to these weights, thereby completing the attribute selection. Experimental results prove that the proposed method can improve the accuracy of big data attribute selection in the fault diagnosis database of submarine optical fiber networks and has high practical value.

  5. Adaptive Steganalysis Based on Selection Region and Combined Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Donghui Hu

    2017-01-01

    Full Text Available Digital image steganalysis is the art of detecting the presence of information hiding in carrier images. When detecting recently developed adaptive image steganography methods, state-of-the-art steganalysis methods cannot achieve satisfactory detection accuracy, because the adaptive steganography methods can adaptively embed information into regions with rich textures via the guidance of a distortion function, thus making the effective steganalysis features hard to extract. Inspired by the promising success that convolutional neural networks (CNN) have achieved in the fields of digital image analysis, an increasing number of researchers are devoted to designing CNN based steganalysis methods. But for detecting adaptive steganography methods, the results achieved by CNN based methods are still far from expected. In this paper, we propose a hybrid approach by designing a region selection method and a new CNN framework. In order to make the CNN focus on the regions with complex textures, we design a region selection method that finds the region with the maximal sum of the embedding probabilities. To evolve more diverse and effective steganalysis features, we design a new CNN framework consisting of three separate subnets with independent structures and configuration parameters, and then merge and split the three subnets repeatedly. Experimental results indicate that our approach can lead to performance improvement in detecting adaptive steganography.
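
    The region selection step can be sketched with an integral image. This is one reading of the abstract, with a synthetic probability map and an assumed window size: find the window whose summed embedding probability is maximal and crop it for the CNN.

```python
# A minimal sketch: integral-image search for the window with the maximal
# sum of embedding probabilities (synthetic map, assumed 128x128 window).
import numpy as np

def max_probability_region(prob, win=128):
    """Return (row, col) of the top-left corner of the win x win window
    with the maximal sum of embedding probabilities."""
    ii = np.pad(prob, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    sums = ii[win:, win:] - ii[:-win, win:] - ii[win:, :-win] + ii[:-win, :-win]
    r, c = np.unravel_index(np.argmax(sums), sums.shape)
    return r, c

prob = np.random.default_rng(0).random((512, 512)) ** 4   # synthetic sparse map
r, c = max_probability_region(prob)
region = prob[r:r + 128, c:c + 128]                       # crop fed to the CNN
print("selected region corner:", (r, c))
```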

  6. Highly selective apo-arginase based method for sensitive enzymatic assay of manganese (II) and cobalt (II) ions

    Science.gov (United States)

    Stasyuk, Nataliya; Gayda, Galina; Zakalskiy, Andriy; Zakalska, Oksana; Errachid, Abdelhamid; Gonchar, Mykhailo

    2018-03-01

    A novel enzymatic method for the assay of manganese (II) and cobalt (II) ions is proposed, based on using the apo-enzyme of Mn²⁺-dependent recombinant arginase I (arginase) and 2,3-butanedione monoxime (DMO) as a chemical reagent. The principle of the method is the evaluation of the L-arginine-hydrolyzing activity of the arginase holoenzyme after the specific binding of Mn²⁺ or Co²⁺ to apo-arginase. Urea, which is the product of enzymatic hydrolysis of L-arginine (Arg), reacts with DMO, and the resulting compound is detected by both fluorometry and visual spectrophotometry. Thus, the content of metal ions in the tested samples can be determined by measuring the level of urea generated after enzymatic hydrolysis of Arg by the reconstituted arginase holoenzyme in the presence of the tested metal ions. The linearity range of the fluorometric apo-arginase-DMO method for Mn²⁺ assay is from 4 pM to 1.10 nM with a limit of detection of 1 pM Mn²⁺, whereas the linearity range of the present method for Co²⁺ assay is from 8 pM to 45 nM with a limit of detection of 2.5 pM Co²⁺. The proposed method, being highly sensitive, selective, valid and low-cost, may be useful for monitoring Mn²⁺ and Co²⁺ content in clinical laboratories, the food industry and environmental control services.

  7. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data of Glioma

    KAUST Repository

    Abusamra, Heba

    2013-11-01

    Microarray gene expression data gained great importance in recent years due to its role in disease diagnoses and prognoses, which help to choose the appropriate treatment plan for patients. This technology has ushered in a new era of molecular classification. Interpreting gene expression data remains a difficult problem and an active research area due to its native “high dimensional, low sample size” nature. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed in this case to help correctly classify different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This paper presents a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three different classification methods: support vector machines, k-nearest neighbor and random forest; and eight different feature selection methods: information gain, twoing rule, sum minority, max minority, gini index, sum of variances, t-statistics, and one-dimension support vector machine. Five-fold cross validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used in the experiments. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted using a small number of genes. The relationship of features selected by different feature selection methods is investigated, and the most frequently selected features in each fold among all methods for both datasets are evaluated.
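
    The comparison protocol itself is easy to reproduce in outline. The sketch below uses synthetic data in place of the glioma sets, and two scikit-learn rankers (ANOVA F-score and mutual information) stand in for the paper's eight feature selection methods; selection is performed inside the cross-validation folds.

```python
# A minimal sketch of the selector x classifier comparison protocol
# (synthetic stand-in for the glioma data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=80, n_features=2000, n_informative=20,
                           random_state=0)   # "high dimensional, low sample size"

selectors = {"F-score": f_classif, "mutual info": mutual_info_classif}
classifiers = {"SVM": SVC(), "kNN": KNeighborsClassifier(),
               "RF": RandomForestClassifier(random_state=0)}

for s_name, score_fn in selectors.items():
    for c_name, clf in classifiers.items():
        pipe = make_pipeline(SelectKBest(score_fn, k=50), clf)  # select inside folds
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{s_name:12s} + {c_name}: {acc:.3f}")
```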

  9. Web-based emergency response exercise management systems and methods thereof

    Science.gov (United States)

    Goforth, John W.; Mercer, Michael B.; Heath, Zach; Yang, Lynn I.

    2014-09-09

    According to one embodiment, a method for simulating portions of an emergency response exercise includes generating situational awareness outputs associated with a simulated emergency and sending the situational awareness outputs to a plurality of output devices. Also, the method includes outputting to a user device a plurality of decisions associated with the situational awareness outputs at a decision point, receiving a selection of one of the decisions from the user device, generating new situational awareness outputs based on the selected decision, and repeating the sending, outputting and receiving steps based on the new situational awareness outputs. Other methods, systems, and computer program products are included according to other embodiments of the invention.

  10. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    Science.gov (United States)

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.

  11. A Combined Fuzzy-AHP and Fuzzy-GRA Methodology for Hydrogen Energy Storage Method Selection in Turkey

    Directory of Open Access Journals (Sweden)

    Aytac Yildiz

    2013-06-01

    Full Text Available In this paper, we aim to select the most appropriate Hydrogen Energy Storage (HES) method for Turkey from among the alternatives of tank, metal hydride and chemical storage, which were determined based on expert opinions and a literature review. Thus, we propose a combined Multi Criteria Decision Making (MCDM) methodology consisting of a Buckley extension based fuzzy Analytical Hierarchy Process (Fuzzy-AHP) and a linear normalization based fuzzy Grey Relational Analysis (Fuzzy-GRA). This combined approach can be applied to complex decision processes, which often make sense with subjective data or vague information, and is used here to solve the HES selection problem with different defuzzification methods. The proposed approach is unique both in the HES literature and the MCDM literature.

  12. Underground Mining Method Selection Using WPM and PROMETHEE

    Science.gov (United States)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. This is achieved by using two multi-attribute decision making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). In this paper, the analytic hierarchy process is used to calculate the weights of the attributes (i.e., the parameters used in this paper). Mining method selection depends on physical, mechanical, economical and technical parameters. The WPM and PROMETHEE techniques have the ability to consider the relationships between the parameters and the mining methods, and give higher accuracy and faster computation when compared with other decision making techniques. The proposed techniques are applied to determine the effective mining method for a bauxite mine, and the results are compared with methods used in earlier research works. The results show that the conventional cut and fill method is the most suitable mining method.
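
    The WPM half of the comparison is simple enough to sketch with made-up numbers: each alternative is scored as the product of its normalized criterion values raised to the AHP weights, with cost-type criteria inverted before scoring.

```python
# A minimal sketch of the weighted product method (illustrative numbers only).
import numpy as np

# rows = candidate mining methods, cols = criteria (e.g. recovery, dilution, cost)
X = np.array([[0.80, 0.10, 3.2],
              [0.65, 0.05, 2.1],
              [0.90, 0.20, 4.5]], dtype=float)
weights = np.array([0.5, 0.2, 0.3])       # from AHP, sum to 1
benefit = np.array([True, False, False])  # dilution and cost are to be minimized

# ratio normalization: benefit criteria vs. column max, cost criteria inverted
Xn = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
scores = np.prod(Xn ** weights, axis=1)
print("WPM scores:", scores, "best alternative:", int(np.argmax(scores)))
```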

  13. Greater fruit selection following an appearance-based compared with a health-based health promotion poster

    Science.gov (United States)

    2016-01-01

    Abstract Background This study investigated the impact of an appearance-based compared with a traditional health-based public health message for healthy eating. Methods A total of 166 British university students (41 males; aged 20.6 ± 1.9 years) were randomized to view either an appearance-based (n = 82) or a health-based (n = 84) fruit promotion poster. Intentions to consume fruit and immediate fruit selection (laboratory observation) were assessed immediately after poster viewing, and subsequent self-reported fruit consumption was assessed 3 days later. Results Intentions to consume fruit were not predicted by poster type (largest β = 0.03, P = 0.68) but were associated with fruit-based liking, past consumption, attitudes and social norms (smallest β = 0.16, P = 0.04). Immediate fruit selection was greater following the appearance-based compared with the health-based poster (β = −0.24, P < 0.05). Self-reported fruit consumption 3 days later was also greater following the appearance-based poster (β = −0.22, P = 0.03), but this effect became non-significant on consideration of participant characteristics (β = −0.15, P = 0.13), and was instead associated with fruit-based liking and past consumption (smallest β = 0.24, P = 0.03). Conclusions These findings demonstrate the clear value of an appearance-based compared with a health-based health promotion poster for increasing fruit selection. A distinction between outcome measures and the value of a behavioural measure is also demonstrated. PMID:28158693

  14. Selection of logging-based TOC calculation methods for shale reservoirs: A case study of the Jiaoshiba shale gas field in the Sichuan Basin

    Directory of Open Access Journals (Sweden)

    Renchun Huang

    2015-03-01

    Full Text Available Various methods are available for calculating the TOC of shale reservoirs from logging data, and each method has its own applicability and accuracy. It is therefore especially important to establish a regional experimental calculation model based on a thorough analysis of their applicability. Taking the Upper Ordovician Wufeng Fm-Lower Silurian Longmaxi Fm shale reservoirs as an example, TOC calculation models were built by use of the improved ΔlgR, bulk density, natural gamma spectroscopy, multi-fitting and volume model methods respectively, considering previous research results and the geologic features of the area. These models were compared based on the core data. Finally, the bulk density method was selected as the regional experimental calculation model. Field practice demonstrated that the improved ΔlgR and natural gamma spectroscopy methods are poor in accuracy; although the multi-fitting method and the bulk density method both have relatively high accuracy, the bulk density method is simpler and wider in application. To further verify its applicability, the bulk density method was applied to calculate the TOC of shale reservoirs in several key wells in the Jiaoshiba shale gas field, Sichuan Basin, and the calculation accuracy was confirmed with the measured data of core samples, showing that the coincidence rate of logging-based TOC calculation is up to 90.5%–91.0%.
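
    For illustration of the bulk density method's form only: TOC varies inversely with bulk density, as in the classic Schmoker and Hester (1983) calibration for Devonian shales shown below. The regional model fitted for the Jiaoshiba field uses its own coefficients, which are not reproduced here.

```python
# Illustration only: the density method relates TOC inversely to bulk density.
# Coefficients are the classic Schmoker-Hester (1983) Devonian calibration,
# not the regional model fitted for the Jiaoshiba field.
def toc_from_density(rho_b):
    """TOC (wt.%) from a bulk density log reading (g/cm3), Schmoker-Hester form."""
    return 154.497 / rho_b - 57.261

for rho in (2.40, 2.50, 2.60):
    print(f"rho_b = {rho:.2f} g/cm3 -> TOC = {toc_from_density(rho):.2f} wt.%")
```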

  15. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, current use is limited, and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them, or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and the four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular, it addresses the issues of which method is most appropriate to which specific health services management problem, what the user might expect to obtain from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.

  16. A systematic and practical method for selecting systems engineering tools

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2017-01-01

    The complexity of many types of systems has grown considerably over the last decades, so using appropriate systems engineering tools becomes increasingly important. Starting the tool selection process can be intimidating because organizations often only have a vague idea about what they need, which calls for analyses of the actual needs and the available tools. Grouping needs into categories allows us to obtain a comprehensive set of requirements for the tools. The entire model-based systems engineering discipline was categorized for a modeling tool case to enable development of a tool specification; the resulting tool has been in successful operation since 2013 at GN Hearing. We further utilized the method to select a set of tools that we used on pilot cases at GN Hearing for modeling, simulating and formally verifying embedded systems.

  17. The sequence relay selection strategy based on stochastic dynamic programming

    Science.gov (United States)

    Zhu, Rui; Chen, Xihao; Huang, Yangchao

    2017-07-01

    Relay-assisted (RA) networks with relay node selection are an effective means of improving channel capacity and convergence performance. However, most existing research on relay selection did not consider the statistical channel state information or the selection cost. This shortcoming limits the performance and application of RA networks in practical scenarios. In order to overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and the performance upper bound of SRSS is also analyzed in this paper. Furthermore, in order to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are also presented to exhibit the performance of SRSS with SDP.
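
    A minimal sketch of threshold determination by backward induction follows. It is a generic stochastic-dynamic-programming instance under assumed gain statistics and costs, not the paper's SRSS model: relays are observed one by one, each further observation costs c, and a relay is accepted when its gain exceeds the value of continuing.

```python
# A minimal sketch: backward-induction acceptance thresholds for a sequential
# relay selection problem (assumed exponential gains and probing cost).
import numpy as np

rng = np.random.default_rng(0)
gain_samples = rng.exponential(1.0, size=100000)   # assumed channel-gain law
N, c = 10, 0.05                                    # relays to probe, probing cost

V = np.empty(N + 1)
V[N] = gain_samples.mean()                         # must accept the last relay
for k in range(N - 1, 0, -1):
    cont = V[k + 1] - c                            # value of skipping relay k
    V[k] = np.maximum(gain_samples, cont).mean()   # E[max(accept, continue)]

thresholds = V[2:] - c                             # accept relay k if gain > V[k+1] - c
print("acceptance thresholds per stage:", np.round(thresholds, 3))
```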

  18. Influence of Feature Selection Methods on Classification Sensitivity Based on the Example of A Study of Polish Voivodship Tourist Attractiveness

    Directory of Open Access Journals (Sweden)

    Bąk Iwona

    2014-07-01

    Full Text Available The purpose of this article is to determine the influence of various methods of selecting diagnostic features on the sensitivity of classification. Three options for feature selection are presented: a parametric feature selection method with a sum (option I), with a median of the correlation coefficient matrix column elements (option II), and the method of a reversed matrix (option III). The efficiency of the groupings was verified by indicators of homogeneity, heterogeneity and the correctness of grouping. In the assessment of group efficiency, the approach with the Weber median was used. The problem under consideration is illustrated with research into the tourist attractiveness of voivodships in Poland in 2011.

  19. Long-term response to genomic selection: effects of estimation method and reference population structure for different genetic architectures.

    Science.gov (United States)

    Bastiaansen, John W M; Coster, Albart; Calus, Mario P L; van Arendonk, Johan A M; Bovenhuis, Henk

    2012-01-24

    Genomic selection has become an important tool in the genetic improvement of animals and plants. The objective of this study was to investigate the impacts of breeding value estimation method, reference population structure, and trait genetic architecture on long-term response to genomic selection without updating marker effects. Three methods were used to estimate genomic breeding values: a BLUP method with relationships estimated from genome-wide markers (GBLUP), a Bayesian method, and a partial least squares regression method (PLSR). A shallow (individuals from one generation) or deep reference population (individuals from five generations) was used with each method. The effects of the different selection approaches were compared under four different genetic architectures for the trait under selection. Selection was based on one of the three genomic breeding values, on pedigree BLUP breeding values, or performed at random, and continued for ten generations. Differences in long-term selection response were small. For a genetic architecture with a very small number of three to four quantitative trait loci (QTL), the Bayesian method achieved a response that was 0.05 to 0.1 genetic standard deviations higher than the other methods in generation 10. For genetic architectures with approximately 30 to 300 QTL, PLSR (shallow reference) or GBLUP (deep reference) had an average advantage of 0.2 genetic standard deviations over the Bayesian method in generation 10. GBLUP resulted in 0.6% and 0.9% less inbreeding than PLSR and the Bayesian method, respectively, and on average a one-third smaller reduction of genetic variance. Responses in early generations were greater with the shallow reference population, while long-term response was not affected by reference population structure. The ranking of estimation methods was different with than without selection. Under selection, applying GBLUP led to lower inbreeding and a smaller reduction of genetic variance while a similar response to selection was achieved.
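
    A GBLUP evaluation of the kind compared above can be sketched in a few lines of linear algebra (simulated genotypes, VanRaden's genomic relationship matrix, and a single-trait model; the study's simulation design is not reproduced).

```python
# A minimal GBLUP sketch on simulated data (VanRaden G matrix, single trait).
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 1000                                  # animals, SNP markers
p = rng.uniform(0.1, 0.9, size=m)                 # allele frequencies
M = rng.binomial(2, p, size=(n, m)).astype(float) # genotypes coded 0/1/2
Z = M - 2 * p                                     # centered genotypes
G = Z @ Z.T / (2 * (p * (1 - p)).sum())           # VanRaden genomic relationships

qtl = rng.choice(m, 30, replace=False)            # an approx. 30-QTL architecture
g_true = M[:, qtl] @ rng.normal(size=30)
y = g_true + rng.normal(scale=g_true.std(), size=n)   # heritability around 0.5

lam = 1.0                                         # sigma_e^2 / sigma_g^2 at h2 = 0.5
g_hat = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())  # GBLUP solution
print("accuracy of GEBVs:", np.corrcoef(g_hat, g_true)[0, 1])
```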

  20. Analytical network process based optimum cluster head selection in wireless sensor network.

    Science.gov (United States)

    Farman, Haleem; Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad

    2017-01-01

    Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable sensors for health monitoring and a plethora of other applications. A WSN is equipped with hundreds or thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of the available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. Constructing the topology through this technique, we use the analytic network process (ANP) model for cluster head (CH) selection in WSN. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN). The problem of CH selection based on these parameters is tackled as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSN. In addition, sensitivity analysis is carried out to check the stability of the alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, which results in extended overall network lifetime. This paper analyzes that the ANP method used for CH selection with better understanding of the dependencies of

  1. Use of different marker pre-selection methods based on single SNP regression in the estimation of Genomic-EBVs

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-01-01

    Full Text Available Two methods of SNP pre-selection based on single marker regression for the estimation of genomic breeding values (G-EBVs) were compared using simulated data provided by the XII QTL-MAS workshop: (i) Bonferroni correction of the significance threshold and (ii) a permutation test to obtain the reference distribution of the null hypothesis and identify significant markers at the P<0.01 and P<0.001 significance thresholds. From the set of markers significant at P<0.001, random subsets of 50% and 25% of the markers were extracted to evaluate the effect of further reducing the number of significant SNPs on G-EBV predictions. The Bonferroni correction method allowed the identification of 595 significant SNPs, which gave the best G-EBV accuracies in the prediction generations (82.80%). The permutation methods gave slightly lower G-EBV accuracies even though a larger number of SNPs were significant (2,053 and 1,352 for the 0.01 and 0.001 significance thresholds, respectively). Interestingly, halving or quartering the number of SNPs significant at P<0.001 resulted in only a slight decrease in G-EBV accuracies. The genetic structure of the simulated population, with few QTL carrying large effects, might have favoured the Bonferroni method.
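
    The Bonferroni variant is the simplest to sketch: run a single-SNP regression per marker and keep those below the corrected threshold. Genotypes and phenotypes below are simulated, not the QTL-MAS workshop data.

```python
# A minimal sketch of single-SNP regression with Bonferroni pre-selection
# (simulated genotypes and phenotypes).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, m = 500, 5000
X = rng.binomial(2, 0.5, size=(n, m)).astype(float)     # SNP genotypes 0/1/2
y = X[:, :10] @ rng.normal(1.0, 0.2, size=10) + rng.normal(size=n)

pvals = np.array([stats.linregress(X[:, j], y).pvalue for j in range(m)])
alpha = 0.05 / m                                        # Bonferroni-corrected threshold
significant = np.nonzero(pvals < alpha)[0]
print(f"{len(significant)} SNPs pass the Bonferroni threshold of {alpha:.2e}")
```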

  2. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    Science.gov (United States)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  3. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    International Nuclear Information System (INIS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-01-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines to be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval of the probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks and standard support vector machines. - Highlights: • Both training and testing samples are considered for analytical line selection. • The analytical lines are auto-selected based on the built-in characteristics of spectral lines. • The new method can achieve better prediction accuracy and modeling robustness. • Model predictions are given with a confidence interval of the probabilistic distribution

  4. Feature Selection Methods for Zero-Shot Learning of Neural Activity

    Directory of Open Access Journals (Sweden)

    Carlos A. Caceres

    2017-06-01

    Full Text Available Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.

  5. A combined Fisher and Laplacian score for feature selection in QSAR based drug design using compounds with known and unknown activities.

    Science.gov (United States)

    Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah

    2018-02-01

    Quantitative structure-activity relationship (QSAR) modelling is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR based drug design, used to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of Fisher and Laplacian criteria, which exploits both compounds with known and with unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting relevant descriptors, we applied the method and other feature selection methods to three QSAR data sets: serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method perform better than the other models, indicating the efficiency of the proposed method in selecting relevant descriptors using compounds with known and unknown activities. The results of this study showed that compounds with known and unknown activities can be helpful for improving the performance of combined Fisher and Laplacian based feature selection methods.
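
    The supervised Fisher score that the proposed method extends can be sketched directly: descriptors score highly when the class means are far apart relative to the within-class variance. Synthetic data stand in for the QSAR descriptors.

```python
# A minimal sketch of the supervised Fisher score for descriptor ranking.
import numpy as np

def fisher_score(X, y):
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2   # between-class scatter
        within += len(Xc) * Xc.var(axis=0)                 # within-class scatter
    return between / (within + 1e-12)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)                  # active / inactive compounds
X = rng.normal(size=(100, 50))                    # synthetic descriptors
X[:, 0] += 3 * y                                  # descriptor 0 is informative
print("top descriptors:", np.argsort(fisher_score(X, y))[::-1][:5])
```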

  6. Knowledge based decision making method for the selection of mixed refrigerant systems for energy efficient LNG processes

    International Nuclear Information System (INIS)

    Khan, Mohd Shariq; Lee, Sanggyu; Rangaiah, G.P.; Lee, Moonyong

    2013-01-01

    Highlights: • Practical method for finding optimum refrigerant composition is proposed for LNG plant. • Knowledge of boiling point differences in refrigerant component is employed. • Implementation of process knowledge notably makes LNG process energy efficient. • Optimization of LNG plant is more transparent using process knowledge. - Abstract: Mixed refrigerant (MR) systems are used in many industrial applications because of their high energy efficiency, compact design and energy-efficient heat transfer compared to other processes operating with pure refrigerants. The performance of MR systems depends strongly on the optimum refrigerant composition, which is difficult to obtain. This paper proposes a simple and practical method for selecting the appropriate refrigerant composition, which was inspired by (i) knowledge of the boiling point difference in MR components, and (ii) their specific refrigeration effect in bringing a MR system close to reversible operation. A feasibility plot and composite curves were used for full enforcement of the approach temperature. The proposed knowledge-based optimization approach was described and applied to a single MR and a propane precooled MR system for natural gas liquefaction. Maximization of the heat exchanger exergy efficiency was considered as the optimization objective to achieve an energy efficient design goal. Several case studies on single MR and propane precooled MR processes were performed to show the effectiveness of the proposed method. The application of the proposed method is not restricted to liquefiers, and can be applied to any refrigerator and cryogenic cooler where a MR is involved

  7. A New Manufacturing Service Selection and Composition Method Using Improved Flower Pollination Algorithm

    Directory of Open Access Journals (Sweden)

    Wenyu Zhang

    2016-01-01

    Full Text Available With an increasing number of manufacturing services, the means by which to select and compose these services has become a challenging problem. It can be regarded as a multiobjective optimization problem that involves a variety of conflicting quality of service (QoS) attributes. In this study, a multiobjective optimization model of manufacturing service composition is presented that is based on QoS and an environmental index. Next, the skyline operator is applied to reduce the solution space. Then a new method called the improved Flower Pollination Algorithm (FPA) is proposed for solving the problem of manufacturing service selection and composition. The improved FPA enhances the performance of the basic FPA by combining the latter with the crossover and mutation operators of the Differential Evolution (DE) algorithm. Finally, a case study is conducted to compare the proposed method with other evolutionary algorithms, including the Genetic Algorithm, DE, basic FPA, and extended FPA. The experimental results reveal that the proposed method performs best at solving the problem of manufacturing service selection and composition.

  8. Trip Travel Time Forecasting Based on Selective Forgetting Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Zhiming Gui

    2014-01-01

    Full Text Available Travel time estimation on road networks is a valuable traffic metric. In this paper, we propose a machine learning based method for trip travel time estimation in road networks. The method uses historical trip information extracted from taxi trace data as the training data. An optimized online sequential extreme learning machine, the selective forgetting extreme learning machine, is adopted to make the prediction. Its selective forgetting learning ability enables the prediction algorithm to adapt well to changes in trip conditions. Experimental results using real-life taxi trace data show that the forecasting model provides an effective and practical way to forecast travel time.
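
    A basic extreme learning machine (random hidden layer plus a least-squares output layer) is compact enough to sketch; the selective-forgetting extension that down-weights stale trips online is omitted here, and the trip features are synthetic.

```python
# A minimal sketch of a basic extreme learning machine; the online
# selective-forgetting mechanism described in the paper is omitted.
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random weights
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # random feature map
        self.beta = np.linalg.lstsq(H, y, rcond=None)[0] # least-squares output layer
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(1)
X = rng.random((500, 4))            # e.g. departure time, distance, speed, stops
y = 3 * X[:, 0] + 10 * X[:, 1] + rng.normal(scale=0.1, size=500)  # travel time
model = ELM().fit(X[:400], y[:400])
print("test RMSE:", np.sqrt(((model.predict(X[400:]) - y[400:]) ** 2).mean()))
```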

  9. Ontology-Based Method for Fault Diagnosis of Loaders.

    Science.gov (United States)

    Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei

    2018-02-28

    This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding the complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. The method contains the following components: (1) an ontology-based fault diagnosis model is proposed to achieve the integration, sharing and reuse of fault diagnosis knowledge for loaders; (2) combined with the ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case retrieval, case matching and case updating); and (3) to compensate for the shortcomings of the CBR method when relevant cases are lacking, ontology-based RBR (rule-based reasoning) is put forward by building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods and assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through a case study.

  10. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    Anderson, Ryan B.; Bell, James F.; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-01-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO₂ at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically
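    A rough Python sketch of strategy (2), k-means clustering followed by per-cluster PLS2 models, assuming `train_spectra`, `train_comp` (multi-oxide compositions) and `test_spectra` arrays exist; the cluster count, latent-variable count, and the fallback to a global model for thin clusters are our own simplifications, not the paper's exact procedure.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression

    def kmeans_pls(train_spectra, train_comp, test_spectra, n_clusters=5, n_lv=10):
        """Cluster all spectra (no test chemistry needed), then fit one PLS2 model
        per cluster and predict each test spectrum with its cluster's model."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        km.fit(np.vstack([train_spectra, test_spectra]))
        train_lab = km.labels_[:len(train_spectra)]
        test_lab = km.labels_[len(train_spectra):]
        preds = np.zeros((len(test_spectra), train_comp.shape[1]))
        for c in range(n_clusters):
            tr = train_lab == c
            if tr.sum() < n_lv:     # too few training spectra: fall back to a global model
                tr = np.ones(len(train_spectra), bool)
            pls = PLSRegression(n_components=min(n_lv, int(tr.sum()) - 1))
            pls.fit(train_spectra[tr], train_comp[tr])
            te = test_lab == c
            if te.any():
                preds[te] = pls.predict(test_spectra[te])
        return preds
    ```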

  11. Fingerprint-Based Machine Learning Approach to Identify Potent and Selective 5-HT2BR Ligands

    Directory of Open Access Journals (Sweden)

    Krzysztof Rataj

    2018-05-01

    Full Text Available The identification of subtype-selective GPCR (G-protein coupled receptor) ligands is a challenging task. In this study, we developed a computational protocol to find compounds with 5-HT2BR versus 5-HT1BR selectivity. Our approach employs the hierarchical combination of machine learning methods, docking, and multiple scoring methods. First, we applied machine learning tools to filter a large database of druglike compounds by the new Neighbouring Substructures Fingerprint (NSFP). This two-dimensional fingerprint contains information on the connectivity of the substructural features of a compound. Preselected subsets of the database were then subjected to docking calculations. The main indicators of compounds’ selectivity were their different interactions with the secondary binding pockets of both target proteins, while binding modes within the orthosteric binding pocket were preserved. The combined methodology of ligand-based and structure-based methods was validated prospectively, resulting in the identification of hits with nanomolar affinity and ten-fold to ten-thousand-fold selectivities.
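    The NSFP itself is not a published detail we can reproduce, so the sketch below substitutes generic binary substructure fingerprints and random labels to show the shape of the first stage only: train a classifier on fingerprint bit vectors and shortlist top-scoring compounds for the subsequent docking step. All data and names are placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical data: one binary fingerprint per compound, 1 = selective for 5-HT2BR.
    rng = np.random.default_rng(0)
    fps = rng.integers(0, 2, size=(2000, 1024))      # stand-in for NSFP bit vectors
    labels = rng.integers(0, 2, size=2000)           # stand-in selectivity labels

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(fps[:1500], labels[:1500])
    scores = clf.predict_proba(fps[1500:])[:, 1]     # selectivity score per compound
    shortlist = np.argsort(scores)[::-1][:100]       # top-ranked compounds go to docking
    ```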

  12. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy.

    Directory of Open Access Journals (Sweden)

    Lina Zhang

    Full Text Available Antioxidant proteins perform significant functions in maintaining oxidation/antioxidation balance and offer potential therapies for some diseases. Accurate identification of antioxidant proteins could contribute to revealing the physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of the prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48, with an accuracy of 0.925. A Relief combined with IFS (Incremental Feature Selection) method is adopted to obtain optimal features from the hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthews Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we developed a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc.
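    A sketch of the two reusable ideas, using scikit-learn stand-ins for the WEKA learners named in the abstract and a simple incremental feature selection loop; the Relief ranking that would produce `ranked_idx` and the hybrid SSI/PSSM/RSA/CTD features are assumed to exist upstream.

    ```python
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Stand-ins for the WEKA learners: RF -> RandomForest, SMO -> SVC,
    # NNA -> 1-nearest-neighbor, J48 -> DecisionTree; "soft" voting averages
    # their class probabilities, mirroring the ensemble's averaged predictions.
    ensemble = VotingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("smo", SVC(probability=True, random_state=0)),
                    ("nna", KNeighborsClassifier(n_neighbors=1)),
                    ("j48", DecisionTreeClassifier(random_state=0))],
        voting="soft")

    def incremental_feature_selection(ranked_idx, X, y, clf, cv=5):
        """IFS step: grow the feature set in ranked order (ranked_idx comes from a
        Relief ranking upstream) and keep the size with the best CV accuracy."""
        best_k, best_acc = 1, 0.0
        for k in range(1, len(ranked_idx) + 1):
            acc = cross_val_score(clf, X[:, ranked_idx[:k]], y, cv=cv).mean()
            if acc > best_acc:
                best_k, best_acc = k, acc
        return ranked_idx[:best_k], best_acc
    ```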

  13. Wavelength selection for portable noninvasive blood component measurement system based on spectral difference coefficient and dynamic spectrum

    Science.gov (United States)

    Feng, Ximeng; Li, Gang; Yu, Haixia; Wang, Shaohui; Yi, Xiaoqing; Lin, Ling

    2018-03-01

    Noninvasive blood component analysis by spectroscopy has been a hot topic in biomedical engineering in recent years. Dynamic spectrum provides an excellent basis for noninvasive blood component measurement, but studies have been limited to the application of broadband light sources and high-resolution spectroscopy instruments. In order to remove redundant information, a more effective wavelength selection method is presented in this paper. In contrast to many common wavelength selection methods, this method is based on the sensing mechanism, which gives it a clear physical interpretation and allows it to effectively avoid noise from the acquisition system. The spectral difference coefficient was theoretically proved to have guiding significance for wavelength selection. After theoretical analysis, the multi-band spectral difference coefficient wavelength selection method, combined with the dynamic spectrum, was proposed. An experimental analysis based on clinical trial data from 200 volunteers was conducted to illustrate the effectiveness of this method. The extreme learning machine was used to develop calibration models between the dynamic spectrum data and hemoglobin concentration. The experimental results show that the prediction precision of hemoglobin concentration is higher with the multi-band spectral difference coefficient wavelength selection method than with other methods.

  14. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach, applied in major fields such as biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks to find the best subset of features that describes the data (relevance) and achieves better performance (optimality). Wrapper approaches are feature selection methods that are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance- and information-based criteria. Significant improvements in performance, computational time, and the selection of minimally sized feature subsets are achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process can lead to a global solution with desirable characteristics.
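    A serial toy version of a GA-driven wrapper, assuming scikit-learn is available: each chromosome is a boolean feature mask, fitness blends cross-validated accuracy of a k-NN classifier with a small reward for compact subsets, and evolution uses tournament selection, uniform crossover, bit-flip mutation and elitism. The thesis's parallel evaluation and its specific multi-classifier objective are not reproduced here.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def ga_wrapper(X, y, pop_size=30, gens=40, p_mut=0.02, alpha=0.98, seed=0):
        """GA wrapper: evolve boolean feature masks; fitness wraps a k-NN classifier."""
        rng = np.random.default_rng(seed)
        n = X.shape[1]
        pop = rng.random((pop_size, n)) < 0.5

        def fitness(mask):
            if not mask.any():
                return 0.0
            acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
            return alpha * acc + (1 - alpha) * (1 - mask.sum() / n)  # reward small subsets

        best_mask, best_fit = pop[0].copy(), -1.0
        for _ in range(gens):
            fit = np.array([fitness(m) for m in pop])
            if fit.max() > best_fit:
                best_fit, best_mask = fit.max(), pop[fit.argmax()].copy()
            # tournament selection of parents
            idx = [max(rng.choice(pop_size, 2, replace=False), key=lambda i: fit[i])
                   for _ in range(pop_size)]
            parents = pop[idx]
            children = parents.copy()
            for i in range(0, pop_size - 1, 2):           # uniform crossover
                swap = rng.random(n) < 0.5
                children[i, swap] = parents[i + 1, swap]
                children[i + 1, swap] = parents[i, swap]
            children ^= rng.random((pop_size, n)) < p_mut  # bit-flip mutation
            children[0] = best_mask                        # elitism
            pop = children
        return best_mask, best_fit

    # usage: mask, score = ga_wrapper(X, y)  # X: samples x features array, y: labels
    ```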

  15. Reliable selection of earthquake ground motions for performance-based design

    DEFF Research Database (Denmark)

    Katsanos, Evangelos; Sextos, A.G.

    2016-01-01

    A decision support process is presented to accommodate selecting and scaling of earthquake motions as required for the time domain analysis of structures. Prequalified code-compatible suites of seismic motions are provided through a multi-criterion approach to satisfy prescribed reduced variability...... of the method, by being subjected to numerous suites of motions that were highly ranked according to both the proposed approach (δsv-sc) and the conventional index (δconv), already used by most existing code-based earthquake records selection and scaling procedures. The findings reveal the superiority...

  16. Investigation of Methods for Selectively Reinforcing Aluminum and Aluminum-Lithium Materials

    Science.gov (United States)

    Bird, R. Keith; Alexa, Joel A.; Messick, Peter L.; Domack, Marcia S.; Wagner, John A.

    2013-01-01

    Several studies have indicated that selective reinforcement offers the potential to significantly improve the performance of metallic structures for aerospace applications. Applying high-strength, high-stiffness fibers to the high-stress regions of aluminum-based structures can increase the structural load-carrying capability and inhibit fatigue crack initiation and growth. This paper discusses an investigation into potential methods for applying reinforcing fibers onto the surface of aluminum and aluminum-lithium plate. Commercially-available alumina-fiber reinforced aluminum alloy tapes were used as the reinforcing material. Vacuum hot pressing was used to bond the reinforcing tape to aluminum alloy 2219 and aluminum-lithium alloy 2195 base plates. Static and cyclic three-point bend testing and metallurgical analysis were used to evaluate the enhancement of mechanical performance and the integrity of the bond between the tape and the base plate. The tests demonstrated an increase in specific bending stiffness. In addition, no issues with debonding of the reinforcing tape from the base plate during bend testing were observed. The increase in specific stiffness indicates that selectively-reinforced structures could be designed with the same performance capabilities as a conventional unreinforced structure but with lower mass.

  17. A comparison of U.S. and European methods for accident scenario identification, selection and quantification

    International Nuclear Information System (INIS)

    Cadwallader, L.C.; Djerassi, H.; Lampin, I.

    1989-10-01

    This paper presents a comparison of the varying methods used to identify and select accident-initiating events for safety analysis and probabilistic risk assessment (PRA). Initiating events are important in that they define the extent of a given safety analysis or PRA. Comprehensiveness in the identification and selection of initiating events is necessary to ensure that a thorough analysis is being performed. While total completeness can never be realized, inclusion of all safety-significant events can be attained. The European approach to initiating event identification and selection arises from within a newly developed Safety Analysis methodology framework. This is a functional approach, with accident initiators based on events that will cause a system or facility loss of function. The US method divides accident initiators into two groups: internal and external events. Since traditional US PRA techniques are applied to fusion facilities, the recommended PRA-based approach is a review of historical safety documents coupled with a facility-level Master Logic Diagram. The US and European methods are described, and both are applied to a proposed International Thermonuclear Experimental Reactor (ITER) Magnet System in a sample problem. Contrasts between the US and European methods are discussed. Within their respective frameworks, each method can provide the comprehensive coverage of safety-significant events needed for a thorough analysis. 4 refs., 8 figs., 11 tabs

  18. Prototype selection based on FCM and its application in discrimination between nuclear explosion and earthquake

    International Nuclear Information System (INIS)

    Han Shaoqing; Li Xihai; Song Zibiao; Liu Daizhi

    2007-01-01

    The synergetic pattern recognition is a new way of pattern recognition with many excellent features, such as resistance to noise and deformation. But when it is used in the discrimination between nuclear explosions and earthquakes with existing methods of prototype selection, the results are not satisfactory. A new method of prototype selection based on FCM is proposed in this paper. First, each group of training samples is clustered into c groups using FCM; then c barycenters or centers are chosen as prototypes. Experimental results show that, compared with existing methods of prototype selection, this new method is effective and greatly increases the recognition ratio. (authors)
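    A compact NumPy sketch of the proposed prototype selection: run plain fuzzy c-means inside each class of training samples and keep the resulting c centers as that class's prototypes. The fuzzifier m, cluster count c, and data shapes are illustrative defaults.

    ```python
    import numpy as np

    def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
        """Plain fuzzy c-means: returns (centers, membership matrix U)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]         # membership-weighted means
            d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
            w = d ** (-2.0 / (m - 1))                              # standard FCM update
            U_new = w / w.sum(axis=1, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                return centers, U_new
            U = U_new
        return centers, U

    def prototypes_per_class(X, y, c=3):
        """Cluster each class's training samples with FCM and keep the c centers
        as that class's prototypes, as in the proposed selection scheme."""
        return {label: fcm(X[y == label], c)[0] for label in np.unique(y)}
    ```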

  19. Application of the AHP method in the partner selection process for supply chain development

    Directory of Open Access Journals (Sweden)

    Barac Nada

    2012-06-01

    Full Text Available The process of developing a supply chain is long and complex, with many restrictions and obstacles. In this paper the authors focus on the first stage of supply chain development: the process of selecting partners. This phase of the development significantly affects the competitive position of the supply chain and the creation of value for the consumer. The selected partners or 'links' of the supply chain influence its future performance, which points to the necessity of full commitment to this process. The process of partner selection is conditioned by the key criteria used on that occasion. The use of inadequate criteria may endanger the whole process of building a supply chain, as the selected partners may not match future supply chain needs. This paper analyses partner selection based on the key criteria used by managers in Serbia. For this purpose we used the AHP method; the results show which criteria the managers rank highest.
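    For readers unfamiliar with the mechanics, here is a small AHP sketch: a pairwise comparison matrix on Saaty's 1-9 scale is reduced to a priority vector via its principal eigenvector, with the consistency ratio as a sanity check. The four criteria and the comparison values are invented for illustration, not taken from the Serbian survey.

    ```python
    import numpy as np

    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

    def ahp_weights(A):
        """Priority vector from a pairwise comparison matrix via the principal
        eigenvector, plus Saaty's consistency ratio (CR < 0.1 is acceptable)."""
        A = np.asarray(A, float)
        n = A.shape[0]
        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()
        ci = (vals[k].real - n) / (n - 1)
        cr = ci / RI[n] if RI.get(n, 0) else 0.0
        return w, cr

    # Hypothetical criteria: quality, delivery reliability, cost, cooperation history.
    A = [[1,   3,   5,   7],
         [1/3, 1,   3,   5],
         [1/5, 1/3, 1,   3],
         [1/7, 1/5, 1/3, 1]]
    w, cr = ahp_weights(A)
    print(np.round(w, 3), round(cr, 3))   # weights sum to 1; cr should be below 0.1
    ```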

  20. Alternative microbial methods: An overview and selection criteria.

    NARCIS (Netherlands)

    Jasson, V.; Jacxsens, L.; Luning, P.A.; Rajkovic, A.; Uyttendaele, M.

    2010-01-01

    This study provides an overview of, and criteria for, the selection of a method other than the reference method for the microbial analysis of foods. In the first part, an overview of the general characteristics of available rapid methods, both for enumeration and detection, is given with reference to relevant

  1. Comparative studies of praseodymium(III) selective sensors based on newly synthesized Schiff's bases

    International Nuclear Information System (INIS)

    Gupta, Vinod K.; Goyal, Rajendra N.; Pal, Manoj K.; Sharma, Ram A.

    2009-01-01

    Praseodymium ion-selective polyvinyl chloride (PVC) membrane sensors, based on two new Schiff's bases, 1,3-diphenylpropane-1,3-diylidenebis(azan-1-ylidene)diphenol (M1) and N,N'-bis(pyridoxylideneiminato)ethylene (M2), have been developed and studied. The sensor with a membrane composition of PVC: o-NPOE: ionophore (M1): NaTPB (w/w; mg) of 150: 300: 8: 5 showed the best performance in comparison to the M2-based membranes. The sensor based on M1 exhibits a working concentration range of 1.0 × 10⁻⁸ to 1.0 × 10⁻² M with a detection limit of 5.0 × 10⁻⁹ M and a Nernstian slope of 20.0 ± 0.3 mV decade⁻¹ of activity. It exhibited a quick response time of <8 s, and its potential responses were pH independent across the range of 3.5-8.5. The influence of the membrane composition and of possible interfering ions on the response properties of the electrode has also been investigated. The sensor has been found to work satisfactorily in partially non-aqueous media up to 15% (v/v) content of methanol, ethanol or acetonitrile and could be used for a period of 3 months. The selectivity coefficients determined using the fixed interference method (FIM) indicate high selectivity for praseodymium(III) ions over a wide variety of other cations. To assess its analytical applicability, the prepared sensor was successfully applied to the determination of praseodymium(III) in spiked water samples.

  2. Investigation of RNA Structure by High-Throughput SHAPE-Based Probing Methods

    DEFF Research Database (Denmark)

    Poulsen, Line Dahl

    of high-throughput SHAPE-based approaches to investigate RNA structure, based on novel SHAPE reagents that permit selection of full-length cDNAs. The SHAPE Selection (SHAPES) method is applied to the foot-and-mouth disease virus (FMDV) plus-strand RNA genome, and the data are used to construct a genome-wide structural...... that they are functional. The SHAPES method is further applied to the hepatitis C virus (HCV), where the data are used to refine known and predicted structures. Over the past years, interest in studying RNA structures in their native environment has increased, and to allow studying RNA structure inside living cells...... using the SHAPE Selection approach, I introduce a biotinylated probing reagent. This chemical can cross cell membranes and reacts with RNA inside the cells, allowing the structural conformations to be studied under physiologically relevant conditions in living cells. The methods and results

  3. Analysis of Criteria Influencing Contractor Selection Using TOPSIS Method

    Science.gov (United States)

    Alptekin, Orkun; Alptekin, Nesrin

    2017-10-01

    Selection of the most suitable contractor is an important process in public construction projects. This process is a major decision which may influence the progress and success of a construction project. Improper selection of contractors may lead to problems such as poor quality of work and delays in project duration. Especially in construction projects for public buildings, the proper choice of contractor benefits the public institution. Public procurement processes have different characteristics reflecting dissimilarities in the political, social and economic features of each country. In Turkey, Turkish Public Procurement Law PPL 4734 is the main law regulating the procurement of public buildings. According to PPL 4734, public construction administrators have to contract with the lowest bidder who meets the minimum requirements according to the criteria in the prequalification process. Because of the restrictive provisions of PPL 4734, public administrators cannot always select the most suitable contractor. The lowest-bid method does not enable public construction administrators to select the most qualified contractor, and they have realised that selecting a contractor based on the lowest bid alone is inadequate and may lead to the failure of the project in terms of time delays and poor quality standards. In order to evaluate the overall efficiency of a project, it is necessary to identify selection criteria. This study focuses on identifying the importance of other criteria besides the lowest-bid criterion in the contractor selection process of PPL 4734. A survey was conducted among the staff of the Department of Construction Works of Eskisehir Osmangazi University. According to the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) analysis results, termination of construction work in previous tenders is the most important of the 12 criteria identified. The lowest-bid criterion ranks fifth.
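    A minimal TOPSIS implementation for reference; the decision matrix in the usage example (bid price as a cost criterion, experience and past-work quality as benefit criteria) is hypothetical and much smaller than the paper's 12-criterion setup.

    ```python
    import numpy as np

    def topsis(D, weights, benefit):
        """Rank alternatives with TOPSIS. D: alternatives x criteria matrix;
        benefit[j] is True for 'larger is better' criteria, False for cost criteria."""
        D = np.asarray(D, float)
        R = D / np.linalg.norm(D, axis=0)           # vector normalization per criterion
        V = R * np.asarray(weights, float)          # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        score = d_neg / (d_pos + d_neg)             # relative closeness to the ideal
        return score, np.argsort(score)[::-1]       # higher score = better rank

    # Hypothetical contractors: [bid price, years of experience, past-work quality]
    D = [[9.2e6, 12, 0.8],
         [8.5e6,  7, 0.6],
         [9.8e6, 20, 0.9]]
    score, ranking = topsis(D, weights=[0.4, 0.3, 0.3], benefit=[False, True, True])
    ```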

  4. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    Science.gov (United States)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation that combines block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected-components analysis, and then determining the boundaries' disparities using the sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross-correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).

  5. A pragmatic pairwise group-decision method for selection of sites for nuclear power plants

    International Nuclear Information System (INIS)

    Kutbi, I.I.

    1987-01-01

    A pragmatic pairwise group-decision approach is applied to compare two regions in order to select the more suitable one for the construction of nuclear power plants in the Kingdom of Saudi Arabia. The selection methodology is based on pairwise comparison by forced choice. The method facilitates rating of the regions or sites using simple calculations. Two regions, one close to Dhahran on the Arabian Gulf and another close to Jeddah on the Red Sea, are evaluated. No specific site in either region is considered at this stage. The comparison is based on a set of selection criteria which include (i) topography, (ii) geology, (iii) seismology, (iv) meteorology, (v) oceanography, (vi) hydrology and (vii) proximity to oil and gas fields. The comparison shows that the Jeddah region is more suitable than the Dhahran region. (orig.)

  6. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    Science.gov (United States)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

    Antenna selection in wireless communication systems has attracted increasing attention due to the challenge of balancing communication performance against computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, labels for the channel matrices of attenuation coefficients are generated by minimizing the key performance indicator of the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues in the attenuation coefficients is learned on the training antenna systems. Finally, the trained deep convolutional neural network is used to classify the channel matrix labels of the test antennas and select the optimal antenna subset. Simulation results demonstrate that our method achieves better performance than state-of-the-art baselines for data-driven wireless antenna selection.

  7. Method of Menu Selection by Gaze Movement Using AC EOG Signals

    Science.gov (United States)

    Kanoh, Shin'ichiro; Futami, Ryoko; Yoshinobu, Tatsuo; Hoshimiya, Nozomu

    A method to detect the direction and distance of voluntary eye-gaze movement from EOG (electrooculogram) signals was proposed and tested. In this method, AC-amplified vertical and horizontal transient EOG signals were classified into eight directions and two distances of voluntary eye-gaze movement. The horizontal and vertical EOGs during an eye-gaze movement at each sampling time were treated as a two-dimensional vector, and the center of gravity of the sample vectors whose norms were more than 80% of the maximum norm was used as the feature vector to be classified. Classification using the k-nearest neighbor algorithm showed that the averaged correct detection rates for the individual subjects were 98.9%, 98.7%, and 94.4%, respectively. This method avoids strict EOG-based eye tracking, which requires DC amplification of a very small signal. It would be useful for developing robust menu-selection human interfaces for severely paralyzed patients.
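    The classification step reduces to k-nearest neighbors on one 2D feature vector per trial (the center of gravity of the large-norm EOG samples); a sketch with synthetic stand-in features and the 16 classes implied by 8 directions times 2 distances:

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Stand-in features: one (horizontal, vertical) center-of-gravity vector per trial.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(160, 2))        # synthetic EOG feature vectors
    y_train = rng.integers(0, 16, size=160)    # 8 directions x 2 distances = 16 classes

    knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
    print(knn.predict(rng.normal(size=(5, 2))))  # predicted gaze-target classes
    ```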

  8. Methods for the guideline-based development of quality indicators--a systematic review

    Science.gov (United States)

    2012-01-01

    Background Quality indicators (QIs) are used in many healthcare settings to measure, compare, and improve quality of care. For the efficient development of high-quality QIs, rigorous, approved, and evidence-based development methods are needed. Clinical practice guidelines are a suitable source to derive QIs from, but no gold standard for guideline-based QI development exists. This review aims to identify, describe, and compare methodological approaches to guideline-based QI development. Methods We systematically searched medical literature databases (Medline, EMBASE, and CINAHL) and grey literature. Two researchers selected publications reporting methodological approaches to guideline-based QI development. In order to describe and compare methodological approaches used in these publications, we extracted detailed information on common steps of guideline-based QI development (topic selection, guideline selection, extraction of recommendations, QI selection, practice test, and implementation) to predesigned extraction tables. Results From 8,697 hits in the database search and several grey literature documents, we selected 48 relevant references. The studies were of heterogeneous type and quality. We found no randomized controlled trial or other studies comparing the ability of different methodological approaches to guideline-based development to generate high-quality QIs. The relevant publications featured a wide variety of methodological approaches to guideline-based QI development, especially regarding guideline selection and extraction of recommendations. Only a few studies reported patient involvement. Conclusions Further research is needed to determine which elements of the methodological approaches identified, described, and compared in this review are best suited to constitute a gold standard for guideline-based QI development. For this research, we provide a comprehensive groundwork. PMID:22436067

  9. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. A method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  10. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Sungho Kim

    2016-07-01

    Full Text Available Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are strongly affected by weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. A method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposes a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise with the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic

  11. Advanced display object selection methods for enhancing user-computer productivity

    Science.gov (United States)

    Osga, Glenn A.

    1993-01-01

    The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUIs allow user selection by (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, or touchscreen, and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with the various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit of applying this method is substantial, with the potential for increasing productivity across thousands of users and applications.

  12. A new laser vibrometry-based 2D selective intensity method for source identification in reverberant fields: part II. Application to an aircraft cabin

    International Nuclear Information System (INIS)

    Revel, G M; Martarelli, M; Chiariotti, P

    2010-01-01

    The selective intensity technique is a powerful tool for the localization of acoustic sources and for the identification of the structural contributions to the acoustic emission. In practice, the selective intensity method is based on simultaneous measurements of the acoustic intensity, by means of a pair of matched microphones, and of the structural vibration of the emitting object. In this paper, high-spatial-density multi-point vibration data, acquired using a scanning laser Doppler vibrometer, have been used for the first time. By applying the selective intensity algorithm, the contribution of a large number of structural sources to the acoustic field radiated by the vibrating object can therefore be estimated. The selective intensity represents the distribution of the acoustic monopole sources on the emitting surface, as if each monopole acted separately from the others. This innovative selective intensity approach can be very helpful when measurements are performed on large panels in highly reverberant environments, such as aircraft cabins, where the separation of the direct acoustic field (radiated by the vibrating panels of the fuselage) from the reverberant one is difficult with traditional techniques. The work shown in this paper applies part of the results of the European project CREDO (Cabin Noise Reduction by Experimental and Numerical Design Optimization), carried out within the framework of the EU. The aim of this paper is thus to illustrate a real application of the method to the interior acoustic characterization of an Alenia Aeronautica ATR42 ground test facility, Alenia Aeronautica being a partner of the CREDO project.

  13. Feature selection based on SVM significance maps for classification of dementia

    NARCIS (Netherlands)

    E.E. Bron (Esther); M. Smits (Marion); J.C. van Swieten (John); W.J. Niessen (Wiro); S. Klein (Stefan)

    2014-01-01

    Support vector machine significance maps (SVM p-maps) previously showed clusters of significantly different voxels in dementia-related brain regions. We propose a novel feature selection method for classification of dementia based on these p-maps. In our approach, the SVM p-maps are

  14. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    Science.gov (United States)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection techniques, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.

  15. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    Science.gov (United States)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection techniques, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  16. Detection of biomarkers for Hepatocellular Carcinoma using hybrid univariate gene selection methods

    Directory of Open Access Journals (Sweden)

    Abdel Samee Nagwan M

    2012-08-01

    Full Text Available Abstract Background Discovering new biomarkers has a great role in improving early diagnosis of Hepatocellular carcinoma (HCC). The experimental determination of biomarkers needs a lot of time and money. This motivates this work to use in-silico prediction of biomarkers to reduce the number of experiments required for detecting new ones. This is achieved by extracting the most representative genes in microarrays of HCC. Results In this work, we provide a method for extracting the differentially expressed genes, up-regulated ones, that can be considered candidate biomarkers in high-throughput microarrays of HCC. We examine the power of several gene selection methods (such as Pearson’s correlation coefficient, Cosine coefficient, Euclidean distance, Mutual information and Entropy), with different estimators, in selecting informative genes. A biological interpretation of the highly ranked genes is done using KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways and the ENTREZ and DAVID (Database for Annotation, Visualization, and Integrated Discovery) databases. The top ten genes selected using Pearson’s correlation coefficient and Cosine coefficient contained six genes that have been implicated in cancer (often multiple cancers) genesis in previous studies. A smaller number of genes was obtained by the other methods (4 genes using Mutual information, 3 genes using Euclidean distance and only one gene using Entropy). A better result was obtained by utilizing a hybrid approach based on intersecting the highly ranked genes in the output of all investigated methods. This hybrid combination yielded seven genes (2 genes for HCC and 5 genes in different types of cancer) in the top ten genes of the list of intersected genes. Conclusions To strengthen the effectiveness of the univariate selection methods, we propose a hybrid approach that intersects several of these methods in a cascaded manner. This approach surpasses all of the univariate selection methods when
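    A sketch of the cascaded intersection idea: rank genes by several univariate criteria and keep only the genes that appear in every top-k list. The score definitions here are simple NumPy/scikit-learn approximations of the named criteria, and k is a placeholder.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def top_k(scores, k=100):
        return set(np.argsort(scores)[::-1][:k])

    def hybrid_select(X, y, k=100):
        """Intersect top-k genes from several univariate criteria (Pearson, cosine,
        negative Euclidean distance, mutual information), as a cascaded filter.
        X: samples x genes expression matrix; y: class labels."""
        y = np.asarray(y, float)
        y_c, X_c = y - y.mean(), X - X.mean(axis=0)
        pearson = np.abs((X_c * y_c[:, None]).sum(0)
                         / (np.linalg.norm(X_c, axis=0) * np.linalg.norm(y_c) + 1e-12))
        cosine = np.abs((X * y[:, None]).sum(0)
                        / (np.linalg.norm(X, axis=0) * np.linalg.norm(y) + 1e-12))
        euclid = -np.linalg.norm(X - y[:, None], axis=0)   # smaller distance = better
        mi = mutual_info_classif(X, y.astype(int), random_state=0)
        sets = [top_k(s, k) for s in (pearson, cosine, euclid, mi)]
        return sorted(set.intersection(*sets))             # candidate biomarker genes
    ```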

  17. Selection of independent components based on cortical mapping of electromagnetic activity

    Science.gov (United States)

    Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen

    2012-10-01

    Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.

  18. An ontological knowledge based system for selection of process monitoring and analysis tools

    DEFF Research Database (Denmark)

    Singh, Ravendra; Gernaey, Krist; Gani, Rafiqul

    2010-01-01

    monitoring and analysis tools for a wide range of operations has made their selection a difficult, time-consuming and challenging task. Therefore, an efficient and systematic knowledge base coupled with an inference system is necessary to support the optimal selection of process monitoring and analysis tools......, satisfying the process and user constraints. A knowledge base consisting of the process knowledge as well as knowledge on measurement methods and tools has been developed. An ontology has been designed for knowledge representation and management. The developed knowledge base has a dual feature. On the one...... procedures has been developed to retrieve the data/information stored in the knowledge base....

  19. Optimal Channel Selection Based on Online Decision and Offline Learning in Multichannel Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mu Qiao

    2017-01-01

    Full Text Available We propose a channel selection strategy with a hybrid architecture, which combines the centralized method and the distributed method to alleviate the overhead of the access point and at the same time provide more flexibility in network deployment. With this architecture, we make use of game theory and reinforcement learning to achieve optimal channel selection under different communication scenarios. In particular, when the network can satisfy the requirements of energy and computational costs, the online decision algorithm based on a noncooperative game helps each individual sensor node immediately select the optimal channel. Alternatively, when the network cannot satisfy these requirements, the offline learning algorithm based on reinforcement learning helps each individual sensor node learn from its experience and iteratively adjust its behavior toward the expected target. Extensive simulation results validate the effectiveness of our proposal and also show that our channel selection strategy achieves higher system throughput than conventional off-policy channel selection approaches.
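    A toy version of the offline learning branch, framed as a per-node multi-armed bandit: the node keeps a value estimate per channel and updates it from observed throughput with an epsilon-greedy policy. The reward model and all parameters are invented, and the game-theoretic online branch is not shown.

    ```python
    import numpy as np

    def offline_channel_learning(reward_fn, n_channels, episodes=5000,
                                 alpha=0.1, eps=0.1, seed=0):
        """Epsilon-greedy value learning over channels: Q[ch] tracks the running
        estimate of the throughput reward a node obtains on channel ch."""
        rng = np.random.default_rng(seed)
        Q = np.zeros(n_channels)
        for _ in range(episodes):
            ch = int(rng.integers(n_channels)) if rng.random() < eps else int(Q.argmax())
            r = reward_fn(ch)              # e.g. throughput measured on that channel
            Q[ch] += alpha * (r - Q[ch])   # incremental update toward observed reward
        return int(Q.argmax()), Q

    # Toy reward model: channel 2 is least congested on average (invented numbers).
    rng = np.random.default_rng(1)
    best, Q = offline_channel_learning(
        lambda ch: rng.normal((0.3, 0.5, 0.9, 0.4)[ch], 0.1), n_channels=4)
    ```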

  20. Hot-spot selection and evaluation methods for whole slide images of meningiomas and oligodendrogliomas.

    Science.gov (United States)

    Swiderska, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Slodkowska, Janina

    2015-01-01

    The paper presents a combined method for automatic hot-spot area selection, based on a penalty factor, in whole slide images to support the pathomorphological diagnostic procedure. The studied slides represent meningioma and oligodendroglioma tumors stained for the Ki-67/MIB-1 immunohistochemical reaction. This reaction allows the tumor proliferation index to be determined and gives an indication for medical treatment and prognosis. The combined method, based on mathematical morphology, thresholding, texture analysis and classification, is proposed and verified. The presented algorithm includes building a specimen map, eliminating hemorrhages from it, and two methods for detecting hot-spot fields with respect to the introduced penalty factor. Furthermore, we propose a localization concordance measure to evaluate the hot-spot localizations produced by the algorithms with respect to the expert's results. The influence of the penalty factor is presented and discussed; it was found that the best results are obtained for a value of 0.2. The results confirm the effectiveness of the applied approach.

  1. Factors of Selection of the Stock Allocation Method

    Directory of Open Access Journals (Sweden)

    Rohov Heorhii K.

    2014-03-01

    Full Text Available The article describes the results of the author's study of the factors behind strategic decisions on the selection of stock allocation methods by public joint stock companies in Ukraine. The author used the Random Forest mathematical apparatus for building classification trees as well as informal methods. The article analyses the reasons that restrain public allocation of stock. It shows the significant influence on the selection of a stock allocation method of factors such as capital concentration, the balance value of corporate rights, the sector of the economy, and significant participation of collective investment institutions or the state in the authorised capital. The hierarchical model built for classifying the factors of the issuing policy of joint stock companies finds logical justification in specific features of the institutional environment; however, it does not fit into the framework of the classical concept of the market economy. The model could be used both for setting the goals of corporate financial strategies and for improving state regulation of the activity of securities issuers. A prospect for further studies in this direction is identifying how the factors of selection of the stock allocation method are transformed under conditions of a revival of the stock market.

  2. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information

    Directory of Open Access Journals (Sweden)

    Atiyeh Mortazavi

    2016-01-01

    Full Text Available High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to the high dimension of microarray data sets, the features are reduced using one of two filter-based feature selection methods, namely mutual information and the Fisher ratio. In the second phase, the Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. The use of Qualitative Mutual Information makes the selected features more stable, and this stability helps to deal with the problem of data imbalance and scarcity. In the third phase, a forward selection scheme is applied, which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as the Fisher ratio, minimum redundancy maximum relevance, and previous work on cooperative game based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to the other approaches.

  3. An Initialization Method Based on Hybrid Distance for k-Means Algorithm.

    Science.gov (United States)

    Yang, Jie; Ma, Yan; Zhang, Xiangfen; Li, Shunbao; Zhang, Yuping

    2017-11-01

    The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the performance of this algorithm is highly dependent on the selection of initial cluster centers. Therefore, the method adopted for choosing initial cluster centers is extremely important. In this letter, we redefine the density of points according to the number of their neighbors, as well as the distance between points and their neighbors. In addition, we define a new distance measure that considers both Euclidean distance and density. Based on that, we propose an algorithm for selecting initial cluster centers that can dynamically adjust the weighting parameter. Furthermore, we propose a new internal clustering validation measure, the clustering validation index based on the neighbors (CVN), which can be exploited to select the optimal result among multiple clustering results. Experimental results show that the proposed algorithm outperforms existing initialization methods on real-world data sets and demonstrate the adaptability of the proposed algorithm to data sets with various characteristics.
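    The letter's exact formulas are not reproduced here, but the following NumPy sketch echoes the idea: score points by neighbor-based density, take the densest point as the first center, then pick subsequent centers by the product of density and distance to the centers already chosen. It computes all pairwise distances, so it suits small data sets only.

    ```python
    import numpy as np

    def density_init(X, k, n_neighbors=10):
        """Pick k initial k-means centers that are both dense and far apart
        (a hybrid of density and distance, in the spirit of the letter)."""
        d = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # O(n^2) distances
        nn = np.sort(d, axis=1)[:, 1:n_neighbors + 1]         # skip self-distance
        density = 1.0 / (nn.mean(axis=1) + 1e-12)             # near neighbors => dense
        centers = [int(np.argmax(density))]                   # densest point first
        for _ in range(k - 1):
            dist_to_c = d[:, centers].min(axis=1)
            score = density * dist_to_c                       # dense AND far from chosen
            score[centers] = -np.inf
            centers.append(int(np.argmax(score)))
        return X[centers]
    ```

    The returned centers can be passed directly to a standard k-means loop (for example, scikit-learn's `KMeans(init=density_init(X, k), n_init=1)`).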

  4. Content-based image retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Broek, E.L. van den; Vuurpijl, L.G.; Kisters, P. M. F.; Schmid, J.C.M. von; Moens, M.F.; Busser, R. de; Hiemstra, D.; Kraaij, W.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface addresses these usability problems. It is based on 11

  5. Content-Based Image Retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Moens, Marie-Francine; van den Broek, Egon; Vuurpijl, L.G.; de Brusser, Rik; Kisters, P.M.F.; Hiemstra, Djoerd; Kraaij, Wessel; von Schmid, J.C.M.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged to be non-intuitive and difficult to use. Our interface addresses these usability problems. It is based on 11

  6. Robust gene selection methods using weighting schemes for microarray data analysis.

    Science.gov (United States)

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performance of many gene selection techniques is highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We propose new filter-based gene selection techniques, obtained by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes, along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and the sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level increased or the sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of the simulation study and real data analysis demonstrate that our proposed methods are effective for detecting significant genes and for classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.

  7. Selection method of terrain matching area for TERCOM algorithm

    Science.gov (United States)

    Zhang, Qieqie; Zhao, Long

    2017-10-01

    The performance of terrain-aided navigation is closely related to the selection of the terrain matching area, and different matching algorithms have different adaptability to terrain. This paper studies the terrain adaptability of the TERCOM algorithm, analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. On this basis, we propose a selection method of terrain matching area for the TERCOM algorithm and verify its correctness with real terrain data in a simulation experiment. Experimental results show that the matching area obtained by the proposed method offers good navigation performance and that the matching probability of the TERCOM algorithm is greater than 90%.

  8. A holistic method for selecting tidal stream energy hotspots under technical, economic and functional constraints

    International Nuclear Information System (INIS)

    Vazquez, A.; Iglesias, G.

    2016-01-01

    Highlights: • A method for selecting the most suitable sites for tidal stream farms was presented. • The selection was based on relevant technical, economic and functional aspects. • As a case study, a model of the Bristol Channel was implemented and validated. - Abstract: Although a number of prospective locations for tidal stream farms have been identified, the development of a unified approach for selecting the optimum site in a region remains a current research topic. The objective of this work is to develop and apply a methodology for determining the most suitable sites for tidal stream farms, i.e. sites whose characteristics maximise power performance, minimise cost and avoid conflicts with competing uses of the marine space. Illustrated through a case study in the Bristol Channel, the method uses a validated hydrodynamics model to identify highly energetic areas and a geospatial Matlab-based program (designed ad hoc) to estimate the energy output that a tidal farm at the site with a given technology would have. This output is then used to obtain the spatial distribution of the levelised cost of energy and, on this basis, to preselect certain areas. Subsequently, potential conflicts with other functions of the marine space (e.g. fishing, shipping) are considered. The result is a selection of areas for tidal stream energy development based on a holistic approach, encompassing the relevant technical, economic and functional aspects. This methodology can lead to a significant improvement in the selection of tidal sites, thereby increasing the possibilities of project acceptance and development.
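    Since the spatial screening hinges on the levelised cost of energy, a minimal LCOE helper may clarify the quantity being mapped: discounted lifetime costs divided by discounted lifetime energy. All inputs below are hypothetical placeholders for a candidate farm site, not values from the Bristol Channel study.

    ```python
    def lcoe(capex, opex_per_year, energy_per_year_mwh, years=20, rate=0.08):
        """Levelised cost of energy = discounted lifetime costs / discounted
        lifetime energy, in currency units per MWh."""
        disc = [(1 + rate) ** -t for t in range(1, years + 1)]
        costs = capex + sum(opex_per_year * f for f in disc)
        energy = sum(energy_per_year_mwh * f for f in disc)
        return costs / energy

    # Hypothetical site: 40 M capex, 1.5 M/yr opex, 30 GWh/yr yield.
    print(lcoe(capex=40e6, opex_per_year=1.5e6, energy_per_year_mwh=30_000))
    ```

    Mapping this quantity per grid cell, then masking cells that conflict with fishing or shipping, reproduces the holistic preselection the abstract describes.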

  9. Range Sensor-Based Efficient Obstacle Avoidance through Selective Decision-Making.

    Science.gov (United States)

    Shim, Youngbo; Kim, Gon-Woo

    2018-03-29

    In this paper, we address a collision avoidance method for mobile robots. Many conventional obstacle avoidance methods focus solely on avoiding obstacles. However, this can cause instability when passing through a narrow passage and can also generate zig-zag motions. We define two strategies for obstacle avoidance, known as Entry mode and Bypass mode. Entry mode is a pattern for passing through the gap between obstacles, while Bypass mode is a pattern for making a detour around obstacles safely. With these two modes, we propose an efficient obstacle avoidance method based on the Expanded Guide Circle (EGC) method with selective decision-making. Simulation and experimental results show the validity of the proposed method.

  10. Range Sensor-Based Efficient Obstacle Avoidance through Selective Decision-Making

    Directory of Open Access Journals (Sweden)

    Youngbo Shim

    2018-03-01

    Full Text Available In this paper, we address a collision avoidance method for mobile robots. Many conventional obstacle avoidance methods focus solely on avoiding obstacles. However, this can cause instability when passing through a narrow passage and can also generate zig-zag motions. We define two strategies for obstacle avoidance, known as Entry mode and Bypass mode. Entry mode is a pattern for passing through the gap between obstacles, while Bypass mode is a pattern for making a detour around obstacles safely. With these two modes, we propose an efficient obstacle avoidance method based on the Expanded Guide Circle (EGC) method with selective decision-making. Simulation and experimental results show the validity of the proposed method.
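
    A minimal sketch of the selective decision-making idea: choose Entry mode when the gap between obstacles is wide enough for safe passage, and Bypass mode otherwise. The threshold rule and parameter names are illustrative assumptions; the paper's actual criterion operates on the EGC representation of the range data:

        def select_mode(gap_width, robot_width, clearance=0.2):
            """Pick an avoidance strategy from the measured gap width (m)."""
            if gap_width >= robot_width + 2 * clearance:
                return "Entry"   # pass through the gap between obstacles
            return "Bypass"      # detour around the obstacle cluster safely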

  11. A probabilistic method for testing and estimating selection differences between populations.

    Science.gov (United States)

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. Using a probabilistic model of genetic drift and selection, we showed that the log odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and their variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimates. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences, thereby supplying a solution for hypothesis testing of selection differences. The method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in the Han and Tibetan populations. We further estimated the differences in selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
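
    A sketch of the core estimator described above: the log odds ratio of allele frequencies between two populations estimates the difference in selection coefficients. The binomial-sampling variance used here is a simplifying assumption; the paper additionally models drift and calibrates the variance with genome-wide variants:

        import numpy as np

        def selection_difference(p1, p2, n1, n2):
            """Log odds ratio of allele frequencies in two populations of
            n1 and n2 diploid individuals, with an approximate standard
            error; (lor / se)**2 gives a chi-square test statistic."""
            lor = np.log(p1 / (1 - p1)) - np.log(p2 / (1 - p2))
            var = (1.0 / (2 * n1 * p1 * (1 - p1)) +
                   1.0 / (2 * n2 * p2 * (1 - p2)))
            return lor, np.sqrt(var)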

  12. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Ryan B., E-mail: randerson@astro.cornell.edu [Cornell University Department of Astronomy, 406 Space Sciences Building, Ithaca, NY 14853 (United States); Bell, James F., E-mail: Jim.Bell@asu.edu [Arizona State University School of Earth and Space Exploration, Bldg.: INTDS-A, Room: 115B, Box 871404, Tempe, AZ 85287 (United States); Wiens, Roger C., E-mail: rwiens@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States); Morris, Richard V., E-mail: richard.v.morris@nasa.gov [NASA Johnson Space Center, 2101 NASA Parkway, Houston, TX 77058 (United States); Clegg, Samuel M., E-mail: sclegg@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO₂ at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ≈ 3 wt.%. The statistical significance of these improvements was ≈ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and
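
    A compact sketch of method (3), one of the two best-performing approaches: a global PLS2 model predicts compositions, the predictions are clustered with k-means, and a local PLS2 model is refitted per cluster. The library choice (scikit-learn) and the hyperparameters are assumptions:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.cross_decomposition import PLSRegression

        def iterative_cluster_pls(X_tr, Y_tr, X_te, n_clusters=5, n_comp=10):
            """Cluster on predicted compositions, then refit locally."""
            glob = PLSRegression(n_components=n_comp).fit(X_tr, Y_tr)
            km = KMeans(n_clusters=n_clusters, n_init=10).fit(glob.predict(X_tr))
            te_labels = km.predict(glob.predict(X_te))
            preds = np.zeros((X_te.shape[0], Y_tr.shape[1]))
            for c in range(n_clusters):
                tr, te = km.labels_ == c, te_labels == c
                if te.any() and tr.sum() > n_comp:   # enough local training data
                    local = PLSRegression(n_components=n_comp).fit(X_tr[tr], Y_tr[tr])
                    preds[te] = local.predict(X_te[te])
            return preds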

  13. An objective method for High Dynamic Range source content selection

    DEFF Research Database (Denmark)

    Narwaria, Manish; Mantel, Claire; Da Silva, Matthieu Perreira

    2014-01-01

    With the aim of improving the immersive experience of the end user, High Dynamic Range (HDR) imaging has been gaining popularity. Therefore, proper validation and performance benchmarking of HDR processing algorithms is a key step towards standardization and commercial deployment. A crucial component of such validation studies is the selection of a challenging and balanced set of source (reference) HDR content. In order to facilitate this, we present an objective method based on the premise that a more challenging HDR scene encapsulates higher contrast, and as a result will show up more...

  14. Technique for Increasing the Selectivity of the Method of Laser Fragmentation/Laser-Induced Fluorescence

    Science.gov (United States)

    Bobrovnikov, S. M.; Gorlov, E. V.; Zharkov, V. I.

    2018-05-01

    A technique is suggested for increasing the selectivity of a method for detecting high-energy materials (HEMs) that is based on laser fragmentation of HEM molecules followed by laser excitation of fluorescence of the characteristic NO fragments from the first vibrational level of the ground state.

  15. Equilibrium Strategy Based Recycling Facility Site Selection towards Mitigating Coal Gangue Contamination

    Directory of Open Access Journals (Sweden)

    Jiuping Xu

    2017-02-01

    Full Text Available Environmental pollution caused by coal gangue has been a significant challenge for sustainable development; thus, many coal gangue reduction approaches have been proposed in recent years. In particular, coal gangue facility (CGF) construction has been considered as an efficient method for the control and recycling of coal gangue. Meanwhile, the identification and selection of suitable CGF sites is a fundamental task for the government. Therefore, based on the equilibrium strategy, a site selection approach under a fuzzy environment is developed to mitigate coal gangue contamination, which integrates a geographical information system (GIS) technique and a bi-level model to identify candidate CGF sites and to select the most suitable one. In this situation, the GIS technique used to identify potential feasible sites is able to integrate a great deal of geographical data to fit with practical circumstances; the bi-level model used to screen the appropriate site can reasonably deal with the conflicts between the local authority and the colliery. Moreover, a Karush–Kuhn–Tucker (KKT) condition-based approach is used to find an optimal solution, and a case study is given to demonstrate the effectiveness of the proposed method. The results across different scenarios show that appropriate site selection can achieve coal gangue reduction targets and that a suitable excess stack level can realize an environmental-economic equilibrium. Finally, some propositions and management recommendations are given.

  16. Duration and speed of speech events: A selection of methods

    Directory of Open Access Journals (Sweden)

    Gibbon Dafydd

    2015-07-01

    Full Text Available The study of speech timing, i.e. the duration and speed or tempo of speech events, has increased in importance over the past twenty years, in particular in connection with increased demands for accuracy, intelligibility and naturalness in speech technology, with applications in language teaching and testing, and with the study of speech timing patterns in language typology. However, the methods used in such studies are very diverse, and so far there is no accessible overview of these methods. Since the field is too broad for us to provide an exhaustive account, we have made two choices: first, to provide a framework of paradigmatic (classificatory), syntagmatic (compositional) and functional (discourse-oriented) dimensions for duration analysis; and second, to provide worked examples of a selection of methods associated primarily with these three dimensions. Some of the methods covered are established state-of-the-art approaches (e.g. the paradigmatic Classification and Regression Tree, CART, analysis), while others are discussed in a critical light (e.g. the so-called 'rhythm metrics'). A set of syntagmatic approaches applies to the tokenisation and tree parsing of duration hierarchies, based on speech annotations, and a functional approach describes duration distributions with sociolinguistic variables. Several of the methods are supported by a new web-based software tool for analysing annotated speech data, the Time Group Analyser.

  17. Research on the Selection Strategy of Green Building Parts Supplier Based on the Catastrophe Theory and Kent Index Method

    Directory of Open Access Journals (Sweden)

    Zhenhua Luo

    2016-01-01

    Full Text Available At present, green building and housing industrialization are two mainstream directions in the real estate industry. The production of green building parts, which combines these two concepts, is to be vigorously developed. The key to quality assurance in an assembly project is choosing reliable and suitable suppliers of green building parts. This paper analyzes the inherent requirements of green building, combined with the characteristics of housing industrialization, and puts forward an evaluation index system for supplier selection for green building parts, which includes a product index, an enterprise index, a green development index, and a cooperation ability index. To reduce the influence of subjective factors, an improved method that merges the Kent index method and catastrophe theory is applied to green building parts supplier selection and evaluation. This paper takes the selection of unit bathroom suppliers as an example, uses the improved model to calculate and analyze the data of each supplier, and finally selects the optimal supplier. The results show that combining the Kent index with catastrophe theory can effectively reduce the subjectivity of the evaluation and provide a basis for the selection of green building parts suppliers.

  18. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    Science.gov (United States)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  19. Comparison of selected methods of prediction of wine exports and imports

    Directory of Open Access Journals (Sweden)

    Radka Šperková

    2008-01-01

    Full Text Available A number of methods exist for predicting future events in managerial practice. The decision on which of them to use in a particular situation depends not only on the amount and quality of input information, but also on subjective managerial judgement. This paper presents a practical application and comparison of the results of two selected methods: a statistical method and a deductive method. Both methods were used for predicting wine exports from and imports to the Czech Republic. The prediction was made in 2003 for the economic years 2003/2004, 2004/2005, 2005/2006 and 2006/2007, and was subsequently compared with the real values of the given indicators. Within the deductive method, the most important factors of the external environment were characterized; in the authors' opinion, the most important influence was the integration of the Czech Republic into the EU on 1 May 2004. By contrast, the statistical method of time-series analysis did not take the integration into account, which follows from its principle: statistics only calculates from past data and cannot incorporate the influence of irregular future conditions such as the EU integration. For this reason, the prediction based on the deductive method was more optimistic and also more precise, in terms of its deviation from the real development in the given field.

  20. Quantitative Methods for Software Selection and Evaluation

    National Research Council Canada - National Science Library

    Bandor, Michael S

    2006-01-01

    ... (the ability of the product to meet the need) and the cost. The method used for the analysis and selection activities can range from the use of basic intuition to counting the number of requirements fulfilled, or something...

  1. Innovation in values based public health nursing student selection: A qualitative evaluation of candidate and selection panel member perspectives.

    Science.gov (United States)

    McGraw, Caroline; Abbott, Stephen; Brook, Judy

    2018-02-19

    Values based recruitment emerges from the premise that a high degree of value congruence, or the extent to which an individual's values are similar to those of the health organization in which they work, leads to organizational effectiveness. The aim of this evaluation was to explore how candidates and selection panel members experienced and perceived innovative methods of values based public health nursing student selection. The evaluation was framed by a qualitative exploratory design involving semi-structured interviews and a group exercise. Data were thematically analyzed. Eight semi-structured interviews were conducted with selection panel members. Twenty-two successful candidates took part in a group exercise. The use of photo elicitation interviews and situational judgment questions in the context of selection to a university-run public health nursing educational program was explored. While candidates were ambivalent about the use of photo elicitation interviews, with some misunderstanding the task, selection panel members saw the benefits for improving candidate expression and reducing gaming and deception. Situational interview questions were endorsed by candidates and selection panel members due to their fidelity to real-life problems and the ability of panel members to discern value congruence from candidates' responses. Both techniques offered innovative solutions to candidate selection for entry to the public health nursing education program. © 2018 Wiley Periodicals, Inc.

  2. Improving the time efficiency of the Fourier synthesis method for slice selection in magnetic resonance imaging.

    Science.gov (United States)

    Tahayori, B; Khaneja, N; Johnston, L A; Farrell, P M; Mareels, I M Y

    2016-01-01

    The design of slice selective pulses for magnetic resonance imaging can be cast as an optimal control problem. The Fourier synthesis method is an existing approach for solving these optimal control problems: the gradient field and the excitation field are switched rapidly, and their amplitudes are calculated from a Fourier series expansion. Here, we provide a novel insight into the Fourier synthesis method by representing the Bloch equation in spherical coordinates. Based on the spherical Bloch equation, we propose an alternative sequence of pulses that can be used for slice selection and is more time efficient than the original method. Simulation results demonstrate that while the performance of both methods is approximately the same, the time required for the proposed pulse sequence is half that of the original sequence. Furthermore, the slice selectivity of both pulse sequences changes with radio frequency field inhomogeneities in a similar way. We also introduce a measure, referred to as gradient complexity, to compare the performance of both pulse sequences. This measure indicates that, for a desired level of uniformity in the excited slice, the gradient complexity of the proposed pulse sequence is lower than that of the original. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  3. Integrative approaches to the prediction of protein functions based on the feature selection

    Directory of Open Access Journals (Sweden)

    Lee Hyunju

    2009-12-01

    Full Text Available Abstract Background Protein function prediction has been one of the most important issues in functional genomics. With the current availability of various genomic data sets, many researchers have attempted to develop integration models that combine all available genomic data for protein function prediction. These efforts have resulted in the improvement of prediction quality and the extension of prediction coverage. However, it has also been observed that integrating more data sources does not always increase the prediction quality. Therefore, selecting data sources that highly contribute to the protein function prediction has become an important issue. Results We present systematic feature selection methods that assess the contribution of genome-wide data sets to predict protein functions and then investigate the relationship between genomic data sources and protein functions. In this study, we use ten different genomic data sources in Mus musculus, including protein domains, protein-protein interactions, gene expressions, phenotype ontology, phylogenetic profiles and disease data sources, to predict protein functions that are labelled with Gene Ontology (GO) terms. We then apply two approaches to feature selection: exhaustive search feature selection using a kernel based logistic regression (KLR), and a kernel based L1-norm regularized logistic regression (KL1LR). In the first approach, we exhaustively measure the contribution of each data set for each function based on its prediction quality. In the second approach, we use the estimated coefficients of features as measures of contribution of data sources. Our results show that the proposed methods improve the prediction quality compared to the full integration of all data sources and other filter-based feature selection methods. We also show that contributing data sources can differ depending on the protein function. Furthermore, we observe that highly contributing data sets can be similar among
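
    The KL1LR idea of reading data-source contributions off regularized coefficients can be illustrated with its linear special case. The sketch below uses scikit-learn's L1-penalized logistic regression on synthetic data (the kernelization is omitted and all names are ours):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))    # one column per genomic data source
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=200) > 0).astype(int)

        # L1 regularization drives the coefficients of uninformative sources
        # to exactly zero; the surviving columns are the contributing ones.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
        print(np.flatnonzero(clf.coef_[0]))   # indices of contributing sources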

  4. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data

    KAUST Repository

    Abusamra, Heba

    2013-01-01

    Different experiments have been applied to compare the performance of the classification methods with and without performing feature selection. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in different feature selection methods is investigated and the most frequent features selected in each fold among all methods for both datasets are evaluated.

  5. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While ... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input ... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established...
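
    The Monte Carlo uncertainty analysis mentioned above amounts to sampling the uncertain process inputs and pushing each sample through the process model. The sketch below does this with a placeholder model and invented input scatter; the paper instead uses its calibrated 3D finite-volume model and measured parameter uncertainties:

        import numpy as np

        rng = np.random.default_rng(0)

        def melt_depth(power, speed):
            """Placeholder process model (not the paper's thermal model)."""
            return 0.05 * power / np.sqrt(speed)

        power = rng.normal(200.0, 10.0, 10_000)   # W, assumed scatter
        speed = rng.normal(1000.0, 50.0, 10_000)  # mm/s, assumed scatter
        depth = melt_depth(power, speed)
        print(depth.mean(), depth.std())          # predicted output range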

  6. Feature selection gait-based gender classification under different circumstances

    Science.gov (United States)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes a gender classification method based on human gait features and investigates the problem under two variations, clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different sets of features are proposed in this method. The first, spatio-temporal distance, deals with the distances between different parts of the human body (such as the feet, knees, hands, height and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets, we divided the human body into upper and lower parts based on the golden ratio proportion. In this paper, we have adopted a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-Nearest Neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
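
    The Fisher score used in the dimension-reduction step ranks each feature by the ratio of between-class scatter of the class means to the within-class variance; a minimal sketch (variable names ours) follows, after which only the top-ranked features are kept:

        import numpy as np

        def fisher_score(X, y):
            """Per-feature Fisher score for data X (n_samples, n_features)
            with class labels y."""
            overall_mean = X.mean(axis=0)
            num = np.zeros(X.shape[1])
            den = np.zeros(X.shape[1])
            for c in np.unique(y):
                Xc = X[y == c]
                num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
                den += len(Xc) * Xc.var(axis=0)
            return num / den   # higher score = more discriminative feature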

  7. Proposed Project Selection Method for Human Support Research and Technology Development (HSR&TD)

    Science.gov (United States)

    Jones, Harry

    2005-01-01

    The purpose of HSR&TD is to deliver human support technologies to the Exploration Systems Mission Directorate (ESMD) that will be selected for future missions. This requires identifying promising candidate technologies and advancing them in technology readiness until they are acceptable. HSR&TD must select an array of technology development projects, guide them, and either terminate or continue them, so as to maximize the resulting number of usable advanced human support technologies. This paper proposes an effective project scoring methodology to support managing the HSR&TD project portfolio. Researchers strongly disagree as to what the best technology project selection methods are, or even whether there are any proven ones. Technology development is risky and outstanding achievements are rare and unpredictable. There is no simple formula for success. Organizations that are satisfied with their project selection approach typically use a mix of financial, strategic, and scoring methods in an open, established, explicit, formal process. This approach helps to build consensus and develop management insight. It encourages better project proposals by clarifying the desired project attributes. We propose a project scoring technique based on a method previously used in a federal laboratory and supported by recent research. Projects are ranked by their perceived relevance, risk, and return - a new 3 R's. Relevance is the degree to which the project objective supports the HSR&TD goal of developing usable advanced human support technologies. Risk is the estimated probability that the project will achieve its specific objective. Return is the reduction in mission life cycle cost obtained if the project is successful. If the project objective technology performs a new function with no current cost, its return is the estimated cash value of performing the new function. The proposed project selection scoring method includes definitions of the criteria, a project evaluation

  8. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    Signal reconstruction methods based on inverse modeling for multifunctional sensors have been widely studied in recent years. To improve the accuracy, the reconstruction methods have become more and more complicated because of the increase in the model parameters and sample points. However, there is another factor that affects the reconstruction accuracy, the position of the sample points, which has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving the accuracy and decreasing the workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering, which distills groupings of the sample data and produces a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function generalizes the distance metric by mapping data that are non-separable in the original space into homogeneous groups in a high-dimensional space. In simulations, the method obtained the best results compared with three other methods. (paper)
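
    The kernel-induced distance at the heart of the subclustering can be computed without ever forming the high-dimensional mapping, via the kernel trick: ||phi(x) - phi(y)||^2 = k(x,x) - 2k(x,y) + k(y,y). A minimal sketch with an RBF kernel (the kernel choice is our assumption):

        import numpy as np

        def rbf_kernel(x, y, gamma=1.0):
            return np.exp(-gamma * np.sum((x - y) ** 2))

        def kernel_distance(x, y, kernel=rbf_kernel):
            """Distance between the images of x and y in feature space."""
            return np.sqrt(kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y))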

  9. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are first obtained; they include the power spectrum, time-domain statistics, AR model, and wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied; it uses a logistic regression model with the Sparse Group Lasso penalty function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent methods. The best parameters and feature subset are selected by 10-fold cross-validation. Finally, the test data are classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on data from the international BCI Competition IV reached 84.72%.
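
    The Sparse Group Lasso penalty behind the simultaneous selection combines a lasso term (feature-level sparsity) with a group-norm term (channel-level sparsity, one group of features per EEG channel). A sketch of the penalty in our notation, with the usual square-root group-size weights:

        import numpy as np

        def sparse_group_lasso_penalty(w, groups, lam, alpha):
            """alpha in [0, 1] trades feature sparsity (lasso, alpha = 1)
            against channel sparsity (group lasso, alpha = 0)."""
            l1 = np.abs(w).sum()
            group = sum(np.sqrt(len(g)) * np.linalg.norm(w[g]) for g in groups)
            return alpha * lam * l1 + (1 - alpha) * lam * group

    Minimising the logistic loss plus this penalty zeroes out whole channels (groups) as well as individual features, which is exactly the simultaneous selection the paper exploits.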

  10. Iron-based amorphous alloys and methods of synthesizing iron-based amorphous alloys

    Science.gov (United States)

    Saw, Cheng Kiong; Bauer, William A.; Choi, Jor-Shan; Day, Dan; Farmer, Joseph C.

    2016-05-03

    A method according to one embodiment includes combining an amorphous iron-based alloy and at least one metal selected from a group consisting of molybdenum, chromium, tungsten, boron, gadolinium, nickel phosphorous, yttrium, and alloys thereof to form a mixture, wherein the at least one metal is present in the mixture from about 5 atomic percent (at %) to about 55 at %; and ball milling the mixture at least until an amorphous alloy of the iron-based alloy and the at least one metal is formed. Several amorphous iron-based metal alloys are also presented, including corrosion-resistant amorphous iron-based metal alloys and radiation-shielding amorphous iron-based metal alloys.

  11. A Novel Extension Decision-Making Method for Selecting Solar Power Systems

    Directory of Open Access Journals (Sweden)

    Meng-Hui Wang

    2013-01-01

    Full Text Available Due to the complex parameters of a solar power system, the designer must think not only about the load demand but also about the price, weight, annual power generating capacity (APGC) and maximum power of the solar system. Finding the optimal solar power system among many parameters is an important task. Therefore, this paper presents a novel decision-making method based on extension theory, called the extension decision-making method (EDMM). Using the EDMM makes it quick to select the optimal solar power system. The proposed method not only provides a useful estimation tool for solar system engineers but also supplies an important reference for consumers regarding the installation of solar systems.

  12. Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.

    Science.gov (United States)

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.

  13. Core Business Selection Based on Ant Colony Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lan

    2014-01-01

    Full Text Available Core business is the most important business to an enterprise with diversified operations. In this paper, we first introduce the definition and characteristics of core business and then describe the ant colony clustering algorithm. In order to test the effectiveness of the proposed method, Tianjin Port Logistics Development Co., Ltd. is selected as the research object. Based on the current state of the company's development, its core business can be identified by the ant colony clustering algorithm. The results indicate that the proposed method is an effective way to determine the core business of a company.

  14. A selective iodide ion sensor electrode based on functionalized ZnO nanotubes.

    Science.gov (United States)

    Ibupoto, Zafar Hussain; Khun, Kimleang; Willander, Magnus

    2013-02-04

    In this research work, ZnO nanotubes were fabricated on a gold coated glass substrate through chemical etching by the aqueous chemical growth method. For the first time a nanostructure-based iodide ion selective electrode was developed. The ZnO nanotubes were functionalized with miconazole ion exchanger and the electromotive force (EMF) was measured by the potentiometric method. The iodide ion sensor exhibited a linear response over a wide range of concentrations (1 × 10⁻⁶ to 1 × 10⁻¹ M) and excellent sensitivity of −62 ± 1 mV/decade. The detection limit of the proposed sensor was found to be 5 × 10⁻⁷ M. The effects of pH, temperature, additive, plasticizer and stabilizer on the potential response of the iodide ion selective electrode were also studied. The proposed iodide ion sensor demonstrated a fast response time of less than 5 s and high selectivity against common organic and inorganic anions. All the obtained results revealed that the iodide ion sensor based on functionalized ZnO nanotubes may be used for the detection of iodide ion in environmental water samples, pharmaceutical products and other real samples.

  15. A Selective Iodide Ion Sensor Electrode Based on Functionalized ZnO Nanotubes

    Directory of Open Access Journals (Sweden)

    Magnus Willander

    2013-02-01

    Full Text Available In this research work, ZnO nanotubes were fabricated on a gold coated glass substrate through chemical etching by the aqueous chemical growth method. For the first time a nanostructure-based iodide ion selective electrode was developed. The ZnO nanotubes were functionalized with miconazole ion exchanger and the electromotive force (EMF) was measured by the potentiometric method. The iodide ion sensor exhibited a linear response over a wide range of concentrations (1 × 10−6 to 1 × 10−1 M) and excellent sensitivity of −62 ± 1 mV/decade. The detection limit of the proposed sensor was found to be 5 × 10−7 M. The effects of pH, temperature, additive, plasticizer and stabilizer on the potential response of the iodide ion selective electrode were also studied. The proposed iodide ion sensor demonstrated a fast response time of less than 5 s and high selectivity against common organic and inorganic anions. All the obtained results revealed that the iodide ion sensor based on functionalized ZnO nanotubes may be used for the detection of iodide ion in environmental water samples, pharmaceutical products and other real samples.
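
    Both records report a near-Nernstian response: the EMF falls linearly with the logarithm of iodide activity (ideally about −59 mV/decade for a monovalent anion at 25 °C; −62 ± 1 mV/decade here). The sketch below recovers that slope from a calibration series; the EMF values are invented for illustration and chosen to reproduce a −62 mV/decade slope:

        import numpy as np

        conc = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1])  # mol/L
        emf = np.array([352., 290., 228., 166., 104., 42.])    # mV, illustrative
        slope, intercept = np.polyfit(np.log10(conc), emf, 1)
        print(f"sensitivity: {slope:.1f} mV/decade")           # -> -62.0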

  16. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
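
    For labelled graphs on a common vertex set, the cut distance is the largest discrepancy in edge mass between the two graphs over all pairs of vertex subsets, normalised by n². Exact maximisation is NP-hard, so the sketch below estimates a lower bound by random sampling (an illustration of the quantity itself, not the paper's comparison procedure):

        import numpy as np

        def cut_distance_estimate(A, B, n_samples=2000, seed=0):
            """Monte Carlo lower bound on d_cut(A, B) =
            max_{S,T} |e_A(S,T) - e_B(S,T)| / n^2 for adjacency matrices
            A, B on the same labelled vertex set."""
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            D = A - B
            best = 0.0
            for _ in range(n_samples):
                s = rng.random(n) < 0.5          # random vertex subset S
                t = rng.random(n) < 0.5          # random vertex subset T
                best = max(best, abs(D[np.ix_(s, t)].sum()) / n ** 2)
            return best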

  17. Combining AHP and DEA Methods for Selecting a Project Manager

    Directory of Open Access Journals (Sweden)

    Baruch Keren

    2014-07-01

    Full Text Available A project manager has a major influence on the success or failure of a project. A good project manager can match the strategy and objectives of the organization with the goals of the project. Therefore, the selection of the appropriate project manager is a key factor for the success of the project. A potential project manager is judged by his or her proven performance and personal qualifications. This paper proposes a method to calculate the weighted scores and the full ranking of candidates for managing a project, and to select the best of those candidates. The proposed method combines two methodologies, Data Envelopment Analysis (DEA) and the Analytic Hierarchy Process (AHP), and uses DEA ranking methods to enhance the selection.
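
    In AHP, the criterion weights that feed such a ranking are obtained from the principal eigenvector of a reciprocal pairwise-comparison matrix; a minimal sketch with an assumed three-criterion comparison:

        import numpy as np

        def ahp_weights(pairwise):
            """Criterion weights: principal eigenvector, normalised to 1."""
            vals, vecs = np.linalg.eig(pairwise)
            w = np.real(vecs[:, np.argmax(np.real(vals))])
            return w / w.sum()

        # Example: criterion 0 moderately-to-strongly preferred.
        P = np.array([[1.0, 3.0, 5.0],
                      [1 / 3, 1.0, 2.0],
                      [1 / 5, 1 / 2, 1.0]])
        print(ahp_weights(P))

    In the combined method, DEA and its ranking variants are then used to score the candidates against the weighted criteria and produce the full ranking.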

  18. Indicators for Monitoring Water, Sanitation, and Hygiene: A Systematic Review of Indicator Selection Methods

    Directory of Open Access Journals (Sweden)

    Stefanie Schwemlein

    2016-03-01

    Full Text Available Monitoring water, sanitation, and hygiene (WaSH is important to track progress, improve accountability, and demonstrate impacts of efforts to improve conditions and services, especially in low- and middle-income countries. Indicator selection methods enable robust monitoring of WaSH projects and conditions. However, selection methods are not always used and there are no commonly-used methods for selecting WaSH indicators. To address this gap, we conducted a systematic review of indicator selection methods used in WaSH-related fields. We present a summary of indicator selection methods for environment, international development, and water. We identified six methodological stages for selecting indicators for WaSH: define the purpose and scope; select a conceptual framework; search for candidate indicators; determine selection criteria; score indicators against criteria; and select a final suite of indicators. This summary of indicator selection methods provides a foundation for the critical assessment of existing methods. It can be used to inform future efforts to construct indicator sets in WaSH and related fields.

  19. Rapid one-step selection method for generating nucleic acid aptamers: development of a DNA aptamer against α-bungarotoxin.

    Directory of Open Access Journals (Sweden)

    Lasse H Lauridsen

    Full Text Available BACKGROUND: Nucleic acid-based therapeutic approaches have gained significant interest in recent years towards the development of therapeutics against many diseases. Recently, research on aptamers led to the marketing of Macugen®, an inhibitor of vascular endothelial growth factor (VEGF) for the treatment of age related macular degeneration (AMD). Aptamer technology may prove useful as a therapeutic alternative against an array of human maladies. Considering the increased global interest in aptamer technology, which rivals antibody mediated therapeutic approaches, a simplified selection technique, possibly one-step, is required for developing aptamers in a limited time period. PRINCIPAL FINDINGS: Herein, we present a simple one-step selection of DNA aptamers against α-bungarotoxin. A toxin immobilized glass coverslip was subjected to nucleic acid pool binding and extensive washing, followed by PCR enrichment of the selected aptamers. One round of selection successfully identified a DNA aptamer sequence with a binding affinity of 7.58 µM. CONCLUSION: We have demonstrated a one-step method for rapid production of nucleic acid aptamers. Although the reported binding affinity is in the low micromolar range, we believe that this could be further improved by using larger targets, increasing the stringency of selection, and combining a capillary electrophoresis separation prior to the one-step selection. Furthermore, the method presented here is a user-friendly, cheap and easy way of deriving an aptamer, unlike the time consuming conventional SELEX-based approach. The most important application of this method is that chemically-modified nucleic acid libraries can also be used for aptamer selection, as it requires only one enzymatic step. This method could be equally suitable for developing RNA aptamers.

  20. Method for fluoride routine determination in urine of personnel exposed, by ion selective electrode

    International Nuclear Information System (INIS)

    Pires, M.A.F.; Bellintani, S.A.

    1986-01-01

    A simple, fast and sensitive method is outlined for the determination of fluoride in the urine of workers who handle fluorine compounds. The determination is based on the measurement of fluoride with an ion selective electrode. Cationic interferences such as Ca²⁺, Mg²⁺, Fe³⁺ and Al³⁺ are complexed by EDTA and citric acid. (Author) [pt

  1. Development of a QFD-based expert system for CNC turning centre selection

    Science.gov (United States)

    Prasad, Kanika; Chakraborty, Shankar

    2015-12-01

    Computer numerical control (CNC) machine tools are automated devices capable of generating complicated and intricate product shapes in a short time. Selection of the best CNC machine tool is a critical, complex and time-consuming task due to the availability of a wide range of alternatives and the conflicting nature of several evaluation criteria. Although past researchers have attempted to select appropriate machining centres using different knowledge-based systems, mathematical models and multi-criteria decision-making methods, none of those approaches gives due importance to the voice of the customer. This limitation can be overcome using the quality function deployment (QFD) technique, which is a systematic approach for integrating customers' needs and designing the product to meet those needs first time and every time. In this paper, the adopted QFD-based methodology helps in selecting CNC turning centres for a manufacturing organization, giving due importance to the voice of the customer to meet its requirements. An expert system based on the QFD technique is developed in Visual BASIC 6.0 to automate the CNC turning centre selection procedure for different production plans. Three illustrative examples are demonstrated to explain the real-time applicability of the developed expert system.

  2. Frequency-Wavenumber (FK)-Based Data Selection in High-Frequency Passive Surface Wave Survey

    Science.gov (United States)

    Cheng, Feng; Xia, Jianghai; Xu, Zongbo; Hu, Yue; Mi, Binbin

    2018-04-01

    Passive surface wave methods have gained much attention from geophysical and civil engineering communities because of the limited application of traditional seismic surveys in highly populated urban areas. Considering that they can provide high-frequency phase velocity information up to several tens of Hz, the active surface wave survey would be omitted and the amount of field work could be dramatically reduced. However, the measured dispersion energy image in the passive surface wave survey would usually be polluted by a type of "crossed" artifacts at high frequencies. It is common in the bidirectional noise distribution case with a linear receiver array deployed along roads or railways. We review several frequently used passive surface wave methods and derive the underlying physics for the existence of the "crossed" artifacts. We prove that the "crossed" artifacts would cross the true surface wave energy at fixed points in the f-v domain and propose a FK-based data selection technique to attenuate the artifacts in order to retrieve the high-frequency information. Numerical tests further demonstrate the existence of the "crossed" artifacts and indicate that the well-known wave field separation method, FK filter, does not work for the selection of directional noise data. Real-world applications manifest the feasibility of the proposed FK-based technique to improve passive surface wave methods by a priori data selection. Finally, we discuss the applicability of our approach.

  3. Frequency-Wavenumber (FK)-Based Data Selection in High-Frequency Passive Surface Wave Survey

    Science.gov (United States)

    Cheng, Feng; Xia, Jianghai; Xu, Zongbo; Hu, Yue; Mi, Binbin

    2018-07-01

    Passive surface wave methods have gained much attention from geophysical and civil engineering communities because of the limited application of traditional seismic surveys in highly populated urban areas. Considering that they can provide high-frequency phase velocity information up to several tens of Hz, the active surface wave survey would be omitted and the amount of field work could be dramatically reduced. However, the measured dispersion energy image in the passive surface wave survey would usually be polluted by a type of "crossed" artifacts at high frequencies. It is common in the bidirectional noise distribution case with a linear receiver array deployed along roads or railways. We review several frequently used passive surface wave methods and derive the underlying physics for the existence of the "crossed" artifacts. We prove that the "crossed" artifacts would cross the true surface wave energy at fixed points in the f-v domain and propose a FK-based data selection technique to attenuate the artifacts in order to retrieve the high-frequency information. Numerical tests further demonstrate the existence of the "crossed" artifacts and indicate that the well-known wave field separation method, FK filter, does not work for the selection of directional noise data. Real-world applications manifest the feasibility of the proposed FK-based technique to improve passive surface wave methods by a priori data selection. Finally, we discuss the applicability of our approach.
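
    Although the abstract notes that a plain FK fan filter cannot perform the required directional-noise selection, the FK machinery itself is compact. The sketch below implements a basic FK velocity mask, as our own illustration of the domain the proposed selection operates in, not as the paper's technique:

        import numpy as np

        def fk_velocity_mask(data, dt, dx, vmin):
            """Keep FK components whose apparent velocity |f/k| >= vmin,
            zero the rest. data has shape (n_time, n_channels)."""
            nt, nx = data.shape
            F = np.fft.fft2(data)
            f = np.fft.fftfreq(nt, dt)[:, None]   # temporal frequencies
            k = np.fft.fftfreq(nx, dx)[None, :]   # spatial wavenumbers
            with np.errstate(divide="ignore", invalid="ignore"):
                v_app = np.abs(f / k)
            mask = (v_app >= vmin) | ~np.isfinite(v_app)  # keep k = 0 terms
            return np.real(np.fft.ifft2(F * mask))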

  4. Will genomic selection be a practical method for plant breeding?

    Science.gov (United States)

    Nakaya, Akihiro; Isobe, Sachiko N

    2012-11-01

    Genomic selection or genome-wide selection (GS) has been highlighted as a new approach for marker-assisted selection (MAS) in recent years. GS is a form of MAS that selects favourable individuals based on genomic estimated breeding values. Previous studies have suggested the utility of GS, especially for capturing small-effect quantitative trait loci, but GS has not become a popular methodology in the field of plant breeding, possibly because there is insufficient information available on GS for practical use. In this review, GS is discussed from a practical breeding viewpoint. Statistical approaches employed in GS are briefly described, before the recent progress in GS studies is surveyed. GS practices in plant breeding are then reviewed before future prospects are discussed. Statistical concepts used in GS are discussed with genetic models and variance decomposition, heritability, breeding value and linear model. Recent progress in GS studies is reviewed with a focus on empirical studies. For the practice of GS in plant breeding, several specific points are discussed including linkage disequilibrium, feature of populations and genotyped markers and breeding scheme. Currently, GS is not perfect, but it is a potent, attractive and valuable approach for plant breeding. This method will be integrated into many practical breeding programmes in the near future with further advances and the maturing of its theory.
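
    Genomic estimated breeding values are commonly computed by regressing phenotypes on genome-wide markers with heavy shrinkage. A minimal ridge-regression (RR-BLUP-style) sketch follows, with the ridge parameter and variable names as assumptions; candidates are then selected by ranking their GEBVs:

        import numpy as np

        def gebv_rrblup(M_train, y_train, M_candidates, lam=1.0):
            """M_*: marker matrices coded e.g. {0, 1, 2}; y_train: phenotypes.
            Returns genomic estimated breeding values for the candidates."""
            p = M_train.shape[1]
            A = M_train.T @ M_train + lam * np.eye(p)
            effects = np.linalg.solve(A, M_train.T @ y_train)
            return M_candidates @ effects   # rank and select the top lines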

  5. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review.

    Science.gov (United States)

    Boulkedid, Rym; Abdoul, Hendy; Loustau, Marine; Sibony, Olivier; Alberti, Corinne

    2011-01-01

    The Delphi technique is a structured process commonly used to develop healthcare quality indicators, but there is little guidance for researchers who wish to use it. This study aimed 1) to describe the reporting of the Delphi method used to develop quality indicators, 2) to discuss specific methodological skills for quality indicator selection, and 3) to give guidance about this practice. Three electronic databases were searched over a 30 year period (1978-2009). All articles that used the Delphi method to select quality indicators were identified. A standardized data extraction form was developed. Four domains (questionnaire preparation, expert panel, progress of the survey and Delphi results) were assessed. Of 80 included studies, quality of reporting varied significantly between items (from 9% for the experts' years of experience to 98% for the type of Delphi used). Reporting of the methodological aspects needed to evaluate the reliability of the survey was insufficient: only 39% (31/80) of studies reported response rates for all rounds, 60% (48/80) that feedback was given between rounds, 77% (62/80) the method used to achieve consensus and 57% (48/80) listed the quality indicators selected at the end of the survey. A modified Delphi procedure was used in 49/78 (63%) studies, with a physical meeting of the panel members, usually between Delphi rounds. The median number of panel members was 17 (Q1: 11; Q3: 31). In 40/70 (57%) studies, the panel included multiple stakeholders, who were healthcare professionals in 95% (38/40) of cases. Among 75 studies describing criteria to select quality indicators, 28 (37%) used validity and 17 (23%) feasibility. The use and reporting of the Delphi method for quality indicator selection need to be improved. We provide some guidance to investigators to improve the use and reporting of the method in future surveys.

  6. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review.

    Directory of Open Access Journals (Sweden)

    Rym Boulkedid

    Full Text Available OBJECTIVE: The Delphi technique is a structured process commonly used to develop healthcare quality indicators, but there is little guidance for researchers who wish to use it. This study aimed 1) to describe the reporting of the Delphi method used to develop quality indicators, 2) to discuss specific methodological skills for quality indicator selection, and 3) to give guidance about this practice. METHODOLOGY AND MAIN FINDINGS: Three electronic databases were searched over a 30 year period (1978-2009). All articles that used the Delphi method to select quality indicators were identified. A standardized data extraction form was developed. Four domains (questionnaire preparation, expert panel, progress of the survey and Delphi results) were assessed. Of 80 included studies, quality of reporting varied significantly between items (from 9% for the experts' years of experience to 98% for the type of Delphi used). Reporting of the methodological aspects needed to evaluate the reliability of the survey was insufficient: only 39% (31/80) of studies reported response rates for all rounds, 60% (48/80) that feedback was given between rounds, 77% (62/80) the method used to achieve consensus and 57% (48/80) listed the quality indicators selected at the end of the survey. A modified Delphi procedure was used in 49/78 (63%) studies, with a physical meeting of the panel members, usually between Delphi rounds. The median number of panel members was 17 (Q1: 11; Q3: 31). In 40/70 (57%) studies, the panel included multiple stakeholders, who were healthcare professionals in 95% (38/40) of cases. Among 75 studies describing criteria to select quality indicators, 28 (37%) used validity and 17 (23%) feasibility. CONCLUSION: The use and reporting of the Delphi method for quality indicator selection need to be improved. We provide some guidance to investigators to improve the use and reporting of the method in future surveys.

  7. Maintenance of the selected infant feeding methods amongst ...

    African Journals Online (AJOL)

    The focus of this study was to explore and describe influences on decision making related to infant feeding methods in the context of HIV and AIDS. Study objectives were: (1) to explore and describe the influences on decision making related to infant feeding methods selected by the mother during the antenatal period and ...

  8. A comparison of two methods for prediction of response and rates of inbreeding in selected populations with the results obtained in two selection experiments

    Directory of Open Access Journals (Sweden)

    Verrier Etienne

    2005-05-01

    Full Text Available Abstract Selection programmes are mainly concerned with increasing genetic gain. However, short-term progress should not be obtained at the expense of the within-population genetic variability. Different prediction models for the evolution, within a small population, of the genetic mean of a selected trait, its genetic variance and its inbreeding have been developed, but have mainly been validated through Monte Carlo simulation studies. The purpose of this study was to compare theoretical predictions to experimental results. Two deterministic methods were considered, both grounded on a polygenic additive model. Differences between theoretical predictions and experimental results arise from differences between the true and the assumed genetic model, and from mathematical simplifications applied in the prediction methods. Two sets of experimental lines of chickens were used in this study: the Dutch lines underwent true truncation mass selection, while the other (French) lines underwent mass selection with a restriction on the representation of the different families. This study confirmed, on an experimental basis, that modelling is an efficient approach for making useful predictions of the evolution of selected populations, although the basic assumptions considered in the models (polygenic additive model, normality of the distribution, base population at equilibrium, etc.) are not met in reality. The two deterministic methods compared yielded results that were close to those observed in real data, especially when the selection scheme followed the rules of strict mass selection: for instance, both predictions overestimated the genetic gain in the French experiment, whereas both were close to the observed values in the Dutch experiment.

  9. Mutual Information-Based Inputs Selection for Electric Load Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Nenad Floranović

    2013-02-01

    Full Text Available Providing accurate load forecasts to electric utility corporations is essential in order to reduce their operational costs and increase profits. Hence, training set selection is an important preprocessing step which has to be considered in practice in order to increase the accuracy of load forecasts. The use of mutual information (MI) has recently been proposed in regression tasks, mostly for feature selection and for identifying the real instances from training sets that contain noise and outliers. This paper proposes a methodology for training set selection in a least squares support vector machines (LS-SVMs) load forecasting model. A new application of the concept of MI is presented for the selection of a training set based on MI computation between initial training set instances and testing set instances. Accordingly, several LS-SVMs models have been trained, based on the proposed methodology, for hourly prediction of electric load for one day ahead. The results obtained from a real-world data set indicate that the proposed method increases the accuracy of load forecasting as well as reduces the size of the initial training set needed for model training.
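
    A minimal sketch of the training-set selection idea under stated assumptions: MI is estimated with a simple 2-D histogram, and each training instance is ranked by its average MI with the test instances. The estimator, helper names and synthetic data are illustrative, not the paper's LS-SVM pipeline.

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 8) -> float:
    """Histogram estimate of MI between two 1-D vectors (e.g., the feature
    values of two instances). Illustrative; real estimators are more careful."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def select_training_subset(X_train, X_test, k):
    """Rank training instances by average MI with the test instances, keep top-k.
    A rough sketch of MI-based training-set selection under assumed details."""
    scores = [np.mean([mutual_information(xi, xt) for xt in X_test]) for xi in X_train]
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(100, 24)), rng.normal(size=(10, 24))
print(select_training_subset(X_train, X_test, k=20))
```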

  10. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    Science.gov (United States)

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%.
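
    A hedged sketch of the atlas selection step only, assuming vessel structures are available as binary masks and similarity is measured with the Dice overlap; the paper's registration pipeline and its exact vessel-based similarity measure are not reproduced here.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def select_atlases(target_vessels, atlas_vessels, k=5):
    """Rank atlases by vessel-mask similarity to the target and keep the top-k.
    A sketch of the idea only, with an assumed similarity measure."""
    scores = [dice(target_vessels, av) for av in atlas_vessels]
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(1)
target = rng.random((32, 32, 32)) > 0.7
atlases = [rng.random((32, 32, 32)) > 0.7 for _ in range(20)]
print(select_atlases(target, atlases, k=5))
```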

  11. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    Science.gov (United States)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model
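
    The record relies on evidence theory for quantifying model uncertainty. As a small illustration of the general machinery (not the paper's fidelity-selection formulation), the sketch below applies Dempster's rule of combination to two invented mass functions:

```python
# Hedged sketch of Dempster's rule of combination (evidence theory), the
# general mechanism the study builds on; frame and masses are invented here.

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two sources of evidence about model fidelity levels {low, high}
m1 = {frozenset({"low"}): 0.6, frozenset({"low", "high"}): 0.4}
m2 = {frozenset({"high"}): 0.3, frozenset({"low", "high"}): 0.7}
print(dempster_combine(m1, m2))
```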

  12. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    Science.gov (United States)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing not only the underlying relationship but also random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that produce accurate predictions valid only for the data used, and that are too complex to make inferences about the underlying process.
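
    A rough sketch of the NOIS idea under assumed details: models of increasing complexity are fit to artificial spectra that carry no real signal, and the apparent training R² on that noise serves as a relative overfitting index. PLS regression stands in for the seven techniques; data and settings are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Sketch of the NOIS idea under assumed details: fit models of increasing
# complexity to artificial spectra with no real signal, and treat the
# apparent (training) R^2 on that noise as a relative overfitting index.

rng = np.random.default_rng(0)
n_samples, n_bands = 30, 200
X_noise = rng.normal(size=(n_samples, n_bands))   # artificial "spectra"
y = rng.normal(size=n_samples)                    # unrelated response

for n_comp in (1, 2, 5, 10, 15):
    pls = PLSRegression(n_components=n_comp).fit(X_noise, y)
    r2_noise = pls.score(X_noise, y)  # R^2 achieved purely by overfitting
    print(f"components={n_comp:2d}  overfitting index (R^2 on noise)={r2_noise:.2f}")
```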

  13. Selecting a risk-based tool to aid in decision making

    Energy Technology Data Exchange (ETDEWEB)

    Bendure, A.O.

    1995-03-01

    Selecting a risk-based tool to aid in decision making is as much of a challenge as properly using the tool once it has been selected. Failure to consider customer and stakeholder requirements and the technical bases and differences in risk-based decision-making tools will produce confounding and/or politically unacceptable results when the tool is used. Selecting a risk-based decision-making tool must therefore be undertaken with the same, if not greater, rigor than the use of the tool once it is selected. This paper presents a process for selecting a risk-based tool appropriate to a set of prioritization or resource allocation tasks, discusses the results of applying the process to four risk-based decision-making tools, and identifies the "musts" for successful selection and implementation of a risk-based tool to aid in decision making.

  14. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration divided by the observed urinary creatinine concentration (UCR). This ratio-based method is flawed, since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of this ratio (for example, males), these ratios were higher for the model-based method, for example, the male to female ratio of GMs. When estimated UCRs were lower for the group in the numerator of this ratio (for example, NHW), these ratios were higher for the ratio-based method, for example, the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
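
    A hedged sketch contrasting the two corrections on simulated data; the variable names and the exact model specification are assumptions for illustration, not the study's models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 70, n)
male = rng.integers(0, 2, n)
ucr = np.exp(0.2 * male - 0.005 * age + rng.normal(0, 0.3, n))  # creatinine
analyte = np.exp(0.8 * np.log(ucr) + rng.normal(0, 0.4, n))     # hydration-driven

# Ratio-based correction: analyte divided by creatinine
ratio_corrected = analyte / ucr

# Model-based correction: regress log(analyte) on log(UCR) plus covariates
# and remove the fitted UCR component. A sketch of the idea only; the
# study's actual model specification is not reproduced.
X = np.column_stack([np.log(ucr), age, male])
model = LinearRegression().fit(X, np.log(analyte))
model_corrected = np.log(analyte) - model.coef_[0] * np.log(ucr)

print(ratio_corrected[:3], model_corrected[:3])
```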

  15. Maximum relevance, minimum redundancy band selection based on neighborhood rough set for hyperspectral data classification

    International Nuclear Information System (INIS)

    Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen

    2016-01-01

    Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, the MRMR difference and the MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (a neighborhood dependency measure based algorithm, a genetic algorithm and an uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvements in band selection and classification accuracy. (paper)
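
    A minimal sketch of a forward greedy MRMR search in its difference form, assuming plain histogram mutual information on quantile-binned bands in place of the paper's neighborhood mutual information:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr_difference(X, y, n_select, bins=8):
    """Forward greedy MRMR (difference form): repeatedly add the band that
    maximizes relevance I(band; y) minus mean redundancy I(band; selected).
    Sketch using plain histogram MI on quantile-binned bands; the paper's
    neighborhood mutual information is not reproduced here."""
    n_bands = X.shape[1]
    edges = np.linspace(0, 1, bins + 1)[1:-1]
    Xd = np.column_stack([np.digitize(X[:, j], np.quantile(X[:, j], edges))
                          for j in range(n_bands)])
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(n_bands))
    while len(selected) < n_select:
        def score(j):
            if not selected:
                return relevance[j]
            red = np.mean([mutual_info_score(Xd[:, j], Xd[:, s]) for s in selected])
            return relevance[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))
y = (X[:, 3] + X[:, 17] > 0).astype(int)
print(mrmr_difference(X, y, n_select=5))
```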

  16. Method of selecting optimum cross arm lengths for a 750 kV transmission line

    Energy Technology Data Exchange (ETDEWEB)

    Aleksandrov, G N; Olorokov, V P

    1965-01-01

    A method is presented, based on both technical and economic considerations, for selecting cross arm lengths for intermediate poles of power transmission lines according to the effects of internal overvoltages, employing methods from probability theory and mathematical statistics. The problem of optimum pole size is considered in terms of the effect of internal overvoltages for a prescribed maximum level of 2.1 p.u., currently used in the USSR for the design of 750 kV lines.

  17. Evaluation of selection methods for toxicological impacts in LCA. Recommendations for OMNIITOX

    DEFF Research Database (Denmark)

    Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky

    2004-01-01

    Goal, Scope and Background. The aim of this study has been to come up with recommendations on how to develop a selection method (SM) within the method development research of the OMNIITOX project. An SM is a method for prioritization of chemical emissions to be included in a Life Cycle Impact ... categories, and when they do there are typically many gaps. This study covers the only existing methods explicitly designed as SMs (EDIP-selection, Priofactor and CPM-selection), the dominating Chemical Ranking and Scoring (CRS) method in Europe (EURAM) and in USA (WMPT) that can be adapted for this purpose ... Conclusion and Recommendations. For the development of SMs it is recommended that the general principles for CRS systems as applied to SMs are taken into account. Furthermore, special attention should be paid to some specific issues, i.e. the emitted amount should be included, data ...

  18. Tensor-based Multi-view Feature Selection with Applications to Brain Diseases

    Science.gov (United States)

    Cao, Bokai; He, Lifang; Kong, Xiangnan; Yu, Philip S.; Hao, Zhifeng; Ragin, Ann B.

    2015-01-01

    In the era of big data, we can easily access information from multiple views which may be obtained from different sources or feature subsets. Generally, different views provide complementary information for learning tasks. Thus, multi-view learning can facilitate the learning process and is prevalent in a wide range of application domains. For example, in medical science, measurements from a series of medical examinations are documented for each subject, including clinical, imaging, immunologic, serologic and cognitive measures which are obtained from multiple sources. Specifically, for brain diagnosis, we can have different quantitative analyses which can be seen as different feature subsets of a subject. It is desirable to combine all these features in an effective way for disease diagnosis. However, some measurements from less relevant medical examinations can introduce irrelevant information which can even be exaggerated after view combinations. Feature selection should therefore be incorporated in the process of multi-view learning. In this paper, we explore the tensor product to bring different views together in a joint space, and present a dual method of tensor-based multi-view feature selection (dual-Tmfs) based on the idea of support vector machine recursive feature elimination. Experiments conducted on datasets derived from neurological disorders demonstrate that the features selected by our proposed method yield better classification performance and are relevant to disease diagnosis. PMID:25937823
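
    The proposed dual-Tmfs builds on SVM recursive feature elimination. A minimal single-view SVM-RFE sketch (via scikit-learn) is shown below for orientation; the tensor product across views is not reproduced, and the data are synthetic.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Single-view SVM-RFE sketch: recursively drop the feature with the smallest
# linear-SVM weight until the desired number of features remains.

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 40))
y = (X[:, 5] - X[:, 12] + 0.5 * rng.normal(size=80) > 0).astype(int)

svm = LinearSVC(C=1.0, dual=False)
rfe = RFE(estimator=svm, n_features_to_select=5, step=1).fit(X, y)
print(np.flatnonzero(rfe.support_))  # indices of the retained features
```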

  19. Selected Tools and Methods from Quality Management Field

    Directory of Open Access Journals (Sweden)

    Kateřina BRODECKÁ

    2009-06-01

    Full Text Available The following paper describes selected tools and methods from the quality management field and their practical application to defined examples. The solved examples were elaborated in the form of electronic support. This electronic support, elaborated in detail, gives students the opportunity to thoroughly practice specific issues, helps them prepare for exams, and consequently will lead to improved education. Students of the combined study form in particular will appreciate this support. The paper specifies the project objectives, the subjects covered by the support, the target groups, and the structure and manner of elaboration of the planned electronic exercise book. The emphasis is not only on the manual solution of selected examples, which may help students understand the principles and relationships, but also on solving the examples and interpreting the results using software support. The statistical software Statgraphics Plus v 5.0 is used throughout the support, because it is free for all students of the faculty. An example from the subject Basic Statistical Methods of Quality Management is also included in this paper.

  20. IT Project Selection

    DEFF Research Database (Denmark)

    Pedersen, Keld

    2016-01-01

    ... for initiation. Most of the research on project selection is normative, suggesting new methods, but available empirical studies indicate that many methods are seldom used in practice. This paper addresses the issue by providing increased understanding of IT project selection practice, thereby facilitating the development of methods that better fit current practice. The study is based on naturalistic decision-making theory and interviews with experienced project portfolio managers who, when selecting projects, primarily rely on political skills, experience and personal networks rather than on formal IT project-selection methods, and these findings point to new areas for developing new methodological support for IT project selection.

  1. A survey of variable selection methods in two Chinese epidemiology journals

    Directory of Open Access Journals (Sweden)

    Lynn Henry S

    2010-09-01

    Full Text Available Abstract Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis, e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals.

  2. The Preparation and Selection of Budget Methods for Promotion in Kosovo

    Directory of Open Access Journals (Sweden)

    MSc. Halit Karaxha

    2017-06-01

    Full Text Available Selecting an adequate promotion method is highly important for increasing a business's performance. Selecting the budget method depends on a number of factors. The formulation of the budget is known as the most critical period and requires special analysis from marketing managers. Expenses for promotion are usually high, and every investment made in the field of promotion directly influences the business's situation. Thus, the selection and adequate formulation of budget methods for promotion influences the growth of profit. The amount allocated for promotion depends on a number of factors, such as the size of the firm, the sector in which it operates, competition, etc. After planning the budget, it must be allocated so as to select the promotional form considered by the firm to be successful in promoting its products and services and that will help the company connect with its clients. In this paper, I elaborate the role and importance of the preparation and selection of budget methods for promotion in both theoretical and practical terms.

  3. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  4. Improved methods in Agrobacterium-mediated transformation of almond using positive (mannose/pmi) or negative (kanamycin resistance) selection-based protocols.

    Science.gov (United States)

    Ramesh, Sunita A; Kaiser, Brent N; Franks, Tricia; Collins, Graham; Sedgley, Margaret

    2006-08-01

    A protocol for Agrobacterium-mediated transformation with either kanamycin or mannose selection was developed for leaf explants of the cultivar Prunus dulcis cv. Ne Plus Ultra. Regenerating shoots were selected on medium containing 15 μM kanamycin (negative selection), while in the positive selection strategy, shoots were selected on 2.5 g/l mannose supplemented with 15 g/l sucrose. Transformation efficiencies, based on PCR analysis of individual putative transformed shoots from independent lines relative to the initial numbers of leaf explants tested, were 5.6% for kanamycin/nptII and 6.8% for mannose/pmi selection, respectively. Southern blot analysis on six randomly chosen PCR-positive shoots confirmed the presence of the nptII transgene in each, and five randomly chosen lines identified by PCR to contain the pmi transgene showed positive hybridisation to a pmi DNA probe. The positive (mannose/pmi) and negative (kanamycin) selection protocols used in this study have greatly improved transformation efficiency in almond, as confirmed by PCR and Southern blot. This study also demonstrates that in almond the mannose/pmi selection protocol is appropriate and can result in higher transformation efficiencies than kanamycin/nptII selection protocols.

  5. SELECTING A MANAGEMENT SYSTEM HOSPITAL BY A METHOD MULTICRITERIA

    Directory of Open Access Journals (Sweden)

    Vitorino, Sidney L.

    2016-12-01

    Full Text Available The objective of this report is to assess how the multi-criteria method Analytic Hierarchy Process (AHP) can help a hospital complex choose a suitable management system, known as Enterprise Resource Planning (ERP). The choice was very complex due to the novelty of the selection process and the conflicts generated between areas that did not share a single view of organizational needs, creating considerable pressure on the department responsible for implementing systems. To assist in this process, a consultant expert in decision-making and AHP was hired, who, in the role of facilitator, helped define the criteria for system selection and ensured that the choice occurred within a consensual process. A single-case study was used, based on two in-depth interviews with the consultant and the project manager, and on documents generated by the consultancy and the tool that supported the method. The results of this analysis showed that the method could effectively support the system acquisition process; however, despite employees' knowledge of the problems and senior management support, it was not used in subsequent decisions of the organization. We conclude that this method contributed to consensus in the procurement process, team commitment, and the engagement of those involved.
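
    As background to the method used in this case, a hedged sketch of the core AHP computation: priority weights from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio. The comparison matrix below is invented for illustration.

```python
import numpy as np

# AHP sketch: derive criterion weights from a pairwise comparison matrix via
# the principal eigenvector, then check consistency (CR < 0.1 is acceptable).

A = np.array([
    [1.0, 3.0, 5.0],   # criterion 1 compared against criteria 1, 2, 3
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
print("weights:", weights.round(3), "CR:", round(ci / ri, 3))
```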

  6. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
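
    A minimal sketch of the correlation-analysis-based selection step, under the assumption that it amounts to greedily discarding hidden units whose weight vectors are highly correlated with already-kept units; the paper's exact criterion may differ.

```python
import numpy as np

def select_uncorrelated(weights: np.ndarray, threshold: float = 0.9):
    """Greedy correlation-based selection over learned weight vectors: keep a
    hidden unit only if its |correlation| with every kept unit is below the
    threshold. A sketch of the idea under assumed details."""
    corr = np.abs(np.corrcoef(weights))
    kept = []
    for i in range(weights.shape[0]):
        if all(corr[i, j] < threshold for j in kept):
            kept.append(i)
    return kept

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 147))               # 64 hidden units, 7x7x3 patch weights
W[10] = W[3] + 0.01 * rng.normal(size=147)   # a nearly duplicate filter
kept = select_uncorrelated(W)
print(len(kept), 10 in kept)                 # unit 10 is dropped as redundant
```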

  7. THIRD PARTY LOGISTIC SERVICE PROVIDER SELECTION USING FUZZY AHP AND TOPSIS METHOD

    Directory of Open Access Journals (Sweden)

    Golam Kabir

    2012-03-01

    Full Text Available The use of third-party logistics (3PL) service providers is increasing globally to accomplish strategic objectives. In an increasingly competitive environment, logistics strategic management requires a systematic and structured approach to gain a cutting edge over rivals. Logistics service provider selection is a complex multi-criteria decision making process, in which decision makers have to deal with the optimization of conflicting objectives such as quality, cost, and delivery time. In this paper, a fuzzy analytic hierarchy process (FAHP) approach based on the technique for order preference by similarity to ideal solution (TOPSIS) method has been proposed for evaluating and selecting an appropriate logistics service provider, where the ratings of each alternative and the importance weight of each criterion are expressed in triangular fuzzy numbers.
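
    For orientation, a sketch of plain (crisp) TOPSIS, which the paper extends with triangular fuzzy numbers and FAHP-derived weights; the provider scores and weights below are invented.

```python
import numpy as np

def topsis(decision: np.ndarray, weights: np.ndarray, benefit: np.ndarray):
    """Crisp TOPSIS ranking (the fuzzy variant in the paper extends this with
    triangular fuzzy numbers). decision: alternatives x criteria."""
    norm = decision / np.linalg.norm(decision, axis=0)          # vector-normalize
    v = norm * weights                                          # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))     # best per criterion
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))      # worst per criterion
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                              # closeness in [0, 1]

# 3 providers scored on quality (benefit), cost (cost), delivery time (cost)
scores = np.array([[7.0, 4.0, 5.0], [8.0, 6.0, 4.0], [6.0, 3.0, 6.0]])
cc = topsis(scores, weights=np.array([0.5, 0.3, 0.2]),
            benefit=np.array([True, False, False]))
print(np.argsort(cc)[::-1])  # providers ranked best to worst
```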

  8. Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Lyu Kehong

    2014-06-01

    Full Text Available In helicopter transmission systems, it is important to monitor and track tooth damage evolution using a variety of sensors and detection methods. This paper develops a novel approach for sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann–Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen's slope estimator. Then, the sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified by simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and it is effective in reducing test cost and improving the system's reliability.
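
    A hedged sketch of the trend statistics used to screen condition indicators: the Mann–Kendall S statistic and Sen's slope, applied to an invented indicator series.

```python
import numpy as np

def mann_kendall_s(x: np.ndarray) -> int:
    """Mann-Kendall S statistic: concordant minus discordant pairs; a large
    positive S indicates a monotonically increasing trend, as used in the
    record to screen condition indicators for monotonicity."""
    n = len(x)
    return int(sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n)))

def sens_slope(x: np.ndarray) -> float:
    """Sen's slope: median of all pairwise slopes, a robust trend estimate."""
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

ci = np.array([0.1, 0.15, 0.13, 0.22, 0.25, 0.31, 0.30, 0.41])  # a CI over time
print(mann_kendall_s(ci), sens_slope(ci))
```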

  9. The co-feature ratio, a novel method for the measurement of chromatographic and signal selectivity in LC-MS-based metabolomics

    International Nuclear Information System (INIS)

    Elmsjö, Albert; Haglöf, Jakob; Engskog, Mikael K.R.; Nestor, Marika; Arvidsson, Torbjörn; Pettersson, Curt

    2017-01-01

    Evaluation of analytical procedures, especially with regard to measuring chromatographic and signal selectivity, is highly challenging in untargeted metabolomics. The aim of this study was to suggest a new straightforward approach for a systematic examination of chromatographic and signal selectivity in LC-MS-based metabolomics. By calculating the ratio between each feature and its co-eluting features (the co-features), a measurement of the chromatographic selectivity (i.e. extent of co-elution) as well as the signal selectivity (e.g. amount of adduct formation) of each feature could be acquired, the co-feature ratio. This approach was used to examine possible differences in chromatographic and signal selectivity present in samples exposed to three different sample preparation procedures. The capability of the co-feature ratio was evaluated both in a classical targeted setting using isotope-labelled standards and in an untargeted setting without standards. For the targeted analysis, several metabolites showed a skewed quantitative signal due to poor chromatographic selectivity and/or poor signal selectivity. Moreover, evaluation of the untargeted approach through multivariate analysis of the co-feature ratios demonstrated the possibility to screen for metabolites displaying poor chromatographic and/or signal selectivity characteristics. We conclude that the co-feature ratio can be a useful tool in the development and evaluation of analytical procedures in LC-MS-based metabolomics investigations. Increased selectivity through proper choice of analytical procedures may decrease the false positive and false negative discovery rate and thereby increase the validity of any metabolomic investigation. - Highlights: • The co-feature ratio (CFR) is introduced. • CFR measures the chromatographic and signal selectivity of a feature. • CFR can be used for evaluating experimental procedures in metabolomics. • CFR can aid in locating features with poor selectivity.

  10. The co-feature ratio, a novel method for the measurement of chromatographic and signal selectivity in LC-MS-based metabolomics

    Energy Technology Data Exchange (ETDEWEB)

    Elmsjö, Albert, E-mail: Albert.Elmsjo@farmkemi.uu.se [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden); Haglöf, Jakob; Engskog, Mikael K.R. [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden); Nestor, Marika [Department of Immunology, Genetics and Pathology, Uppsala University (Sweden); Arvidsson, Torbjörn [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden); Medical Product Agency, Uppsala (Sweden); Pettersson, Curt [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden)

    2017-03-01

    Evaluation of analytical procedures, especially with regard to measuring chromatographic and signal selectivity, is highly challenging in untargeted metabolomics. The aim of this study was to suggest a new straightforward approach for a systematic examination of chromatographic and signal selectivity in LC-MS-based metabolomics. By calculating the ratio between each feature and its co-eluting features (the co-features), a measurement of the chromatographic selectivity (i.e. extent of co-elution) as well as the signal selectivity (e.g. amount of adduct formation) of each feature could be acquired, the co-feature ratio. This approach was used to examine possible differences in chromatographic and signal selectivity present in samples exposed to three different sample preparation procedures. The capability of the co-feature ratio was evaluated both in a classical targeted setting using isotope-labelled standards and in an untargeted setting without standards. For the targeted analysis, several metabolites showed a skewed quantitative signal due to poor chromatographic selectivity and/or poor signal selectivity. Moreover, evaluation of the untargeted approach through multivariate analysis of the co-feature ratios demonstrated the possibility to screen for metabolites displaying poor chromatographic and/or signal selectivity characteristics. We conclude that the co-feature ratio can be a useful tool in the development and evaluation of analytical procedures in LC-MS-based metabolomics investigations. Increased selectivity through proper choice of analytical procedures may decrease the false positive and false negative discovery rate and thereby increase the validity of any metabolomic investigation. - Highlights: • The co-feature ratio (CFR) is introduced. • CFR measures the chromatographic and signal selectivity of a feature. • CFR can be used for evaluating experimental procedures in metabolomics. • CFR can aid in locating features with poor selectivity.
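
    Following the record above (listed twice, from two databases), a rough sketch of the co-feature ratio computation under assumed details: co-features are taken to be features within a retention-time window, and each feature's intensity ratio to its co-features is returned.

```python
import numpy as np

def co_feature_ratios(rt, intensity, rt_tol=0.05):
    """For each feature, compute its intensity ratio to each co-eluting feature
    (|delta RT| < rt_tol). A sketch of the co-feature ratio idea under assumed
    details; the published workflow defines co-features more carefully."""
    ratios = {}
    for i in range(len(rt)):
        mates = np.flatnonzero((np.abs(rt - rt[i]) < rt_tol)
                               & (np.arange(len(rt)) != i))
        ratios[i] = intensity[i] / intensity[mates] if mates.size else np.array([])
    return ratios

rt = np.array([1.00, 1.02, 1.51, 1.52, 2.00])       # retention times (min)
inten = np.array([100.0, 50.0, 80.0, 20.0, 60.0])   # feature intensities
print(co_feature_ratios(rt, inten))
```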

  11. QUANTITATIVE EVALUATION METHOD OF ELEMENTS PRIORITY OF CARTOGRAPHIC GENERALIZATION BASED ON TAXI TRAJECTORY DATA

    Directory of Open Access Journals (Sweden)

    Z. Long

    2017-09-01

    Full Text Available Considering the lack of quantitative criteria for the selection of elements in cartographic generalization, this study divided the hotspot areas of passengers into three levels, gave them different weights, and then classified the elements from the different hotspots. On this basis, a method was proposed to quantify the priority of element selection. Subsequently, the quantitative priority of different cartographic elements was summarized based on this method. In cartographic generalization, the method can be used to preferentially select the significant elements and discard those that are relatively non-significant.

  12. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    Science.gov (United States)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    The procedure of selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features). However, linear features, due to their unique characteristics, are of interest in this investigation. This method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. The ones indicate the presence of a pair of conjugate lines as a GCL and the zeros specify their absence. The chromosome length is considered equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until reaching a stopping criterion. The number and position of ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, a GeoEye and an Ikonos image are used over different areas of Iran. Comparing the results obtained by the proposed method in a traditional RFM with conventional methods that use all conjugate lines as GCLs shows a fivefold accuracy improvement (pixel-level accuracy) as well as the strength of the proposed method. To prevent an over-parametrization error in a traditional RFM due to the selection of a high number of improper correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, reliability, and reducing systematic
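
    A compact GA sketch in the spirit of OWDIS, assuming a stand-in rmse_fn for the model fitting and check-point evaluation; the binary chromosome encoding follows the record, everything else is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_select(n_lines, rmse_fn, pop=30, gens=50, p_mut=0.02):
    """Binary-chromosome GA sketch: ones mark selected ground control lines
    (GCLs); rmse_fn stands in for fitting the mathematical model with the
    selected GCLs and evaluating RMSE on check points (assumed, not shown)."""
    def fitness(c):
        return -rmse_fn(c) if c.any() else -np.inf   # lower RMSE is fitter
    P = rng.integers(0, 2, size=(pop, n_lines))
    for _ in range(gens):
        f = np.array([fitness(c) for c in P])
        parents = P[np.argsort(f)[::-1][:pop // 2]]   # keep the better half
        mates = np.roll(parents, 1, axis=0)
        cuts = rng.integers(1, n_lines, size=len(parents))
        kids = np.array([np.r_[a[:c], b[c:]] for a, b, c in zip(parents, mates, cuts)])
        kids ^= (rng.random(kids.shape) < p_mut)      # bit-flip mutation
        P = np.vstack([parents, kids])
    f = np.array([fitness(c) for c in P])
    return P[np.argmax(f)]

# toy stand-in: pretend conjugate lines 2, 5 and 11 are the informative GCLs
toy_rmse = lambda c: 1.0 - 0.3 * c[[2, 5, 11]].mean() + 0.01 * c.sum()
print(np.flatnonzero(ga_select(16, toy_rmse)))
```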

  13. Feature and score fusion based multiple classifier selection for iris recognition.

    Science.gov (United States)

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.

  14. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    Full Text Available The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.
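
    A minimal sketch of the fusion step shared by both versions of this record: majority voting over several classifiers' decisions. The four HMM classifiers are stood in for by an invented matrix of predicted identity labels.

```python
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """predictions: (n_classifiers, n_samples) integer labels; returns the
    most-voted label per sample."""
    n_classes = predictions.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)

preds = np.array([
    [3, 1, 2, 2],   # classifier 1's identity decisions for 4 probes
    [3, 1, 0, 2],
    [3, 2, 2, 2],
    [0, 1, 2, 1],
])
print(majority_vote(preds))  # [3 1 2 2]
```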

  15. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model relating the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit

  16. Computational design for a wide-angle cermet-based solar selective absorber for high temperature applications

    International Nuclear Information System (INIS)

    Sakurai, Atsushi; Tanikawa, Hiroya; Yamada, Makoto

    2014-01-01

    The purpose of this study is to computationally design a wide-angle cermet-based solar selective absorber for high temperature applications by using a characteristic matrix method and a genetic algorithm. The present study investigates a solar selective absorber with a tungsten–silica (W–SiO2) cermet. Multilayer structures of 1, 2, 3, and 4 layers and a wide range of metal volume fractions are optimized. The predicted radiative properties show good solar performance, i.e., thermal emittances, especially beyond 2 μm, are quite low; in contrast, solar absorptance levels are successfully high over a wide angular range, so that solar photons are effectively absorbed and infrared radiative heat loss can be decreased. -- Highlights: • Electromagnetic simulation of radiative properties by the characteristic matrix method. • Optimization of a multilayered W–SiO2 cermet-based absorber by a genetic algorithm. • A solar selective absorber with successfully high solar performance is proposed

  17. GREY STATISTICS METHOD OF TECHNOLOGY SELECTION FOR ADVANCED PUBLIC TRANSPORTATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Chien Hung WEI

    2003-01-01

    Full Text Available Taiwan is involved in intelligent transportation systems planning, and is now selecting its priority focus areas for investment and development. The high social and economic impact associated with the choice of intelligent transportation systems technology explains the efforts of various electronics and transportation corporations to develop intelligent transportation systems technology and expand their business opportunities. However, no detailed research has been conducted with regard to selecting technology for advanced public transportation systems in Taiwan. Thus, the present paper demonstrates a grey statistics method integrated with a scenario method for solving the problem of selecting advanced public transportation systems technology for Taiwan. A comprehensive questionnaire survey was conducted to demonstrate the effectiveness of the grey statistics method. The proposed approach indicated that contactless smart card technology is the appropriate technology for Taiwan to develop in the near future. The significance of our research results implies that the grey statistics method is an effective method for selecting advanced public transportation systems technologies. We feel our information will be beneficial to the private sector for developing an appropriate intelligent transportation systems technology strategy.

  18. Novel Distance Measure in Fuzzy TOPSIS for Supply Chain Strategy Based Supplier Selection

    Directory of Open Access Journals (Sweden)

    B. Pardha Saradhi

    2016-01-01

    Full Text Available In today’s highly competitive environment, organizations need to evaluate and select suppliers based on their manufacturing strategy. Identification of the organization's supply chain strategy, determination of decision criteria, and methods of supplier selection appear to be the most important research topics in the field of supply chain management. In this paper, evaluation of suppliers is done based on the balanced scorecard framework, using a new distance measure in fuzzy TOPSIS and considering the supply chain strategy of the manufacturing organization. To take care of vagueness in decision making, trapezoidal fuzzy numbers are assumed for pairwise comparisons to determine the relative weights of perspectives and criteria of supplier selection. Also, linguistic variables specified in terms of trapezoidal fuzzy numbers are considered for the payoff values of the suppliers' criteria. These fuzzy numbers satisfy the Jensen-based inequality. A detailed application of the proposed methodology is illustrated.

  19. Orbital-selective Mott phase of Cu-substituted iron-based superconductors

    International Nuclear Information System (INIS)

    Liu, Yang; Zhao, Yang-Yang; Song, Yun

    2016-01-01

    We study the phase transition in Cu-substituted iron-based superconductors with a newly developed real-space Green’s function method. We find that Cu substitution has a strong effect on the orbital-selective Mott transition introduced by the Hund’s rule coupling. The redistribution of the orbital occupancy, which is caused by the increase of the Hund’s rule coupling, gives rise to the Mott–Hubbard metal-insulator transition in the half-filled dxy orbital. We also find that more and more electronic states appear inside the Mott gap of the dxy orbital with increasing Cu substitution, and the in-gap states around the Fermi level are strongly localized at some specific lattice sites. Further, a distinctive phase diagram, obtained for the Cu-substituted Fe-based superconductors, displays an orbital-selective insulating phase as a result of the cooperative effect of the Hund’s rule coupling and the impurity-induced disorder. (paper)

  20. Hyperspectral band selection based on consistency-measure of neighborhood rough set theory

    International Nuclear Information System (INIS)

    Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu; Chen, Yuehua; Xu, Zhen

    2016-01-01

    Band selection is a well-known approach for reducing dimensionality in hyperspectral imaging. In this paper, a band selection method based on consistency-measure of neighborhood rough set theory (CMNRS) was proposed to select informative bands from hyperspectral images. A decision-making information system was established by the reflection spectrum of soybeans’ hyperspectral data between 400 nm and 1000 nm wavelengths. The neighborhood consistency-measure, which reflects not only the size of the decision positive region, but also the sample distribution in the boundary region, was used as the evaluation function of band significance. The optimal band subset was selected by a forward greedy search algorithm. A post-pruning strategy was employed to overcome the over-fitting problem and find the minimum subset. To assess the effectiveness of the proposed band selection technique, two classification models (extreme learning machine (ELM) and random forests (RF)) were built. The experimental results showed that the proposed algorithm can effectively select key bands and obtain satisfactory classification accuracy. (paper)

  1. Efficient Banknote Recognition Based on Selection of Discriminative Regions with One-Dimensional Visible-Light Line Sensor.

    Science.gov (United States)

    Pham, Tuyen Danh; Park, Young Ho; Kwon, Seung Yong; Park, Kang Ryoung; Jeong, Dae Sik; Yoon, Sungsoo

    2016-03-04

    Banknote papers are automatically recognized and classified in various machines, such as vending machines, automatic teller machines (ATM), and banknote-counting machines. Previous studies on automatic classification of banknotes have been based on the optical characteristics of banknote papers. On each banknote image, there are regions more distinguishable than others in terms of banknote types, sides, and directions. However, there has been little previous research on banknote recognition that has addressed the selection of distinguishable areas. To overcome this problem, we propose a method for recognizing banknotes by selecting more discriminative regions based on similarity mapping, using images captured by a one-dimensional visible light line sensor. Experimental results with various types of banknote databases show that our proposed method outperforms previous methods.

  2. Efficient Banknote Recognition Based on Selection of Discriminative Regions with One-Dimensional Visible-Light Line Sensor

    Directory of Open Access Journals (Sweden)

    Tuyen Danh Pham

    2016-03-01

    Full Text Available Banknote papers are automatically recognized and classified in various machines, such as vending machines, automatic teller machines (ATM), and banknote-counting machines. Previous studies on automatic classification of banknotes have been based on the optical characteristics of banknote papers. On each banknote image, there are regions more distinguishable than others in terms of banknote types, sides, and directions. However, there has been little previous research on banknote recognition that has addressed the selection of distinguishable areas. To overcome this problem, we propose a method for recognizing banknotes by selecting more discriminative regions based on similarity mapping, using images captured by a one-dimensional visible light line sensor. Experimental results with various types of banknote databases show that our proposed method outperforms previous methods.

  3. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
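
    A hedged simulation sketch of the IPTW pitfall discussed above: adjusting for a pure cause of exposure (an instrument-like variable z) in the propensity model. Data-generating values are invented; this is not the paper's simulation design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # pure cause of the exposure only
x = rng.normal(size=n)                      # true confounder
a = rng.binomial(1, 1 / (1 + np.exp(-(1.5 * z + x))))   # treatment
y = 2.0 * a + x + rng.normal(size=n)        # outcome; true effect = 2

def iptw_effect(covariates):
    """IPTW difference of weighted means under a logistic propensity model."""
    ps = LogisticRegression().fit(covariates, a).predict_proba(covariates)[:, 1]
    w = a / ps + (1 - a) / (1 - ps)
    return (np.average(y[a == 1], weights=w[a == 1])
            - np.average(y[a == 0], weights=w[a == 0]))

print(iptw_effect(x.reshape(-1, 1)))        # adjusts for the confounder only
print(iptw_effect(np.column_stack([x, z]))) # adding z destabilizes the estimate
```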

  4. Assessment of different quit smoking methods selected by patients in tobacco cessation centers in Iran

    Directory of Open Access Journals (Sweden)

    Gholamreza Heydari

    2015-01-01

    Full Text Available Background: Health systems play key roles in identifying tobacco users and providing evidence-based care to help them quit. This treatment includes different methods such as simple medical consultation, medication, and telephone counseling. The aim of this study was to assess the different quit smoking methods selected by patients in tobacco cessation centers in Iran, in order to identify those most appropriate for the country's health system. Methods: In this cross-sectional and descriptive study, a random sample of all quit centers at the country level was used to obtain a representative sample. Patients completed a self-administered questionnaire which contained 10 questions regarding the quality, cost, effect, side effects and results of the quitting methods, using a 5-point Likert-type scale. Percentages, frequencies, means, t-tests, and variance analyses were computed for all study variables. Results: A total of 1063 smokers returned completed survey questionnaires. The most frequently used methods were Nicotine Replacement Therapy (NRT) and combination therapy (NRT and counseling), with 228 and 163 individuals reporting these, respectively. The least used methods were hypnotism (n = 8) and quit and win (n = 17). The methods which gained the maximum scores were, respectively, the combined method, the personal method and Champix, with means of 21.4, 20.4 and 18.4. The minimum scores were for e-cigarettes, hypnotism and education, with means of 12.8, 11 and 10.8, respectively. There were significant differences in mean scores across different cities and different methods. Conclusions: According to smokers' selections, combined therapy, personal methods and Champix are the most effective methods for quitting smoking, and these methods should be given greater consideration in the country's health system.

  5. Variable Selection via Partial Correlation.

    Science.gov (United States)

    Li, Runze; Liu, Jingyuan; Lou, Lejia

    2017-07-01

    A partial correlation based variable selection method was proposed for normal linear regression models by Bühlmann, Kalisch and Maathuis (2010) as a comparable alternative to regularization methods for variable selection. This paper addresses two important issues related to the partial correlation based variable selection method: (a) whether this method is sensitive to the normality assumption, and (b) whether this method is valid when the dimension of the predictor increases at an exponential rate of the sample size. To address issue (a), we systematically study this method for elliptical linear regression models. Our finding indicates that the original proposal may lead to inferior performance when the marginal kurtosis of the predictor is not close to that of the normal distribution. Our simulation results further confirm this finding. To ensure the superior performance of the partial correlation based variable selection procedure, we propose a thresholded partial correlation (TPC) approach to select significant variables in linear regression models. We establish the selection consistency of the TPC in the presence of ultrahigh dimensional predictors. Since the TPC procedure includes the original proposal as a special case, our theoretical results address issue (b) directly. As a by-product, the sure screening property of the first step of TPC was obtained. The numerical examples also illustrate that the TPC is competitively comparable to the commonly-used regularization methods for variable selection.
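
    A minimal sketch of the quantity the TPC procedure thresholds, computing a sample partial correlation from regression residuals; the thresholding rule and screening step of the actual procedure are not reproduced.

```python
import numpy as np

def partial_corr(y, xj, controls):
    """Sample partial correlation of y and xj given the control variables,
    computed from the residuals of least-squares projections."""
    Z = np.column_stack([np.ones(len(y)), controls])
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    rx = xj - Z @ np.linalg.lstsq(Z, xj, rcond=None)[0]
    return float(ry @ rx / np.sqrt((ry @ ry) * (rx @ rx)))

rng = np.random.default_rng(0)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 * x1 + 0.0 * x2 + rng.normal(size=n)
print(partial_corr(y, x1, x2.reshape(-1, 1)))  # large: x1 is a true predictor
print(partial_corr(y, x2, x1.reshape(-1, 1)))  # near zero: x2 is noise
```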

  6. Selection of materials using multi-criteria decision-making methods with minimum data

    Directory of Open Access Journals (Sweden)

    Shankar Chakraborty

    2013-07-01

    Full Text Available Selection of a material for a specific engineering component, which plays a significant role in its design and proper functioning, is often treated as a multi-criteria decision-making (MCDM) problem, where the most suitable material is to be chosen based on a given set of conflicting criteria. For solving these MCDM problems, designers do not generally know the optimal number of criteria required for arriving at the best decision. Those criteria should be independent of each other, and their number should usually be limited to seven plus or minus two. In this paper, five material selection problems are solved using three common MCDM techniques to demonstrate the effect of the number of criteria on the final rankings of the material alternatives. It is interesting to observe that the choice of the best suited material depends solely on the criterion having the maximum priority value. It is also found that among the three MCDM methods, the ranking performance of the VIKOR (Vlse Kriterijumska Optimizacija Kompromisno Resenje) method is the best.
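
    Since VIKOR ranks best in this record, a sketch of the standard (crisp) VIKOR computation on invented material scores; criterion weights and data are illustrative only.

```python
import numpy as np

def vikor(F, w, benefit, v=0.5):
    """VIKOR ranking sketch (lower Q is better). F: alternatives x criteria;
    w: criterion weights; benefit: True for benefit criteria, False for cost.
    Standard textbook formulation, not tied to the paper's case studies."""
    f_best = np.where(benefit, F.max(axis=0), F.min(axis=0))
    f_worst = np.where(benefit, F.min(axis=0), F.max(axis=0))
    D = (f_best - F) / (f_best - f_worst)   # normalized regret per criterion
    S = (w * D).sum(axis=1)                 # group utility
    R = (w * D).max(axis=1)                 # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

# four candidate materials scored on strength (benefit), cost (cost), density (cost)
F = np.array([[300.0, 5.0, 7.8], [420.0, 9.0, 7.9],
              [350.0, 6.5, 2.7], [280.0, 4.0, 4.5]])
Q = vikor(F, w=np.array([0.5, 0.3, 0.2]), benefit=np.array([True, False, False]))
print(np.argsort(Q))  # materials ranked best (lowest Q) to worst
```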

  7. An iterative method for selecting degenerate multiplex PCR primers.

    Science.gov (United States)

    Souvenir, Richard; Buhler, Jeremy; Stormo, Gary; Zhang, Weixiong

    2007-01-01

    Single-nucleotide polymorphism (SNP) genotyping is an important molecular genetics process, which can produce results that will be useful in the medical field. Because of inherent complexities in DNA manipulation and analysis, many different methods have been proposed for a standard assay. One of the proposed techniques for performing SNP genotyping requires amplifying regions of DNA surrounding a large number of SNP loci. To automate a portion of this particular method, it is necessary to select a set of primers for the experiment. Selecting these primers can be formulated as the Multiple Degenerate Primer Design (MDPD) problem. The Multiple, Iterative Primer Selector (MIPS) is an iterative beam-search algorithm for MDPD. Theoretical and experimental analyses show that this algorithm performs well compared with the limits of degenerate primer design. Furthermore, MIPS outperforms an existing algorithm that was designed for a related degenerate primer selection problem.

  8. Rough-fuzzy clustering and unsupervised feature selection for wavelet based MR image segmentation.

    Directory of Open Access Journals (Sweden)

    Pradipta Maji

    Full Text Available Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, judiciously integrating the merits of rough-fuzzy computing and multiresolution image analysis. The proposed method assumes that the major brain tissues, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties in MR images. The dyadic wavelet analysis is used to extract a scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method based on the maximum relevance-maximum significance criterion is introduced to select relevant and significant textural features for the segmentation problem, and a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.

  9. A simple method for finding explicit analytic transition densities of diffusion processes with general diploid selection.

    Science.gov (United States)

    Song, Yun S; Steinrücken, Matthias

    2012-03-01

    The transition density function of the Wright-Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright-Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation-selection balance.

  10. Selection of heat disposal methods for a Hanford Nuclear Energy Center

    International Nuclear Information System (INIS)

    Young, J.R.; Kannberg, L.D.; Ramsdell, J.V.; Rickard, W.H.; Watson, D.G.

    1976-06-01

    Selection of the best method for disposal of the waste heat from a large power generation center requires a comprehensive comparison of the costs and environmental effects. The objective is to identify the heat dissipation method with the minimum total economic and environmental cost. A 20-reactor HNEC will dissipate about 50,000 MWt of waste heat; a 40-reactor HNEC would release about 100,000 MWt. This is a much larger discharge of heat than has occurred from other concentrated industrial facilities, and consequently a special analysis is required to determine the permissibility of such large heat disposal and the best methods of disposal. It is possible that some methods of disposal will not be permissible because of excessive environmental effects, or that the optimum disposal method may include a combination of several methods. A preliminary analysis of the Hanford Nuclear Energy Center heat disposal problem is presented to determine the best methods for disposal and any obvious limitations on the amount of heat that can be released. The analysis is based, in part, on information from an interim conceptual study, a heat sink management analysis, and a meteorological analysis.

  11. A Highly Sensitive and Selective Method for the Determination of an Iodate in Table-salt Samples Using Malachite Green-based Spectrophotometry.

    Science.gov (United States)

    Konkayan, Mongkol; Limchoowong, Nunticha; Sricharoen, Phitchan; Chanthai, Saksit

    2016-01-01

    A simple, rapid, and sensitive malachite green-based spectrophotometric method for the selective trace determination of iodate has been developed and is presented for the first time. The method involves the liberation of iodine in the presence of an excess of iodide under acidic conditions, followed by an instantaneous reaction between the liberated iodine and malachite green dye. The optimum condition was obtained with a buffer solution of pH 5.2 in the presence of 40 mg L⁻¹ potassium iodide and 1.5 × 10⁻⁵ M malachite green for a 5-min incubation time. The iodate contents of some table-salt samples were in the range of 26 to 45 mg kg⁻¹, while those of drinking water, tap water, canal water, and seawater samples were not detectable (below the limit of detection of 96 ng mL⁻¹), with satisfactory recoveries of between 93 and 108%. The results agreed with those obtained using ICP-OES for comparison.

  12. On dynamic selection of households for direct marketing based on Markov chain models with memory

    NARCIS (Netherlands)

    Otter, Pieter W.

    A simple, dynamic selection procedure is proposed, based on conditional expected profits using Markov chain models with memory. The method is easy to apply: only frequencies and mean values have to be calculated or estimated. The method is empirically illustrated using a data set from a charitable …
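
    The procedure lends itself to a very small illustration. The Python sketch below, with entirely made-up response histories, donation value, and mailing cost, estimates second-order ("with memory") response frequencies and mails only households whose conditional expected profit is positive. It is a reading of the abstract, not the author's actual procedure.

```python
import numpy as np

# Toy response histories: 1 = responded to a mailing, 0 = did not.
histories = [
    [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 0], [1, 0, 0, 1],
]
mean_gift, mailing_cost = 25.0, 2.0   # hypothetical values

# Estimate P(respond | previous two outcomes) from pooled frequencies
counts = np.zeros((2, 2, 2))
for h in histories:
    for t in range(2, len(h)):
        counts[h[t - 2], h[t - 1], h[t]] += 1
p_respond = counts[..., 1] / counts.sum(axis=-1).clip(min=1)

# Select a household if its conditional expected profit is positive
for h in histories:
    p = p_respond[h[-2], h[-1]]
    profit = p * mean_gift - mailing_cost
    print(h, f"E[profit] = {profit:.2f}", "-> mail" if profit > 0 else "-> skip")
```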

  13. Personal profile of medical students selected through a knowledge-based exam only: are we missing suitable students?

    Directory of Open Access Journals (Sweden)

    Milena Abbiati

    2016-04-01

    Full Text Available Introduction: A consistent body of literature highlights the importance of a broader approach to selecting medical school candidates, assessing both cognitive capacity and individual characteristics. However, selection in a great number of medical schools worldwide is still based on knowledge exams, a procedure that might overlook students with the personal characteristics needed for future medical practice. We investigated whether the personal profile of students selected through a knowledge-based exam differed from that of those not selected. Methods: Students applying for medical school (N=311) completed questionnaires assessing motivations for becoming a doctor, learning approaches, personality traits, empathy, and coping styles. Selection was based on the results of MCQ tests. Principal component analysis was used to draw a profile of the students. Differences between selected and non-selected students were examined by multivariate ANOVAs, and their impact on selection by logistic regression analysis. Results: Students demonstrating a profile of diligence, with higher conscientiousness, a deep learning approach, and task-focused coping, were more frequently selected (p=0.01). Other personal characteristics such as motivation, sociability, and empathy did not differ significantly between selected and non-selected students. Conclusion: Selection through a knowledge-based exam privileged diligent students. It neither advantaged nor excluded candidates with a more humane profile.

  14. Methods Dealing with Complexity in Selecting Joint Venture Contractors for Large-Scale Infrastructure Projects

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-01-01

    Full Text Available The magnitude of business dynamics has increased rapidly due to the increased complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly difficult for a single contractor to "go it alone". As a consequence, joint venture contractors with diverse strengths and weaknesses cooperatively bid for contracts. Understanding project complexity and deciding on the optimal joint venture contractor is challenging. This paper studies how to select joint venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem: one is based on ideal points and the other on balanced ideal advantages. Both methods consider individual differences in expert judgment and contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China is used to demonstrate how to apply these two methods and their advantages.

  15. A method to select aperture margin in collimated spot scanning proton therapy

    International Nuclear Information System (INIS)

    Wang, Dongxu; Smith, Blake R; Gelover, Edgar; Flynn, Ryan T; Hyer, Daniel E

    2015-01-01

    The use of a collimator or aperture may sharpen the lateral dose gradient in spot scanning proton therapy. However, to date, there has been no standard method for determining the aperture margin for a single field in collimated spot scanning proton therapy. This study describes a theoretical framework for selecting the optimal aperture margin for a single field, and also presents the spot spacing limit required for the optimal aperture margin to exist. Since, for a proton pencil beam partially intercepted by a collimator, the maximum point dose (spot center) shifts away from the original pencil beam central axis, we propose that the optimal margin should equal the maximum pencil beam center shift, under the condition that the spot spacing is small with respect to this shift; the shift can be determined numerically from beam modeling data. A test case is presented which demonstrates agreement with the prediction made by the proposed method. This method may be implemented when apertures are applied in a commercial treatment planning system. (note)

  16. A Developed Meta-model for Selection of Cotton Fabrics Using Design of Experiments and TOPSIS Method

    Science.gov (United States)

    Chakraborty, Shankar; Chatterjee, Prasenjit

    2017-12-01

    Selection of cotton fabrics for providing optimal clothing comfort is often considered a multi-criteria decision-making problem, consisting of an array of candidate alternatives to be evaluated based on several conflicting properties. In this paper, design of experiments and the technique for order preference by similarity to ideal solution (TOPSIS) are integrated to develop regression meta-models for identifying the most suitable cotton fabrics with respect to the computed TOPSIS scores. The applicability of the adopted method is demonstrated using two real-life examples. The developed models can also identify the statistically significant fabric properties, and their interactions, affecting the measured TOPSIS scores and final selection decisions. There exists a good degree of congruence between the ranking patterns derived using these meta-models and the existing methods for cotton fabric ranking and subsequent selection.
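
    The TOPSIS scoring that the meta-models approximate reduces to a short computation; below is a minimal Python sketch with hypothetical fabric data and weights (the property names are invented, not the paper's).

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness coefficient of each alternative; higher is better."""
    M = np.asarray(matrix, float)
    N = M / np.linalg.norm(M, axis=0)          # vector normalization
    V = N * weights                            # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical fabrics: cols = air permeability, thermal resistance, weight
fabrics = [[120, 0.030, 140], [95, 0.042, 160], [150, 0.025, 180]]
scores = topsis(fabrics, np.array([0.4, 0.4, 0.2]), benefit=[True, True, False])
print(scores.round(3))  # rank fabrics by descending score
```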

  17. Selective adsorption-desorption method for the enrichment of krypton

    International Nuclear Information System (INIS)

    Yuasa, Y.; Ohta, M.; Watanabe, A.; Tani, A.; Takashima, N.

    1975-01-01

    The selective adsorption-desorption method has been developed as an effective means of enriching krypton and xenon gases. A series of laboratory-scale tests was performed to provide basic data on the method when applied to off-gas streams of nuclear power plants. In the first step of the enrichment process, krypton was adsorbed on solid adsorbents from dilute mixtures with air at temperatures ranging from −50 °C to −170 °C. After complete breakthrough was obtained, the adsorption bed was evacuated at low temperature by a vacuum pump. By combining these two steps, krypton was highly enriched on the adsorbents, and the enrichment factor for krypton was calculated as the product of the individual enrichment factors of each step. Two types of adsorbents, coconut charcoal and molecular sieves 5A, were used. Experimental results showed that the present method gave a greater enrichment factor than the conventional method, which uses the selective adsorption step only. (U.S.)

  18. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification of Chinese landform types using the factor importance analysis of random forests and the gray-level co-occurrence matrix (GLCM). Based on a 1 km DEM of China, combinations of terrain factors extracted from the DEM are selected by correlation analysis and Sheffield's entropy method. A random forest classifier is applied to evaluate the importance of the terrain factors, which are then used as multi-scale segmentation thresholds. The GLCM is then computed to build the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference; the overall accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.
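
    The factor-importance step can be illustrated with scikit-learn's random forest; the terrain-factor names and the synthetic labels below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical terrain factors derived from a DEM (per 1 km cell)
names = ["relief", "slope", "roughness", "curvature", "elevation"]
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy landform label

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(names, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:10s} {imp:.3f}")
# Factors with high importance would drive the multi-scale segmentation.
```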

  19. A method for selecting cis-acting regulatory sequences that respond to small molecule effectors

    Directory of Open Access Journals (Sweden)

    Allas Ülar

    2010-08-01

    Full Text Available Abstract Background Several cis-acting regulatory sequences functioning at the level of mRNA or nascent peptide and specifically influencing transcription or translation have been described. These regulatory elements often respond to specific chemicals. Results We have developed a method that allows us to select cis-acting regulatory sequences that respond to diverse chemicals. The method is based on the β-lactamase gene containing a random sequence inserted into the beginning of the ORF. Several rounds of selection are used to isolate sequences that suppress β-lactamase expression in response to the compound under study. We have isolated sequences that respond to erythromycin, troleandomycin, chloramphenicol, meta-toluate and homoserine lactone. By introducing synonymous and non-synonymous mutations we have shown that at least in the case of erythromycin the sequences act at the peptide level. We have also tested the cross-activities of the constructs and found that in most cases the sequences respond most strongly to the compound on which they were isolated. Conclusions Several selected peptides showed ligand-specific changes in amino acid frequencies, but no consensus motif could be identified. This is consistent with previous observations on natural cis-acting peptides, showing that it is often impossible to demonstrate a consensus. Applying the currently developed method on a larger scale, by selecting and comparing an extended set of sequences, might allow the sequence rules underlying the activity of cis-acting regulatory peptides to be identified.

  20. New visible and selective DNA staining method in gels with tetrazolium salts.

    Science.gov (United States)

    Paredes, Aaron J; Naranjo-Palma, Tatiana; Alfaro-Valdés, Hilda M; Barriga, Andrés; Babul, Jorge; Wilson, Christian A M

    2017-01-15

    DNA staining in gels has historically been carried out using silver staining and fluorescent dyes like ethidium bromide and SYBR Green I (SGI). Using fluorescent dyes allows recovery of the analyte, but requires instruments such as a transilluminator or fluorimeter to visualize the DNA. Here we describe a new and simple method that allows DNA visualization with the naked eye by generating a colored precipitate. It works by soaking the acrylamide or agarose DNA gel in an SGI and nitro blue tetrazolium (NBT) solution that, when exposed to sunlight, produces a purple insoluble formazan precipitate that remains in the gel after exposure to light. A calibration curve made with a DNA standard established a detection limit of approximately 180 pg/band at 500 bp. The selectivity of this assay was determined using different biomolecules, demonstrating high selectivity for DNA. The integrity and functionality of the DNA recovered from gels were determined by digestion with a restriction enzyme and by transforming competent cells after the different staining methods, respectively. Our method showed the best performance among the dyes employed. Based on its specificity, low cost, and adequacy for field work, this new methodology has enormous potential benefits for research and industry.

  1. Ultrahigh-dimensional variable selection method for whole-genome gene-gene interaction analysis

    Directory of Open Access Journals (Sweden)

    Ueki Masao

    2012-05-01

    Full Text Available Abstract Background Genome-wide gene-gene interaction analysis using single nucleotide polymorphisms (SNPs) is an attractive way to identify genetic components that confer susceptibility to complex human diseases. Individual hypothesis testing for SNP-SNP pairs, as in a common genome-wide association study (GWAS), involves difficulty in setting an overall p-value due to the complicated correlation structure, namely the multiple testing problem, which causes unacceptable false-negative results. Moreover, the number of SNP-SNP pairs is far larger than the sample size, the so-called large-p-small-n problem, which precludes simultaneous analysis using multiple regression. A method that overcomes these issues is thus needed. Results We adopt an up-to-date method for ultrahigh-dimensional variable selection, termed sure independence screening (SIS), to appropriately handle the enormous number of SNP-SNP interactions by including them as predictor variables in logistic regression. We propose a ranking strategy using promising dummy coding methods, followed by the variable selection procedure of the SIS method, suitably modified for gene-gene interaction analysis. We also implemented the procedures in a software program, EPISIS, using cost-effective GPGPU (general-purpose computing on graphics processing units) technology. EPISIS can complete an exhaustive search for SNP-SNP interactions in a standard GWAS dataset within several hours. The proposed method works successfully in simulation experiments and in application to real WTCCC (Wellcome Trust Case-Control Consortium) data. Conclusions Based on the machine-learning principle, the proposed method gives a powerful and flexible genome-wide search for various patterns of gene-gene interaction.
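
    In spirit, SIS reduces to ranking candidate predictors by a marginal association measure and keeping the top few. The Python sketch below scores SNP-SNP product terms by absolute correlation with the phenotype on synthetic genotypes; the paper's actual procedure uses logistic regression and dedicated dummy codings, so this is only a conceptual illustration.

```python
import numpy as np

def sis_interaction_screen(G, y, top_k=10):
    """Rank SNP-SNP product terms by absolute marginal correlation with y."""
    n, p = G.shape
    scores = []
    for i in range(p):
        for j in range(i + 1, p):
            z = G[:, i] * G[:, j]              # simple product coding
            if z.std() == 0:
                continue
            r = abs(np.corrcoef(z, y)[0, 1])
            scores.append((r, i, j))
    return sorted(scores, reverse=True)[:top_k]

rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(500, 40)).astype(float)   # genotypes 0/1/2
y = ((G[:, 4] * G[:, 9]) > 2).astype(float)            # planted interaction
print(sis_interaction_screen(G, y, top_k=3))           # pair (4, 9) should lead
```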

  2. A laser ablation ICP-MS based method for multiplexed immunoblot analysis

    DEFF Research Database (Denmark)

    de Bang, Thomas Christian; Petersen, Jørgen; Pedas, Pai Rosager

    2015-01-01

    … developed a multiplexed antibody-based assay and analysed selected PSII subunits in barley (Hordeum vulgare L.). A selection of antibodies were labelled with specific lanthanides and immunoreacted with thylakoids exposed to Mn deficiency after western blotting. Subsequently, western blot membranes were analysed by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), which allowed selective and relative quantitative analysis via the different lanthanides. The method was evaluated against established liquid chromatography electrospray ionization tandem mass spectrometry (LC-MS/MS) … by more than one technique. The developed method enables a higher number of proteins to be multiplexed in comparison to existing immunoassays. Furthermore, multiplexed protein analysis by LA-ICP-MS provides an analytical platform with high throughput appropriate for screening large collections of plants.

  3. ALIS-FLP: Amplified ligation selected fragment-length polymorphism method for microbial genotyping

    DEFF Research Database (Denmark)

    Brillowska-Dabrowska, A.; Wianecka, M.; Dabrowski, Slawomir

    2008-01-01

    A DNA fingerprinting method known as ALIS-FLP (amplified ligation selected fragment-length polymorphism) has been developed for selective and specific amplification of restriction fragments from TspRI restriction endonuclease digested genomic DNA. The method is similar to AFLP, but differs …

  4. Wavelength selection method with standard deviation: application to pulse oximetry.

    Science.gov (United States)

    Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija

    2011-07-01

    Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue, within the therapeutic window. One significant shortcoming of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains whose standard deviation is minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
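
    The figure-of-merit construction can be emulated directly: stack repeated spectra, compute the standard deviation per wavelength, and pick the most stable bands. The Python sketch below does this on synthetic spectra with an injected temporal disturbance; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical repeated spectra: rows = time samples, cols = wavelengths
wavelengths = np.arange(600, 1001)                 # nm
base = 0.5 + 0.3 * np.sin(wavelengths / 40.0)
temporal_noise = rng.normal(0, 0.02, size=(100, 1)) * np.cos(wavelengths / 25.0)
spectra = base + temporal_noise + rng.normal(0, 0.002, size=(100, wavelengths.size))

std_map = spectra.std(axis=0)                      # figure of merit per wavelength
stable = wavelengths[np.argsort(std_map)[:2]]      # two least noise-sensitive bands
print("candidate wavelengths:", stable)
```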

  5. Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliyev

    2015-01-01

    Full Text Available Personnel evaluation is an important process in human resource management. Its multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. After that, a modified fuzzy VIKOR is proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of the results of the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method. Firstly, from a computational complexity point of view, the presented model is effective. Secondly, it achieves a higher acceptable advantage than the fuzzy VIKOR method.

  6. Clustering based gene expression feature selection method: A computational approach to enrich the classifier efficiency of differentially expressed genes

    KAUST Repository

    Abusamra, Heba; Bajic, Vladimir B.

    2016-01-01

    … decrease the computational time and cost, but also improve classification performance. However, among the different feature selection approaches, most suffer from several problems, such as lack of robustness and validation issues. Here, we …

  7. Development of a thermodynamic data base for selected heavy metals

    International Nuclear Information System (INIS)

    Hageman, Sven; Scharge, Tina; Willms, Thomas

    2015-07-01

    The report on the development of a thermodynamic data base for selected heavy metals covers the description of the experimental methods and the thermodynamic models for chromate, dichromate, manganese(II), cobalt, nickel, copper(I), copper(II), mercury(0) and mercury(I), mercury(II), and arsenate.

  8. The influence of selection on the evolutionary distance estimated from the base changes observed between homologous nucleotide sequences.

    Science.gov (United States)

    Otsuka, J; Kawai, Y; Sugaya, N

    2001-11-21

    In most studies of molecular evolution, the nucleotide base at a site is assumed to change at an apparent rate under functional constraint, and the comparison of base changes between homologous genes is thought to yield an evolutionary distance corresponding to the site-average change rate multiplied by the divergence time. However, this view has not been sufficiently successful in estimating the divergence times of species, mostly resulting in the construction of a tree topology without a time scale. In the present paper, this problem is investigated theoretically by considering that observed base changes are the result of comparing mutated bases that have survived selection. In the case of weak selection, the time course of base changes due to mutation and selection can be obtained analytically, leading to a theoretical equation showing how selection influences the evolutionary distance estimated from the enumeration of base changes. This result provides a new method for estimating the divergence time more accurately from observed base changes, by evaluating both the strength of selection and the mutation rate. The validity of this method is verified by analysing the base changes observed at the third codon positions of amino acid residues with four-fold codon degeneracy in the protein genes of mammalian mitochondria; i.e., the ratios of estimated divergence times are fairly consistent with a series of fossil records of mammals. This analysis also suggests that the mutation rates in mitochondrial genomes are almost the same in different lineages of mammals, and that the lineage-specific base-change rates indicated previously are due to selection, probably arising from the preference of transfer RNAs for particular codons.

  9. Method for automatic control rod operation using rule-based control

    International Nuclear Information System (INIS)

    Kinoshita, Mitsuo; Yamada, Naoyuki; Kiguchi, Takashi

    1988-01-01

    An automatic control rod operation method using rule-based control is proposed. Its features are as follows: (1) a production system to recognize plant events, determine control actions, and realize fast inference (fast selection of a suitable production rule); (2) use of the fuzzy control technique to determine quantitative control variables. The method's performance was evaluated by simulation tests of automatic control rod operation at a BWR plant start-up. The results were as follows: (1) the performance, in terms of the stabilization of controlled variables and the time required for reactor start-up, was superior to that of other methods such as the PID control and program control methods; (2) the process time to select and interpret a suitable production rule, which was the same as that required for event recognition or determination of a control action, was short enough (below 1 s) for real-time control. The results showed that the method is effective for automatic control rod operation. (author)

  10. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available Empirical likelihood is a very popular method that has been widely used in the fields of artificial intelligence (AI) and data mining. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously when the model is defined by moment restrictions, some of which are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions, and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite-sample properties of the estimators. A real-life example is also presented to illustrate the new methodology.

  11. Residue-based Coordinated Selection and Parameter Design of Multiple Power System Stabilizers (PSSs)

    DEFF Research Database (Denmark)

    Su, Chi; Hu, Weihao; Fang, Jiakun

    2013-01-01

    … data from time-domain simulations. Then a coordinated approach for multiple PSS selection and parameter design based on the residue method is proposed and realized in MATLAB m-files. Particle swarm optimization (PSO) is adopted in the coordination process. The IEEE 39-bus New England system model …

  12. A Spectrum Handoff Scheme for Optimal Network Selection in NEMO Based Cognitive Radio Vehicular Networks

    Directory of Open Access Journals (Sweden)

    Krishan Kumar

    2017-01-01

    Full Text Available When a mobile network changes its point of attachment in Cognitive Radio (CR) vehicular networks, the Mobile Router (MR) requires spectrum handoff. Network Mobility (NEMO) in CR vehicular networks is concerned with the management of this movement. In future NEMO-based CR vehicular network deployments, multiple radio access networks may coexist in overlapping areas, having different characteristics in terms of multiple attributes. A CR vehicular node may have the capability to make calls for two or more types of non-safety services, such as voice, video, and best effort, simultaneously. Hence, it becomes difficult for the MR to select the optimal network for the spectrum handoff. This can be done by performing spectrum handoff using Multiple Attribute Decision Making (MADM) methods, which is the objective of this paper. MADM methods such as grey relational analysis and cost-based methods are used. The application of MADM methods provides a wider and optimal choice among the available networks with quality of service. Numerical results reveal that the proposed scheme is effective for spectrum handoff decisions and optimal network selection, with reduced complexity, in NEMO-based CR vehicular networks.
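
    One of the MADM methods named, grey relational analysis, is easy to sketch. The Python code below ranks hypothetical candidate networks by grey relational grade against the ideal sequence (ζ = 0.5 is the customary distinguishing coefficient); the attributes and values are invented.

```python
import numpy as np

def grey_relational_grade(matrix, benefit, zeta=0.5):
    """Grey relational grade of each network against the ideal reference."""
    M = np.asarray(matrix, float)
    # Normalize: larger-is-better or smaller-is-better per attribute
    N = np.where(benefit,
                 (M - M.min(0)) / (M.max(0) - M.min(0)),
                 (M.max(0) - M) / (M.max(0) - M.min(0)))
    delta = np.abs(1.0 - N)                        # distance to ideal sequence
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)

# Hypothetical networks: bandwidth (Mbps), delay (ms), cost (per MB)
nets = [[10, 60, 0.5], [50, 120, 0.9], [25, 40, 0.7]]
grade = grey_relational_grade(nets, benefit=[True, False, False])
print(grade.round(3))   # the MR hands off to the highest-grade network
```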

  13. The Implementation of Analytical Hierarchy Process Method for Outstanding Achievement Scholarship Reception Selection at Universal University of Batam

    Science.gov (United States)

    Marfuah; Widiantoro, Suryo

    2017-12-01

    Universal University of Batam offers an outstanding-achievement scholarship to current students each academic year. Given the large number of students interested in obtaining it, the selection team must be able to filter the applicants and choose the eligible ones. The selection process starts with evaluation and judgement by experts. There were five criteria as the basis of selection, each with three alternatives that had to be considered. Based on university policy, the maximum number of recipients is five for each of six study programs: art of music, dance, industrial engineering, environmental engineering, telecommunication engineering, and software engineering. Because expert judgement is subjective, the AHP method was used to support consistent decision making by processing pairwise comparison matrices between criteria and selected alternatives, determining the priority order of the criteria and alternatives used. The results of these calculations supported the decision on which students were eligible to receive scholarships, with the final AHP calculation giving criterion priorities A (0.37), C (0.23), E (0.21), D (0.14), and B (0.06) with a consistency ratio of 0.05, and alternative priorities 1 (0.63), 2 (0.26), and 3 (0.11) with a consistency ratio of 0.03, where each CR ≤ 0.1 is considered consistent.
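
    For readers unfamiliar with the mechanics, the Python sketch below shows the standard AHP computation the abstract refers to: a priority vector from the principal eigenvector of a pairwise comparison matrix, and the consistency ratio (CR ≤ 0.1 is the usual acceptance rule). The 5×5 comparison matrix is invented for illustration and is not the paper's data.

```python
import numpy as np

def ahp_priorities(A):
    """Priority vector (principal eigenvector) and consistency ratio."""
    A = np.asarray(A, float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
    return w, ci / ri

# Hypothetical pairwise comparisons of five scholarship criteria A..E
A = [[1, 5, 2, 3, 2],
     [1/5, 1, 1/3, 1/2, 1/3],
     [1/2, 3, 1, 2, 1],
     [1/3, 2, 1/2, 1, 1/2],
     [1/2, 3, 1, 2, 1]]
w, cr = ahp_priorities(A)
print(w.round(2), f"CR = {cr:.2f}")   # acceptable when CR <= 0.10
```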

  14. A multicore based parallel image registration method.

    Science.gov (United States)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm was shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a non-regular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform.

  15. Evaluation and selection of in-situ leaching mining method using analytic hierarchy process

    International Nuclear Information System (INIS)

    Zhao Heyong; Tan Kaixuan; Liu Huizhen

    2007-01-01

    According to the complicated conditions and main influencing factors of in-situ leaching mining, a model and process of analytic hierarchy are established for the evaluation and selection of in-situ leaching mining methods, based on the analytic hierarchy process. Taking a uranium mine in Xinjiang, China, as an example, the application of this model is presented. The results of the analyses and calculations indicate that acid leaching is the optimal option. (authors)

  16. Support System Model for Value based Group Decision on Roof System Selection

    Directory of Open Access Journals (Sweden)

    Christiono Utomo

    2011-02-01

    Full Text Available A group decision support system is required for value-based decisions because there are different concerns caused by differing preferences, experiences, and backgrounds. It enables each decision-maker to evaluate and rank the solution alternatives before engaging in negotiation with other decision-makers. Stakeholders in multi-criteria decision-making problems usually evaluate the alternative solutions from different perspectives, making it possible to have a dominant solution among the alternatives. Each stakeholder needs to identify the goals that can be optimized and those that can be compromised in order to reach an agreement with other stakeholders. This paper presents a group decision model involving three decision-makers in the selection of a suitable system for a building's roof. The objective of the research is to find an agreement-options model and coalition algorithms for multi-person decisions with two main value preferences, function and cost. The methodology combines a value analysis method using the Function Analysis System Technique (FAST); life cycle cost analysis; a group decision analysis method based on the Analytic Hierarchy Process (AHP) over satisfying options; and a game-theory-based agent system to develop agreement options and coalition formation for the support system. The support system bridges the theoretical gap between automated design in the construction domain and automated negotiation in the information technology domain by providing a structured methodology that can lead to a systematic support system and automated negotiation. It will contribute to the value management body of knowledge as an advanced method for the creativity and analysis phases, since the practice of this knowledge is teamwork-based. In the case of roof system selection, it reveals the start of the first negotiation round. Some of the solutions are not an option because no individual stakeholder or coalition of stakeholders desires to select them. The result indicates …

  17. A Fast Adaptive Receive Antenna Selection Method in MIMO System

    Directory of Open Access Journals (Sweden)

    Chaowei Wang

    2013-01-01

    Full Text Available Antenna selection has been regarded as an effective method to acquire the diversity benefits of multiple antennas while potentially reducing hardware costs. This paper focuses on receive antenna selection. Based on the proportion between the number of total receive antennas and the number of selected antennas, and on the influence of each antenna on system capacity, we propose a fast adaptive antenna selection algorithm for wireless multiple-input multiple-output (MIMO) systems. Mathematical analysis and numerical results show that our algorithm significantly reduces the computational complexity and memory requirements and, at the same time, achieves considerable system capacity gain compared with the optimal selection technique.
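
    A simple version of capacity-driven receive antenna selection can be sketched as follows (Python, random channel). This greedy loop is a generic baseline, not the authors' fast adaptive algorithm, which further exploits the ratio of selected to total antennas to cut complexity.

```python
import numpy as np

def greedy_receive_selection(H, n_select, snr=10.0):
    """Greedily add receive antennas (rows of H) that maximize capacity."""
    nr, nt = H.shape
    chosen = []
    for _ in range(n_select):
        best, best_cap = None, -np.inf
        for r in set(range(nr)) - set(chosen):
            Hs = H[chosen + [r], :]
            cap = np.log2(np.linalg.det(
                np.eye(nt) + (snr / nt) * Hs.conj().T @ Hs).real)
            if cap > best_cap:
                best, best_cap = r, cap
        chosen.append(best)
    return chosen, best_cap

rng = np.random.default_rng(4)
H = (rng.normal(size=(8, 2)) + 1j * rng.normal(size=(8, 2))) / np.sqrt(2)
print(greedy_receive_selection(H, n_select=3))
```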

  18. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Directory of Open Access Journals (Sweden)

    Lei Shi

    2018-01-01

    Full Text Available In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA and tabu search (TS is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy.
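
    A compact sketch of the GA backbone that GATS builds on is given below (Python, synthetic data). It implements plain tournament selection, one-point crossover, and bit-flip mutation with cross-validated accuracy as fitness; the paper's distinctive elements, the prematurity index and the tabu-search mutation applied to high-fitness individuals, are omitted here for brevity.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)

def fitness(mask, X, y):
    """Cross-validated accuracy of a k-NN classifier on the masked features."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=15, p_mut=0.05):
    n_feat = X.shape[1]
    P = rng.integers(0, 2, size=(pop, n_feat))        # population of bitmasks
    for _ in range(gens):
        f = np.array([fitness(m, X, y) for m in P])
        # Tournament selection
        idx = [max(rng.choice(pop, 2), key=lambda i: f[i]) for _ in range(pop)]
        P = P[idx]
        # One-point crossover on consecutive pairs
        for i in range(0, pop - 1, 2):
            c = rng.integers(1, n_feat)
            P[i, c:], P[i + 1, c:] = P[i + 1, c:].copy(), P[i, c:].copy()
        # Bit-flip mutation
        P ^= (rng.random(P.shape) < p_mut).astype(P.dtype)
    f = np.array([fitness(m, X, y) for m in P])
    return P[f.argmax()], f.max()

X = rng.normal(size=(150, 30)); y = (X[:, 0] + X[:, 5] > 0).astype(int)
mask, score = ga_select(X, y)
print(np.flatnonzero(mask), round(score, 3))
```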

  20. A parallel optimization method for product configuration and supplier selection based on interval

    Science.gov (United States)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important approach to product development, and supplier selection is an essential component of supply chain management. To reduce procurement risk and maximize enterprise profits, this study proposes combining product configuration with supplier selection and expressing the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was employed to locate the Pareto-optimal solutions of the interval multi-objective optimization model.

  1. Wrapper-based selection of genetic features in genome-wide association studies through fast matrix operations

    Science.gov (United States)

    2012-01-01

    … test data than a classifier trained using features selected by a statistical p-value-based filter, which is currently the most popular approach for constructing predictive models in GWAS. Conclusions Greedy RLS is the first known implementation of a machine-learning-based method with the capability to conduct wrapper-based feature selection on an entire GWAS containing several thousand examples and over 400,000 variants. In our experiments, greedy RLS selected a highly predictive subset of genetic variants in a fraction of the time spent by wrapper-based selection methods used together with SVM classifiers. The proposed algorithms are freely available as part of the RLScore software library at http://users.utu.fi/aatapa/RLScore/. PMID:22551170
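
    As a toy analogue of greedy RLS, the following Python sketch performs greedy forward selection with a ridge regression refit at every step. The real greedy RLS avoids this refitting via matrix-algebra shortcuts and scores candidates by leave-one-out error, so this is a conceptual illustration only, on synthetic data.

```python
import numpy as np

def greedy_ridge_selection(X, y, n_select, lam=1.0):
    """Greedy forward selection scoring candidates by ridge training fit."""
    n, p = X.shape
    chosen = []
    for _ in range(n_select):
        best, best_err = None, np.inf
        for j in set(range(p)) - set(chosen):
            Xs = X[:, chosen + [j]]
            w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(Xs.shape[1]), Xs.T @ y)
            err = np.mean((y - Xs @ w) ** 2)
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    return chosen

rng = np.random.default_rng(6)
X = rng.integers(0, 3, size=(300, 200)).astype(float)   # toy SNP matrix
y = X[:, 10] - 2 * X[:, 42] + rng.normal(size=300)
print(greedy_ridge_selection(X, y, n_select=2))          # should recover 10 and 42
```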

  2. Out of the box selection and application of UX evaluation methods and practical cases

    DEFF Research Database (Denmark)

    Obrist, Marianna; Knoche, Hendrik; Basapur, Santosh

    2013-01-01

    The scope of user experience supersedes the concept of usability and other performance-oriented measures by including, for example, users' emotions and motivations, and a strong focus on the context of use. The purpose of this tutorial is to motivate researchers and practitioners to think about the challenging questions around how to select and apply UX evaluation methods for different usage contexts, in particular for the "home" and "mobile" contexts relevant to TV-based services. Next to a general understanding of UX evaluation and available methods, we will provide concrete UX evaluation case …

  3. Leukemia and colon tumor detection based on microarray data classification using momentum backpropagation and genetic algorithm as a feature selection method

    Science.gov (United States)

    Wisesty, Untari N.; Warastri, Riris S.; Puspitasari, Shinta Y.

    2018-03-01

    Cancer is one of the major causes of morbidity and mortality worldwide. There is therefore a need for a system that can analyze microarray data derived from a patient's deoxyribonucleic acid (DNA) and identify whether the person has cancer. However, microarray data have thousands of attributes, which makes data processing challenging; this is often referred to as the curse of dimensionality. Therefore, this study built a system capable of detecting whether or not a patient has cancer. The algorithms used are a genetic algorithm for feature selection and a momentum backpropagation neural network for classification, with data from the Kent Ridge Bio-medical Dataset. Based on system testing, the system can detect leukemia and colon tumors with best accuracies of 98.33% for the colon tumor data and 100% for the leukemia data. The genetic algorithm as the feature selection algorithm improves system accuracy, from 64.52% to 98.33% for the colon tumor data and from 65.28% to 100% for the leukemia data, and the use of the momentum parameter accelerates convergence during neural network training.

  4. Cermet based solar selective absorbers : further selectivity improvement and developing new fabrication technique

    OpenAIRE

    Nejati, Mohammadreza

    2008-01-01

    The spectral selectivity of cermet-based selective absorbers was increased by inducing surface roughness on the cermet layer using a roughening technique (deposition on hot substrates), or by micro-structuring the metallic substrates before deposition of the absorber coating using laser and imprint structuring techniques. Cu-Al2O3 cermet absorbers with very rough surfaces and excellent selectivity were obtained by employing a roughness template layer under the infrared reflective layer …

  5. Sol-gel based sensor for selective formaldehyde determination

    Energy Technology Data Exchange (ETDEWEB)

    Bunkoed, Opas; Davis, Frank; Kanatharana, Proespichaya; Thavarungkul, Panote; Higson, Seamus P.J.

    2010-02-05

    We report the development of transparent sol-gels with entrapped sensitive and selective reagents for the detection of formaldehyde. The sampling method is based on the adsorption of formaldehyde from the air and reaction with β-diketones (for example acetylacetone) in a sol-gel matrix to produce a yellow product, lutidine, which was detected directly. The proposed method does not require preparation of samples prior to analysis and allows both screening by visual detection and quantitative measurement by simple spectrophotometry. The detection limit of 0.03 ppmv formaldehyde is reported, which is lower than the maximum exposure concentrations recommended by both the World Health Organisation (WHO) and the Occupational Safety and Health Administration (OSHA). This sampling method was found to give good reproducibility, the relative standard deviation at 0.2 and 1 ppmv being 6.3% and 4.6%, respectively. Other carbonyl compounds, i.e. acetaldehyde, benzaldehyde, acetone and butanone, do not interfere with this analytical approach. Results are provided for the determination of formaldehyde in indoor air.

  7. Will genomic selection be a practical method for plant breeding?

    OpenAIRE

    Nakaya, Akihiro; Isobe, Sachiko N.

    2012-01-01

    Background Genomic selection or genome-wide selection (GS) has been highlighted as a new approach for marker-assisted selection (MAS) in recent years. GS is a form of MAS that selects favourable individuals based on genomic estimated breeding values. Previous studies have suggested the utility of GS, especially for capturing small-effect quantitative trait loci, but GS has not become a popular methodology in the field of plant breeding, possibly because there is insufficient information available …

  8. Adaptive method for multi-dimensional integration and selection of a base of chaos polynomials

    International Nuclear Information System (INIS)

    Crestaux, T.

    2011-01-01

    This research thesis addresses the propagation of uncertainty in numerical simulations and its treatment within a probabilistic framework, via a functional approach based on functions of random variables. The author reports the use of the spectral method to represent random variables by expansion in polynomial chaos. More precisely, the author uses the non-intrusive projection method, which exploits the orthogonality of chaos polynomials to compute the expansion coefficients by approximating scalar products. The approach is applied to a cavity and to waste storage [fr]
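
    The non-intrusive projection described here can be illustrated in one dimension: each expansion coefficient is a scalar product computed by quadrature. The Python sketch below, with a hypothetical model f and a standard normal input, uses probabilists' Hermite polynomials and Gauss-Hermite quadrature; the coefficient formula is c_k = E[f(X) He_k(X)] / k! for X ~ N(0, 1).

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite_e import hermegauss, hermeval

f = lambda x: np.exp(0.3 * x)          # hypothetical model response, X ~ N(0, 1)
order, n_quad = 4, 12
nodes, w = hermegauss(n_quad)          # quadrature against weight exp(-x^2 / 2)

# c_k = E[f(X) He_k(X)] / k!   (He_k: probabilists' Hermite, E[He_k^2] = k!)
coeffs = [
    (w * f(nodes) * hermeval(nodes, [0.0] * k + [1.0])).sum()
    / (sqrt(2 * pi) * factorial(k))
    for k in range(order + 1)
]

# The surrogate hermeval(x, coeffs) now approximates f(x)
x = 1.0
print(hermeval(x, coeffs), f(x))       # ~1.3499 vs 1.3499
```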

  9. Principal Feature Analysis: A Multivariate Feature Selection Method for fMRI Data

    Directory of Open Access Journals (Sweden)

    Lijun Wang

    2013-01-01

    Full Text Available Brain decoding with functional magnetic resonance imaging (fMRI requires analysis of complex, multivariate data. Multivoxel pattern analysis (MVPA has been widely used in recent years. MVPA treats the activation of multiple voxels from fMRI data as a pattern and decodes brain states using pattern classification methods. Feature selection is a critical procedure of MVPA because it decides which features will be included in the classification analysis of fMRI data, thereby improving the performance of the classifier. Features can be selected by limiting the analysis to specific anatomical regions or by computing univariate (voxel-wise or multivariate statistics. However, these methods either discard some informative features or select features with redundant information. This paper introduces the principal feature analysis as a novel multivariate feature selection method for fMRI data processing. This multivariate approach aims to remove features with redundant information, thereby selecting fewer features, while retaining the most information.
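
    The principal feature analysis idea, selecting one representative feature per cluster of PCA loading rows, can be sketched as follows (Python, synthetic data with redundant feature copies). This follows the commonly described PFA recipe and is not code from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def principal_feature_analysis(X, n_features, n_components=None):
    """Pick representative features by clustering PCA loading rows."""
    q = n_components or n_features
    pca = PCA(n_components=q).fit(X)
    loadings = pca.components_.T              # one q-dim row per original feature
    km = KMeans(n_clusters=n_features, n_init=10, random_state=0).fit(loadings)
    selected = []
    for c in range(n_features):
        members = np.flatnonzero(km.labels_ == c)
        d = np.linalg.norm(loadings[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[d.argmin()])  # feature closest to the centroid
    return sorted(selected)

rng = np.random.default_rng(7)
base = rng.normal(size=(200, 4))
X = np.hstack([base, base + 0.01 * rng.normal(size=(200, 4))])  # redundant copies
print(principal_feature_analysis(X, n_features=4))  # one feature per group
```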

  10. Performance-Based Technology Selection Filter description report

    International Nuclear Information System (INIS)

    O'Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL)

  12. Feature Selection and Kernel Learning for Local Learning-Based Clustering.

    Science.gov (United States)

    Zeng, Hong; Cheung, Yiu-ming

    2011-08-01

    The performance of most clustering algorithms relies highly on the representation of data in the input space or in the Hilbert space of kernel methods. This paper aims to obtain an appropriate data representation through feature selection or kernel learning within the framework of the Local Learning-Based Clustering (LLC) method (Wu and Schölkopf 2006), which can outperform global learning-based methods when dealing with high-dimensional data lying on a manifold. Specifically, we associate a weight with each feature or kernel and incorporate it into the built-in regularization of the LLC algorithm to take into account the relevance of each feature or kernel for the clustering. Accordingly, the weights are estimated iteratively during the clustering process. We show that the resulting weighted regularization, with an additional constraint on the weights, is equivalent to a known sparsity-promoting penalty. Hence, the weights of irrelevant features or kernels can be shrunk toward zero. Extensive experiments show the efficacy of the proposed methods on benchmark data sets.

  13. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, and less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label-learning-based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then a kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (linear, polynomial, radial basis function, sigmoid, Laplace, multiquadric, rational quadratic, spherical, spline, wave, and circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, the SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  14. Salts-based size-selective precipitation: toward mass precipitation of aqueous nanoparticles.

    Science.gov (United States)

    Wang, Chun-Lei; Fang, Min; Xu, Shu-Hong; Cui, Yi-Ping

    2010-01-19

    Purification is a necessary step before the application of nanocrystals (NCs), since excess matter in a nanoparticle solution usually hinders their subsequent coupling or assembly with other materials. In this work, a novel salts-based precipitation technique is developed for the precipitation and size-selective precipitation of aqueous NCs. Simply by the addition of salts, NCs can be precipitated from the solution; after decantation of the supernatant solution, the precipitates can be dispersed in water again. By adjusting the amount of salt added, size-selective precipitation of aqueous NCs can be achieved: the NCs with large size are precipitated preferentially, leaving small NCs in solution. Compared with the traditional nonsolvent-based precipitation technique, the current one is simpler and more rapid due to the avoidance of the condensation and heating manipulations used in the traditional precipitation process. Moreover, the salts-based precipitation technique is generally applicable to the precipitation of aqueous nanoparticles, whether semiconductor NCs or metal nanoparticles. The cost of the current method is also much lower than that of the traditional nonsolvent-based technique, making it applicable for the mass purification of aqueous NCs.

  15. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables, the small number of samples, and the non-linearity of the problem; it is difficult to obtain satisfactory results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method for selecting the parameters of this algorithm implemented with Gaussian-kernel SVMs: instead of the common practice of picking the apparently best parameters by hand, a genetic algorithm is used to search for the optimal parameter pair. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
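
    As a concrete illustration of the search itself, a small genetic algorithm over the Gaussian-kernel SVM parameter pair (C, gamma) can be sketched as follows. The encoding, population size, and operators are simplified choices for demonstration, and the RFE gene-selection step is omitted, so this is not the paper's exact algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=120, n_features=50, n_informative=8,
                           random_state=1)

def fitness(ind):
    # Individuals encode (log2 C, log2 gamma); fitness = CV accuracy.
    C, gamma = 2.0 ** ind[0], 2.0 ** ind[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Tiny generational GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform([-5, -15], [15, 3], size=(12, 2))
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    children = []
    while len(children) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        p1 = pop[i] if scores[i] > scores[j] else pop[j]   # tournament pick
        i, j = rng.integers(len(pop), size=2)
        p2 = pop[i] if scores[i] > scores[j] else pop[j]
        w = rng.random()
        child = w * p1 + (1 - w) * p2                      # blend crossover
        child += rng.normal(scale=0.5, size=2)             # mutation
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print(f"C = {2**best[0]:.3g}, gamma = {2**best[1]:.3g}, "
      f"CV accuracy = {fitness(best):.3f}")
```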

  16. Demographically-Based Evaluation of Genomic Regions under Selection in Domestic Dogs.

    Directory of Open Access Journals (Sweden)

    Adam H Freedman

    2016-03-01

    Full Text Available Controlling for background demographic effects is important for accurately identifying loci that have recently undergone positive selection. To date, the effects of demography have not been explicitly considered when identifying loci under selection during dog domestication. To investigate positive selection on the dog lineage early in domestication, we examined patterns of polymorphism in six canid genomes that were previously used to infer a demographic model of dog domestication. Using the inferred demographic model, we computed false discovery rates (FDR) and identified 349 outlier regions consistent with positive selection at a low FDR. The signals in the top 100 regions were frequently centered on candidate genes related to brain function and behavior, including LHFPL3, CADM2, GRIK3, SH3GL2, MBP, PDE7B, NTAN1, and GLRA1. These regions contained significant enrichments in behavioral ontology categories. The third-ranked hit, CCRN4L, plays a major role in lipid metabolism, which is supported by additional metabolism-related candidates revealed in our scan, including SCP2D1 and PDXC1. Comparing our method to an empirical outlier approach that does not directly account for demography, we found only modest overlap between the two methods, with 60% of empirical outliers having no overlap with our demography-based outlier detection approach. Demography-aware approaches have lower rates of false discovery. Our top candidates for selection, in addition to expanding the set of neurobehavioral candidate genes, include genes related to lipid metabolism, suggesting a dietary target of selection that was important during the period when proto-dogs hunted and fed alongside hunter-gatherers.

  17. Method selection for mercury removal from hard coal

    Directory of Open Access Journals (Sweden)

    Dziok Tadeusz

    2017-01-01

    Full Text Available Mercury is commonly found in coal, and coal utilization processes constitute one of the main sources of mercury emission to the environment. This issue is particularly important for Poland, because the Polish energy production sector is based on brown and hard coal, and forecasts show that this trend in energy production will continue in the coming years. Once emission limits are introduced, methods of reducing mercury emission will have to be implemented in Poland. Mercury emission can be reduced by using coal with a relatively low mercury content; in the absence of such coals, methods of mercury removal from coal can be implemented. The currently used and developing methods include the coal cleaning process (both coal washing and dry deshaling) as well as the thermal pretreatment of coal (mild pyrolysis). The effectiveness of these methods varies for different coals, which is caused by the diversity of coal origin, the various characteristics of coal and, especially, the various modes of mercury occurrence in coal. It should be mentioned that the coal cleaning process allows for the removal of mercury occurring in mineral matter, mainly in pyrite, while the thermal pretreatment of coal allows for the removal of mercury occurring in organic matter as well as in inorganic constituents characterized by a low temperature of mercury release. In this paper, guidelines for the selection of a mercury removal method for hard coal are presented. The guidelines were developed taking into consideration the effectiveness of mercury removal from coal in the coal cleaning and thermal pretreatment processes, the synergy effect resulting from the combination of these processes, the direction of coal utilization, and the influence of these processes on coal properties.

  18. Predictive Active Set Selection Methods for Gaussian Processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2012-01-01

    We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two-step alternating procedure of active set update rules and hyperparameter optimization based upon the marginal likelihood … high impact on the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria: the first prefers a model with interpretable active set parameters, whereas the second puts computational complexity first, thus a model with active set parameters that directly control its complexity. We also provide both theoretical and empirical support for our active set selection strategy being a good approximation of a full Gaussian process classifier. Our extensive experiments show that our approach can compete with state-of-the-art …

  19. Utilization of Selected Data Mining Methods for Communication Network Analysis

    Directory of Open Access Journals (Sweden)

    V. Ondryhal

    2011-06-01

    Full Text Available The aim of the project was to analyze the behavior of military communication networks based on real data collected continuously since 2005. With regard to the nature and amount of the data, data mining methods were selected for the analyses and experiments. The quality of real data is often insufficient for immediate analysis; the article presents the data cleaning operations which were carried out to improve the input data sample and obtain reliable models. Gradually, by means of properly chosen software, network models were developed to verify generally valid patterns of network behavior as a bulk service. Furthermore, unlike commercially available communication network simulators, the models designed allowed us to capture nonstandard modes of network behavior under an increased load, verify that the network is correctly sized for the increased load, and thus test its reliability. Finally, based on previous experience, the models enabled us to predict emergency situations with reasonable accuracy.

  20. Quantitative sacroiliac scintigraphy. The effect of method of selection of region of interest

    International Nuclear Information System (INIS)

    Davis, M.C.; Turner, D.A.; Charters, J.R.; Golden, H.E.; Ali, A.; Fordham, E.W.

    1984-01-01

    Various authors have advocated quantitative methods of evaluating bone scintigrams to detect sacroiliitis, while others have not found them useful. Many explanations for this disagreement have been offered, including differences in the method of case selection, ethnicity, gender, and previous drug therapy. It would appear that one of the most important impediments to consistent results is the variability of selecting sacroiliac joint and reference regions of interest (ROIs). The effect of ROI selection would seem particularly important because of the normal variability of radioactivity within the reference regions that have been used (sacrum, spine, iliac wing) and the inhomogeneity of activity in the SI joints. We have investigated the effect of ROI selection, using five different methods representative of, though not necessarily identical to, those found in the literature. Each method produced unique mean indices that were different for patients with ankylosing spondylitis (AS) and controls. The method of Ayres (19) proved superior (largest mean difference, smallest variance), but none worked well as a diagnostic tool because of substantial overlap of the distributions of indices of patient and control groups. We conclude that ROI selection is important in determining results, and quantitative scintigraphic methods in general are not effective tools for diagnosing AS. Among the possible factors limiting success, difficulty in selecting a stable reference area seems of particular importance.

  1. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO-penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
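
    A minimal sketch of the idea: permuting the response destroys the X-y association, so the smallest penalty that selects no variables under permutation gives a data-driven reference for choosing the LASSO penalty. The quantile rule below is one common formalization and may differ in detail from the paper's exact procedure.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=100, n_features=200, n_informative=5,
                       noise=5.0, random_state=0)
X = (X - X.mean(0)) / X.std(0)   # standardize predictors
y = y - y.mean()                 # center response
n = len(y)

# For centered/standardized data and scikit-learn's LASSO objective,
# the smallest penalty that zeroes out all coefficients is max_j |x_j'y| / n.
def lambda_max(y_vec):
    return np.max(np.abs(X.T @ y_vec)) / n

# Permutation selection: break the X-y association by permuting y, record
# lambda_max for each permutation, and take an upper quantile as the penalty.
perm_lams = [lambda_max(rng.permutation(y)) for _ in range(200)]
lam = np.quantile(perm_lams, 0.95)

fit = Lasso(alpha=lam).fit(X, y)
print(f"chosen lambda = {lam:.3f}, variables selected = {np.sum(fit.coef_ != 0)}")
```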

  2. Distant Supervision for Relation Extraction with Ranking-Based Methods

    Directory of Open Access Journals (Sweden)

    Yang Xiang

    2016-05-01

    Full Text Available Relation extraction has benefited from distant supervision in recent years with the development of natural language processing techniques and the explosion of data. However, distant supervision is still greatly limited by the quality of its training data, a natural consequence of its motivation to greatly reduce the heavy cost of data annotation. In this paper, we construct an architecture called MIML-sort (Multi-instance Multi-label Learning with Sorting Strategies), which is built on the well-known MIML framework. Based on MIML-sort, we propose three ranking-based methods for sample selection, with which we identify relation extractors from a subset of the training data. Experiments are set up on the KBP (Knowledge Base Population) corpus, one of the benchmark datasets for distant supervision, which is large and noisy. Compared with previous work, the proposed methods produce considerably better results. Furthermore, the three methods together achieve the best F1 on the official testing set, with an optimal enhancement of F1 from 27.3% to 29.98%.

  3. Selecting device for processing method of radioactive gaseous wastes

    International Nuclear Information System (INIS)

    Sasaki, Ryoichi; Komoda, Norihisa.

    1976-01-01

    Object: To extend the replacement period of a filter for adsorbing radioactive material, by discharging waste gas containing radioactive material produced from atomic power equipment after treating it by a method selected on the basis of wind-direction measurements. Structure: Exhaust gas containing radioactive material produced from atomic power equipment is discharged after it is treated by a method selected on the basis of the results of wind-direction measurement. For instance, in the case of a sea wind, the waste gas passes through a route selected for this case and is discharged through the waste gas outlet. When the sea wind disappears (that is, when a land wind or calm sets in), the exhaust gas is switched to the route for the other case, so that it passes through a filter of activated carbon where the radioactive material is removed by adsorption. The waste gas, now free from radioactive material, is discharged through the waste gas outlet. (Moriyama, K.)

  4. Highly selective and sensitive method for Cu2+ detection based on chiroptical activity of L-Cysteine-mediated Au nanorod assemblies

    Science.gov (United States)

    Abbasi, Shahryar; Khani, Hamzeh

    2017-11-01

    Herein, we demonstrate a simple and efficient method to detect Cu2+ based on amplified optical activity in chiral nanoassemblies of gold nanorods (Au NRs). L-Cysteine can induce side-by-side (SS) or end-to-end assembly of Au NRs with an evident plasmonic circular dichroism (PCD) response, due to coupling between the surface plasmon resonances (SPR) of the Au NRs and the chiral signal of L-Cys. Because the SS assembly shows a clearly stronger PCD response than the end-to-end assembly, SS-assembled Au NRs were selected as the sensitive platform for Cu2+ detection. Cu2+ catalyzes the O2 oxidation of cysteine to cystine, so with an increase in Cu2+ concentration the L-Cysteine-mediated assembly of Au NRs decreases because of the decrease in free cysteine thiol groups, and the PCD signal decreases. Taking advantage of this behaviour, Cu2+ could be detected in the concentration range of 20 pM-5 nM. Under optimal conditions, the calculated detection limit was found to be 7 pM.

  5. Application of two-dimensional binary fingerprinting methods for the design of selective Tankyrase I inhibitors.

    Science.gov (United States)

    Muddukrishna, B S; Pai, Vasudev; Lobo, Richard; Pai, Aravinda

    2017-11-22

    In the present study, five important binary fingerprinting techniques were used to model novel flavones for the selective inhibition of Tankyrase I. Of the fingerprints used, the atom-pair fingerprint yielded a statistically significant 2D QSAR model with a kernel-based partial least squares regression method. This model indicates that the presence of electron-donating groups contributes positively to activity, whereas the presence of electron-withdrawing groups contributes negatively. The model could be used to develop more potent as well as more selective analogues for the inhibition of Tankyrase I. [Graphical abstract: schematic representation of the 2D QSAR workflow.]

  6. Success/Failure Prediction of Noninvasive Mechanical Ventilation in Intensive Care Units. Using Multiclassifiers and Feature Selection Methods.

    Science.gov (United States)

    Martín-González, Félix; González-Robledo, Javier; Sánchez-Hernández, Fernando; Moreno-García, María N

    2016-05-17

    This paper addresses the problem of decision-making in relation to the administration of noninvasive mechanical ventilation (NIMV) in intensive care units. Data mining methods were employed to identify the factors influencing the success/failure of NIMV and to predict its results in future patients. These artificial-intelligence-based methods have not been applied in this field in spite of the good results obtained in other medical areas. Feature selection methods provided the most influential variables in the success/failure of NIMV, such as NIMV hours, PaCO2 at the start, PaO2/FiO2 ratio at the start, hematocrit at the start, and PaO2/FiO2 ratio after two hours. These methods were also used in the preprocessing step with the aim of improving the results of the classifiers; the algorithms provided the best results when the input dataset contained the attributes selected with the CFS method. Data mining methods can thus be successfully applied to determine the most influential factors in the success/failure of NIMV and to predict NIMV results in future patients, and the results provided by classifiers can be improved by preprocessing the data with feature selection techniques.
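
    The practical point about preprocessing with feature selection is that the selection must sit inside cross-validation, so that attributes are re-chosen on each training fold rather than on the full data set. The scikit-learn sketch below shows this pattern on synthetic stand-ins for the clinical variables, with univariate selection as a simple proxy for CFS.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the NIMV data set: rows are patients, columns are
# clinical attributes, and the target is NIMV success/failure.
X, y = make_classification(n_samples=300, n_features=40, n_informative=6,
                           random_state=0)

# Putting selection inside the pipeline means it is re-fit on each training
# fold, avoiding the information leak that occurs when features are chosen
# on the whole data set before cross-validation.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),
    ("clf", DecisionTreeClassifier(random_state=0)),
])
print(f"CV accuracy: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```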

  7. Differential privacy-based evaporative cooling feature selection and classification with relief-F and random forests.

    Science.gov (United States)

    Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A

    2017-09-15

    Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forests for privacy-preserving classification while also preventing overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of evaporative cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, the reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting, based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder.

  8. Evaluation of methods for selecting the midventilation bin in 4DCT scans of lung cancer patients

    DEFF Research Database (Denmark)

    Nygaard, Ditte Eklund; Persson, Gitte Fredberg; Brink, Carsten

    2013-01-01

    The MidV bin was selected based on: 1) visual evaluation of tumour displacement; 2) rigid registration of tumour position; 3) diaphragm displacement in the CC direction; and 4) carina displacement in the CC direction. Determination of the MidV bin based on the displacement of the manually delineated gross tumour volume (GTV) served as the reference method. … The median (range) geometric MidV error was … (0.4-5.4) mm, 1.9 (0.5-6.9) mm, 2.0 (0.5-12.3) mm and 1.1 (0.4-5.4) mm for the visual, rigid registration, diaphragm, carina, and reference methods. The median (range) absolute difference between the geometric MidV error of the evaluated methods and the reference method was 0.0 (0.0-1.2) mm, 0.0 (0.0-1.7) mm, 0.7 (0.0-3.9) mm and 1.0 (0.0-6.9) mm for the visual, rigid registration, diaphragm and carina methods. Conclusion. The visual and semi-automatic rigid registration methods were equivalent in accuracy for selecting the MidV bin of a 4DCT scan. The methods based on diaphragm and carina displacement cannot …

  9. Remote sensing image ship target detection method based on visual attention model

    Science.gov (United States)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has a high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can allocate computing resources selectively according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. With this in mind, a method for ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method reduces the computational complexity while improving detection accuracy, and improves the detection efficiency for ship targets in remote sensing images.
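
    The paper does not spell out its attention model, but bottom-up saliency of this kind is commonly computed with the spectral residual method of Hou and Zhang (2007). The NumPy sketch below is one standard, illustrative choice: only the most salient pixels would then be passed to a detailed ship classifier, instead of sliding a window over the whole image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def spectral_residual_saliency(image):
    """Bottom-up saliency map (Hou & Zhang 2007) for a 2-D grayscale image."""
    spectrum = np.fft.fft2(image)
    log_amplitude = np.log1p(np.abs(spectrum))   # log1p avoids log(0)
    phase = np.angle(spectrum)
    # The spectral residual is the log-amplitude minus its local average.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(saliency, sigma=3)

# Toy "sea scene": a flat background with a small bright ship-like blob.
rng = np.random.default_rng(0)
scene = rng.normal(0.2, 0.02, size=(128, 128))
scene[60:66, 70:84] += 0.8

sal = spectral_residual_saliency(scene)
# Candidate regions = most salient pixels; only these need detailed analysis.
print("peak saliency at:", np.unravel_index(np.argmax(sal), sal.shape))
```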

  10. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is deciding where to invest and how to select an optimal portfolio. This process is carried out through the assessment of risk and expected return. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures; however, expected returns on assets are not necessarily normal and sometimes differ dramatically from a normal distribution. This paper introduces conditional value at risk (CVaR) as a risk measure in a nonparametric framework and, for a given expected return, derives the optimal portfolio; the approach is compared with the linear programming method. The data used in this study consist of the monthly returns of 15 companies selected (in the winter of 1392) from the top 50 companies in the Tehran Stock Exchange, covering April 1388 to June 1393 (Iranian calendar). The results show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is also much faster.
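
    For a worked illustration, sample CVaR minimization is itself expressible as the standard Rockafellar-Uryasev linear program; the sketch below sets it up with SciPy on synthetic returns. This is a textbook formulation for demonstration, not the paper's nonparametric estimator.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n = 60, 15                                  # months of history, stocks
returns = rng.normal(0.01, 0.05, size=(T, n))  # stand-in for real returns
beta, target = 0.95, 0.01                      # CVaR level, required mean return

# Rockafellar-Uryasev LP: variables z = [w (n weights), alpha (VaR), u (T slacks)]
# minimize  alpha + 1 / ((1 - beta) * T) * sum(u)
c = np.concatenate([np.zeros(n), [1.0], np.full(T, 1.0 / ((1 - beta) * T))])

# u_t >= -r_t.w - alpha  <=>  -r_t.w - alpha - u_t <= 0, plus mean return >= target.
A_ub = np.block([[-returns, -np.ones((T, 1)), -np.eye(T)],
                 [-returns.mean(0)[None, :], np.zeros((1, 1)), np.zeros((1, T))]])
b_ub = np.concatenate([np.zeros(T), [-target]])

A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(T)])[None, :]  # weights sum to 1
bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * T          # long-only

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
w = res.x[:n]
print("optimal weights:", np.round(w, 3), "minimized CVaR:", round(res.fun, 4))
```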

  11. Method for hydrometallurgical recovery of selected metals

    International Nuclear Information System (INIS)

    Lorenz, G.; Schaefer, B.; Balzat, W.

    1988-01-01

    The method for the hydrometallurgical recovery of selected metals concerns ore dressing by means of milling and alkaline leaching of metals, preferably uranium. By adding CaO during wet milling, the Na+ or K+ ions of clayey ores are replaced by Ca2+ ions. Owing to these ion exchange processes, the uranium bound to clays becomes more accessible to the leaching solution: the uranium yield increases and the consumption of reagents decreases.

  12. Evidence based case selection: An innovative knowledge management method to cluster public technical and vocational education and training colleges in South Africa

    CSIR Research Space (South Africa)

    Visser, MM

    2017-03-01

    Full Text Available … for MIS maturity. The method can be applied in quantitative, qualitative and mixed-methods research to group a population and thereby simplify the process of selecting sample cases for further in-depth investigation. …

  13. URBAN RAIN GAUGE SITING SELECTION BASED ON GIS-MULTICRITERIA ANALYSIS

    OpenAIRE

    Y. Fu; C. Jing; M. Du

    2016-01-01

    With increasingly rapid urbanization and climate change, urban rainfall monitoring as well as urban waterlogging has attracted wide attention. Because conventional siting selection methods do not take the geographic surroundings and the spatial-temporal scale into consideration for urban rain gauge site selection, this paper primarily aims at finding appropriate siting selection rules and methods for rain gauges in urban areas. Additionally, for optimization of gauge loc…

  14. An improved culture method for selective isolation of Campylobacter jejuni from wastewater

    Directory of Open Access Journals (Sweden)

    Jinyong Kim

    2016-08-01

    Full Text Available Campylobacter jejuni is one of the leading foodborne pathogens worldwide. C. jejuni is isolated from a wide range of foods, domestic animals, wildlife, and environmental sources. The currently available culture-based isolation methods are not highly effective for wastewater samples due to the low number of C. jejuni in the midst of competing bacteria. To detect and isolate C. jejuni from wastewater samples, in this study we evaluated several different enrichment conditions using five antibiotics (i.e., cefoperazone, vancomycin, trimethoprim, polymyxin B, and rifampicin), to which C. jejuni is intrinsically resistant. The selectivity of each enrichment condition was measured by Ct value using quantitative real-time PCR (qRT-PCR), and multiplex PCR was used to determine Campylobacter species. In addition, the efficacy of Campylobacter isolation on different culture media after selective enrichment was examined by growth on Bolton and Preston agar plates. The addition of polymyxin B, rifampicin, or both to the Bolton selective supplements enhanced the selective isolation of C. jejuni. In particular, rifampicin supplementation and an increased culture temperature (i.e., 42°C) had a decisive effect on the selective enrichment of C. jejuni from wastewater. The results of 16S rDNA sequencing also revealed that Enterococcus spp. and Pseudomonas aeruginosa are the major competing bacteria under these enrichment conditions. Although it is known to be difficult to isolate Campylobacter from samples with heavy contamination, this study demonstrates that manipulation of antibiotic selective pressure improves the isolation efficiency of fastidious Campylobacter from wastewater.

  15. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    Science.gov (United States)

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

    Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus the linear one. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Different from traditional studies, which focused either merely on the evaluation of different types of SVM or on the voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes, in terms of classification accuracy and computation time. Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy at lower cost. The present work provides the first empirical result on linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if computation time matters more, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice.
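
    The design of such a comparison is easy to reproduce in outline: sweep the number of selected voxels and compare kernels under cross-validation. The scikit-learn sketch below uses synthetic data as a stand-in for fMRI volumes; the selection rule, dimensions, and the PCA variant are illustrative assumptions, not the study's exact protocol.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for fMRI data: samples x voxels, 4 object categories.
X, y = make_classification(n_samples=200, n_features=2000, n_informative=40,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

for k in (50, 500):   # low- vs high-dimensional feature space after selection
    for kernel in ("linear", "rbf"):
        pipe = Pipeline([("sel", SelectKBest(f_classif, k=k)),
                         ("svm", SVC(kernel=kernel))])
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"k={k:4d} {kernel:6s} accuracy={acc:.3f}")

# RBF SVM on a small set of components after PCA, as suggested by the study.
pipe = Pipeline([("pca", PCA(n_components=30)), ("svm", SVC(kernel="rbf"))])
print(f"PCA+RBF accuracy={cross_val_score(pipe, X, y, cv=5).mean():.3f}")
```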

  16. Preparation of Iron Nanoparticles by Selective Leaching Method

    Czech Academy of Sciences Publication Activity Database

    Michalcová, A.; Vojtěch, D.; Kubatík, Tomáš František; Stehlíková, K.; Brabec, F.; Marek, I.

    2015-01-01

    Roč. 128, č. 4 (2015), s. 640-642 ISSN 0587-4246. [International Symposium on Physics of Materials (ISPMA) /13./. Prague, 31.08.2014-04.09.2014] Institutional support: RVO:61389021 Keywords : Iron nanoparticles * selective leaching method Subject RIV: JK - Corrosion ; Surface Treatment of Materials Impact factor: 0.525, year: 2015

  17. Selecting and Using Mathematics Methods Texts: Nontrivial Tasks

    Science.gov (United States)

    Harkness, Shelly Sheats; Brass, Amy

    2017-01-01

    Mathematics methods textbooks/texts are important components of many courses for preservice teachers. Researchers should explore how these texts are selected and used. Within this paper we report the findings of a survey administered electronically to 132 members of the Association of Mathematics Teacher Educators (AMTE) in order to answer the…

  18. Development, validation and application of a micro-liquid chromatography-tandem mass spectrometry based method for simultaneous quantification of selected protein biomarkers of endothelial dysfunction in murine plasma.

    Science.gov (United States)

    Suraj, Joanna; Kurpińska, Anna; Olkowicz, Mariola; Niedzielska-Andres, Ewa; Smolik, Magdalena; Zakrzewska, Agnieszka; Jasztal, Agnieszka; Sitek, Barbara; Chlopicki, Stefan; Walczak, Maria

    2018-02-05

    The objective of this study was to develop and validate a method based on micro-liquid chromatography-tandem mass spectrometry (microLC/MS-MRM) for the simultaneous determination of adiponectin (ADN), von Willebrand factor (vWF), the soluble form of vascular cell adhesion molecule 1 (sVCAM-1), the soluble form of intercellular adhesion molecule 1 (sICAM-1) and syndecan-1 (SDC-1) in mouse plasma. The calibration range was established from 2.5 pmol/mL to 5000 pmol/mL for ADN; 5 pmol/mL to 5000 pmol/mL for vWF; 0.375 pmol/mL to 250 pmol/mL for sVCAM-1 and sICAM-1; and 0.25 pmol/mL to 250 pmol/mL for SDC-1. The method was applied to measure the plasma concentrations of the selected proteins in mice fed a high-fat diet (HFD), and revealed a pro-thrombotic status through an increased concentration of vWF (1.31 ± 0.17 nmol/mL (control) vs 1.98 ± 0.09 nmol/mL (HFD), p < 0.05) and a dysregulation of adipose tissue metabolism through a decreased concentration of ADN (0.62 ± 0.08 nmol/mL (control) vs 0.37 ± 0.06 nmol/mL (HFD), p < 0.05). In conclusion, the microLC/MS-MRM-based method allows for reliable measurements of selected protein biomarkers of endothelial dysfunction in mouse plasma. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Improved Screening Method for the Selection of Wine Yeasts Based on Their Pigment Adsorption Activity

    Directory of Open Access Journals (Sweden)

    Andrea Caridi

    2013-01-01

    Full Text Available The aim of this research is to improve an existing low-cost and simple but consistent culturing technique for measuring the adsorption of grape skin pigments on yeasts, comprising: (i) growing yeasts in Petri dishes on a chromogenic grape-skin-based medium, (ii) photographing the yeast biomass, (iii) measuring its red, green, and blue colour components, and (iv) performing statistical analysis of the data. Twenty strains of Saccharomyces cerevisiae were grown on different lots of the chromogenic medium, prepared using grape skins from the dark cultivars Greco Nero, Magliocco and Nero d’Avola. Microscale wine fermentation trials were also performed. Wide and significant differences among wine yeasts were observed. The chromogenic grape-skin-based medium can be prepared using any grape cultivar, thus allowing the specific selection of the most suitable strain of Saccharomyces cerevisiae for each grape must, mainly for red winemaking. The research provides a useful tool to characterize wine yeasts in relation to pigment adsorption, allowing the improvement of wine colour.

  20. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems essential for achieving accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which makes LAeq prediction models quite complex and costly in terms of time and resources to apply in real situations, three different techniques are used for feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS) and (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as the regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.

  1. New Interval-Valued Intuitionistic Fuzzy Behavioral MADM Method and Its Application in the Selection of Photovoltaic Cells

    Directory of Open Access Journals (Sweden)

    Xiaolu Zhang

    2016-10-01

    Full Text Available As one of the emerging renewable resources, the use of photovoltaic cells holds promise for offering clean and plentiful energy. The selection of the best photovoltaic cell plays a significant role for a promoter in maximizing income, minimizing costs and conferring high maturity and reliability; this is a typical multiple attribute decision making (MADM) problem. Although many prominent MADM techniques have been developed, most of them select the optimal alternative under the hypothesis that the decision maker or expert is completely rational and that the decision data are represented by crisp values. However, in the selection processes of photovoltaic cells the decision maker is usually boundedly rational, and the ratings of alternatives are usually imprecise and vague. To address these kinds of complex and common issues, in this paper we develop a new interval-valued intuitionistic fuzzy behavioral MADM method. We employ interval-valued intuitionistic fuzzy numbers (IVIFNs) to express the imprecise ratings of alternatives, and we construct LINMAP-based nonlinear programming models to identify the reference points under IVIFN contexts, which avoids the subjectivity of selecting the reference points. Finally, we develop a prospect theory-based ranking method to identify the optimal alternative, which takes fully into account the decision maker's behavioral characteristics, such as reference dependence, diminishing sensitivity and loss aversion, in the decision making process.

  2. A comparison of statistical methods for genomic selection in a mice population

    Directory of Open Access Journals (Sweden)

    Neves Haroldo HR

    2012-11-01

    Full Text Available Abstract Background The availability of high-density panels of SNP markers has opened new perspectives for marker-assisted selection strategies, such that genotypes for these markers are used to predict the genetic merit of selection candidates. Because the number of markers is often much larger than the number of phenotypes, marker effect estimation is not a trivial task. The objective of this research was to compare the predictive performance of ten different statistical methods employed in genomic selection, by analyzing data from a heterogeneous stock mice population. Results For the five traits analyzed (W6W: weight at six weeks; WGS: growth slope; BL: body length; %CD8+: percentage of CD8+ cells; CD4+/CD8+: ratio between CD4+ and CD8+ cells), within-family predictions were more accurate than across-family predictions, although this superiority in accuracy varied markedly across traits. For within-family prediction, two kernel methods, Reproducing Kernel Hilbert Spaces Regression (RKHS) and Support Vector Regression (SVR), were the most accurate for W6W, while a polygenic model had comparable performance. A form of ridge regression assuming that all markers contribute to the additive variance (RR_GBLUP) figured among the most accurate for WGS and BL, while two variable selection methods (LASSO and Random Forest, RF) had the greatest predictive abilities for %CD8+ and CD4+/CD8+. RF, RKHS, SVR and RR_GBLUP outperformed the remaining methods in terms of bias and inflation of predictions. Conclusions Methods with large conceptual differences reached very similar predictive abilities, and a clear re-ranking of methods was observed as a function of the trait analyzed. Variable selection methods were more accurate than the rest for %CD8+ and CD4+/CD8+, and these traits are likely influenced by a smaller number of QTL than the others. Judged by their overall performance across traits and computational requirements, RR …
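
    Of the compared methods, RR_GBLUP has the shortest description: ridge regression over all markers at once, so that every marker receives a small, shrunken additive effect. A minimal sketch on simulated genotypes follows; the simulation, coding, and penalty value are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_animals, n_markers, n_qtl = 400, 2000, 20

# Simulated SNP genotypes coded 0/1/2 and a phenotype driven by a few QTL.
M = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)
qtl_effects = np.zeros(n_markers)
qtl_effects[rng.choice(n_markers, n_qtl, replace=False)] = rng.normal(size=n_qtl)
g = M @ qtl_effects                                   # true breeding values
y = g + rng.normal(scale=g.std(), size=n_animals)     # heritability ~ 0.5

# Ridge regression on all markers at once (the RR_GBLUP assumption that
# every marker contributes to the additive variance).
train = rng.random(n_animals) < 0.75
model = Ridge(alpha=n_markers).fit(M[train], y[train])
gebv = model.predict(M[~train])

# Predictive ability: correlation between predicted and true breeding values.
print("accuracy:", np.corrcoef(gebv, g[~train])[0, 1].round(3))
```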

  3. Novel Selectivity-Based Forensic Toxicological Validation of a Paper Spray Mass Spectrometry Method for the Quantitative Determination of Eight Amphetamines in Whole Blood

    Science.gov (United States)

    Teunissen, Sebastiaan F.; Fedick, Patrick W.; Berendsen, Bjorn J. A.; Nielen, Michel W. F.; Eberlin, Marcos N.; Graham Cooks, R.; van Asten, Arian C.

    2017-12-01

    Paper spray tandem mass spectrometry is used to identify and quantify eight individual amphetamines in whole blood in 1.3 min. The method has been optimized and fully validated according to forensic toxicology guidelines for the quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine (MDA), 3,4-methylenedioxy-N-methylamphetamine (MDMA), 3,4-methylenedioxy-N-ethylamphetamine (MDEA), para-methoxyamphetamine (PMA), para-methoxymethamphetamine (PMMA), and 4-fluoroamphetamine (4-FA). Additionally, a new concept of intrinsic and application-based selectivity is discussed, featuring increased confidence in the power to discriminate the amphetamines from other chemically similar compounds when applying an ambient mass spectrometric method without chromatographic separation. Accuracy was within ±15%, average precision was better than 15%, and precision was better than 20% at the LLOQ. Detection limits between 15 and 50 ng/mL were obtained using only 12 μL of whole blood.

  4. A Comparison of Multidimensional Item Selection Methods in Simple and Complex Test Designs

    Directory of Open Access Journals (Sweden)

    Eren Halil ÖZBERK

    2017-03-01

    Full Text Available In contrast with previous studies, this study employed various test designs (simple and complex) which allow the evaluation of overall ability score estimations across multiple real test conditions. Four factors were manipulated, namely the test design, the number of items per dimension, the correlation between dimensions, and the item selection method. Using the generated item and ability parameters, dichotomous item responses were generated using the M3PL compensatory multidimensional IRT model with specified correlations. MCAT composite ability score accuracy was evaluated using absolute bias (ABSBIAS), correlation, and the root mean square error (RMSE) between true and estimated ability scores. The results suggest that the multidimensional test structure, the number of items per dimension, and the correlation between dimensions had significant effects on item selection methods for the overall score estimations. For the simple-structure test design, the V1 item selection method had the lowest absolute bias for both long and short tests when estimating overall scores. As the model became more complex, the KL item selection method performed better than the other two item selection methods.

  5. A Swarm-Based Learning Method Inspired by Social Insects

    Science.gov (United States)

    He, Xiaoxian; Zhu, Yunlong; Hu, Kunyuan; Niu, Ben

    Inspired by the cooperative transport behaviors of ants and built on the basis of Q-learning, a new learning method, the Neighbor-Information-Reference (NIR) learning method, is presented in this paper. This is a swarm-based learning method in which the principles of swarm intelligence are strictly complied with. In NIR learning, the i-interval neighbor's information, namely its discounted reward, is referenced when an individual selects the next state, so that it can make the best decision within a computable local neighborhood. In application, different policies of NIR learning are recommended by controlling the parameters according to the time-relativity of concrete tasks. NIR learning can remarkably improve individual efficiency and make the swarm more "intelligent".
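
    The paper describes NIR learning only at a high level, so the following is one possible minimal interpretation: each agent runs standard Q-learning, but action selection blends the agent's own Q-values with a neighbor's learned values, so that each decision draws on information from the local neighborhood. The environment, blending rule, and parameters below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_agents = 10, 4
actions = (-1, +1)                        # move left / right on a chain
alpha, gamma, eps, w = 0.5, 0.9, 0.1, 0.3

# One Q-table per agent; agent i's "neighbor" is agent (i + 1) % n_agents.
Q = np.zeros((n_agents, n_states, len(actions)))

for episode in range(300):
    state = np.zeros(n_agents, dtype=int)          # all agents start at state 0
    for step in range(50):
        for i in range(n_agents):
            s = state[i]
            # NIR-style choice: blend own Q-values with the neighbor's.
            blended = (1 - w) * Q[i, s] + w * Q[(i + 1) % n_agents, s]
            a = rng.integers(2) if rng.random() < eps else int(blended.argmax())
            s2 = int(np.clip(s + actions[a], 0, n_states - 1))
            r = 1.0 if s2 == n_states - 1 else 0.0  # reward at the right end
            # Standard Q-learning update on the agent's own table.
            Q[i, s, a] += alpha * (r + gamma * Q[i, s2].max() - Q[i, s, a])
            state[i] = s2

print("greedy policy of agent 0:", ["L" if q[0] > q[1] else "R" for q in Q[0]])
```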

  6. Evaluation of gene importance in microarray data based upon probability of selection

    Directory of Open Access Journals (Sweden)

    Fu Li M

    2005-03-01

    Full Text Available Abstract Background Microarray devices permit a genome-scale evaluation of gene function. This technology has catalyzed biomedical research and development in recent years. As many important diseases can be traced to the gene level, a long-standing research problem is to identify specific gene expression patterns linking to metabolic characteristics that contribute to disease development and progression. The microarray approach offers an expedited solution to this problem. However, recognizing disease-related gene expression patterns embedded in microarray data remains a challenging issue. In selecting a small set of biologically significant genes for classifier design, the high data dimensionality inherent in this problem creates a substantial amount of uncertainty. Results Here we present a model for probability analysis of selected genes in order to determine their importance. Our contribution is that we show how to derive the P value of each selected gene across multiple gene selection trials based on different combinations of data samples, and how to conduct a reliability analysis accordingly. The importance of a gene is indicated by its associated P value, in that a smaller value implies higher information content from an information-theoretic standpoint. On microarray data concerning the subtype classification of small round blue cell tumors, we demonstrate that the method is capable of finding the smallest set of genes (19 genes) with optimal classification performance, compared with results reported in the literature. Conclusion In classifier design based on microarray data, the probability value derived from gene selection based on multiple combinations of data samples enables an effective mechanism for reducing the tendency of fitting local data particularities.
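
    The core computation can be sketched directly: repeat gene selection over many random sample combinations, record each gene's selection frequency, and convert that frequency into a P value against chance selection. The top-k F-score rule and the binomial null below are simplifying assumptions rather than the paper's exact derivation.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
# shuffle=False keeps the 10 informative "genes" in the first columns.
X, y = make_classification(n_samples=80, n_features=500, n_informative=10,
                           n_redundant=0, shuffle=False, random_state=0)
n_trials, k = 200, 20
counts = np.zeros(X.shape[1])

# Repeat gene selection over many random sample combinations and count
# how often each gene lands in the selected top-k set.
for _ in range(n_trials):
    idx = rng.choice(len(y), size=int(0.8 * len(y)), replace=False)
    scores, _ = f_classif(X[idx], y[idx])
    counts[np.argsort(scores)[-k:]] += 1

# Under a null of random selection, a gene enters the top-k with
# probability k / n_features; small P values flag important genes.
p_null = k / X.shape[1]
for gene in np.argsort(counts)[::-1][:5]:
    p = binomtest(int(counts[gene]), n_trials, p_null, alternative="greater").pvalue
    print(f"gene {gene:3d}: selected {int(counts[gene])}/{n_trials}, P = {p:.2e}")
```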

  7. A new screening method for selection of desired recombinant ...

    African Journals Online (AJOL)

    A new screening method for selection of desired recombinant plasmids in molecular cloning. ... African Journal of Biotechnology ... Based on the findings of this study, after the digestion process the products were subjected directly to ligation. Due to ...

  8. Feature Selection Based on Mutual Correlation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Somol, Petr; Ververidis, D.; Kotropoulos, C.

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 569-577 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : feature selection Subject RIV: BD - Theory of Information Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/haindl-feature selection based on mutual correlation.pdf

  9. MADM Technique Integrated with Grey-based Taguchi method for Selection of Aluminium alloys to minimize deburring cost during Drilling

    Directory of Open Access Journals (Sweden)

    Reddy Sreenivasulu

    2015-06-01

    Full Text Available Traditionally, burr problems had been considered unavoidable, so most efforts were directed at removing the burr as a post-process. Nowadays, the trend in manufacturing is integration of the whole production flow from design to end product, and manufacturing problems are handled at various stages, even from the design stage. Therefore, methods of describing the burr have received much attention in recent years as part of a systematic approach to resolving the burr problem at various manufacturing stages. The main objective of this paper is to explore the basic concepts of MADM methods. In this study, five parameters, namely speed, feed, drill size, and drill geometry (point angle and clearance angle), were identified as the main influences on burr formation during drilling. An L18 orthogonal array was selected, and experiments were conducted as per the Taguchi experimental plan for aluminium alloys of the 2014, 6061, 5035 and 7075 series. The experiments were performed on a CNC machining center with HSS twist drills, and the burr size (height and thickness) was measured at the exit of each hole. An optimal combination of process parameters was obtained to minimize the burr size via grey relational analysis, and the output of the grey-based Taguchi method was fed as input to the MADM. Apart from burr size, strength and temperature are also considered as attributes. Finally, the results generated in the MADM suggest the suitable aluminium alloy alternative, which results in lower deburring cost, high strength and high resistance at elevated temperatures.
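
    Grey relational analysis itself is a short computation: normalize each response, measure each run's deviation from the ideal sequence, convert deviations to grey relational coefficients, and average them into a grade. The sketch below illustrates it on made-up burr measurements (both responses smaller-the-better; zeta = 0.5 is the customary distinguishing coefficient).

```python
import numpy as np

# Made-up Taguchi results: rows = experimental runs, columns = responses
# (burr height, burr thickness in mm); both are smaller-the-better.
results = np.array([
    [0.42, 0.11],
    [0.35, 0.09],
    [0.51, 0.14],
    [0.28, 0.08],
])

# Smaller-the-better normalization to [0, 1] (1 = best observed value).
norm = (results.max(0) - results) / (results.max(0) - results.min(0))

# Grey relational coefficient with distinguishing coefficient zeta = 0.5.
zeta = 0.5
delta = 1.0 - norm                        # deviation from the ideal sequence
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = grc.mean(axis=1)                  # grey relational grade per run
print("grades:", grade.round(3), "-> best run:", int(grade.argmax()) + 1)
```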

  10. On the Feature Selection and Classification Based on Information Gain for Document Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Asriyanti Indah Pratiwi

    2018-01-01

    Full Text Available Sentiment analysis of movie reviews is a need of today's lifestyle. Unfortunately, an enormous number of features makes sentiment analysis slow and less sensitive, and finding the optimum feature selection and classification is still a challenge. In order to handle an enormous number of features and provide better sentiment classification, an information-gain-based feature selection and classification scheme is proposed. The proposed feature selection removes more than 90% of the features as unnecessary, while the proposed classification scheme achieves 96% sentiment classification accuracy. From the experimental results, it can be concluded that the combination of the proposed feature selection and classification achieves the best performance so far.
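
    Information gain for a term is the mutual information between the term and the class label, which scikit-learn exposes as mutual_info_classif. A minimal select-then-classify pipeline therefore looks like this; the six-document corpus is a placeholder, not the paper's data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Placeholder movie-review snippets with sentiment labels (1 = positive).
docs = ["a wonderful, moving film", "dull plot and wooden acting",
        "brilliant performances throughout", "a tedious, forgettable movie",
        "charming and beautifully shot", "poorly written and badly paced"]
labels = [1, 0, 1, 0, 1, 0]

# Score terms by information gain (mutual information with the label),
# keep only the top-k, then train a simple Naive Bayes classifier.
model = Pipeline([
    ("bow", CountVectorizer()),
    ("ig", SelectKBest(mutual_info_classif, k=5)),
    ("nb", MultinomialNB()),
])
model.fit(docs, labels)
print(model.predict(["a brilliant and moving movie", "dull and tedious"]))
```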

  11. The Hull Method for Selecting the Number of Common Factors

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…

  12. Orthogonal feature selection method. [For preprocessing of mass spectral data]

    Energy Technology Data Exchange (ETDEWEB)

    Kowalski, B R [Univ. of Washington, Seattle]; Bender, C F

    1976-01-01

    A new method of preprocessing spectral data for the extraction of molecular structural information is described. This SELECT method generates orthogonal features that are important for classification purposes and that also retain their identity with the original measurements. A brief introduction to chemical pattern recognition is presented. A brief description of the method and an application to mass spectral data analysis follow. (BLM)

  13. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    Science.gov (United States)

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and of a traditional feature extraction method.

  14. Development of knowledge acquisition methods for knowledge base construction for autonomous plants

    Energy Technology Data Exchange (ETDEWEB)

    Yoshikawa, S. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center]; Sasajima, M.; Kitamura, Y.; Ikeda, M.; Mizoguchi, R.

    1993-03-01

    In order to enhance the safety and reliability of nuclear plant operation, it is strongly desired to construct diagnostic knowledge bases free of omissions, contradictions, and description inconsistencies. An advanced method, the 'Knowledge Compiler', has been studied to acquire diagnostic knowledge, mainly based on qualitative reasoning techniques, without accumulating heuristics through interviews. Two methods have been developed to suppress the ambiguity observed when qualitative reasoning mechanisms are applied to the heat transport systems of nuclear power plants. In the first method, qualitative values are allocated to the system variables along the direction of causality, avoiding contradictions among multiple variables in each qualitative constraint describing knowledge of deviation propagation, heat balance, or energy conservation. In the second method, all the qualitative information is represented as a set of simultaneous qualitative equations, and an appropriate subset is selected so that the qualitative solutions of the unknowns in this subset can be derived independently of the remaining part. A local solution method is applied to the selected subset, and the problem size is then reduced by substituting the subset's solutions, in a recursive manner. In the previous report on this research project, complete computer software was constructed based on these methods and applied to a two-loop heat transport system of a nuclear power plant; the detailed results are discussed in this report. In addition, an integrated configuration of a diagnostic knowledge generation system for nuclear power plants is proposed, based upon the results and new findings obtained through the research activities so far, and future work to overcome the remaining problems is also identified. (author)

  15. GMDH-Based Semi-Supervised Feature Selection for Electricity Load Classification Forecasting

    Directory of Open Access Journals (Sweden)

    Lintao Yang

    2018-01-01

    Full Text Available With the development of smart power grids, communication network technology and sensor technology, there has been exponential growth in complex electricity load data. Irregular electricity load fluctuations caused by weather and holiday factors disrupt the daily operation of power companies. To deal with these challenges, this paper investigates a day-ahead electricity peak load interval forecasting problem. It transforms the conventional continuous forecasting problem into a novel interval forecasting problem, and then further converts the interval forecasting problem into a classification problem. In addition, an indicator system influencing the electricity load is established from three dimensions, namely the load series, calendar data, and weather data. A semi-supervised feature selection algorithm is proposed to address the electricity load classification forecasting issue, based on the group method of data handling (GMDH) technology. The proposed algorithm consists of three main stages: (1) training the basic classifier; (2) selectively labeling the most suitable samples from the unlabeled data and adding them to the initial training set; and (3) training the classification models on the final training set and classifying the test samples. An empirical analysis of electricity load datasets from four Chinese cities is conducted. Results show that the proposed model can address the electricity load classification forecasting problem more efficiently and effectively than the FW-Semi FS (forward semi-supervised feature selection) and GMDH-U (GMDH-based semi-supervised feature selection for customer classification) models.
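
    The three-stage loop is a generic self-training scheme: train a base classifier, pseudo-label the most confident unlabeled samples, and retrain. The sketch below implements that loop with a gradient-boosting classifier standing in for the paper's GMDH network, on synthetic data; the confidence threshold and round count are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:100] = True                     # only 100 samples start with labels
y_work = y.copy()

clf = GradientBoostingClassifier(random_state=0)
for _ in range(5):
    # Stage 1: train the base classifier on the current labeled pool.
    clf.fit(X[labeled], y_work[labeled])
    proba = clf.predict_proba(X[~labeled])
    if proba.shape[0] == 0:
        break
    # Stage 2: pseudo-label the most confident unlabeled samples and
    # move them into the training set.
    pick = np.where(~labeled)[0][proba.max(axis=1) > 0.95]
    y_work[pick] = clf.predict(X[pick])
    labeled[pick] = True

# Stage 3: the final model classifies the remaining (never-labeled) samples.
print("final training-pool size:", labeled.sum())
if (~labeled).any():
    print("accuracy on remaining samples:",
          round(clf.score(X[~labeled], y[~labeled]), 3))
```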

  16. A novel method based on selective laser sintering for preparing high-performance carbon fibres/polyamide12/epoxy ternary composites

    Science.gov (United States)

    Zhu, Wei; Yan, Chunze; Shi, Yunsong; Wen, Shifeng; Liu, Jie; Wei, Qingsong; Shi, Yusheng

    2016-09-01

    A novel method based on the selective laser sintering (SLS) process is proposed for the first time to prepare complex, high-performance carbon fibre/polyamide12/epoxy (CF/PA12/EP) ternary composites. The procedure is briefly as follows: prepare polyamide12 (PA12)-coated carbon fibre (CF) composite powder; build porous green parts by SLS; infiltrate the green parts with high-performance thermosetting epoxy (EP) resin; and finally cure the resin at high temperature. The obtained composites are a ternary system consisting of a novolac EP resin matrix, CF reinforcement, and a thin PA12 transition layer with a thickness of 595 nm. SEM images and micro-CT analysis prove that the ternary system is a three-dimensional co-continuous structure and that the CF reinforcement is well dispersed in the EP matrix with a volume fraction of 31%. Mechanical tests show that the composites fabricated by this method yield an ultimate tensile strength of 101.03 MPa and a flexural strength of 153.43 MPa, which are higher than those of most previously reported SLS materials. Therefore, the process proposed in this paper shows great potential for manufacturing complex, lightweight and high-performance CF-reinforced composite components in the aerospace and automotive industries and other areas.

  17. An FPGA-based heterogeneous image fusion system design method

    Science.gov (United States)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable image quality of the heterogeneous image fusion can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
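
    The three fusion rules compared above are simple pixel-wise operations. The paper's implementation is RTL-level VHDL; the NumPy sketch below only illustrates the arithmetic of the three rules, assuming two co-registered single-channel 8-bit frames (all names are illustrative).

```python
# Pixel-level fusion rules from the comparison above, sketched in NumPy
# for two co-registered 8-bit grayscale frames (visible and infrared).
import numpy as np

def fuse(visible, infrared, rule="weighted", w=0.5):
    v = visible.astype(np.float32)
    i = infrared.astype(np.float32)
    if rule == "weighted":            # gray-scale weighted averaging
        out = w * v + (1.0 - w) * i
    elif rule == "max":               # maximum selection
        out = np.maximum(v, i)
    elif rule == "min":               # minimum selection
        out = np.minimum(v, i)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return np.clip(out, 0, 255).astype(np.uint8)
```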

  18. Characteristic gene selection via weighting principal components by singular values.

    Directory of Open Access Journals (Sweden)

    Jin-Xing Liu

    Full Text Available Conventional gene selection methods based on principal component analysis (PCA) use only the first principal component (PC) of PCA or sparse PCA to select characteristic genes. These methods indeed assume that the first PC plays a dominant role in gene selection. However, in a number of cases this assumption is not satisfied, so the conventional PCA-based methods usually provide poor selection results. In order to improve the performance of PCA-based gene selection, we put forward a gene selection method via weighting PCs by singular values (WPCS). Because different PCs have different importance, the singular values are exploited as weights to represent the influence of the different PCs on gene selection. The ROC curves and AUC statistics on artificial data show that our method outperforms the state-of-the-art methods. Moreover, experimental results on real gene expression data sets show that our method can extract more characteristic genes in response to abiotic stresses than conventional gene selection methods.
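
    One plausible reading of the WPCS idea is to score each gene by its PC loadings weighted by the corresponding singular values; the exact weighting rule below is an assumption for illustration, not the authors' formula.

```python
# Sketch of scoring genes by singular-value-weighted PC loadings, one
# plausible reading of WPCS; the weighting rule here is an assumption.
import numpy as np

def wpcs_scores(X, n_pcs=5):
    """X: samples x genes. Returns one relevance score per gene."""
    Xc = X - X.mean(axis=0)                       # column-center
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = np.abs(Vt[:n_pcs])                        # loadings of first PCs
    w = s[:n_pcs] / s[:n_pcs].sum()               # singular values as weights
    return (w[:, None] * V).sum(axis=0)

# genes with the largest scores are taken as characteristic genes,
# e.g. np.argsort(wpcs_scores(X))[::-1][:100]
```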

  19. A rapid and highly selective method for the estimation of pyro-, tri- and orthophosphates.

    Science.gov (United States)

    Kamat, D R; Savant, V V; Sathyanarayana, D N

    1995-03-01

    A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)2HP2O7]·4H2O and [Co(en)2H2P3O10]·2H2O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving 99.5% gravimetric yield in both cases. There is no interference from orthophosphate, trimetaphosphate or pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining pyrophosphate and triphosphate contents in various matrices. In the case of orthophosphate, the proposed method differs from the available methods such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate, followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.

  20. SVM Based Descriptor Selection and Classification of Neurodegenerative Disease Drugs for Pharmacological Modeling.

    Science.gov (United States)

    Shahid, Mohammad; Shahzad Cheema, Muhammad; Klenner, Alexander; Younesi, Erfan; Hofmann-Apitius, Martin

    2013-03-01

    Systems pharmacological modeling of drug mode of action for the next generation of multitarget drugs may open new routes for drug design and discovery. Computational methods are widely used in this context, amongst which support vector machines (SVM) have proven successful in addressing the challenge of classifying drugs with similar features. We have applied such an SVM-based approach, namely SVM-based recursive feature elimination (SVM-RFE), to predict the pharmacological properties of drugs widely used against complex neurodegenerative disorders (NDD) and to build an in-silico computational model for the binary classification of NDD drugs versus other drugs. Application of the SVM-RFE model to a set of drugs successfully classified NDD drugs from non-NDD drugs and resulted in an overall accuracy of ~80% with 10-fold cross-validation, using the 40 top-ranked molecular descriptors selected out of 314 descriptors in total. Moreover, the SVM-RFE method outperformed linear discriminant analysis (LDA) based feature selection and classification. The model reduced the multidimensional descriptor space of the drugs dramatically and predicted NDD drugs with high accuracy while avoiding overfitting. Based on these results, NDD-specific focused libraries of drug-like compounds can be designed and existing NDD-specific drugs can be characterized by a well-characterized set of molecular descriptors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
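
    SVM-RFE is available off the shelf in scikit-learn; a minimal sketch of the study's setup (40 descriptors retained out of 314, 10-fold cross-validation) might look as follows. The data here are synthetic stand-ins for the drug descriptor matrix and labels.

```python
# SVM-RFE sketch mirroring the study's setup: rank 314 descriptors with a
# linear SVM, keep the 40 top-ranked, score with 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# synthetic stand-in for the NDD/non-NDD descriptor data
X, y = make_classification(n_samples=200, n_features=314, random_state=0)

rfe = RFE(estimator=LinearSVC(max_iter=10000), n_features_to_select=40)
model = make_pipeline(StandardScaler(), rfe, LinearSVC(max_iter=10000))

print(cross_val_score(model, X, y, cv=10).mean())   # overall accuracy
```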

  1. Impact of Menu Sequencing on Internet-Based Educational Module Selection

    Science.gov (United States)

    Bensley, Robert; Brusk, John J.; Rivas, Jason; Anderson, Judith V.

    2006-01-01

    Patterns of Internet-based menu item selection can occur for a number of reasons, many of which may not be based on interest in topic. It then becomes important to ensure menu order is devised in a way that ensures the greatest accuracy in matching user need with selection. This study examined the impact of menu rotation on the selection of…

  2. Multiple-Features-Based Semisupervised Clustering DDoS Detection Method

    Directory of Open Access Journals (Sweden)

    Yonghao Gu

    2017-01-01

    Full Text Available The DDoS attack stream converging at the victim host from many different agent hosts becomes very large, which can lead to system halt or network congestion. Therefore, it is necessary to propose an effective method to detect DDoS attack behavior within the massive data stream. To address the problem that large amounts of labeled data are not available for supervised learning methods, as well as the relatively low detection accuracy and convergence speed of the unsupervised k-means algorithm, this paper presents a semisupervised clustering detection method using multiple features. In this detection method, we first select three features according to the characteristics of DDoS attacks to form the detection feature vector. Then, the Multiple-Features-Based Constrained-K-Means (MF-CKM) algorithm is proposed based on semisupervised clustering. Finally, using the MIT Laboratory Scenario (DDoS) 1.0 data set, we verify that the proposed method can improve the convergence speed and accuracy of the algorithm while using only a small amount of labeled data.
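
    The constrained clustering step can be illustrated by its simplest variant, seeded k-means, where the small labeled set fixes the initial centroids; the sketch below is a simplified stand-in for the MF-CKM algorithm, not the paper's code.

```python
# Seeded k-means sketch: the small labeled set fixes the initial centroids,
# a simplified stand-in for the constrained k-means step of MF-CKM.
import numpy as np

def seeded_kmeans(X, X_lab, y_lab, n_iter=100):
    classes = np.unique(y_lab)
    # one initial centroid per class, from the labeled samples
    centroids = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])
    assign = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        new = np.stack([X[assign == k].mean(axis=0) if (assign == k).any()
                        else centroids[k] for k in range(len(classes))])
        if np.allclose(new, centroids):
            break
        centroids = new
    return assign, centroids
```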

  3. New Hybrid Features Selection Method: A Case Study on Websites Phishing

    Directory of Open Access Journals (Sweden)

    Khairan D. Rajab

    2017-01-01

    Full Text Available Phishing is one of the serious web threats that involves mimicking authenticated websites to deceive users in order to obtain their financial information. Phishing has caused financial damage to different online stakeholders, of a magnitude in the hundreds of millions; hence it is essential to minimize this risk. Classifying websites into "phishy" and legitimate types is a primary task in data mining that security experts and decision makers are hoping to improve, particularly with respect to the detection rate and reliability of the results. One way to ensure the reliability of the results and to enhance performance is to identify a set of related features early on, so that data dimensionality is reduced and irrelevant features are discarded. To increase the reliability of preprocessing, this article proposes a new feature selection method that combines the scores of multiple known methods to minimize discrepancies in feature selection results. The proposed method has been applied to the problem of website phishing classification to show its pros and cons in identifying relevant features. Results on a security dataset reveal that the proposed preprocessing method was able to derive new feature datasets which, when mined, generate highly competitive classifiers with respect to detection rate when compared to results obtained from other feature selection methods.
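
    A minimal sketch of the score-combination idea follows, assuming three stock scikit-learn scoring functions and a simple min-max normalization before averaging; the paper's exact combination rule is not reproduced here.

```python
# Sketch of combining the scores of several known feature-selection
# methods into one ranking; the min-max-then-average combination rule is
# an illustrative assumption, not the paper's exact formula.
import numpy as np
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif

def combined_ranking(X, y):
    # note: chi2 assumes nonnegative feature values
    scores = [chi2(X, y)[0], f_classif(X, y)[0], mutual_info_classif(X, y)]
    norm = [(s - s.min()) / (s.max() - s.min() + 1e-12) for s in scores]
    combined = np.mean(norm, axis=0)
    return np.argsort(combined)[::-1]   # feature indices, best first

# e.g. keep the top ten features: combined_ranking(X, y)[:10]
```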

  4. REVIEW OF SELECTED BIOLOGICAL METHODS OF ASSESSING THE QUALITY OF NATURAL ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Monika Beata Jakubus

    2015-04-01

    Full Text Available The xenobiotics introduced into the environment are an effect of human activities. Soil contamination in particular leads to the degradation of soils, which may finally result in a biological imbalance of the ecosystem. Normally, chemical methods are used for the assessment of soil quality. Unfortunately, they are not always quick and inexpensive. Therefore, both practice and science in environmental monitoring more frequently employ biological methods. Most of them meet the above-mentioned conditions and have become a supplement to routine laboratory practices. This publication gives an overview of selected common biological methods used to estimate the quality of the environment. The first part of the paper presents biomonitoring as a first step of environmental control, which relies on the observation of indicator organisms. The next section is dedicated to bioassays, indicating their greater or lesser practical applicability as confirmed by the literature on the subject. Particular attention is focused on phytotests and tests based on invertebrates.

  5. Identification of potential nuclear reprogramming and differentiation factors by a novel selection method for cloning chromatin-binding proteins

    International Nuclear Information System (INIS)

    Wang Liu; Zheng Aihua; Yi Ling; Xu Chongren; Ding Mingxiao; Deng Hongkui

    2004-01-01

    Nuclear reprogramming is critical for animal cloning and stem cell creation through nuclear transfer, which requires extensive remodeling of chromosomal architecture involving dramatic changes in chromatin-binding proteins. To understand the mechanism of nuclear reprogramming, it is critical to identify the chromatin-binding factors that specify the reprogramming process. In this report, we have developed a high-throughput selection method, based on T7 phage display and chromatin immunoprecipitation, to isolate chromatin-binding factors expressed in mouse embryonic stem cells using primary mouse embryonic fibroblast chromatin. Seven chromatin-binding proteins have been isolated by this method. We have also isolated several chromatin-binding proteins involved in hepatocyte differentiation. Our method provides a powerful tool to rapidly and selectively identify chromatin-binding proteins. The method can be used to study epigenetic modification of chromatin during nuclear reprogramming, cell differentiation, and transdifferentiation.

  6. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    Science.gov (United States)

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

    Model selection for support vector machines (SVM), involving the selection of the kernel parameter and margin parameter values, is usually time-consuming and greatly impacts both the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. Firstly, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM) to improve the training efficiency of the SVM model. An experiment classifying an AVIRIS image of the Indian Pines site, USA, was then performed to test the novel CSSVM against a traditional SVM classifier tuned by general grid-search cross-validation (GSSVM) for comparison. Evaluation indexes, including SVM model training time, classification overall accuracy (OA), and the Kappa index, of both CSSVM and GSSVM were analyzed quantitatively. The OA of CSSVM on the test samples and the whole image reached 85.1% and 81.58%, with differences from GSSVM within 0.08%, and the Kappa indexes reached 0.8213 and 0.7728, with differences from GSSVM within 0.001, while the ratio of model training time of CSSVM to GSSVM was between 1/10 and 1/6. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
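
    A clonal-selection-style search over the two SVM hyperparameters, scored by cross-validation, can be sketched as follows; the population size, mutation scale, and search ranges are illustrative assumptions, not the paper's settings.

```python
# Clonal-selection-style search over the SVM hyperparameters (C, gamma),
# scored by cross-validation; parameters are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(p, X, y):
    C, gamma = np.exp(p)                      # search in log space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

def clonal_selection(X, y, pop=8, gens=10, clones=3):
    P = rng.uniform([-2.0, -8.0], [8.0, 2.0], size=(pop, 2))
    for _ in range(gens):
        f = np.array([fitness(p, X, y) for p in P])
        elite = P[np.argsort(f)[::-1][: pop // 2]]     # best antibodies
        offspring = np.repeat(elite, clones, axis=0)
        offspring += rng.normal(0.0, 0.5, offspring.shape)  # hypermutation
        cand = np.vstack([elite, offspring])
        fc = np.array([fitness(p, X, y) for p in cand])
        P = cand[np.argsort(fc)[::-1][:pop]]           # next generation
    return np.exp(P[0])                                # best (C, gamma)
```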

  7. Investigation of the paired-gear method in selectivity studies

    DEFF Research Database (Denmark)

    Sistiaga, Manu; Herrmann, Bent; Larsen, R.B.

    2009-01-01

    was repeated throughout the eight cases in this investigation. When using the paired-gear method, the distribution of the estimated L50 and SR is wider; the distribution of the estimated split parameter has a higher variability than the true split; the estimated mean L50 and SR can be biased; the estimated...... recommend that the methodology used to obtain selectivity estimates using the paired-gear method be reviewed....

  8. Developing TOPSIS method using statistical normalization for selecting knowledge management strategies

    Directory of Open Access Journals (Sweden)

    Amin Zadeh Sarraf

    2013-09-01

    Full Text Available Purpose: Numerous companies are expecting their knowledge management (KM) to be performed effectively in order to leverage and transform knowledge into competitive advantages. However, this raises the critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. Design/methodology/approach: An extension of TOPSIS, a multi-attribute decision making (MADM) technique, to a group decision environment is investigated. TOPSIS is a practical and useful technique for the ranking and selection of a number of externally determined alternatives through distance measures. The entropy method is often used for assessing weights in the TOPSIS method. Entropy in information theory is a criterion used for measuring the amount of disorder represented by a discrete probability distribution. To decrease employees' resistance to implementing a new strategy, it seems necessary to take every manager's opinion into account. The normal distribution, the most prominent probability distribution in statistics, is used to normalize the gathered data. Findings: The results of this study show that, considering 6 criteria for evaluating the alternatives, the most appropriate KM strategy to implement in our company was ''Personalization''. Research limitations/implications: In this research, there are some assumptions that might affect the accuracy of the approach, such as the normal distribution of the sample and community. These assumptions can be changed in future work. Originality/value: This paper proposes an effective solution based on a combined entropy and TOPSIS approach to help companies that need to evaluate and select KM strategies. In the represented solution, the opinions of all managers are gathered and normalized by using the standard normal distribution and the central limit theorem. Keywords: Knowledge management; strategy; TOPSIS; normal distribution; entropy
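
    Entropy weighting followed by TOPSIS ranking is standard enough to sketch compactly; here `D` is an assumed alternatives-by-criteria decision matrix and `benefit` marks which criteria are benefits rather than costs.

```python
# Entropy-weighted TOPSIS sketch: criterion weights from the Shannon
# entropy of the normalized decision matrix, then ranking by relative
# closeness to the ideal solution. D and benefit are assumed inputs.
import numpy as np

def entropy_topsis(D, benefit):
    """D: alternatives x criteria (positive); benefit: bool per criterion."""
    P = D / D.sum(axis=0)                              # column-normalize
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(D))
    w = (1 - E) / (1 - E).sum()                        # entropy weights
    V = w * D / np.sqrt((D ** 2).sum(axis=0))          # weighted, normalized
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                     # higher is better

# ranking = np.argsort(entropy_topsis(D, benefit))[::-1]
```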

  9. AN APPLICATION OF FUZZY PROMETHEE METHOD FOR SELECTING OPTIMAL CAR PROBLEM

    Directory of Open Access Journals (Sweden)

    SERKAN BALLI

    2013-06-01

    Full Text Available Most economic, industrial, financial or political decision problems are multi-criteria. In these multi-criteria problems, the optimal selection of alternatives is a hard and complex process. Recently, several methods have been developed to solve such problems. PROMETHEE is one of the most efficient and easiest of these methods, and it solves problems involving quantitative criteria. However, in daily life there are criteria that are expressed linguistically and cannot be modeled numerically, so the PROMETHEE method is incomplete for such imprecise linguistic criteria. To remedy this deficiency, a fuzzy set approximation can be used. The PROMETHEE method, extended with fuzzy inputs, is applied to car selection among seven different cars of the same class using the criteria price, fuel, performance and security. The obtained results are appropriate and consistent.

  10. Sensor Selection method for IoT systems – focusing on embedded system requirements

    Directory of Open Access Journals (Sweden)

    Hirayama Masayuki

    2016-01-01

    Full Text Available Recently, various types of sensors have been developed. Using these sensors, IoT systems have become a hot topic in the embedded system domain. However, sensor selection for embedded systems has not been well discussed up to now. This paper focuses on the features and architecture of embedded systems and proposes a sensor selection method composed of seven steps. In addition, we applied the proposed method to a simple example: sensor selection for a computer-scored answer sheet reader unit. From this case study, an idea for using FTA in sensor selection is also discussed.

  11. Risk-based audit selection of dairy farms.

    Science.gov (United States)

    van Asseldonk, M A P M; Velthuis, A G J

    2014-02-01

    Dairy farms are audited in the Netherlands on numerous process standards. Each farm is audited once every 2 years. Increasing demands for cost-effectiveness in farm audits can be met by introducing risk-based principles. This implies targeting subpopulations with a higher risk of poor process standards. To select farms for an audit that present higher risks, a statistical analysis was conducted to test the relationship between the outcome of farm audits and bulk milk laboratory results before the audit. The analysis comprised 28,358 farm audits and all conducted laboratory tests of bulk milk samples 12 mo before the audit. The overall outcome of each farm audit was classified as approved or rejected. Laboratory results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acids (FFA), and cleanliness of the milk (CLN). The bulk milk laboratory results were significantly related to audit outcomes. Rejected audits are likely to occur on dairy farms with higher mean levels of SCC, TBC, ADR, and BAB. Moreover, in a multivariable model, maxima for TBC, SCC, and FPD as well as standard deviations for TBC and FPD are risk factors for negative audit outcomes. The efficiency curve of a risk-based selection approach, on the basis of the derived regression results, dominated the current random selection approach. To capture 25, 50, or 75% of the population with poor process standards (i.e., audit outcome of rejected), respectively, only 8, 20, or 47% of the population had to be sampled based on a risk-based selection approach. Milk quality information can thus be used to preselect high-risk farms to be audited more frequently. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    Science.gov (United States)

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others, selection of those wavelengths that contribute useful information, and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies for the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine various methods of each aspect together to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant

  13. LCIA selection methods for assessing toxic releases

    DEFF Research Database (Denmark)

    Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky

    2002-01-01

    Characterization of toxic emissions in life cycle impact assessment (LCIA) is in many cases severely limited by the lack of characterization factors for the emissions mapped in the inventory. The number of substances assigned characterization factors for (eco)toxicity included in the dominating LCA ... Selection methods are used to identify the emissions in the inventory that contribute significantly to the impact categories on ecotoxicity and human toxicity, to focus the characterisation work. The reason why the selection methods are more important for the chemical-related impact categories than for other impact categories is the extremely high number ... The methods are evaluated against a set of pre-defined criteria (comprising consistency with characterization and data requirement) and applied to case studies and a test set of chemicals. The reported work is part of the EU-project OMNIITOX.

  14. Project evaluation and selection using fuzzy Delphi method and zero - one goal programming

    Science.gov (United States)

    Alias, Suriana; Adna, Nofarziah; Arsad, Roslah; Soid, Siti Khuzaimah; Ali, Zaileha Md

    2014-12-01

    Project evaluation and selection is an important concern for the board of directors, who try to maximize all of the possible goals. Assessing the problems that occur in an organization's plan is the first phase of the decision-making process. The company needs a group of experts to evaluate the problems. The Fuzzy Delphi Method (FDM) is a systematic procedure for eliciting the group's opinion in order to best evaluate the project performance. This paper proposes an evaluation and selection of the best alternative project based on a combination of FDM and a Zero-One Goal Programming (ZOGP) formulation. ZOGP is used to solve the multi-criteria decision making in the final decision stage, using the optimization software LINDO 6.1. An empirical example of an ongoing decision-making project in Johor, Malaysia is implemented as a case study.
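
    The ZOGP part can be illustrated with a tiny zero-one goal program; the paper solved its model in LINDO 6.1, so the PuLP formulation below is only a stand-in, and all coefficients are invented for illustration.

```python
# Tiny zero-one goal program in PuLP (the paper used LINDO 6.1, so this
# is only a stand-in); all coefficients are invented for illustration.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

costs = [40, 25, 35, 20]          # cost of each candidate project
benefits = [60, 30, 45, 25]       # e.g. fuzzy-Delphi benefit scores
budget_goal, benefit_goal = 70, 100

x = [LpVariable(f"x{i}", cat=LpBinary) for i in range(4)]
d_over = LpVariable("d_over", lowBound=0)     # budget overshoot
d_under = LpVariable("d_under", lowBound=0)   # benefit shortfall

prob = LpProblem("zogp", LpMinimize)
prob += d_over + d_under                      # minimize total deviation
prob += lpSum(c * xi for c, xi in zip(costs, x)) - d_over <= budget_goal
prob += lpSum(b * xi for b, xi in zip(benefits, x)) + d_under >= benefit_goal
prob.solve()
print([int(xi.value()) for xi in x])          # selected projects
```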

  15. Genomic selection in plant breeding.

    Science.gov (United States)

    Newell, Mark A; Jannink, Jean-Luc

    2014-01-01

    Genomic selection (GS) is a method to predict the genetic value of selection candidates based on the genomic estimated breeding value (GEBV) predicted from high-density markers positioned throughout the genome. Unlike marker-assisted selection, the GEBV is based on all markers including both minor and major marker effects. Thus, the GEBV may capture more of the genetic variation for the particular trait under selection.
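
    A common way to compute GEBVs from dense markers is ridge regression over all marker effects (an RR-BLUP-style model); the sketch below uses synthetic marker data coded 0/1/2 and is illustrative rather than a specific published pipeline.

```python
# Ridge-regression GEBV sketch (an RR-BLUP-style stand-in): fit marker
# effects on phenotyped individuals, then rank unphenotyped candidates
# by predicted GEBV. Marker data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
M_train = rng.integers(0, 3, (200, 1000)).astype(float)  # 0/1/2 genotypes
true_eff = rng.normal(0.0, 0.05, 1000)
y = M_train @ true_eff + rng.normal(size=200)            # phenotypes
M_cand = rng.integers(0, 3, (50, 1000)).astype(float)    # candidates

mu = M_train.mean(axis=0)
model = Ridge(alpha=100.0).fit(M_train - mu, y)  # shrink all marker effects
gebv = model.predict(M_cand - mu)
selected = np.argsort(gebv)[::-1][:10]           # top-ranked candidates
```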

  16. Intrusion recognition for optic fiber vibration sensor based on the selective attention mechanism

    Science.gov (United States)

    Xu, Haiyan; Xie, Yingjuan; Li, Min; Zhang, Zhuo; Zhang, Xuewu

    2017-11-01

    Distributed fiber-optic vibration sensors have received extensive investigation and play a significant role in the sensor panorama. A fiber-optic perimeter detection system based on an all-fiber interferometric sensor is proposed; through back-end analysis, processing and intelligent identification, it can distinguish the effects of different intrusion activities. In this paper, an intrusion recognition method based on the auditory selective attention mechanism is proposed. Firstly, considering the time-frequency characteristics of the vibration signal, its spectrogram is calculated. Secondly, imitating the selective attention mechanism, the color, direction and brightness maps of the spectrogram are computed, and the feature matrix is formed from these maps after normalization. The system can recognize intrusion activities occurring along the perimeter sensors. Experimental results show that the proposed method is able to differentiate intrusion signals from ambient noise along the perimeter. Moreover, the recognition rate of the system is improved while the false alarm rate is reduced; the approach is validated by extensive practical experiments and project deployments.

  17. An ant colony optimization based feature selection for web page classification.

    Science.gov (United States)

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the inclusion of a huge amount of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, in order to improve the runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features with respect to the well-known information gain and chi-square feature selection methods.
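
    A much-simplified binary ACO loop for feature selection, with cross-validated kNN accuracy as the fitness, might look as follows; the pheromone update rule and all parameters are illustrative assumptions, not the paper's settings.

```python
# Much-simplified binary ACO for feature selection, with cross-validated
# kNN accuracy as the fitness; the pheromone update rule and parameters
# are illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def aco_select(X, y, n_ants=10, n_iter=20, rho=0.1):
    n = X.shape[1]
    tau = np.full(n, 0.5)                    # per-feature pheromone
    best_mask, best_fit = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            mask = rng.random(n) < tau       # each ant samples a subset
            if not mask.any():
                continue
            knn = KNeighborsClassifier(n_neighbors=3)
            fit = cross_val_score(knn, X[:, mask], y, cv=3).mean()
            if fit > best_fit:
                best_mask, best_fit = mask, fit
        if best_mask is not None:
            tau = (1 - rho) * tau            # evaporation
            tau[best_mask] += rho            # reinforce the best subset
            tau = np.clip(tau, 0.05, 0.95)
    return best_mask
```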

  18. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    Science.gov (United States)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Full Text Available Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of returns' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has its minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate a portfolio selection method with minimum transaction lots that uses conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, an approach that works better with non-symmetric return distributions. Solutions of this method can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
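
    The scenario-based CVaR objective that a GA would minimize over integer lot vectors can be sketched directly; the empirical quantile estimator and all variable names below are assumptions for illustration, and the GA itself is omitted.

```python
# Scenario-based CVaR of a lot-constrained portfolio: a sketch of the
# objective a GA would minimize (the GA itself is omitted).
import numpy as np

def portfolio_cvar(lots, prices, lot_size, scenarios, alpha=0.95):
    """lots: integer lots per asset; scenarios: n_scenarios x n_assets returns."""
    value = lots * lot_size * prices          # money invested per asset
    w = value / value.sum()                   # portfolio weights
    losses = -(scenarios @ w)                 # loss in each scenario
    var = np.quantile(losses, alpha)          # value-at-risk at level alpha
    return losses[losses >= var].mean()       # conditional value-at-risk

# a GA searches over integer `lots` vectors minimizing portfolio_cvar
# subject to a target expected return and a budget constraint.
```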

  19. Label fusion based brain MR image segmentation via a latent selective model

    Science.gov (United States)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective approach that is increasingly popular for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirements for higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch latent selective model, to segment specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as an isolated label, giving the background the same status as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  20. High efficiency GaN-based LEDs using plasma selective treatment of p-GaN surface

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young-Bae; Naoi, Yoshiki; Sakai, Shiro [Department of Electrical and Electronic Engineering, University of Tokushima, 2-1 Minami-josanjima, Tokushima 770-8506 (Japan); Takaki, Ryohei; Sato, Hisao [Nitride Semiconductor Co., Ltd., 115-7 Itayajima, Akinokami, Seto-cho, Naruto, Tokushima 771-0360 (Japan)

    2003-11-01

    We have studied a new method for increasing the extraction efficiency of GaN-based light-emitting diodes (LEDs) using a plasma surface treatment. In this method, prior to the evaporation of a semitransparent p-metal, the surface of the p-GaN located beneath the p-pad is selectively exposed to a nitrogen plasma in a reactive ion etching (RIE) chamber. The electrical characteristics of the plasma-treated p-GaN change remarkably: it becomes semi-insulating, without any parasitic damage. Since LEDs fabricated with the new method have no light absorption in the p-pad region, higher optical power can be extracted compared to conventional LEDs without plasma-selective treatment of the p-GaN surface. (copyright 2003 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  1. Highly Sensitive and Selective Potassium Ion Detection Based on Graphene Hall Effect Biosensors

    Directory of Open Access Journals (Sweden)

    Xiangqi Liu

    2018-03-01

    Full Text Available The potassium ion (K+) is an important biological substance in the human body and plays a critical role in the maintenance of transmembrane potential and hormone secretion. Several detection techniques, including fluorescent, electrochemical, and electrical methods, have been extensively investigated to selectively recognize K+ ions. In this work, a highly sensitive and selective biosensor based on single-layer graphene has been developed for K+ ion detection under the Van der Pauw measurement configuration. With pre-immobilization of guanine-rich DNA on the graphene surface, the graphene devices exhibit a very low limit of detection (≈1 nM) with a dynamic range of 1 nM–10 μM and excellent K+ ion specificity against other alkali cations, such as Na+ ions. The origin of the K+ ion selectivity can be attributed to the fact that guanine-quadruplexes formed from guanine-rich DNA have a strong affinity for capturing K+ ions. The graphene-based biosensors with improved sensing performance for K+ ion recognition can be applied to health monitoring and early disease diagnosis.

  2. Coupling Empowerment Based Application of Extension Method for Geothermal Potential Assessment

    Directory of Open Access Journals (Sweden)

    Qing Zhang

    2018-01-01

    Full Text Available A wealth of mathematical research makes it feasible to calculate the weights of geothermal controlling factors, and such methods have been applied in geothermal potential assessment. To avoid the disadvantages of purely subjective or purely objective weighting methods, an extension-theory-integrated weighting method is put forward that combines the AHP process with the mean-variance method. The improved method reconciles the subjective understanding of the impact factors' roles with data-based weight calculation. Then, by replacing point values with intervals, extension theory is used for the classification step of the geothermal assessment, according to the extension judgment matrix. The evaluation results showed excellent performance in the classification of impact factors, especially in the Wudalianchi area, where 10 out of 11 selected impact factors agreed well with the actual evaluation grades. The study can provide guidance for the primary stage of geothermal investigation, including impact factor selection, weight calculation for the impact factors, and the factors' classification in geothermal assessment.

  3. Revealing metabolite biomarkers for acupuncture treatment by linear programming based feature selection.

    Science.gov (United States)

    Wang, Yong; Wu, Qiao-Feng; Chen, Chen; Wu, Ling-Yun; Yan, Xian-Zhong; Yu, Shu-Guang; Zhang, Xiang-Sun; Liang, Fan-Rong

    2012-01-01

    Acupuncture has been practiced in China for thousands of years as part of Traditional Chinese Medicine (TCM) and has gradually been accepted in western countries as an alternative or complementary treatment. However, the underlying mechanism of acupuncture, especially whether there exists any difference between various acupoints, remains largely unknown, which hinders its widespread use. In this study, we develop a novel Linear Programming based Feature Selection method (LPFS) to understand the mechanism of the acupuncture effect, at the molecular level, by revealing the metabolite biomarkers for acupuncture treatment. Specifically, we generate and investigate high-throughput metabolic profiles of acupuncture treatment at several acupoints in humans. To select the subsets of metabolites that best characterize the acupuncture effect for each meridian point, an optimization model is proposed to identify biomarkers from high-dimensional metabolic data from case and control samples. Importantly, we use the nearest centroid as the prototype to simultaneously minimize the number of selected features and the leave-one-out cross-validation error of the classifier. We compared the performance of LPFS to several state-of-the-art methods, such as SVM recursive feature elimination (SVM-RFE) and the sparse multinomial logistic regression approach (SMLR). We find that our LPFS method tends to reveal a small set of metabolites with small standard deviations and large shifts, which exactly meets our requirements for a good biomarker. Biologically, several metabolite biomarkers for acupuncture treatment are revealed and serve as candidates for further mechanism investigation. Biomarkers derived from five meridian points, Zusanli (ST36), Liangmen (ST21), Juliao (ST3), Yanglingquan (GB34), and Weizhong (BL40), are also compared for their similarity and difference, which provides evidence for the specificity of acupoints. Our results demonstrate that metabolic profiling might be a promising method to

  4. Effect of cooking methods on the micronutrient profile of selected ...

    African Journals Online (AJOL)

    Effect of cooking methods on the micronutrient profile of selected vegetables: okra fruit (Abelmoschus esculentus), fluted pumpkin (Telfairia occidentalis), African spinach (Amarantus viridis), and scent leaf (Ocimum gratissimum.

  5. Entropy-based gene ranking without selection bias for the predictive classification of microarray data

    Directory of Open Access Journals (Sweden)

    Serafini Maria

    2003-11-01

    Full Text Available Abstract Background We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as selection bias), in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process. Results With E-RFE, we speed up recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Conclusions Without a decrease in classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
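
    A simplified reading of E-RFE: train a linear SVM, measure the normalized entropy of the |w| distribution, and let low entropy (many near-useless weights) justify dropping a larger chunk per iteration. The chunk-size rule below is an illustrative assumption, not the authors' exact schedule.

```python
# Simplified E-RFE-style loop: the normalized entropy of the |w|
# distribution of a linear SVM sets the size of the chunk of low-ranked
# features dropped per step.
import numpy as np
from sklearn.svm import LinearSVC

def e_rfe(X, y, n_keep=50, n_bins=20):
    feats = np.arange(X.shape[1])
    while len(feats) > n_keep:
        w = np.abs(LinearSVC(max_iter=10000)
                   .fit(X[:, feats], y).coef_).sum(axis=0)
        counts, _ = np.histogram(w, bins=n_bins)
        p = counts[counts > 0] / counts.sum()
        H = -(p * np.log(p)).sum() / np.log(n_bins)   # normalized entropy
        # low entropy -> many near-useless weights -> drop a bigger chunk
        chunk = max(1, int((1.0 - H) * (len(feats) - n_keep)))
        feats = feats[np.argsort(w)[chunk:]]          # keep high-|w| features
    return feats
```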

  6. On a selection method of imaging condition in scintigraphy

    International Nuclear Information System (INIS)

    Ikeda, Hozumi; Kishimoto, Kenji; Shimonishi, Yoshihiro; Ohmura, Masahiro; Kosakai, Kazuhisa; Ochi, Hironobu

    1992-01-01

    The selection of imaging conditions in scintigraphy was evaluated using the analytic hierarchy process. First, a selection method was derived by considering image quality and imaging time. The influence on image quality was assumed to depend on changes in system resolution, count density, image size, and image density, while the influence on imaging time was assumed to depend on changes in system sensitivity and data acquisition time. A phantom study was carried out for paired comparisons of these selection factors and to relate sample data to the factors; that is, Rollo phantom images were taken while changing the count density, image size, and image density. Image quality was expressed by a visual-evaluation score obtained by judging which of a pair of images showed the cold lesion more clearly on the scintigrams. Imaging time was expressed by values relative to the changes in count density. System resolution and system sensitivity, however, were held constant in this study. Next, using these values, the analytic hierarchy process was applied to the selection of imaging conditions. We conclude that this selection of imaging conditions can be analyzed quantitatively using the analytic hierarchy process and that this analysis furthers the theoretical consideration of the imaging technique. (author)
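
    The AHP step reduces to deriving priority weights from a pairwise-comparison matrix and checking its consistency; the matrix below is an illustrative placeholder, not data from the study.

```python
# AHP priority sketch: weights from the principal eigenvector of a
# pairwise-comparison matrix, plus a consistency check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],        # e.g. image quality vs. the others
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority weights of the factors

n = len(A)
CI = (eigvals[k].real - n) / (n - 1)  # consistency index
CR = CI / 0.58                        # Saaty's random index for n = 3
print(w, CR)                          # CR < 0.1 means acceptably consistent
```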

  7. Highly selective ionic liquid-based microextraction method for sensitive trace cobalt determination in environmental and biological samples

    International Nuclear Information System (INIS)

    Berton, Paula; Wuilloud, Rodolfo G.

    2010-01-01

    A simple and rapid dispersive liquid-liquid microextraction procedure based on an ionic liquid (IL-DLLME) was developed for the selective determination of cobalt (Co) with electrothermal atomic absorption spectrometry (ETAAS) detection. Cobalt was initially complexed with the 1-nitroso-2-naphthol (1N2N) reagent at pH 4.0. The IL-DLLME procedure was then performed by using a few microliters of the room temperature ionic liquid (RTIL) 1-hexyl-3-methylimidazolium hexafluorophosphate [C6mim][PF6] as the extractant, while methanol was the dispersant solvent. After the microextraction procedure, the Co-enriched RTIL phase was solubilized in methanol and directly injected into the graphite furnace. The effects of several variables on Co-1N2N complex formation, extraction into the dispersed RTIL phase, and analyte detection with ETAAS were carefully studied in this work. An enrichment factor of 120 was obtained with only 6 mL of sample solution under optimal experimental conditions. The resulting limit of detection (LOD) was 3.8 ng L⁻¹, while the relative standard deviation (RSD) was 3.4% (at the 1 μg L⁻¹ Co level, n = 10), calculated from the peak height of the absorbance signals. The accuracy of the proposed methodology was tested by analysis of a certified reference material. The method was successfully applied to the determination of Co in environmental and biological samples.

  8. Norm based Threshold Selection for Fault Detectors

    DEFF Research Database (Denmark)

    Rank, Mike Lind; Niemann, Henrik

    1998-01-01

    The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered from a norm-based point of view. An analysis of norm-based threshold selection is given based on different formulations of FDI problems. Both the nominal FDI problem and the uncertain FDI problem are considered. Based on this analysis, a performance index based on norms of the involved transfer functions is given. The performance index also allows us to optimize the structure of the fault detection filter directly.

  9. Implementation of preference ranking organization method for enrichment evaluation (Promethee) on selection system of student’s achievement

    Science.gov (United States)

    Karlitasari, L.; Suhartini, D.; Nurrosikawati, L.

    2018-03-01

    The Selection of Student Achievement is conducted every year, starting from the level of the Study Program and Faculty up to the University, whose first-ranked student is then sent to the Kopertis level. The criteria used for the selection are academic and scientific work, organizational activity, personality, and English. For the selection of Student Achievement to be objective, the jury is expected to use decision-support methods so that the determination of Student Achievement is more optimal. One method used is the PROMETHEE method. The Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) is a ranking method in Multi Criteria Decision Making (MCDM). PROMETHEE has the advantage that preference functions over the criteria make it possible to compare alternatives with one another on the same criteria. Dominance of one alternative over another on a criterion is assessed in PROMETHEE through the value relationships between the alternatives' ranking values. Based on the calculation results for the 7 applicants, ranks 1, 2, and 3 did not change between the manual and PROMETHEE matrices; only positions 4 to 7 changed. However, in the sensitivity test, almost all criteria showed a high level of sensitivity. Although this does not affect which students are sent to the next level, it can have a psychological impact on prospective achievement students.

  10. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic

  11. Supplier Selection based on the Performance by using PROMETHEE Method

    Science.gov (United States)

    Sinaga, T. S.; Siregar, K.

    2017-03-01

    Generally, companies face the problem of identifying vendors that can provide excellent service in raw material availability and on-time delivery. The performance of a company's suppliers has to be monitored to ensure their ability to fulfill the company's needs. This research is intended to explain how to assess suppliers in order to improve manufacturing performance. The criteria considered in evaluating suppliers are Dickson's criteria. There are four main criteria, which are further split into seven sub-criteria, namely compliance with accuracy, consistency, on-time delivery, right order quantity, flexibility and negotiation, timeliness of order confirmation, and responsiveness. This research uses the PROMETHEE methodology to assess supplier performance and to obtain the best supplier, as shown by the degree of preference in the comparisons between suppliers.
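
    PROMETHEE's ranking by net outranking flows (PROMETHEE II) with the simplest "usual" preference function can be sketched as follows; the decision matrix `D`, weights `w`, and criterion directions are assumed inputs, not data from the study.

```python
# PROMETHEE II net-flow sketch with the "usual" preference function
# (strict preference for any positive difference).
import numpy as np

def promethee_ii(D, w, benefit):
    """D: suppliers x criteria; w: weights summing to 1; benefit: bools."""
    X = np.where(benefit, D, -D)              # turn cost criteria around
    n = len(X)
    pi = np.zeros((n, n))                     # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a != b:
                pref = (X[a] > X[b]).astype(float)   # usual criterion
                pi[a, b] = (w * pref).sum()
    phi = (pi.sum(axis=1) - pi.sum(axis=0)) / (n - 1)
    return phi                                # net flow: higher is better

# ranking = np.argsort(promethee_ii(D, w, benefit))[::-1]
```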

  12. Comparing methods of targeting obesity interventions in populations: An agent-based simulation.

    Science.gov (United States)

    Beheshti, Rahmatollah; Jalalpour, Mehdi; Glass, Thomas A

    2017-12-01

    Social networks as well as neighborhood environments have been shown to affect obesity-related behaviors, including energy intake and physical activity. Accordingly, harnessing social networks to improve the targeting of obesity interventions may be promising, to the extent that this leads to social multiplier effects and wider diffusion of intervention impact on populations. However, the literature evaluating network-based interventions has been inconsistent. Computational methods like agent-based models (ABM) provide researchers with tools to experiment in a simulated environment. We develop an ABM to compare conventional targeting methods (random selection, selection based on individual obesity risk, and selection of vulnerable areas) with network-based targeting methods. We adapt a previously published and validated model of network diffusion of obesity-related behavior, and then build social networks among agents using a more realistic approach. We calibrate our model against national-level data. Our results show that network-based targeting may lead to a greater population impact. We also present a new targeting method that outperforms the other methods in terms of intervention effectiveness at the population level.

  13. URBAN RAIN GAUGE SITING SELECTION BASED ON GIS-MULTICRITERIA ANALYSIS

    Directory of Open Access Journals (Sweden)

    Y. Fu

    2016-06-01

    Full Text Available With the increasingly rapid growth of urbanization and climate change, urban rainfall monitoring as well as urban waterlogging has received wide attention. Since conventional siting selection methods take no account of the geographic surroundings and the spatial-temporal scale of urban rain gauge siting, this paper primarily aims at finding appropriate siting selection rules and methods for rain gauges in urban areas. Additionally, a spatial decision support system (DSS) aided by a geographical information system (GIS) has been developed for optimizing gauge locations. In terms of a series of criteria, the optimal rain gauge site-search problem can be addressed by multicriteria decision analysis (MCDA). A series of spatial analytical techniques are required for MCDA to identify the prospective sites. On the GIS platform, spatial kernel density analysis is used to reflect the population density, while GIS buffer analysis is used to optimize locations with respect to the rain gauge signal transmission characteristics. Experimental results show that the rules and the proposed method are suitable for rain gauge site selection in urban areas, which is significant for the siting of urban hydrological facilities and infrastructure, such as water gauges.

  14. A New Method Based on TOPSIS and Response Surface Method for MCDM Problems with Interval Numbers

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2015-01-01

    Full Text Available As the preferences of the decision maker (DM) are always ambiguous, we face many multiple criteria decision-making (MCDM) problems with interval numbers in our daily life. Though some methods have been applied to solve this sort of problem, they are often complex to comprehend and sometimes difficult to implement, and their calculation processes become inefficient when a new alternative is added or removed. In view of such weaknesses, this paper presents a new method based on TOPSIS and the response surface method (RSM) for MCDM problems with interval numbers, RSM-TOPSIS-IN for short. The key point of this approach is the application of a deviation degree matrix, which ensures that the DM can obtain a simple response surface (RS) model to rank the alternatives. In order to demonstrate the feasibility and effectiveness of the proposed method, three illustrative MCDM problems with interval numbers are analysed: (a) selection of an investment program, (b) selection of the right partner, and (c) assessment of road transport technologies. The comparison of the ranking results shows that the RSM-TOPSIS-IN method is in good agreement with those derived by earlier researchers, indicating that it is suitable for solving MCDM problems with interval numbers.

  15. Selection of rhizosphere local microbial as bioactive inoculant based on irradiated compost

    International Nuclear Information System (INIS)

    Dadang Sudrajat; Nana Mulyana; Arief Adhari

    2014-01-01

    One of the main components of an irradiated-compost-based carrier for bio-organic fertilizer is potential microbial isolates with roles in nutrient supply and growth hormone production. This research was conducted to obtain microbial isolates from the plant root zone (rhizosphere), followed by isolation and selection in order to obtain potential isolates capable of nitrogen (N2) fixation, production of the growth hormone indole acetic acid (IAA), and phosphate solubilization. The selected potential isolates were used to formulate bioactive microbial inoculants on an irradiated compost base. Forty-eight (48) rhizosphere samples were collected from different areas of West and Central Java. One hundred and sixteen (116) isolates were characterized for their morphological, cultural, staining and biochemical characteristics, and isolates were selected for further screening of PGPR traits. The parameters assessed were indole acetic acid (IAA) content by colorimetric analysis, dinitrogen fixation by gas chromatography, qualitative phosphate solubility (on Pikovskaya medium), and quantitative assay of dissolved phosphate (spectrophotometry). The ability of the selected isolates to promote the growth of corn plants was evaluated in pots. The isolates will be used as a consortium inoculant based on irradiated compost. The selection yielded eight (8) bacterial isolates, identified as Bacillus circulans (3 isolates), Bacillus stearothermophilus (1 isolate), Azotobacter sp. (3 isolates), and Pseudomonas diminuta (1 isolate). The highest phosphate release (91.21 mg/l) was by isolate BD2 (Bacillus circulans), with a halo zone of 1.32 cm on Pikovskaya agar medium. The Pseudomonas diminuta isolate (KACI) was capable of producing the highest level of the IAA hormone (74.34 μg/ml). The highest nitrogen (N2) fixation activity was shown by the Azotobacter sp. isolate (KDB2), at a rate of 235.05 nmol/hour. The viability test showed that the counts of all selected isolates in the irradiated compost carrier decreased slightly after 3 months of

  16. Characterization of Catalytic Fast Pyrolysis Oils: The Importance of Solvent Selection for Analytical Method Development

    Energy Technology Data Exchange (ETDEWEB)

    Ferrell, Jack R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ware, Anne E [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-25

    Two catalytic fast pyrolysis (CFP) oils (bottom/heavy fraction) were analyzed in various solvents that are used in common analytical methods (nuclear magnetic resonance, NMR; gas chromatography, GC; gel permeation chromatography, GPC; thermogravimetric analysis, TGA) for oil characterization and speciation. A more accurate analysis of the CFP oils can be obtained by identifying and exploiting solvent miscibility characteristics. Acetone and tetrahydrofuran can be used to completely solubilize CFP oils for analysis by GC, and tetrahydrofuran can be used for traditional organic GPC analysis of the oils. DMSO-d6 can be used to solubilize CFP oils for analysis by 13C NMR. The fractionation of the oils into solvents that did not completely solubilize the whole oils showed that miscibility can be related to the oil properties. This allows for solvent selection based on the physico-chemical properties of the oils. However, based on semi-quantitative comparisons of the GC chromatograms, the organic solvent fractionation schemes did not speciate the oils by specific analyte type. On the other hand, chlorinated solvents did fractionate the oils by analyte size to a certain degree. Unfortunately, like raw pyrolysis oil, the matrix of the CFP oils is complicated and is not amenable to simple liquid-liquid extraction (LLE) or solvent fractionation to separate the oils based on the chemical and/or physical properties of individual components. For reliable analyses, for each analytical method used, it is critical that the bio-oil sample is both completely soluble and not likely to react with the chosen solvent. The adoption of the standardized solvent selection protocols presented here will allow for greater reproducibility of analysis across different users and facilities.

  17. Achievement of extreme resolution for the selective by depth Moessbauer method on conversion electrons

    International Nuclear Information System (INIS)

    Babenkov, M.I.; Zhdanov, V.S.; Ryzhikh, V.Yu.; Chubisov, M.A.

    2001-01-01

    At the Institute of Nuclear Physics of the National Nuclear Center of the Republic of Kazakhstan, the depth-selective conversion-electron Moessbauer spectroscopy (DSCEMS) method was implemented on a facility built around a double-focusing magnetic-sector beta-spectrometer equipped with a non-equipotential electron source in the multi-ribbon variant and a position-sensitive detector. In this work, model statistical calculations were carried out of the energy and angular distributions of electrons that have undergone only a small number of inelastic scattering events.
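
    The depth selectivity that DSCEMS exploits can be illustrated with a toy Monte Carlo calculation of the kind described: electrons emitted at random depths undergo a Poisson-distributed number of inelastic scattering events along their escape path, so the escape-energy spectrum encodes depth. The Python sketch below is schematic only; the electron energy, mean free path, and loss model are illustrative assumptions, not values from this work.

      # Toy Monte Carlo of conversion-electron escape: energy loss grows with
      # the number of inelastic scattering events, which grows with depth.
      # All physical constants are assumed for illustration.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      E0 = 7.3e3          # 57Fe K conversion electron energy, eV
      imfp = 15.0         # assumed inelastic mean free path, nm
      mean_loss = 30.0    # assumed mean energy loss per event, eV

      depth = rng.uniform(0.0, 100.0, n)       # emission depth, nm
      mu = rng.uniform(0.1, 1.0, n)            # direction cosine toward surface
      k = rng.poisson(depth / (mu * imfp))     # number of inelastic events
      # Total loss for k events modelled as a Gamma(k, mean_loss) variate.
      loss = np.where(k > 0, rng.gamma(np.maximum(k, 1), mean_loss), 0.0)
      E = E0 - loss

      # Electrons in a narrow window just below E0 escaped almost losslessly,
      # i.e. they come predominantly from shallow depths.
      window = E > E0 - 100.0
      print(f"mean depth of electrons in top window: {depth[window].mean():.1f} nm")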

  18. Practically Efficient Blind Speech Separation Using Frequency Band Selection Based on Magnitude Squared Coherence and a Small Dodecahedral Microphone Array

    Directory of Open Access Journals (Sweden)

    Kazunobu Kondo

    2012-01-01

    Small agglomerative microphone array systems have been proposed for use with speech communication and recognition systems. Blind source separation methods based on frequency-domain independent component analysis have shown significant separation performance, and the microphone arrays are small enough to be portable. However, the computational complexity involved is very high because the conventional signal collection and processing method uses 60 microphones. In this paper, we propose a band selection method based on magnitude squared coherence. Frequency bands are selected based on the spatial and geometric characteristics of the microphone array device, which are strongly related to its dodecahedral shape, and the selected bands are nonuniformly spaced. The estimated reduction in computational complexity is 90%, with a 68% reduction in the number of frequency bands. The separation performance achieved in our experimental evaluation was 7.45 dB (signal-to-noise ratio) and 2.30 dB (cepstral distortion). These results show improved performance compared to the use of uniformly spaced frequency bands.
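
    As a rough illustration of the core idea, the Python sketch below (not the authors' implementation) estimates the magnitude squared coherence between a pair of microphone signals with scipy and keeps only the frequency bins whose coherence exceeds a threshold; the signals, sampling rate, and threshold are assumptions for illustration.

      # Frequency-band selection by magnitude squared coherence (MSC):
      # keep only bins where two microphones are strongly coherent, and
      # run the (expensive) separation processing on those bins alone.
      import numpy as np
      from scipy.signal import coherence

      fs = 16000                                   # sampling rate (Hz), assumed
      rng = np.random.default_rng(0)
      x = rng.standard_normal(fs * 2)              # mic 1: placeholder signal
      y = np.roll(x, 3) + 0.1 * rng.standard_normal(fs * 2)  # mic 2: delayed + noisy copy

      # MSC per frequency bin: |Pxy|^2 / (Pxx * Pyy), via Welch averaging.
      f, Cxy = coherence(x, y, fs=fs, nperseg=512)

      threshold = 0.8                              # assumed selection threshold
      selected = f[Cxy > threshold]                # nonuniformly spaced bands
      print(f"kept {selected.size}/{f.size} bands "
            f"({100 * (1 - selected.size / f.size):.0f}% reduction)")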

  19. A genetic algorithm-based framework for wavelength selection on sample categorization.

    Science.gov (United States)

    Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F

    2017-08-01

    In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to assign samples to the proper classes. In the next step, the selected intervals were refined through a genetic algorithm (GA) by identifying, from the previously selected intervals, a limited number of wavelengths that maximize classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information for monitoring the illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checks, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than current methods relying on interval approaches, which tend to retain irrelevant wavelengths in the selected intervals. Copyright © 2016 John Wiley & Sons, Ltd.
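
    The two-step idea, interval screening with KNN followed by GA refinement of individual wavelengths, can be sketched as follows in Python. The synthetic dataset, interval width, and GA settings below are assumptions for illustration, not the parameters used in the paper.

      # Step 1: score equidistant spectral intervals with KNN cross-validation.
      # Step 2: refine the pooled wavelengths of the best intervals with a
      # simple genetic algorithm over binary inclusion masks.
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.standard_normal((120, 400))   # 120 spectra x 400 wavelengths (synthetic)
      y = rng.integers(0, 2, 120)           # two classes, e.g. authentic vs forged
      X[y == 1, 100:110] += 1.0             # inject a discriminative region

      def score(cols):
          if len(cols) == 0:
              return 0.0
          knn = KNeighborsClassifier(n_neighbors=3)
          return cross_val_score(knn, X[:, cols], y, cv=5).mean()

      intervals = [np.arange(i, i + 20) for i in range(0, 400, 20)]
      best = sorted(intervals, key=score, reverse=True)[:3]
      pool = np.concatenate(best)           # candidate wavelengths for the GA

      pop = rng.integers(0, 2, (30, pool.size))
      for _ in range(40):
          fit = np.array([score(pool[m == 1]) for m in pop])
          pop = pop[np.argsort(fit)[::-1]][:15]       # elitist truncation selection
          kids = pop[rng.integers(0, 15, (15, 2))]    # random parent pairs
          cross = rng.integers(0, 2, (15, pool.size)) # uniform crossover mask
          child = np.where(cross == 1, kids[:, 0], kids[:, 1])
          mut = rng.random((15, pool.size)) < 0.02    # low-rate bit-flip mutation
          pop = np.vstack([pop, np.where(mut, 1 - child, child)])

      print("selected wavelengths:", pool[pop[0] == 1])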

  20. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    Science.gov (United States)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multivariate stepwise regression is proposed for the General Expression of the Nonlinear Autoregressive (GNAR) model; it converts the model-order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Test statistics are chosen to measure how both the newly introduced and the originally existing variables improve the model characteristics, and these determine which variables to retain or eliminate. The optimal model is thus obtained through measurement of the data-fitting effect or significance testing. Simulation results and experiments on classic time-series data show that the proposed method is simple, reliable, and applicable in practical engineering.
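
    A minimal Python sketch of this order-selection idea follows: linear lag terms suggested by the partial autocorrelation function seed the model, and candidate nonlinear (lag-product) terms are then added stepwise, each retained only if a significance test supports it. The synthetic series, lag depth, and 0.05 threshold are illustrative assumptions, not the paper's procedure in full.

      # Forward stepwise selection of GNAR-style terms: PACF picks the
      # initial linear lags; nonlinear lag products are added one at a
      # time and kept only when their coefficient is significant.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.tsa.stattools import pacf

      rng = np.random.default_rng(2)
      y = np.zeros(500)
      for t in range(2, 500):               # synthetic series with a nonlinear term
          y[t] = 0.5 * y[t-1] - 0.3 * y[t-1] * y[t-2] + 0.1 * rng.standard_normal()

      p_max = 4
      lags = {f"y(t-{k})": np.roll(y, k) for k in range(1, p_max + 1)}
      target, T = y[p_max:], len(y) - p_max

      # Initial model: lags whose PACF lies outside the ~95% confidence band.
      sig = np.abs(pacf(y, nlags=p_max)[1:]) > 1.96 / np.sqrt(len(y))
      terms = {n: v[p_max:] for (n, v), s in zip(lags.items(), sig) if s}

      # Candidate nonlinear terms: pairwise lag products (including squares).
      names = list(lags)
      cands = {f"{a}*{b}": (lags[a] * lags[b])[p_max:]
               for i, a in enumerate(names) for b in names[i:]}

      for name, col in cands.items():       # forward stepwise inclusion
          Xb = sm.add_constant(np.column_stack(list(terms.values()) or [np.ones(T)]))
          fit = sm.OLS(target, np.column_stack([Xb, col])).fit()
          if fit.pvalues[-1] < 0.05:        # keep the term only if significant
              terms[name] = col
      print("retained terms:", list(terms))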