WorldWideScience

Sample records for selection method based

  1. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and the comparison with the valuation that the company considers the best in each competence (ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
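
    A minimal sketch of the kind of Hamming-distance ranking this record describes; the candidate profiles and the partially rated ideal profile are invented for illustration, and the Matching Level Index and the StaffDesigner software are not reproduced here.

```python
import numpy as np

def rank_candidates(candidates, ideal):
    """Rank candidates by mean Hamming distance to the ideal competence profile.

    candidates: dict of name -> fuzzy competence valuations in [0, 1]
    ideal: ideal valuations in [0, 1]; np.nan marks competences not yet rated
    """
    ideal = np.asarray(ideal, dtype=float)
    rated = ~np.isnan(ideal)                      # ignore competences the company did not rate
    scores = {}
    for name, profile in candidates.items():
        profile = np.asarray(profile, dtype=float)
        scores[name] = float(np.abs(profile[rated] - ideal[rated]).mean())
    return sorted(scores.items(), key=lambda kv: kv[1])   # smaller distance = better fit

candidates = {"A": [0.8, 0.6, 0.9, 0.5], "B": [0.7, 0.9, 0.6, 0.8]}
ideal = [0.9, 0.8, np.nan, 0.7]                   # third competence left unrated
print(rank_candidates(candidates, ideal))
```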

  2. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
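
    A rough sketch of the pipeline outlined above (PCA feature extraction, decision-tree-based feature selection, SVM classification); the synthetic data, component counts and number of kept features are assumptions for illustration, not the BCI Competition II setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # stand-in for epoched EEG feature vectors
y = rng.integers(0, 2, size=200)          # binary BCI labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=20).fit(X_tr)      # feature extraction step
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

tree = DecisionTreeClassifier(random_state=0).fit(Z_tr, y_tr)
keep = np.argsort(tree.feature_importances_)[::-1][:8]   # keep the most informative components

clf = SVC(kernel="rbf").fit(Z_tr[:, keep], y_tr)          # final classifier
print("accuracy:", accuracy_score(y_te, clf.predict(Z_te[:, keep])))
```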

  3. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    Full Text Available This paper describes a case study of supplier selection based on the multi-criterial Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can yield an “unprejudiced” conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.

  4. Personnel Selection Method Based on Personnel-Job Matching

    OpenAIRE

    Li Wang; Xilin Hou; Lili Zhang

    2013-01-01

    The existing personnel selection decisions in practice are based on the evaluation of the job seeker's human capital, which may make it difficult to achieve personnel-job matching that satisfies each party. Therefore, this paper puts forward a new personnel selection method that takes bilateral matching into consideration. Starting from the employment principle of “satisfaction”, the satisfaction evaluation indicator system of each party is constructed. The multi-objective optimization model is given according to ...

  5. A fuzzy logic based PROMETHEE method for material selection problems

    Directory of Open Access Journals (Sweden)

    Muhammet Gul

    2018-03-01

    Full Text Available Material selection is a complex problem in the design and development of products for diverse engineering applications. This paper presents a fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) method based on trapezoidal fuzzy interval numbers that can be applied to the selection of materials for an automotive instrument panel. It also makes a significant contribution to the literature in terms of applying a fuzzy decision-making approach to material selection problems. The method is illustrated, validated, and compared against three different fuzzy MCDM methods (fuzzy VIKOR, fuzzy TOPSIS, and fuzzy ELECTRE) in terms of its ranking performance. Also, the relationships between the compared methods and the proposed scenarios for fuzzy PROMETHEE are evaluated via Spearman’s correlation coefficient. Styrene Maleic Anhydride and Polypropylene are determined as suitable material options for the automotive instrument panel case. We propose a generic fuzzy MCDM methodology that can be practically applied to material selection problems. The main advantages of the methodology are its consideration of vagueness, uncertainty, and fuzziness in the decision-making environment.

  6. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combined uncommitted machine-learning techniques and partial least square (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature...... and considering all CV groups, the methods selected 36 % of the original features available. The diagnosis evaluation reached a generalization area-under-the-ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis....

  7. A Lightweight Structure Redesign Method Based on Selective Laser Melting

    Directory of Open Access Journals (Sweden)

    Li Tang

    2016-11-01

    Full Text Available The purpose of this paper is to present a new design method for lightweight parts fabricated by selective laser melting (SLM) based on the “Skin-Frame” and to explore the influence of fabrication defects on SLM parts with different sizes. Some standard lattice parts were designed according to the Chinese GB/T 1452-2005 standard and manufactured by SLM. Then these samples were tested in an MTS Insight 30 compression testing machine to study the trends of the yield process with different structure sizes. A set of standard cylinder samples were also designed according to the Chinese GB/T 228-2010 standard. These samples, which were made of iron-nickel alloy (IN718), were also processed by SLM, and then tested in the universal material testing machine INSTRON 1346 to obtain their tensile strength. Furthermore, a lightweight redesign method was investigated. Some common parts such as a stopper and a connecting plate were then redesigned using this method. These redesigned parts were fabricated and some application tests have already been performed. The compression testing results show that when the minimum structure size is larger than 1.5 mm, the mechanical characteristics are hardly affected by process defects. The cylinder parts were fractured by the universal material testing machine at about 1069.6 MPa. The redesigned parts worked well in application tests, with both the weight and fabrication time of these parts reduced by more than 20%.

  8. GAIN RATIO BASED FEATURE SELECTION METHOD FOR PRIVACY PRESERVATION

    Directory of Open Access Journals (Sweden)

    R. Praveena Priyadarsini

    2011-04-01

    Full Text Available Privacy preservation is a step in data mining that tries to safeguard sensitive information from unsanctioned disclosure, thereby protecting individual data records and their privacy. There are various privacy preservation techniques such as k-anonymity, l-diversity, t-closeness and data perturbation. In this paper the k-anonymity privacy protection technique is applied to high-dimensional datasets like Adult and Census. Since both datasets are high-dimensional, a feature subset selection method, Gain Ratio, is applied; the attributes of the datasets are ranked and low-ranking attributes are filtered to form new reduced data subsets. The k-anonymization privacy preservation technique is then applied to the reduced datasets. The accuracy of the privacy-preserved reduced datasets and the original datasets is compared on two data mining tasks, namely classification and clustering, using the naïve Bayesian and k-means algorithms respectively. Experimental results show that classification and clustering accuracy are comparatively the same for the reduced k-anonymized datasets and the original datasets.
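
    A small illustration of gain-ratio ranking of categorical attributes, the filtering step this record applies before k-anonymization; the toy data frame is an assumption, and the k-anonymization itself is not shown.

```python
import numpy as np
import pandas as pd

def entropy(series):
    """Shannon entropy of a categorical pandas Series, in bits."""
    p = series.value_counts(normalize=True)
    return float(-(p * np.log2(p)).sum())

def gain_ratio(df, attr, target):
    """Information gain of `attr` with respect to `target`, normalised by split information."""
    h_target = entropy(df[target])
    cond_entropy = 0.0
    for _, group in df.groupby(attr):
        cond_entropy += len(group) / len(df) * entropy(group[target])
    info_gain = h_target - cond_entropy
    split_info = entropy(df[attr])
    return info_gain / split_info if split_info > 0 else 0.0

# toy categorical data standing in for Adult/Census attributes
df = pd.DataFrame({"education": ["hs", "hs", "phd", "phd", "bsc", "bsc"],
                   "sex": ["m", "f", "m", "f", "m", "f"],
                   "income": [">50K", "<=50K", ">50K", ">50K", "<=50K", "<=50K"]})
ranking = sorted(((a, gain_ratio(df, a, "income")) for a in ["education", "sex"]),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)    # low-ranking attributes would be dropped before k-anonymisation
```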

  9. Supplier Selection based on the Performance by using PROMETHEE Method

    Science.gov (United States)

    Sinaga, T. S.; Siregar, K.

    2017-03-01

    Generally, companies face the problem of identifying vendors that can provide excellent service in raw material availability and on-time delivery. The performance of suppliers in a company has to be monitored to ensure their ability to fulfill the company's needs. This research is intended to explain how to assess suppliers to improve manufacturing performance. The criteria considered in evaluating suppliers are Dickson's criteria. There are four main criteria which are further split into seven sub-criteria, namely compliance with accuracy, consistency, on-time delivery, right quantity order, flexibility and negotiation, timely order confirmation, and responsiveness. This research uses the PROMETHEE methodology to assess supplier performance and to select the best supplier, as shown by the degree of preference in the comparison of alternative suppliers.

  10. Systematic differences in the response of genetic variation to pedigree and genome-based selection methods

    NARCIS (Netherlands)

    Heidaritabar, M.; Vereijken, A.; Muir, W.M.; Meuwissen, T.H.E.; Cheng, H.; Megens, H.J.W.C.; Groenen, M.; Bastiaansen, J.W.M.

    2014-01-01

    Genomic selection (GS) is a DNA-based method of selecting for quantitative traits in animal and plant breeding, and offers a potentially superior alternative to traditional breeding methods that rely on pedigree and phenotype information. Using a 60 K SNP chip with markers spaced throughout the

  11. A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2016-02-01

    Full Text Available Currently, with the rapid increase of data scales in network traffic classification, how to select traffic features efficiently is becoming a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, the execution time was still unsatisfactory due to the numerous iterative computations during the processing. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed to build candidate subsets. The optimal feature subset is then selected using the continuous iterations of the Spark computing framework. The implementation demonstrates that, on the precondition of keeping the classification accuracy, our method reduces the time cost of modeling and classification, and improves the execution efficiency of feature selection significantly.
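
    A simplified, single-machine sketch of the two steps named above (Fisher-score preprocessing followed by a sequential forward search); the Spark parallelization is omitted, and the classifier and synthetic data are assumptions made for a compact example.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fisher_score(X, y):
    """Per-feature Fisher score: between-class scatter over within-class scatter."""
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    mean_all = X.mean(axis=0)
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mean_all) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def forward_search(X, y, candidate_order, max_features=5):
    """Greedy sequential forward search over a Fisher-score-ordered candidate list."""
    selected, best = [], -np.inf
    for f in candidate_order[:max_features * 3]:
        trial = selected + [f]
        score = cross_val_score(KNeighborsClassifier(), X[:, trial], y, cv=3).mean()
        if score > best:                       # keep the feature only if it helps
            selected, best = trial, score
        if len(selected) == max_features:
            break
    return selected, best

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 30))                 # stand-in for flow-level traffic features
y = (X[:, 3] + X[:, 7] > 0).astype(int)        # synthetic class labels
order = list(np.argsort(fisher_score(X, y))[::-1])
print(forward_search(X, y, order))
```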

  12. FEATURE SELECTION METHODS BASED ON MUTUAL INFORMATION FOR CLASSIFYING HETEROGENEOUS FEATURES

    Directory of Open Access Journals (Sweden)

    Ratri Enggar Pawening

    2016-06-01

    Full Text Available Datasets with heterogeneous features can lead to inappropriate feature selection results because it is difficult to evaluate heterogeneous features concurrently. Feature transformation (FT) is another way to handle heterogeneous feature subset selection. However, transforming non-numerical features into numerical ones may introduce redundancy with respect to the original numerical features. In this paper, we propose a method to select feature subsets based on mutual information (MI) for classifying heterogeneous features. We use the unsupervised feature transformation (UFT) method and the joint mutual information maximisation (JMIM) method. The UFT method is used to transform non-numerical features into numerical features. The JMIM method is used to select the feature subset with consideration of the class label. The transformed and the original features are combined entirely, the feature subset is then determined using the JMIM method, and the features are classified using the support vector machine (SVM) algorithm. The classification accuracy is measured for each number of selected features and compared between the UFT-JMIM and Dummy-JMIM methods. The average classification accuracy over all experiments in this study is about 84.47% for the UFT-JMIM method and about 84.24% for the Dummy-JMIM method. This result shows that the UFT-JMIM method can minimize the information loss between transformed and original features, and select feature subsets that avoid redundant and irrelevant features.

  13. A Dynamic and Adaptive Selection Radar Tracking Method Based on Information Entropy

    Directory of Open Access Journals (Sweden)

    Ge Jianjun

    2017-12-01

    Full Text Available Nowadays, the battlefield environment has become much more complex and variable. This paper presents a quantitative method and lower bound for the amount of target information acquired from multiple radar observations to adaptively and dynamically organize the detection of battlefield resources based on the principle of information entropy. Furthermore, for minimizing the given information entropy’s lower bound for target measurement at every moment, a method to dynamically and adaptively select radars with a high amount of information for target tracking is proposed. The simulation results indicate that the proposed method has higher tracking accuracy than that of tracking without adaptive radar selection based on entropy.

  14. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
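
    A minimal sketch of the min-max vector selection described above: keep every training vector that carries the minimum or maximum of at least one variable, so the prototypes cover the observed operating range; the synthetic sensor data is an assumption, and the other four traditional methods and the robust method are not shown.

```python
import numpy as np

def min_max_select(X):
    """Min-max prototype selection: keep every training vector that contains
    the minimum or maximum observed value of at least one variable."""
    idx = set()
    for j in range(X.shape[1]):
        idx.add(int(np.argmin(X[:, j])))
        idx.add(int(np.argmax(X[:, j])))
    return np.array(sorted(idx))

rng = np.random.default_rng(2)
X_train = rng.normal(size=(1000, 6))        # stand-in for plant sensor history
prototypes = X_train[min_max_select(X_train)]
print(prototypes.shape)                     # at most 2 * n_variables prototype vectors
```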

  15. FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

    OpenAIRE

    Lu Si; Jie Yu; Shasha Li; Jun Ma; Lei Luo; Qingbo Wu; Yongqi Ma; Zhengji Liu

    2017-01-01

    Instance selection (IS) technique is used to reduce the data size to improve the performance of data mining methods. Recently, to process very large data set, several proposed methods divide the training set into some disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitation of these methods and give our viewpoint about how to divide and conquer in IS procedure. Then, based on fast condensed nearest neighbor (FCNN) rul...

  16. A novel selection method of seismic attributes based on gray relational degree and support vector machine.

    Directory of Open Access Journals (Sweden)

    Yaping Huang

    Full Text Available The selection of seismic attributes is a key process in reservoir prediction because the prediction accuracy relies on the reliability and credibility of the seismic attributes. However, an effective selection method for useful seismic attributes is still a challenge. This paper presents a novel selection method of seismic attributes for reservoir prediction based on the gray relational degree (GRD) and support vector machine (SVM). The proposed method has a two-hierarchical structure. In the first hierarchy, the primary selection of seismic attributes is achieved by calculating the GRD between seismic attributes and reservoir parameters, and the GRD between the seismic attributes themselves. The principle of the primary selection is that seismic attributes with a higher GRD to the reservoir parameters will have a smaller GRD between themselves compared to those with a lower GRD to the reservoir parameters. The SVM is then employed in the second hierarchy to perform an interactive error verification using training samples for the purpose of determining the final seismic attributes. A real-world case study was conducted to evaluate the proposed GRD-SVM method. Reliable seismic attributes were selected to predict the coalbed methane (CBM) content in the southern Qinshui basin, China. In the analysis, the instantaneous amplitude, instantaneous bandwidth, instantaneous frequency, and minimum negative curvature were selected, and the predicted CBM content was fundamentally consistent with the measured CBM content. This real-world case study demonstrates that the proposed method is able to effectively select seismic attributes and improve the prediction accuracy. Thus, the proposed GRD-SVM method can be used for the selection of seismic attributes in practice.
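
    A compact sketch of a gray relational degree computation between a reservoir parameter series and a seismic attribute series; the normalization choice, the resolution coefficient rho = 0.5 and the toy data are assumptions, and the SVM verification hierarchy is not shown.

```python
import numpy as np

def gray_relational_degree(reference, comparison, rho=0.5):
    """Gray relational degree between a reference series (e.g. a reservoir
    parameter) and a comparison series (a seismic attribute), both 1-D."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)   # scale to [0, 1]
    r, c = norm(reference), norm(comparison)
    delta = np.abs(r - c)
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return float(coeff.mean())

attr = [0.2, 0.5, 0.9, 0.4, 0.7]
param = [0.1, 0.6, 0.8, 0.5, 0.65]
print(gray_relational_degree(param, attr))   # closer to 1 = stronger relation
```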

  17. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    International Nuclear Information System (INIS)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    2008-01-01

    There are many emotion features. If all of these features are employed to recognize emotions, redundant features may exist. Furthermore, the recognition result may be unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected by using the contribution analysis algorithm of the NN from the 95 extracted features. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction

  18. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    Science.gov (United States)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.

  19. The MCDM Model for Personnel Selection Based on SWARA and ARAS Methods

    Directory of Open Access Journals (Sweden)

    Darjan Karabasevic

    2015-05-01

    Full Text Available Competent employees are the key resource in an organization for achieving success and, therefore, competitiveness on the market. The aim of the recruitment and selection process is to acquire personnel with the competencies required for a particular position within the company. Bearing in mind the fact that decision-makers have underused formal decision-making methods in this process, this paper aims to establish an MCDM model for the evaluation and selection of candidates in the process of the recruitment and selection of personnel based on the SWARA and ARAS methods. Apart from providing an MCDM model, the paper additionally provides a set of evaluation criteria for the position of a sales manager (middle management) in the telecommunication industry, which is also used in the numerical example. On the basis of the numerical example, the proposed MCDM model can be successfully used for selecting candidates in the process of employment.

  20. A Robust Service Selection Method Based on Uncertain QoS

    Directory of Open Access Journals (Sweden)

    Yanping Chen

    2016-01-01

    Full Text Available Nowadays, the number of Web services on the Internet is quickly increasing. Meanwhile, different service providers offer numerous services with similar functions. Quality of Service (QoS) has become an important factor used to select the most appropriate service for users. The most prominent QoS-based service selection models only take the certain (deterministic) attributes into account, which is an idealized assumption. In the real world, there are a large number of uncertain factors. In particular, at runtime, QoS may become very poor or unacceptable. In order to solve this problem, a global service selection model based on uncertain QoS is proposed, including the corresponding normalization and aggregation functions, and a robust optimization model is then adopted to transform the model. Experimental results show that the proposed method can effectively select services with high robustness and optimality.

  1. Systematic differences in the response of genetic variation to pedigree and genome-based selection methods.

    Science.gov (United States)

    Heidaritabar, M; Vereijken, A; Muir, W M; Meuwissen, T; Cheng, H; Megens, H-J; Groenen, M A M; Bastiaansen, J W M

    2014-12-01

    Genomic selection (GS) is a DNA-based method of selecting for quantitative traits in animal and plant breeding, and offers a potentially superior alternative to traditional breeding methods that rely on pedigree and phenotype information. Using a 60 K SNP chip with markers spaced throughout the entire chicken genome, we compared the impact of GS and traditional BLUP (best linear unbiased prediction) selection methods applied side-by-side in three different lines of egg-laying chickens. Differences were demonstrated between methods, both at the level and genomic distribution of allele frequency changes. In all three lines, the average allele frequency changes were larger with GS, 0.056, 0.064 and 0.066, compared with BLUP, 0.044, 0.045 and 0.036 for lines B1, B2 and W1, respectively. With BLUP, 35 selected regions (empirical P selected regions were identified. Empirical thresholds for local allele frequency changes were determined from gene dropping, and differed considerably between GS (0.167-0.198) and BLUP (0.105-0.126). Between lines, the genomic regions with large changes in allele frequencies showed limited overlap. Our results show that GS applies selection pressure much more locally than BLUP, resulting in larger allele frequency changes. With these results, novel insights into the nature of selection on quantitative traits have been gained and important questions regarding the long-term impact of GS are raised. The rapid changes to a part of the genetic architecture, while another part may not be selected, at least in the short term, require careful consideration, especially when selection occurs before phenotypes are observed.

  2. Feature selection for splice site prediction: A new method using EDA-based feature ranking

    Directory of Open Access Journals (Sweden)

    Rouzé Pierre

    2004-05-01

    Full Text Available Abstract Background The identification of relevant biological features in large and complex datasets is an important step towards gaining insight into the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and a faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do), this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features.

  3. A method for selection of spent nuclear fuel (SNF) transportation route considering socioeconomic cost based on contingent valuation method (CVM)

    International Nuclear Information System (INIS)

    Kim, Young Sik

    2008-02-01

    Transportation of SNF may cause additional radiation exposure to human beings. This means that the radiological risk should be estimated and managed quantitatively for the public who live near the shipment route. In the existing method, before the SNF transportation is performed, the route is generally selected based on the radiological risk estimated with the RADTRAN code. The existing method for route selection is thus based only on the radiological health risk, although there are not only impacts related to the radiological health risk but also socioeconomic impacts related to cost. In this study, a new method and its numerical formula for route selection when transporting SNF are proposed based on cost estimation, because there are several costs in transporting SNF. The total cost consists of the radiological health cost, the transportation cost, and the socioeconomic cost. Each cost is defined appropriately to the characteristics of SNF transportation, and many coefficients and variables describing the meaning of each cost are obtained or estimated through surveys. In particular, to obtain the socioeconomic cost, the contingent valuation method (CVM) is used with a questionnaire. The socioeconomic cost estimation is the most important part of the total cost originating from transporting SNF because it is a very dominant cost in the total cost. Route selection for SNF transportation can be reasonably supported with the proposed method, and unnecessary or exhausting controversies about the shipments could be avoided.

  4. An Entropy-based gene selection method for cancer classification using microarray data

    Directory of Open Access Journals (Sweden)

    Krishnan Arun

    2005-03-01

    Full Text Available Abstract Background Accurate diagnosis of cancer subtypes remains a challenging problem. Building classifiers based on gene expression data is a promising approach; yet the selection of non-redundant but relevant genes is difficult. The selected gene set should be small enough to allow diagnosis even in regular clinical laboratories and ideally identify genes involved in cancer-specific regulatory pathways. Here an entropy-based method is proposed that selects genes related to the different cancer classes while at the same time reducing the redundancy among the genes. Results The present study identifies a subset of features by maximizing the relevance and minimizing the redundancy of the selected genes. A merit called normalized mutual information is employed to measure the relevance and the redundancy of the genes. In order to find a more representative subset of features, an iterative procedure is adopted that incorporates an initial clustering followed by data partitioning and the application of the algorithm to each of the partitions. A leave-one-out approach then selects the most commonly selected genes across all the different runs and the gene selection algorithm is applied again to pare down the list of selected genes until a minimal subset is obtained that gives a satisfactory accuracy of classification. The algorithm was applied to three different data sets and the results obtained were compared to work done by others using the same data sets. Conclusion This study presents an entropy-based iterative algorithm for selecting genes from microarray data that are able to classify various cancer sub-types with high accuracy. In addition, the feature set obtained is very compact, that is, the redundancy between genes is reduced to a large extent. This implies that classifiers can be built with a smaller subset of genes.
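
    A greedy relevance-minus-redundancy sketch using normalized mutual information, in the spirit of the entropy-based selection described above; the discretized toy expression matrix, the additive score and the fixed stopping size are assumptions, and the clustering, partitioning and leave-one-out stages of the paper are omitted.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def select_genes(X_discrete, y, k=10):
    """Greedy selection maximising relevance (NMI with the class) while
    minimising redundancy (mean NMI with already-selected genes)."""
    n_genes = X_discrete.shape[1]
    relevance = np.array([normalized_mutual_info_score(y, X_discrete[:, j])
                          for j in range(n_genes)])
    selected = [int(np.argmax(relevance))]          # start with the most relevant gene
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_genes):
            if j in selected:
                continue
            redundancy = np.mean([normalized_mutual_info_score(X_discrete[:, j],
                                                               X_discrete[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy       # reward relevance, penalise redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

rng = np.random.default_rng(3)
X = rng.integers(0, 3, size=(60, 40))       # discretised expression levels
y = rng.integers(0, 2, size=60)             # cancer subtype labels
print(select_genes(X, y, k=5))
```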

  5. a Method for the Seamlines Network Automatic Selection Based on Building Vector

    Science.gov (United States)

    Li, P.; Dong, Y.; Hu, Y.; Li, X.; Tan, P.

    2018-04-01

    In order to improve the efficiency of large-scale orthophoto production for cities, this paper presents a method for the automatic selection of the seamlines network in large-scale orthophotos based on building vectors. Firstly, a simple model of each building is built by combining the building's vector, its height and the DEM, and the imaging area of the building on a single DOM is obtained. Then, the initial Voronoi network of the measurement area is automatically generated based on the bottom positions of all images. Finally, the final seamlines network is obtained by automatically optimizing all nodes and seamlines in the network based on the imaging areas of the buildings. The experimental results show that the proposed method can not only route the seamlines network around buildings quickly, but also retain the Voronoi network's property of minimizing projection distortion, which effectively solves the problem of automatic seamline network selection in image mosaicking.

  6. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model summarily has three foci. First, this study uses five imputation methods to directly delete the missing value. Second, we identified the key variable via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level. This was done to compare it with the listing methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model when applied to variable selection with full variables has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here to improve the forecasting capability.

  7. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir’s water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model summarily has three foci. First, this study uses five imputation methods to directly delete the missing value. Second, we identified the key variable via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir’s water level. This was done to compare it with the listing methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model when applied to variable selection with full variables has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here to improve the forecasting capability.
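
    A simplified sketch of the imputation, variable selection and Random Forest stages described in this record; mean imputation, importance-based variable dropping and the synthetic reservoir data stand in for the paper's five imputation methods and factor-analysis step.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 365
df = pd.DataFrame({"rainfall": rng.gamma(2.0, 3.0, n),
                   "inflow": rng.gamma(2.0, 5.0, n),
                   "temperature": rng.normal(22, 4, n)})
df.loc[rng.choice(n, 20, replace=False), "rainfall"] = np.nan     # simulate missing records
level = 0.6 * df["inflow"] + 0.2 * df["rainfall"].fillna(0) + rng.normal(0, 1, n)

X = SimpleImputer(strategy="mean").fit_transform(df)              # imputation step
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, level)

# drop the least important variable, then evaluate a model on the reduced set
importance = pd.Series(rf.feature_importances_, index=df.columns).sort_values()
keep = importance.index[1:]                                       # discard the weakest predictor
score = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                        df[keep].fillna(df[keep].mean()), level, cv=5).mean()
print("kept:", list(keep), "CV R^2:", round(score, 3))
```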

  8. Selection of boron based tribological hard coatings using multi-criteria decision making methods

    International Nuclear Information System (INIS)

    Çalışkan, Halil

    2013-01-01

    Highlights: • Boron based coating selection problem for cutting tools was solved. • EXPROM2, TOPSIS and VIKOR methods were used for ranking the alternative materials. • The best coatings for cutting tools were selected as TiBN and TiSiBN. • The ranking results are in good agreement with cutting test results in the literature. - Abstract: Mechanical and tribological properties of hard coatings can be enhanced using boron as an alloying element. Therefore, multicomponent nanostructured boron based hard coatings are deposited on cutting tools by different methods at different parameters. Different mechanical and tribological properties are obtained after deposition, and it is a difficult task to select the best coating material. In this paper, therefore, a systematic evaluation model was proposed to tackle the difficulty of the material selection with specific properties among a set of available alternatives. The alternatives consist of multicomponent nanostructured TiBN, TiCrBN, TiSiBN and TiAlSiBN coatings deposited by magnetron sputtering and ion implantation assisted magnetron sputtering at different parameters. The alternative coating materials were ranked by using three multi-criteria decision-making (MCDM) methods, i.e. EXPROM2 (preference ranking organization method for enrichment evaluation), TOPSIS (technique for order performance by similarity to ideal solution) and VIKOR (VIšekriterijumsko KOmpromisno Rangiranje), in order to determine the best coating material for cutting tools. Hardness (H), Young’s modulus (E), elastic recovery, friction coefficient, critical load, H/E and H³/E² ratios were considered as material selection criteria. In order to determine the importance weights of the evaluation criteria, a compromise weighting method, which is composed of the analytic hierarchy process and entropy methods, was used. The ranking results showed that TiBN and TiSiBN coatings deposited at given parameters are the best coatings for cutting tools
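
    For reference, a generic TOPSIS ranking sketch of the kind applied above to coating selection; the decision matrix values and weights below are hypothetical placeholders, not the measured coating properties or the AHP/entropy weights from the study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS ranking. matrix: alternatives x criteria; weights sum to 1;
    benefit[j] is True if larger values of criterion j are better."""
    m = np.asarray(matrix, dtype=float)
    m = m / np.linalg.norm(m, axis=0)                 # vector normalisation
    v = m * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                    # closeness coefficient, higher = better

# hypothetical coatings scored on hardness, Young's modulus, friction coefficient, critical load
coatings = ["TiBN", "TiCrBN", "TiSiBN", "TiAlSiBN"]
matrix = [[30, 380, 0.55, 60],
          [26, 350, 0.60, 55],
          [29, 370, 0.50, 65],
          [27, 360, 0.58, 58]]
scores = topsis(matrix, weights=[0.3, 0.2, 0.2, 0.3], benefit=[True, True, False, True])
print(sorted(zip(coatings, scores), key=lambda kv: -kv[1]))
```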

  9. Analysis of a wavelength selectable cascaded DFB laser based on the transfer matrix method

    International Nuclear Information System (INIS)

    Xie Hongyun; Chen Liang; Shen Pei; Sun Botao; Wang Renqing; Xiao Ying; You Yunxia; Zhang Wanrong

    2010-01-01

    A novel cascaded DFB laser, which consists of two serial gratings to provide selectable wavelengths, is presented and analyzed by the transfer matrix method. In this method, efficient facet reflectivity is derived from the transfer matrix built for each serial section and is then used to simulate the performance of the novel cascaded DFB laser through self-consistently solving the gain equation, the coupled wave equation and the current continuity equations. The simulations prove the feasibility of this kind of wavelength selectable laser and a corresponding designed device with two selectable wavelengths of 1.51 μm and 1.53 μm is realized by experiments on InP-based multiple quantum well structure. (semiconductor devices)

  10. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available The support vector regression algorithm is widely used in the fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression based on adaptive fusion of the mixed kernel function is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the parameters of the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function's weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.

  11. The criteria for selecting a method for unfolding neutron spectra based on the information entropy theory

    International Nuclear Information System (INIS)

    Zhu, Qingjun; Song, Fengquan; Ren, Jie; Chen, Xueyong; Zhou, Bin

    2014-01-01

    To further expand the application of an artificial neural network in the field of neutron spectrometry, the criteria for choosing between an artificial neural network and the maximum entropy method for the purpose of unfolding neutron spectra was presented. The counts of the Bonner spheres for IAEA neutron spectra were used as a database, and the artificial neural network and the maximum entropy method were used to unfold neutron spectra; the mean squares of the spectra were defined as the differences between the desired and unfolded spectra. After the information entropy of each spectrum was calculated using information entropy theory, the relationship between the mean squares of the spectra and the information entropy was acquired. Useful information from the information entropy guided the selection of unfolding methods. Due to the importance of the information entropy, the method for predicting the information entropy using the Bonner spheres' counts was established. The criteria based on the information entropy theory can be used to choose between the artificial neural network and the maximum entropy method unfolding methods. The application of an artificial neural network to unfold neutron spectra was expanded. - Highlights: • Two neutron spectra unfolding methods, ANN and MEM, were compared. • The spectrum's entropy offers useful information for selecting unfolding methods. • For the spectrum with low entropy, the ANN was generally better than MEM. • The spectrum's entropy was predicted based on the Bonner spheres' counts

  12. Hybrid Multicriteria Group Decision Making Method for Information System Project Selection Based on Intuitionistic Fuzzy Theory

    Directory of Open Access Journals (Sweden)

    Jian Guo

    2013-01-01

    Full Text Available Information system (IS project selection is of critical importance to every organization in dynamic competing environment. The aim of this paper is to develop a hybrid multicriteria group decision making approach based on intuitionistic fuzzy theory for IS project selection. The decision makers’ assessment information can be expressed in the form of real numbers, interval-valued numbers, linguistic variables, and intuitionistic fuzzy numbers (IFNs. All these evaluation pieces of information can be transformed to the form of IFNs. Intuitionistic fuzzy weighted averaging (IFWA operator is utilized to aggregate individual opinions of decision makers into a group opinion. Intuitionistic fuzzy entropy is used to obtain the entropy weights of the criteria. TOPSIS method combined with intuitionistic fuzzy set is proposed to select appropriate IS project in group decision making environment. Finally, a numerical example for information system projects selection is given to illustrate application of hybrid multi-criteria group decision making (MCGDM method based on intuitionistic fuzzy theory and TOPSIS method.

  13. A Novel Fault Line Selection Method Based on Improved Oscillator System of Power Distribution Network

    Directory of Open Access Journals (Sweden)

    Xiaowei Wang

    2014-01-01

    Full Text Available A novel method of fault line selection based on an IOS is presented. Firstly, the IOS is established using a mathematical model, which adopts the TZSC signal to replace the built-in signal of the Duffing chaotic oscillator by selecting appropriate parameters. Then, each line's TZSC is decomposed by a db10 wavelet packet to obtain the CFB according to the maximum energy principle, and the CFB is solved by the IOS. Finally, the maximum chaotic distance and the average chaotic distance on the phase trajectory are used to identify the fault line. Simulation results show that the proposed method can accurately distinguish the fault line from healthy lines in a strongly noisy background. Besides, the non-detection zones of the proposed method are elaborated.

  14. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.

    Science.gov (United States)

    Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu

    2016-09-22

    Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI) environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods and the comparison with the similar strategies, based on the F-score, was given. Strategies to evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a contribution to the

  15. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Jianhai Zhang

    2016-09-01

    Full Text Available Electroencephalogram (EEG signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP. Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation. Furthermore, support vector machine (SVM was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods and the comparison with the similar strategies, based on the F-score, was given. Strategies to evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels’ weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy 59.13% ± 11.00% using 19 channels. In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a
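
    A simplified ReliefF-style weighting sketch for channel ranking; the neighbor handling, sampling scheme and synthetic EEG features are assumptions, and the three channel-grouping strategies and SVM validation of the study are not reproduced.

```python
import numpy as np

def relieff_weights(X, y, n_neighbors=5, n_samples=100, rng=None):
    """Simplified ReliefF feature weights: reward features that separate a
    sample from its nearest misses more than from its nearest hits."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)   # scale to [0, 1]
    w = np.zeros(X.shape[1])
    idx = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    for i in idx:
        dist = np.abs(X - X[i]).sum(axis=1)
        same, other = (y == y[i]), (y != y[i])
        same[i] = False                                # exclude the sample itself
        hits = np.argsort(dist + np.where(same, 0.0, np.inf))[:n_neighbors]
        misses = np.argsort(dist + np.where(other, 0.0, np.inf))[:n_neighbors]
        w += np.abs(X[misses] - X[i]).mean(axis=0) - np.abs(X[hits] - X[i]).mean(axis=0)
    return w / len(idx)

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 32))              # stand-in for 32 EEG channel features
y = (X[:, 4] - X[:, 20] > 0).astype(int)    # synthetic emotion labels
weights = relieff_weights(X, y)
print("top channels:", np.argsort(weights)[::-1][:8])
```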

  16. Band selection method based on spectrum difference in targets of interest in hyperspectral imagery

    Science.gov (United States)

    Zhang, Xiaohan; Yang, Guang; Yang, Yongbo; Huang, Junhua

    2016-10-01

    While hyperspectral data contains rich spectral information, it has many bands with high correlation coefficients, causing great data redundancy. A reasonable band selection is important for subsequent processing: bands with a large amount of information and low correlation should be selected. On this basis, according to the needs of target detection applications, the spectral characteristics of the objects of interest are taken into consideration in this paper, and a new method based on spectrum difference is proposed. Firstly, according to the spectrum differences of the targets of interest, a difference matrix is constructed that represents the different spectral reflectance of different targets in different bands. By setting a threshold, the bands satisfying the condition are kept, constituting a subset of bands. Then, the correlation coefficients between bands are calculated and the correlation matrix is given. According to the size of the correlation coefficients, the bands are divided into several groups. At last, the concept of normalized variance is used to represent the information content of each band, and the bands are sorted by the value of their normalized variance. Given the required number of bands, the optimal band combination can be obtained through these three steps. This method retains the greatest degree of difference between the targets of interest and is easy to automate. Besides, a false color image synthesis experiment is carried out using the bands selected by this method as well as by three other methods to show the performance of the proposed method.
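
    A compact sketch of the three steps described above (spectrum-difference screening, correlation-based pruning, normalized-variance ranking); the thresholds, the greedy pruning rule and the random cube are assumptions for illustration.

```python
import numpy as np

def select_bands(target_spectra, cube, diff_thr=0.1, corr_thr=0.9, n_bands=5):
    """target_spectra: targets x bands mean reflectance of the targets of interest;
    cube: pixels x bands samples used for correlation and variance statistics."""
    # step 1: keep bands where the targets of interest differ enough from each other
    diff = np.abs(target_spectra[:, None, :] - target_spectra[None, :, :]).max(axis=(0, 1))
    candidates = [int(b) for b in np.where(diff >= diff_thr)[0]]
    # step 2: drop bands that are highly correlated with an already-kept band
    kept = []
    for b in candidates:
        if all(abs(np.corrcoef(cube[:, b], cube[:, k])[0, 1]) < corr_thr for k in kept):
            kept.append(b)
    # step 3: rank the surviving bands by normalised variance (information content)
    mean = cube[:, kept].mean(axis=0)
    norm_var = cube[:, kept].var(axis=0) / (mean ** 2 + 1e-12)
    order = np.argsort(norm_var)[::-1][:n_bands]
    return [kept[i] for i in order]

rng = np.random.default_rng(6)
cube = rng.random((500, 40))                       # 500 pixels, 40 bands
targets = rng.random((3, 40))                      # mean spectra of 3 targets of interest
print(select_bands(targets, cube))
```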

  17. Research on Methods for Discovering and Selecting Cloud Infrastructure Services Based on Feature Modeling

    Directory of Open Access Journals (Sweden)

    Huamin Zhu

    2016-01-01

    Full Text Available Nowadays more and more cloud infrastructure service providers are providing large numbers of service instances which are a combination of diversified resources, such as computing, storage, and network. However, for cloud infrastructure services, the lack of a description standard and the inadequate research of systematic discovery and selection methods have exposed difficulties in discovering and choosing services for users. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on the description of the cloud infrastructure service, a systematic discovery and selection method for cloud infrastructure services are proposed. The automatic analysis techniques of the feature model are introduced to verify the model’s validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where the subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.

  18. A New Decision-Making Method for Stock Portfolio Selection Based on Computing with Linguistic Assessment

    Directory of Open Access Journals (Sweden)

    Chen-Tung Chen

    2009-01-01

    Full Text Available The purpose of stock portfolio selection is to decide how to allocate capital to a large number of stocks in order to bring the most profitable return for investors. In most of the past literature, experts considered the portfolio selection problem based only on past crisp or quantitative data. However, many qualitative and quantitative factors influence stock portfolio selection in real investment situations. It is very important for experts or decision-makers to use their experience or knowledge to predict the performance of each stock and construct a stock portfolio. Because the knowledge, experience, and background of each expert are different and vague, different types of 2-tuple linguistic variables are suitable for expressing experts' opinions on the performance evaluation of each stock with respect to the criteria. According to the linguistic evaluations of the experts, the linguistic TOPSIS and linguistic ELECTRE methods are combined to present a new decision-making method for dealing with stock selection problems in this paper. Once the investment set has been determined, the risk preferences of the investor are considered to calculate the investment ratio of each stock in the investment set. Finally, an example is implemented to demonstrate the practicability of the proposed method.

  19. A QSAR Study of Environmental Estrogens Based on a Novel Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Aiqian Zhang

    2012-05-01

    Full Text Available A large number of descriptors were employed to characterize the molecular structure of 53 natural, synthetic, and environmental chemicals which are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones and may thus pose a serious threat to the health of humans and wildlife. In this work, a robust quantitative structure-activity relationship (QSAR) model with a novel variable selection method has been proposed for the effective estrogens. The variable selection method is based on variable interaction (VSMVI) with leave-multiple-out cross validation (LMOCV) to select the best subset. During variable selection, model construction and assessment, the Organization for Economic Co-operation and Development (OECD) principles for the regulation of QSAR acceptability were fully considered, such as using an unambiguous multiple linear regression (MLR) algorithm to build the model, using several validation methods to assess the performance of the model, giving the definition of the applicability domain and analyzing the outliers with the results of molecular docking. The performance of the QSAR model indicates that the VSMVI is an effective, feasible and practical tool for rapid screening of the best subset from a large number of molecular descriptors.

  20. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    Empirical mode decomposition (EMD) approach has been believed to be potentially useful for processing the nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we proposed the EMD selecting thresholding method based on multiple iteration, which essentially acts as a development of EMD interval thresholding (EMD-IT), and randomly alters the samples of noisy parts of all the corrupted intrinsic mode functions to generate a better effect of iteration. Simulations on both synthetic signals and LIDAR signals from real world support this method.

  1. A Feature Selection Method Based on Fisher's Discriminant Ratio for Text Sentiment Classification

    Science.gov (United States)

    Wang, Suge; Li, Deyu; Wei, Yingjie; Li, Hongxia

    With the rapid growth of e-commerce, product reviews on the Web have become an important information source for customers' decision making when they intend to buy some product. As the reviews are often too many for customers to go through, how to automatically classify them into different sentiment orientation categories (i.e. positive/negative) has become a research problem. In this paper, based on Fisher's discriminant ratio, an effective feature selection method is proposed for product review text sentiment classification. In order to validate the effectiveness of the proposed method, we compared it with other methods based respectively on information gain and mutual information, while a support vector machine is adopted as the classifier. In this paper, 6 subexperiments are conducted by combining different feature selection methods with 2 kinds of candidate feature sets. On 1006 review documents of cars, the experimental results indicate that Fisher's discriminant ratio based on word frequency estimation has the best performance, with an F value of 83.3%, when the candidate features are the words which appear in both positive and negative texts.
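
    A minimal sketch of Fisher's-discriminant-ratio feature selection for two-class text sentiment data; the tiny review corpus, the word-count features and the linear SVM are assumptions chosen to keep the example self-contained.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

docs = ["great car, smooth ride", "terrible engine, noisy cabin",
        "love the mileage", "hate the brakes", "smooth and quiet", "noisy and rough"]
labels = np.array([1, 0, 1, 0, 1, 0])              # 1 = positive, 0 = negative review

X = CountVectorizer().fit_transform(docs).toarray().astype(float)

def fisher_ratio(X, y):
    """Per-feature Fisher's discriminant ratio between the two classes."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(axis=0) - neg.mean(axis=0)) ** 2
    den = pos.var(axis=0) + neg.var(axis=0) + 1e-12
    return num / den

keep = np.argsort(fisher_ratio(X, labels))[::-1][:10]   # keep the top-ranked words
clf = LinearSVC().fit(X[:, keep], labels)
print("train accuracy:", clf.score(X[:, keep], labels))
```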

  2. A kernel-based multivariate feature selection method for microarray data classification.

    Directory of Open Access Journals (Sweden)

    Shiquan Sun

    Full Text Available High dimensionality and small sample sizes, and their inherent risk of overfitting, pose great challenges for constructing efficient classifiers in microarray data classification. Therefore a feature selection technique should be conducted prior to data classification to enhance prediction performance. In general, filter methods can be considered as principal or auxiliary selection mechanisms because of their simplicity, scalability, and low computational complexity. However, a series of trivial examples show that filter methods result in less accurate performance because they ignore the dependencies of features. Although a few publications have devoted their attention to revealing the relationships of features by multivariate-based methods, these methods describe relationships among features only by linear means, and simple linear combination relationships restrict the improvement in performance. In this paper, we used a kernel method to discover inherent nonlinear correlations among features as well as between features and the target. Moreover, the number of orthogonal components was determined by kernel Fisher's linear discriminant analysis (FLDA) in a self-adaptive manner rather than by manual parameter settings. In order to reveal the effectiveness of our method we performed several experiments and compared the results between our method and other competitive multivariate-based feature selectors. In our comparison, we used two classifiers (support vector machine, k-nearest neighbor) on two groups of datasets, namely two-class and multi-class datasets. Experimental results demonstrate that the performance of our method is better than the others, especially on three hard-to-classify datasets, namely Wang's Breast Cancer, Gordon's Lung Adenocarcinoma and Pomeroy's Medulloblastoma.

  3. A Feature Subset Selection Method Based On High-Dimensional Mutual Information

    Directory of Open Access Journals (Sweden)

    Chee Keong Kwoh

    2011-04-01

    Full Text Available Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In such cases, an exhaustive search over all combinations of features is a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
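
    As a rough, assumption-laden sketch of the quantities involved, the snippet below estimates the joint mutual information I(X;Y) of a discrete feature subset and compares it with H(Y), the condition under which the subset behaves as a Markov blanket; continuous features are assumed to have been discretised beforehand.

    ```python
    import numpy as np
    from collections import Counter

    def entropy(symbols):
        """Shannon entropy (bits) of a sequence of hashable symbols."""
        counts = np.array(list(Counter(symbols).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def mutual_information(X_subset, y):
        """I(X;Y) = H(X) + H(Y) - H(X,Y) for a (samples x features) subset X."""
        x = [tuple(row) for row in X_subset]
        xy = [row + (label,) for row, label in zip(x, y)]
        return entropy(x) + entropy(list(y)) - entropy(xy)

    def covers_class(X_subset, y, tol=1e-6):
        """True when I(X;Y) ~= H(Y), i.e. the subset acts as a Markov blanket."""
        return mutual_information(X_subset, y) >= entropy(list(y)) - tol
    ```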

  4. Absolute cosine-based SVM-RFE feature selection method for prostate histopathological grading.

    Science.gov (United States)

    Sahran, Shahnorbanun; Albashish, Dheeb; Abdullah, Azizi; Shukor, Nordashima Abd; Hayati Md Pauzi, Suria

    2018-04-18

    Feature selection (FS) methods are widely used in grading and diagnosing prostate histopathological images. In this context, FS is based on the texture features obtained from the lumen, nuclei, cytoplasm and stroma, all of which are important tissue components. However, it is difficult to represent the high-dimensional textures of these tissue components. To solve this problem, we propose a new FS method that enables the selection of features with minimal redundancy in the tissue components. We categorise tissue images based on the texture of individual tissue components via the construction of a single classifier and also construct an ensemble learning model by merging the values obtained by each classifier. Another issue that arises is overfitting due to the high-dimensional texture of individual tissue components. We propose a new FS method, SVM-RFE(AC), that integrates a Support Vector Machine-Recursive Feature Elimination (SVM-RFE) embedded procedure with an absolute cosine (AC) filter method, preventing both the redundancy among the features selected by SVM-RFE and the unoptimised classifier of the AC filter. We conducted experiments on H&E histopathological prostate and colon cancer images with respect to three prostate classifications, namely benign vs. grade 3, benign vs. grade 4 and grade 3 vs. grade 4. The colon benchmark dataset requires a distinction between grades 1 and 2, which are the most difficult cases to distinguish in the colon domain. The results obtained by both the single and ensemble classification models (which use the product rule as their merging method) confirm that the proposed SVM-RFE(AC) is superior to the other SVM and SVM-RFE-based methods. We developed an FS method based on SVM-RFE and AC and successfully showed that its use enabled the identification of the most crucial texture feature of each tissue component. Thus, it makes possible the distinction between multiple Gleason grades (e.g. grade 3 vs. grade 4), and its performance is far superior to that of the other methods considered.
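
    A schematic scikit-learn sketch of the same idea follows: linear-SVM recursive feature elimination supplies a candidate subset, and a simple absolute-cosine filter then removes near-collinear features. This approximates the SVM-RFE(AC) concept rather than reproducing the authors' implementation; the subset size and the cosine threshold are assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import RFE

    def svm_rfe_ac(X, y, n_rfe=50, cos_thresh=0.9):
        # embedded step: recursive feature elimination with a linear SVM
        rfe = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=n_rfe, step=1)
        rfe.fit(X, y)
        idx = np.where(rfe.support_)[0]

        # filter step: drop features whose |cosine similarity| with an
        # already-kept feature exceeds the threshold (redundancy control)
        cols = X[:, idx].astype(float)
        cols = cols / (np.linalg.norm(cols, axis=0) + 1e-12)
        kept = []
        for j in range(cols.shape[1]):
            if all(abs(cols[:, j] @ cols[:, k]) < cos_thresh for k in kept):
                kept.append(j)
        return idx[kept]

    # selected = svm_rfe_ac(texture_features, gleason_labels)
    ```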

  5. A Method Based on Intuitionistic Fuzzy Dependent Aggregation Operators for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Fen Wang

    2013-01-01

    Full Text Available Recently, the decision-making problem of evaluating and ranking potential suppliers has become a key strategic factor for business firms. In this paper, two new intuitionistic fuzzy aggregation operators are developed: the dependent intuitionistic fuzzy ordered weighted averaging (DIFOWA) operator and the dependent intuitionistic fuzzy hybrid weighted aggregation (DIFHWA) operator. Some of their main properties are studied. A method based on the DIFHWA operator for intuitionistic fuzzy multiple attribute decision making is presented. Finally, an illustrative example concerning supplier selection is given.
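
    The dependent operators above derive their weights from the aggregated arguments themselves, which is not reproduced here; the sketch below only shows the basic intuitionistic fuzzy weighted averaging (IFWA) step that such operators build on, with the ratings and weights invented for illustration.

    ```python
    import numpy as np

    def ifwa(ratings, weights):
        """Intuitionistic fuzzy weighted average of (membership, non-membership)
        pairs: mu = 1 - prod(1 - mu_i)^w_i,  nu = prod(nu_i^w_i)."""
        mu = np.array([r[0] for r in ratings])
        nu = np.array([r[1] for r in ratings])
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return float(1.0 - np.prod((1.0 - mu) ** w)), float(np.prod(nu ** w))

    def score(ifn):
        """Common score function mu - nu used to compare aggregated suppliers."""
        return ifn[0] - ifn[1]

    # one supplier rated on three attributes, with illustrative attribute weights
    supplier = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
    print(score(ifwa(supplier, [0.5, 0.3, 0.2])))
    ```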

  6. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    Science.gov (United States)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of using stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and the optical characteristics, respectively. Firstly, the three-dimensional structure characteristics can be analyzed with 3D Zernike descriptors (3DZD); however, different 3DZD parameters describe three-dimensional structures of different complexity, so the parameters need to be optimally selected for the various objects on the ground. Secondly, the features representing the optical characteristics also need to be optimized. If this is not handled properly, a stereo feature vector composed of 3DZD and image features will contain a great deal of redundant information, and this redundancy may not improve the classification accuracy and can even degrade it. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve it. Experimental results show that the proposed method can effectively improve both the computational efficiency and the classification accuracy.

  7. A ROC-based feature selection method for computer-aided detection and diagnosis

    Science.gov (United States)

    Wang, Songyuan; Zhang, Guopeng; Liao, Qimei; Zhang, Junying; Jiao, Chun; Lu, Hongbing

    2014-03-01

    Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples in one class (with the disease) than in the other, resulting in suboptimal classifier performance. Identifying the most characterizing features of the observed data is therefore critical for improving the sensitivity and minimizing the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with the selection metric FAST (feature assessment by sliding thresholds), which is based on the area under a ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the candidate features selected by mR-FAST are more characterizing for lesion detection, with higher AUC, enabling a compact subset of superior features to be found at low cost.
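
    As a loose illustration of the relevance half of this approach, the sketch below scores each candidate CAD feature by the AUC obtained when the raw feature value is used as a decision score (a FAST-like criterion); the mRMR redundancy term and the linear-discriminant step are omitted, and all names are assumptions.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def auc_relevance(X, y):
        """Per-feature AUC when the raw feature value is used as the decision
        score; AUCs below 0.5 are flipped so the measure is orientation-free."""
        scores = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            auc = roc_auc_score(y, X[:, j])
            scores[j] = max(auc, 1.0 - auc)
        return scores

    def top_features(X, y, k=20):
        """Indices of the k most characterizing features for lesion detection."""
        return np.argsort(auc_relevance(X, y))[::-1][:k]
    ```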

  8. Selecting the patients for morning report sessions: case-based vs. conventional method.

    Science.gov (United States)

    Rabiei, Mehdi; Saeidi, Masumeh; Kiani, Mohammad Ali; Amin, Sakineh Mohebi; Ahanchian, Hamid; Jafari, Seyed Ali; Kianifar, Hamidreza

    2015-08-01

    One of the most important issues in morning report sessions is the number of patients. The aim of this study was to investigate and compare the number of cases reported in morning report sessions under case-based and conventional methods from the perspective of pediatric residents of Mashhad University of Medical Sciences. The study was conducted on 24 pediatric residents of Mashhad University of Medical Sciences in the academic year 2014-2015. In this survey, the residents replied to a 20-question researcher-made questionnaire designed to measure their views on the number of patients in morning report sessions using case-based and conventional methods. The validity of the questionnaire was confirmed by experts' views and its reliability by calculating Cronbach's alpha coefficients. Data were analyzed by t-test. The mean age of the residents was 30.852 ± 2.506 years, and 66.6% of them were female. The results showed no significant relationship between the variables of academic year and gender and the residents' perspective on choosing the number of patients in the morning report sessions (P > 0.05). T-test analysis showed a significant difference in the residents' average scores in favour of the case-based method compared with the conventional method (P < 0.05); the case-based morning report was therefore preferred over the conventional method. This method makes residents pay more attention to the details of patients' issues and therefore helps them to better plan how to address patient problems and improve their differential diagnosis skills.

  9. Supplier Portfolio Selection and Optimum Volume Allocation: A Knowledge Based Method

    NARCIS (Netherlands)

    Aziz, Romana; Aziz, R.; van Hillegersberg, Jos; Kersten, W.; Blecker, T.; Luthje, C.

    2010-01-01

    Selection of suppliers and allocation of optimum volumes to suppliers is a strategic business decision. This paper presents a decision support method for supplier selection and the optimal allocation of volumes in a supplier portfolio. The requirements for the method were gathered during a case study.

  10. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Full Text Available Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of the variables which contribute to group classification is a crucial step, especially in metabolomics studies focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (RH and PH study. Orthogonal projections to latent structures-discriminant analysis (OPLS-DA, without and with multiple testing correction, as well as the least absolute shrinkage and selection operator (LASSO, were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant under Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria under Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO, with a bootstrap procedure evaluating the reproducibility of the results, were demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables with reproducibility between 99.3% and 100%. However, despite the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for the PLS-DA loadings.
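
    In the spirit of the LASSO-with-bootstrap procedure described above, here is a condensed scikit-learn sketch that records how often each variable receives a non-zero L1-penalised coefficient across bootstrap resamples; the classifier settings, penalty strength, and resample count are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def lasso_selection_frequency(X, y, n_boot=100, C=0.1, seed=0):
        """Fraction of bootstrap resamples in which each metabolomic variable
        receives a non-zero L1-penalised coefficient."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        counts = np.zeros(p)
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                     # bootstrap resample
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
            clf.fit(X[idx], y[idx])
            counts += (np.abs(clf.coef_).ravel() > 1e-8)
        return counts / n_boot

    # variables kept in, say, at least 99% of resamples:
    # freq = lasso_selection_frequency(X, y); stable = np.where(freq >= 0.99)[0]
    ```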

  11. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method

    Science.gov (United States)

    Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing

    2018-02-01

    In general, sound waves cause objects in their travelling path to vibrate. If a laser beam illuminates the rough surface of such an object, it is scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. The method selects, from a small region of the speckle patterns, the pixels whose gray values show large variance over time. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, our method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified by applying it to a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and requires over an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal with a duration of 1.876 s is recovered from various objects in only 5.38 s of computation.
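
    A minimal numpy sketch of the pixel-selection and summation step is given below; `frames` is assumed to be a (time, height, width) stack of speckle images from the high-speed camera, and the number of selected pixels and the final normalisation are assumptions.

    ```python
    import numpy as np

    def recover_sound(frames, n_pixels=200):
        """Select the pixels with the largest temporal gray-value variance and
        sum their mean-removed intensity traces to form the sound waveform."""
        t = frames.shape[0]
        flat = frames.reshape(t, -1).astype(float)
        variance = flat.var(axis=0)
        top = np.argsort(variance)[::-1][:n_pixels]       # high-variance pixels
        traces = flat[:, top] - flat[:, top].mean(axis=0)
        signal = traces.sum(axis=1)
        return signal / (np.abs(signal).max() + 1e-12)    # normalised waveform
    ```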

  12. Supplier selection in manufacturing innovation chain-oriented public procurement based on improved PSO method

    Directory of Open Access Journals (Sweden)

    Xin Xu

    2014-01-01

    Full Text Available Purpose: In a dynamic innovation market, it is very difficult for an enterprise to innovate on its own; technology innovation is shifting towards a collaborative R&D chain mode. Thus, supplier selection based on the innovation efficiency of an individual enterprise is not applicable when constructing a collaborative R&D innovation chain. This study seeks to address how to select R&D innovation chain suppliers in the manufacturing industry. Design/methodology/approach: Firstly, the Delphi and AHP methods are applied to establish an index system for evaluating innovation chain suppliers, and each index is then weighted by experts using AHP. Thirdly, an optimized PSO algorithm based on the optimal efficiency of the innovation chain is put forward to identify ideal suppliers that meet realistic conditions. Fourthly, innovation chain construction in the generator manufacturing industry was taken as an empirical case study to test the improved PSO model. Findings: The innovation chain comprises several enterprises; the innovation performance of a single enterprise is not always positively correlated with that of the whole innovation chain, and the proposed model is capable of finding the best combination for constructing an innovation chain. Research limitations/implications: The relations between these constructs and other variables of academic interest were analyzed using precise and credible data, with a clear and concise description of the supply chain integration measurement scales. Practical implications: The study provides scales that are valid as a diagnostic tool for best practices, as well as a benchmark against which to compare the score of each individual plant within a machinery-industry innovation chain. Originality/value: Innovation chain integration is an important factor in explaining the innovation performance of companies. The vast range of results obtained is due to the fact that there is no exactness to

  13. Simulation-based investigation of the paired-gear method in cod-end selectivity studies

    DEFF Research Database (Denmark)

    Herrmann, Bent; Frandsen, Rikke; Holst, René

    2007-01-01

    In this paper, the paired-gear and covered cod-end methods for estimating the selectivity of trawl cod-ends are compared. A modified version of the cod-end selectivity simulator PRESEMO is used to simulate the data that would be collected from a paired-gear experiment where the test cod-end also ...

  14. An Integrated DEMATEL-VIKOR Method-Based Approach for Cotton Fibre Selection and Evaluation

    Science.gov (United States)

    Chakraborty, Shankar; Chatterjee, Prasenjit; Prasad, Kanika

    2018-01-01

    Selection of the most appropriate cotton fibre type for yarn manufacturing is often treated as a multi-criteria decision-making (MCDM) problem, as the optimal selection decision needs to be taken in the presence of several conflicting fibre properties. In this paper, two popular MCDM methods, decision making trial and evaluation laboratory (DEMATEL) and VIse Kriterijumska Optimizacija kompromisno Resenje (VIKOR), are integrated to aid the cotton fibre selection decision. The DEMATEL method addresses the interrelationships between various physical properties of cotton fibres while segregating them into cause and effect groups, whereas the VIKOR method helps in ranking all 17 considered cotton fibres from best to worst. The derived ranking of cotton fibre alternatives closely matches that obtained by past researchers. This model can assist spinning industry personnel in the blending process by supporting accurate fibre selection decisions when cotton fibre properties are numerous and interrelated.
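
    A compact sketch of the VIKOR ranking step alone is shown below (the DEMATEL cause-effect analysis is omitted); it assumes a decision matrix of benefit-type fibre properties, criteria weights already derived, and the usual compromise weight v = 0.5.

    ```python
    import numpy as np

    def vikor(F, w, v=0.5):
        """VIKOR compromise scores Q (lower is better) for a matrix F of
        alternatives x benefit-type criteria with weights w."""
        w = np.asarray(w, dtype=float) / np.sum(w)
        f_best, f_worst = F.max(axis=0), F.min(axis=0)
        gap = (f_best - F) / (f_best - f_worst + 1e-12)
        S = (w * gap).sum(axis=1)              # group utility
        R = (w * gap).max(axis=1)              # individual regret
        Q = v * (S - S.min()) / (S.max() - S.min() + 1e-12) \
            + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12)
        return Q

    # e.g. ranking = np.argsort(vikor(fibre_matrix, criteria_weights))
    ```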

  15. Computational Experiment Study on Selection Mechanism of Project Delivery Method Based on Complex Factors

    Directory of Open Access Journals (Sweden)

    Xiang Ding

    2014-01-01

    Full Text Available Project delivery planning is a key stage used by the project owner (or project investor) for organizing design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model to show how project complexity, governance strength, and market environment affect the project owner's decision on PDM. Experimental results show that project owners usually choose the Design-Build method when project complexity lies within a certain high range. Furthermore, this paper points out that the Design-Build method will be the preferred choice when potential contractors develop quickly. The paper provides owners with methods and suggestions by showing how these factors affect PDM selection, which may improve project performance.

  16. Research on filter’s parameter selection based on PROMETHEE method

    Science.gov (United States)

    Zhu, Hui-min; Wang, Hang-yu; Sun, Shi-yan

    2018-03-01

    The selection of filter parameters in target recognition is studied in this paper. The PROMETHEE method was applied to the problem of selecting Gabor filter parameters, and a correspondence model relating the elements of the two problems was established. Taking the identification of a military target as an example, the filter parameter decision problem was simulated and solved with PROMETHEE. The results showed that using the PROMETHEE method for the selection of filter parameters is more systematic: the human bias introduced by expert judgement and purely empirical methods can be avoided in this way. The method can serve as a reference for deciding on the filter's parameter configuration scheme.
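
    The following is a minimal PROMETHEE II sketch using the simple 'usual' (step) preference function; the evaluation matrix of candidate Gabor parameter settings, the criteria directions, and the weights are assumptions rather than the paper's setup.

    ```python
    import numpy as np

    def promethee_ii(F, w):
        """Net outranking flows for a matrix F of alternatives x benefit
        criteria, using the usual (step) preference function."""
        w = np.asarray(w, dtype=float) / np.sum(w)
        n = F.shape[0]
        diff = F[:, None, :] - F[None, :, :]      # pairwise criterion differences
        pref = (diff > 0).astype(float)           # usual criterion: 1 if strictly better
        pi = (pref * w).sum(axis=2)               # aggregated preference index
        phi_plus = pi.sum(axis=1) / (n - 1)       # leaving flow
        phi_minus = pi.sum(axis=0) / (n - 1)      # entering flow
        return phi_plus - phi_minus               # higher net flow = better

    # candidate Gabor settings ranked by np.argsort(-promethee_ii(F, w))
    ```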

  17. Data Visualization and Feature Selection Methods in Gel-based Proteomics

    DEFF Research Database (Denmark)

    Silva, Tomé Santos; Richard, Nadege; Dias, Jorge P.

    2014-01-01

    This chapter reviews data visualization and feature selection methods in gel-based proteomics, summarizing the current state of research within this field. Particular focus is given to discussing the usefulness of available multivariate analysis tools for both data visualization and feature selection purposes. Visual examples are given using a real gel-based proteomic dataset as a basis.

  18. TEHRAN AIR POLLUTANTS PREDICTION BASED ON RANDOM FOREST FEATURE SELECTION METHOD

    Directory of Open Access Journals (Sweden)

    A. Shamsoddini

    2017-09-01

    Full Text Available Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to obtain an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that artificial neural networks fed with the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  19. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    Science.gov (United States)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to obtain an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that artificial neural networks fed with the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
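
    A schematic scikit-learn pipeline in the spirit of the two records above is sketched below: random-forest importances pick the most relevant attributes, which then feed a multilayer perceptron regressor; the number of retained attributes, the network size, and the other settings are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor

    def rf_selected_mlp(X, y, keep=10):
        # rank candidate attributes by random-forest importance
        rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
        top = np.argsort(rf.feature_importances_)[::-1][:keep]

        # train the pollutant-concentration model on the selected attributes only
        mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                           random_state=0).fit(X[:, top], y)
        return top, mlp

    # top, model = rf_selected_mlp(X_train, y_no2)
    # y_pred = model.predict(X_test[:, top])
    ```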

  20. METHOD FOR SELECTION OF PROJECT MANAGEMENT APPROACH BASED ON FUZZY CONCEPTS

    Directory of Open Access Journals (Sweden)

    Igor V. KONONENKO

    2017-03-01

    Full Text Available An analysis of the literature devoted to the selection of a project management approach and to the development of effective methods for solving this problem is given. A mathematical model and a method for selecting a project management approach based on fuzzy concepts of the applicability of existing approaches are proposed. The selection is made among approaches such as the PMBOK Guide, the ISO 21500 standard, the PRINCE2 methodology, the SWEBOK Guide, and the agile methodologies Scrum, XP, and Kanban. The project parameters that have the greatest impact on the result of the selection, and the measure of their impact, are determined. These parameters relate to information about the project, the team, communication, and critical project risks. They include the number of people involved in the project, the customer's experience with this project team, the project team's experience in this field, the project team's understanding of the requirements, its adaptability, its initiative, and others. The suggested method is illustrated by applying it to the selection of a project management approach for a software development project.

  1. The multi-objective decision making methods based on MULTIMOORA and MOOSRA for the laptop selection problem

    Science.gov (United States)

    Aytaç Adalı, Esra; Tuş Işık, Ayşegül

    2017-06-01

    A decision making process requires the values of conflicting objectives for the alternatives and the selection of the best alternative according to the needs of the decision makers. Multi-objective optimization methods may provide a solution for this selection. This paper presents the laptop selection problem based on MOORA plus the full multiplicative form (MULTIMOORA) and multi-objective optimization on the basis of simple ratio analysis (MOOSRA), which are relatively new multi-objective optimization methods. The novelty of this paper is solving this problem with the MULTIMOORA and MOOSRA methods for the first time.

  2. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    OpenAIRE

    Jun-He Yang; Ching-Hsue Cheng; Chia-Pan Chan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on imputing missing values followed by variable selection to forecast a reservoir's water level. The study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated, based on the ordering of the data, into an integrated research dataset. The proposed time-series forecasting m...

  3. Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-01-01

    Full Text Available By simplifying the tolerance problem and treating faulty voltages at different test points as independent variables, the integer-coded table technique has been proposed to simplify the test point selection process. Usually, simplifying the tolerance problem may lead to a wrong solution, while the independence assumption results in an overly conservative outcome. To address these problems, this paper considers the tolerance problem thoroughly and takes the dependency relationships between different test points into account at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated using the ambiguity sets and the faulty voltage distribution determined by component tolerances. Second, the selected optimal test point is used to expand the current graph node, using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; therefore, it is a good solution for minimizing the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.

  4. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select these behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has a more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  5. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Zhehuang Huang

    2015-01-01

    Full Text Available Artificial fish swarm algorithm (AFSA) is a population based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select these behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has a more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm.
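
    As a toy illustration of a log-linear behaviour chooser of the kind described in the two records above, the sketch below scores each candidate behaviour from weighted state features and samples one from the resulting softmax distribution; the features, weights, and behaviour set are invented for illustration.

    ```python
    import numpy as np

    def select_behavior(state_features, weights, rng=None):
        """state_features: (n_behaviors, n_features) feature values for each
        candidate behaviour (e.g. prey, swarm, follow); weights: log-linear
        feature weights. Returns the sampled behaviour index and the probabilities."""
        rng = rng or np.random.default_rng()
        scores = state_features @ weights
        p = np.exp(scores - scores.max())
        p = p / p.sum()                      # softmax over candidate behaviours
        return int(rng.choice(len(p), p=p)), p

    # three behaviours described by (local food density, crowding) features
    feats = np.array([[0.9, 0.2], [0.4, 0.8], [0.6, 0.5]])
    choice, probs = select_behavior(feats, weights=np.array([2.0, -1.0]))
    ```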

  6. Analysis of selected structures for model-based measuring methods using fuzzy logic

    Energy Technology Data Exchange (ETDEWEB)

    Hampel, R.; Kaestner, W.; Fenske, A.; Vandreier, B.; Schefter, S. [Hochschule fuer Technik, Wirtschaft und Sozialwesen Zittau/Goerlitz (FH), Zittau (DE). Inst. fuer Prozesstechnik, Prozessautomatisierung und Messtechnik e.V. (IPM)

    2000-07-01

    Monitoring and diagnosis of safety-related technical processes in nuclear engineering can be improved with the help of intelligent methods of signal processing such as analytical redundancies. This chapter gives an overview of combined methods in the form of hybrid models using model-based measuring methods (observers) and knowledge-based methods (fuzzy logic). Three variants of hybrid observers (fuzzy-supported observer, hybrid observer with variable gain and hybrid non-linear operating point observer) are explained. As a result of the combination of analytical and fuzzy-based algorithms, a new quality of monitoring and diagnosis is achieved. The results are demonstrated in summary for the example of water level estimation within pressure vessels (pressurizer, steam generator, and Boiling Water Reactor) with a water-steam mixture during accidental depressurization. (orig.)

  7. Analysis of selected structures for model-based measuring methods using fuzzy logic

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Fenske, A.; Vandreier, B.; Schefter, S.

    2000-01-01

    Monitoring and diagnosis of safety-related technical processes in nuclear engineering can be improved with the help of intelligent methods of signal processing such as analytical redundancies. This chapter gives an overview of combined methods in the form of hybrid models using model-based measuring methods (observers) and knowledge-based methods (fuzzy logic). Three variants of hybrid observers (fuzzy-supported observer, hybrid observer with variable gain and hybrid non-linear operating point observer) are explained. As a result of the combination of analytical and fuzzy-based algorithms, a new quality of monitoring and diagnosis is achieved. The results are demonstrated in summary for the example of water level estimation within pressure vessels (pressurizer, steam generator, and Boiling Water Reactor) with a water-steam mixture during accidental depressurization. (orig.)

  8. A Selection Method That Succeeds!

    Science.gov (United States)

    Weitman, Catheryn J.

    Provided a structured selection method is carried out, it is possible to find quality early childhood personnel. The hiring process involves five definite steps, each of which establishes a base for the next. A needs assessment formulating basic minimal qualifications is the first step. The second step involves a review of current job descriptions…

  9. Influence of sand base preparation on properties of chromite moulding sands with sodium silicate hardened with selected methods

    Directory of Open Access Journals (Sweden)

    Stachowicz M.

    2017-03-01

    Full Text Available The paper presents research on the relation between the thermal preparation of the chromite sand base of moulding sands containing sodium silicate, hardened with selected physical and chemical methods, and the structure of the bonding bridges created. Test specimens were prepared of chromite sand - fresh or baked at 950°C for 10 or 24 hours - mixed with 0.5 wt.% of the selected non-modified inorganic binder and, after forming, were hardened with CO2 or liquid esters, dried traditionally or heated with microwaves at 2.45 GHz. SEM observations showed that the baking time of the base sand and the hardening method significantly affect the structure of the bonding bridges and are correlated with the mechanical properties of the moulding sands. It was found that hardening chromite-based moulding mixtures with physical methods is much more favourable than hardening with chemical methods, also guaranteeing mechanical properties that are more than ten times higher.

  10. Method of App Selection for Healthcare Providers Based on Consumer Needs.

    Science.gov (United States)

    Lee, Jisan; Kim, Jeongeun

    2018-01-01

    Mobile device applications can be used to manage health. However, healthcare providers hesitate to use them because selection methods that consider the needs of health consumers and identify the most appropriate application are rare. This study aimed to create an effective method of identifying applications that address user needs. Women experiencing dysmenorrhea and premenstrual syndrome were the targeted users. First, we searched for related applications from two major sources of mobile applications. Brainstorming, mind mapping, and persona and scenario techniques were used to create a checklist of relevant criteria, which was used to rate the applications. Of the 2784 applications found, 369 were analyzed quantitatively. Of those, five of the top candidates were evaluated by three groups: application experts, clinical experts, and potential users. All three groups ranked one application the highest; however, the remaining rankings differed. The results of this study suggest that the method created is useful because it considers not only the needs of various users but also the knowledge of application and clinical experts. This study proposes a method for finding and using the best among existing applications and highlights the need for nurses who can understand and combine opinions of users and application and clinical experts.

  11. Adaptive method for multi-dimensional integration and selection of a base of chaos polynomials

    International Nuclear Information System (INIS)

    Crestaux, T.

    2011-01-01

    This research thesis addresses the propagation of uncertainty in numerical simulations and its treatment within a probabilistic framework, using a functional approach based on functions of random variables. The author reports the use of the spectral method to represent random variables by expansion in polynomial chaos. More precisely, the author uses the non-intrusive projection method, which exploits the orthogonality of the chaos polynomials to compute the expansion coefficients by approximating the scalar products. The approach is applied to a cavity problem and to waste storage. [fr]
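
    A small sketch of non-intrusive projection for a single standard-normal input is given below: Hermite-chaos coefficients are estimated by Gauss-Hermite quadrature of the scalar products. The model function, expansion order, and quadrature size are assumptions, and the thesis's multi-dimensional, adaptive scheme is not reproduced.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval
    from math import factorial, sqrt, pi

    def pce_coefficients(f, order=4, n_quad=20):
        """Hermite-chaos coefficients of f(X), X ~ N(0,1), via non-intrusive
        projection: c_k = E[f(X) He_k(X)] / k!, with the expectation estimated
        by Gauss-Hermite quadrature for the probabilists' weight exp(-x^2/2)."""
        x, w = hermegauss(n_quad)
        w = w / sqrt(2.0 * pi)                  # turn the weights into probabilities
        fx = f(x)
        coeffs = []
        for k in range(order + 1):
            he_k = hermeval(x, [0.0] * k + [1.0])   # He_k evaluated at the nodes
            coeffs.append(float(np.sum(w * fx * he_k)) / factorial(k))
        return np.array(coeffs)

    # surrogate statistics follow from orthogonality of the He_k basis
    c = pce_coefficients(np.sin, order=6)
    mean = c[0]
    variance = sum(factorial(k) * c[k] ** 2 for k in range(1, len(c)))
    ```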

  12. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    Science.gov (United States)

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial component. The raw spectrum consists of signals from the sample and scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
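
    Purely as an illustrative sketch (not the paper's objective or data), the snippet below uses SciPy's differential evolution to search a 0-1 inclusion score per candidate wavelength, minimising the cross-validated quantification error of a simple linear calibration model; all names and settings are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def select_wavelengths(A, c, thresh=0.5):
        """A: absorbance spectra (samples x wavelengths), c: concentrations.
        DE searches a 0..1 inclusion score per wavelength; scores above
        `thresh` mark the wavelengths used by the calibration model."""
        def objective(scores):
            cols = np.where(scores > thresh)[0]
            if cols.size == 0:
                return 1e6                      # penalise empty selections
            rmse = -cross_val_score(LinearRegression(), A[:, cols], c, cv=5,
                                    scoring="neg_root_mean_squared_error").mean()
            return rmse
        bounds = [(0.0, 1.0)] * A.shape[1]
        result = differential_evolution(objective, bounds, maxiter=50, seed=0,
                                        polish=False)
        return np.where(result.x > thresh)[0]
    ```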

  13. Verification of a characterization method of the laser-induced selective activation based on industrial lasers

    DEFF Research Database (Denmark)

    Zhang, Yang; Hansen, Hans Nørgaard; Tang, Peter T.

    2013-01-01

    In this article, laser-induced selective activation (LISA) for subsequent autocatalytic copper plating is performed with several types of industrial-scale lasers, including a Nd:YAG laser, a UV laser, a fiber laser, a green laser, and a short-pulsed laser. Based on analysis of all the laser-machined surfaces, normalized bearing area curves and their parameters are used to characterize the surfaces quantitatively, and the range of normalized bearing area curve parameters for a plateable surface is suggested. PBT/PET with 40 % glass fiber was used as the substrate material. For all of the studied lasers, the parameters were varied over a relatively large range, and matrices of laser-machined surfaces were obtained. The topography of those laser-machined surfaces was examined by scanning electron microscopy (SEM). For each sample examined by SEM, there was an identical workpiece plated for 90 min...

  14. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    Science.gov (United States)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes besides target values for some attributes. Such techniques are considered comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values, and the values of the decision matrix and of the target-based attributes can be provided as intervals in some of these problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce the degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the distance between interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of the degree of preference of interval numbers to calculate the maximum, the minimum, and the ranking of these numbers. Two decision-making problems regarding the biomaterials selection of hip and knee prostheses are discussed. Preference degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.

  15. Optimal Site Selection of Electric Vehicle Charging Stations Based on a Cloud Model and the PROMETHEE Method

    Directory of Open Access Journals (Sweden)

    Yunna Wu

    2016-03-01

    Full Text Available The task of site selection for electric vehicle charging stations (EVCS) is hugely important from the perspective of harmonious and sustainable development. However, flaws and inadequacies in the currently used multi-criteria decision making methods can result in inaccurate and irrational decisions. First of all, the uncertainty of the information cannot be described fully in the evaluation of EVCS site selection. Secondly, rigorous consideration of the mutual influence between the various criteria is lacking, which is mainly evidenced in two aspects: one is ignoring the correlations, and the other is measuring them unreasonably. Last but not least, the ranking method adopted in previous studies is not very appropriate for evaluating the EVCS site selection problem. As a result of the above analysis, a Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) method-based decision system combined with the cloud model is proposed in this paper for EVCS site selection. Firstly, the use of the PROMETHEE method can bolster the confidence and visibility of the decision for decision makers. Secondly, the cloud model is recommended to describe the fuzziness and randomness of linguistic terms integrally and accurately. Finally, the Analytical Network Process (ANP) method is adopted to measure the correlations among the indicators, with a greatly simplified calculation of the parameters and steps required.

  16. Improved Screening Method for the Selection of Wine Yeasts Based on Their Pigment Adsorption Activity

    Directory of Open Access Journals (Sweden)

    Andrea Caridi

    2013-01-01

    Full Text Available The aim of this research is to improve an existing low-cost and simple but consistent culturing technique for measuring the adsorption of grape skin pigments on yeasts, comprising: (i) growing yeasts in Petri dishes on a chromogenic grape-skin-based medium, (ii) photographing the yeast biomass, (iii) measuring its red, green, and blue colour components, and (iv) performing the statistical analysis of the data. Twenty strains of Saccharomyces cerevisiae were grown on different lots of the chromogenic medium, prepared using grape skins from the dark cultivars Greco Nero, Magliocco and Nero d'Avola. Microscale wine fermentation trials were also performed. Wide and significant differences among wine yeasts were observed. The chromogenic grape-skin-based medium can be prepared using any grape cultivar, thus allowing the specific selection of the most suitable strain of Saccharomyces cerevisiae for each grape must, mainly for red winemaking. The research provides a useful tool to characterize wine yeasts in relation to pigment adsorption, allowing the improvement of wine colour.

  17. Selecting Measures to Evaluate Complex Sociotechnical Systems: An Empirical Comparison of a Task-based and Constraint-based Method

    Science.gov (United States)

    2013-07-01

    personnel selection, work methods, labour standards and an individual’s motivation to perform work. His work became less relevant as tasks became more...people were employed to do and was able to show that non-physical factors such as job satisfaction and the psychological states of workers contributed...all threats, flight conditions, consequences of their actions (for example, damaging the aircraft during a “hard” landing) and expressed satisfaction

  18. Determining Selection across Heterogeneous Landscapes: A Perturbation-Based Method and Its Application to Modeling Evolution in Space.

    Science.gov (United States)

    Wickman, Jonas; Diehl, Sebastian; Blasius, Bernd; Klausmeier, Christopher A; Ryabov, Alexey B; Brännström, Åke

    2017-04-01

    Spatial structure can decisively influence the way evolutionary processes unfold. To date, several methods have been used to study evolution in spatial systems, including population genetics, quantitative genetics, moment-closure approximations, and individual-based models. Here we extend the study of spatial evolutionary dynamics to eco-evolutionary models based on reaction-diffusion equations and adaptive dynamics. Specifically, we derive expressions for the strength of directional and stabilizing/disruptive selection that apply both in continuous space and to metacommunities with symmetrical dispersal between patches. For directional selection on a quantitative trait, this yields a way to integrate local directional selection across space and determine whether the trait value will increase or decrease. The robustness of this prediction is validated against quantitative genetics. For stabilizing/disruptive selection, we show that spatial heterogeneity always contributes to disruptive selection and hence always promotes evolutionary branching. The expression for directional selection is numerically very efficient and hence lends itself to simulation studies of evolutionary community assembly. We illustrate the application and utility of the expressions for this purpose with two examples of the evolution of resource utilization. Finally, we outline the domain of applicability of reaction-diffusion equations as a modeling framework and discuss their limitations.

  19. Quick Link Selection Method by Using Pricing Strategy Based on User Equilibrium for Implementing an Effective Urban Travel Demand Management

    Directory of Open Access Journals (Sweden)

    Shahriar Afandizadeh Zargari

    2016-12-01

    Full Text Available This paper presents a two-stage optimization model as a quick method to choose the best potential links for implementing an urban travel demand management (UTDM) strategy such as road pricing. The model is optimized by minimizing the hidden cost of congestion based on user equilibrium (MHCCUE). It forecasts the exact flows and tolls for links under user equilibrium conditions to determine the hidden cost of each link, and optimizes the link selection based on network congestion priority. The results show that not only is the total cost decreased, but the number of links selected for pricing is also reduced compared with previous toll minimization methods. Moreover, as the model only uses traffic assignment data for its calculations, it can be considered a quick and optimal solution for choosing the potential links.

  20. Studies on the matched potential method for determining the selectivity coefficients of ion-selective electrodes based on neutral ionophores: experimental and theoretical verification.

    Science.gov (United States)

    Tohda, K; Dragoe, D; Shibata, M; Umezawa, Y

    2001-06-01

    A theory is presented that describes the matched potential method (MPM) for the determination of the potentiometric selectivity coefficients (KA,Bpot) of ion-selective electrodes for two ions with any charge. This MPM theory is based on electrical diffuse layers on both the membrane and the aqueous side of the interface, and is therefore independent of the Nicolsky-Eisenman equation. Instead, the Poisson equation is used and a Boltzmann distribution is assumed with respect to all charged species, including primary, interfering and background electrolyte ions located in the diffuse double layers. In this model, the MPM selectivity coefficients of ions with equal charge (ZA = ZB) are expressed as the ratio of the concentrations of the primary and interfering ions in aqueous solutions at which the same amounts of the primary and interfering ions are permselectively extracted into the membrane surface. For ions with unequal charge (ZA not equal to ZB), the selectivity coefficients are expressed as a function not only of the amounts of the primary and interfering ions permeated into the membrane surface, but also of the primary ion concentration in the initial reference solution and the delta EMF value. Using the measured complexation stability constants and single-ion distribution coefficients for the relevant systems, the corresponding MPM selectivity coefficients can be calculated from the developed MPM theory. It was found that this MPM theory is capable of accurately and precisely predicting the MPM selectivity coefficients for a series of ion-selective electrodes (ISEs) with representative ionophore systems, which are generally in complete agreement with MPM selectivity values determined independently from the potentiometric measurements. These results also confirm that the Boltzmann distribution assumption in the theory is in fact valid. The recent critical papers on MPM have pointed out that because the MPM selectivity coefficients are highly concentration

  1. A parallel optimization method for product configuration and supplier selection based on interval

    Science.gov (United States)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.

  2. Oral cancer prognosis based on clinicopathologic and genomic markers using a hybrid of feature selection and machine learning methods

    Science.gov (United States)

    2013-01-01

    Background Machine learning techniques are becoming useful as an alternative approach to conventional medical diagnosis or prognosis, as they are good at handling noisy and incomplete data and significant results can be attained despite a small sample size. Traditionally, clinicians make prognostic decisions based on clinicopathologic markers. However, it is not easy for even the most skilful clinician to arrive at an accurate prognosis using these markers alone. Thus, there is a need to use genomic markers to improve the accuracy of prognosis. The main aim of this research is to apply a hybrid of feature selection and machine learning methods to oral cancer prognosis based on the correlation of clinicopathologic and genomic markers. Results In the first stage of this research, five feature selection methods were proposed and tested on the oral cancer prognosis dataset. In the second stage, the models with the features selected by each feature selection method were tested on the proposed classifiers. Four types of classifiers were chosen, namely ANFIS, artificial neural network, support vector machine and logistic regression. A k-fold cross-validation was implemented for all classifiers due to the small sample size. The hybrid model of ReliefF-GA-ANFIS with the 3-input features of drink, invasion and p63 achieved the best accuracy (accuracy = 93.81%; AUC = 0.90) for oral cancer prognosis. Conclusions The results revealed that the prognosis is superior when both clinicopathologic and genomic markers are present. The selected features can be investigated further to validate their potential as a significant prognostic signature in oral cancer studies. PMID:23725313

  3. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    Full Text Available The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and the result of feature selection. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of the prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
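
    A condensed scikit-learn sketch of the PI-guided workflow follows: an RF is trained, permutation importance is computed on a held-out split, and the least important features are dropped; the paper's improved sequential backward search is simplified to a single cut, and the subset size and model settings are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    def pi_feature_subset(X, y, keep=24, seed=0):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=seed)
        rf = RandomForestRegressor(n_estimators=300, random_state=seed).fit(X_tr, y_tr)
        # permutation importance of each candidate load/calendar feature
        pi = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=seed)
        selected = np.argsort(pi.importances_mean)[::-1][:keep]   # simplified cut
        final = RandomForestRegressor(n_estimators=300, random_state=seed)
        final.fit(X[:, selected], y)
        return selected, final

    # selected, model = pi_feature_subset(load_features, load_target)
    ```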

  4. Selection of events at Ukrainian NPPs using the algorithm based on accident precursor method

    International Nuclear Information System (INIS)

    Vorontsov, D.V.; Lyigots'kij, O.Yi.; Serafin, R.Yi.; Tkachova, L.M.

    2012-01-01

    The paper describes a general approach to the first stage of research and development on the analysis of Ukrainian NPP operational events from 1 January 2000 to 31 December 2010 using the accident precursor approach. Groups of potentially important events, formed after their selection and classification, are provided.

  5. Software selection based on analysis and forecasting methods, practised in 1C

    Science.gov (United States)

    Vazhdaev, A. N.; Chernysheva, T. Y.; Lisacheva, E. I.

    2015-09-01

    The research focuses on the built-in mechanisms of the "1C: Enterprise 8" platform for data analysis and forecasting. It is important to evaluate and select proper software in order to develop effective strategies for customer relationship management in terms of sales, as well as for the implementation and further maintenance of the software. The research data allow new forecast models to be created for scheduling further software distribution.

  6. A consistency-based feature selection method allied with linear SVMs for HIV-1 protease cleavage site prediction.

    Directory of Open Access Journals (Sweden)

    Orkun Oztürk

    Full Text Available BACKGROUND: Predicting type-1 Human Immunodeficiency Virus (HIV-1) protease cleavage sites in protein molecules and determining their specificity is an important task which has attracted considerable attention in the research community. Achievements in this area are expected to result in effective drug design (especially for HIV-1 protease inhibitors) against this life-threatening virus. However, some drawbacks (such as the shortage of available training data and the high dimensionality of the feature space) turn this task into a difficult classification problem. Thus, various machine learning techniques, and specifically several classification methods, have been proposed in order to increase the accuracy of the classification model. In addition, for several classification problems characterized by having few samples and many features, selecting the most relevant features is a major factor for increasing classification accuracy. RESULTS: We propose for HIV-1 data a consistency-based feature selection approach in conjunction with recursive feature elimination of support vector machines (SVMs). We used various classifiers for evaluating the results obtained from the feature selection process. We further demonstrated the effectiveness of our proposed method by comparing it with a state-of-the-art feature selection method applied on HIV-1 data, and we evaluated the reported results based on attributes selected from different combinations. CONCLUSION: Applying feature selection to training data before carrying out the classification task seems to be a reasonable data-mining process when working with types of data similar to HIV-1. On HIV-1 data, some feature selection or extraction operations in conjunction with different classifiers have been tested and noteworthy outcomes have been reported. These facts motivate the work presented in this paper. SOFTWARE AVAILABILITY: The software is available at http

  7. Selective gas exhaustion method

    International Nuclear Information System (INIS)

    Hirano, Yoichi

    1998-01-01

    The present invention provides a method capable of evacuating gases at an exhaustion rate which varies depending on the kind of gas. For example, in a thermonuclear experimental device, the hydrogen exhaustion rate is set to zero while the exhaustion rate for other impurity gases is made greater. Namely, in a pump incorporating a baffle plate, for example a cryopump or a sorption pump, the baffle plate is cooled to the temperature at which the vapor pressure required for evacuating the target gas components is reached. In this case, the required vapor pressure level is 1 × 10⁻⁸ Torr or less, preferably 1 × 10⁻⁹ Torr. In a thermonuclear experimental device, the gas with the lowest boiling point after hydrogen is neon, but neon is scarcely present in nature. Nitrogen has the next lowest boiling point; if the temperature is lowered to the level at which the vapor pressure needed to evacuate nitrogen is reached, then gases such as carbon monoxide, oxygen, fluorine, argon or methane, whose boiling points are at or above that of nitrogen, can also be evacuated, and an evacuation rate sufficient for gases other than hydrogen is obtained. (I.S.)

  8. Model for Selection of the Best Location Based on Fuzzy AHP and Hurwitz Methods

    Directory of Open Access Journals (Sweden)

    Slavko Arsovski

    2017-01-01

    Full Text Available The evaluation and selection of parking lots is one of the significant issues of public transport management in cities. As both the population and urban areas expand, solving these issues affects employees, the security and safety of citizens, and quality of life over the long term. The aim of this paper is to propose a multicriteria decision model which includes both quantitative and qualitative criteria, of either benefit or cost type, to evaluate locations. The criteria values and the importance of the criteria are either precise values or linguistic expressions defined by trapezoidal fuzzy numbers, since human judgments of the relative importance of evaluation criteria and uncertain criteria values are often vague and cannot be expressed by exact numerical values. The ranking of locations with respect to all criteria and their weights is performed for various values of the pessimism-optimism index. The proposed model is tested on an illustrative example with real-life data, where it shows the practical implications for public communal enterprises.
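
    The final ranking step — scoring each location for different values of the Hurwicz pessimism-optimism index — can be sketched as below. It assumes each location's fuzzy aggregate score has already been reduced to a [worst-case, best-case] interval; the fuzzy AHP weighting itself is not reproduced, and the numbers are invented for illustration.

        # Hedged sketch: Hurwicz-style ranking for several pessimism-optimism index values.
        intervals = {                       # alternative -> (worst-case score, best-case score)
            "Location A": (0.42, 0.81),
            "Location B": (0.55, 0.68),
            "Location C": (0.30, 0.95),
        }

        def hurwicz(interval, alpha):
            """alpha = 0 is fully pessimistic, alpha = 1 is fully optimistic."""
            low, high = interval
            return alpha * high + (1.0 - alpha) * low

        for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
            ranking = sorted(intervals, key=lambda a: hurwicz(intervals[a], alpha), reverse=True)
            print(f"alpha={alpha:4.2f}: {ranking}")

    As the index moves from pessimistic to optimistic, the ordering can change, which is exactly the sensitivity such a model is meant to expose.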

  9. Reliability-based decision making for selection of ready-mix concrete supply using stochastic superiority and inferiority ranking method

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ongkowijoyo, Citra Satria

    2015-01-01

    Corporate competitiveness is heavily influenced by the information acquired, processed, utilized and transferred by professional staff involved in the supply chain. This paper develops a decision aid for selecting on-site ready-mix concrete (RMC) unloading type in decision making situations involving multiple stakeholders and evaluation criteria. The uncertainty of criteria weights set by expert judgment can be transformed in random ways based on the probabilistic virtual-scale method within a prioritization matrix. The ranking is performed by grey relational grade systems considering stochastic criteria weight based on individual preference. Application of the decision aiding model in actual RMC case confirms that the method provides a robust and effective tool for facilitating decision making under uncertainty. - Highlights: • This study models decision aiding method to assess ready-mix concrete unloading type. • Applying Monte Carlo simulation to virtual-scale method achieves a reliable process. • Individual preference ranking method enhances the quality of global decision making. • Robust stochastic superiority and inferiority ranking obtains reasonable results

  10. A simple and rapid method for calixarene-based selective extraction of bioactive molecules from natural products.

    Science.gov (United States)

    Segneanu, Adina-Elena; Damian, Daniel; Hulka, Iosif; Grozescu, Ioan; Salifoglou, Athanasios

    2016-03-01

    Natural products derived from medicinal plants have gained an important role in drug discovery due to their complex and abundant composition of secondary metabolites, with their structurally unique molecular components bearing a significant number of stereo-centers exhibiting high specificity linked to biological activity. Usually, the extraction process of natural products involves various techniques targeting separation of a specific class of compounds from a highly complex matrix. Aiding the process entails the use of well-defined and selective molecular extractants with distinctly configured structural attributes. Calixarenes conceivably belong to that class of molecules. They have been studied intensely over the years in an effort to develop new and highly selective receptors for biomolecules. These macrocycles, which display remarkable structural architectures and properties, could help usher in a new approach to the efficient separation of specific classes of compounds from complex matrices in natural products. Such a simple and rapid extraction method is presented herein, based on host-guest interaction(s) between a calixarene synthetic receptor, 4-tert-butyl-calix[6]arene, and natural biomolecular targets (amino acids and peptides) from Helleborus purpurascens and Viscum album. Advanced physicochemical methods (including GC-MS and chip-based nanoESI-MS analysis) suggest that the molecular structure and specifically the calixarene cavity size are closely linked to the nature of compounds separated. Incorporation of biomolecules and modification of the macrocyclic architecture during separation were probed and confirmed by scanning electron microscopy and atomic force microscopy. The collective results project calixarene as a promising molecular extractant candidate, facilitating the selective separation of amino acids and peptides from natural products.

  11. Highly selective apo-arginase based method for sensitive enzymatic assay of manganese (II) and cobalt (II) ions

    Science.gov (United States)

    Stasyuk, Nataliya; Gayda, Galina; Zakalskiy, Andriy; Zakalska, Oksana; Errachid, Abdelhamid; Gonchar, Mykhailo

    2018-03-01

    A novel enzymatic method for the assay of manganese (II) and cobalt (II) ions, based on using the apo-enzyme of Mn²⁺-dependent recombinant arginase I (arginase) and 2,3-butanedione monoxime (DMO) as a chemical reagent, is proposed. The principle of the method is the evaluation of the L-arginine-hydrolyzing activity of the arginase holoenzyme after the specific binding of Mn²⁺ or Co²⁺ with apo-arginase. Urea, which is the product of the enzymatic hydrolysis of L-arginine (Arg), reacts with DMO and the resulting compound is detected by both fluorometry and visual spectrophotometry. Thus, the content of metal ions in the tested samples can be determined by measuring the level of urea generated after enzymatic hydrolysis of Arg by the reconstructed arginase holoenzyme in the presence of the tested metal ions. The linearity range of the fluorometric apo-arginase-DMO method for the Mn²⁺ assay is from 4 pM to 1.10 nM with a limit of detection of 1 pM Mn²⁺, whereas the linearity range for the Co²⁺ assay is from 8 pM to 45 nM with a limit of detection of 2.5 pM Co²⁺. The proposed method, being highly sensitive, selective, valid and low-cost, may be useful for monitoring Mn²⁺ and Co²⁺ content in clinical laboratories, the food industry and environmental control services.

  12. A Simple and Sensitive Plant-Based Western Corn Rootworm Bioassay Method for Resistance Determination and Event Selection.

    Science.gov (United States)

    Wen, Zhimou; Chen, Jeng Shong

    2018-05-26

    We report here a simple and sensitive plant-based western corn rootworm, Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae), bioassay method that allows for examination of multiple parameters for both plants and insects in a single experimental setup within a short duration. For plants, injury to roots can be visually examined, fresh root weight can be measured, and expression of trait protein in plant roots can be analyzed. For insects, in addition to survival, larval growth and development can be evaluated in several aspects including body weight gain, body length, and head capsule width. We demonstrated using the method that eCry3.1Ab-expressing 5307 corn was very effective against western corn rootworm by eliciting high mortality and significantly inhibiting larval growth and development. We also validated that the method allowed determination of resistance in an eCry3.1Ab-resistant western corn rootworm strain. While data presented in this paper demonstrate the usefulness of the method for selection of events of protein traits and for determination of resistance in laboratory populations, we envision that the method can be applied in much broader applications.

  13. Influence of Feature Selection Methods on Classification Sensitivity Based on the Example of A Study of Polish Voivodship Tourist Attractiveness

    Directory of Open Access Journals (Sweden)

    Bąk Iwona

    2014-07-01

    Full Text Available The purpose of this article is to determine the influence of various methods of selection of diagnostic features on the sensitivity of classification. Three options of feature selection are presented: a parametric feature selection method using a sum (option I) or a median (option II) of the column elements of the correlation coefficient matrix, and the reversed matrix method (option III). The efficiency of the groupings was verified by indicators of homogeneity, heterogeneity and the correctness of grouping. In the assessment of group efficiency the approach with the Weber median was used. The problem was illustrated with research into the tourist attractiveness of voivodships in Poland in 2011.
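
    A rough sketch of the first option — parametric selection of diagnostic features using column sums of the absolute correlation matrix — is given below. The exact thresholding rule and threshold value used in the study are not reproduced; the data and the value of r_star are illustrative assumptions.

        # Hedged sketch: parametric selection of diagnostic features from a correlation
        # matrix (column-sum variant). The threshold r_star and the data are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 8))                      # 60 objects, 8 candidate features
        X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=60)     # make two features strongly correlated

        R = np.abs(np.corrcoef(X, rowvar=False))
        r_star = 0.5                                      # assumed similarity threshold
        remaining = set(range(X.shape[1]))
        selected = []

        while remaining:
            idx = list(remaining)
            sums = R[np.ix_(idx, idx)].sum(axis=0)        # option I: column sums of |r|
            central = idx[int(np.argmax(sums))]           # most "central" remaining feature
            satellites = {j for j in idx if j != central and R[central, j] >= r_star}
            selected.append(central)                      # keep the central feature
            remaining -= satellites | {central}           # drop its strongly correlated satellites

        print("selected diagnostic features:", sorted(selected))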

  14. Use of different marker pre-selection methods based on single SNP regression in the estimation of Genomic-EBVs

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-01-01

    Full Text Available Two methods of SNP pre-selection based on single marker regression for the estimation of genomic breeding values (G-EBVs) were compared using simulated data provided by the XII QTL-MAS workshop: (i) Bonferroni correction of the significance threshold and (ii) a permutation test to obtain the reference distribution of the null hypothesis and identify significant markers at P<0.01 and P<0.001 significance thresholds. From the set of markers significant at P<0.001, random subsets of 50% and 25% of the markers were extracted to evaluate the effect of further reducing the number of significant SNPs on G-EBV predictions. The Bonferroni correction method allowed the identification of 595 significant SNPs that gave the best G-EBV accuracies in the prediction generations (82.80%). The permutation methods gave slightly lower G-EBV accuracies even though a larger number of SNPs resulted significant (2,053 and 1,352 for the 0.01 and 0.001 significance thresholds, respectively). Interestingly, halving or quartering the number of SNPs significant at P<0.001 resulted in only a slight decrease of G-EBV accuracies. The genetic structure of the simulated population, with few QTL carrying large effects, might have favoured the Bonferroni method.
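
    The single-marker-regression pre-selection with a Bonferroni-corrected threshold can be sketched as follows; the data are simulated on the spot and the marker and animal counts are illustrative assumptions, not the QTL-MAS workshop dataset.

        # Hedged sketch: single-SNP regression pre-selection with a Bonferroni threshold.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_animals, n_snps = 400, 2000
        genotypes = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)   # 0/1/2 coding
        true_effects = np.zeros(n_snps)
        true_effects[rng.choice(n_snps, 20, replace=False)] = rng.normal(0, 1, 20)
        phenotype = genotypes @ true_effects + rng.normal(0, 3, n_animals)

        # One simple linear regression per SNP, collecting p-values.
        p_values = np.array([stats.linregress(genotypes[:, j], phenotype).pvalue
                             for j in range(n_snps)])

        alpha = 0.05
        bonferroni_threshold = alpha / n_snps
        selected = np.flatnonzero(p_values < bonferroni_threshold)
        print(f"{selected.size} SNPs pass the Bonferroni threshold of {bonferroni_threshold:.1e}")

    A permutation-based alternative would shuffle the phenotype many times, recompute the per-SNP p-values, and take an empirical quantile of the smallest p-value as the threshold instead of alpha/n_snps.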

  15. Knowledge based decision making method for the selection of mixed refrigerant systems for energy efficient LNG processes

    International Nuclear Information System (INIS)

    Khan, Mohd Shariq; Lee, Sanggyu; Rangaiah, G.P.; Lee, Moonyong

    2013-01-01

    Highlights: • Practical method for finding optimum refrigerant composition is proposed for LNG plant. • Knowledge of boiling point differences in refrigerant component is employed. • Implementation of process knowledge notably makes LNG process energy efficient. • Optimization of LNG plant is more transparent using process knowledge. - Abstract: Mixed refrigerant (MR) systems are used in many industrial applications because of their high energy efficiency, compact design and energy-efficient heat transfer compared to other processes operating with pure refrigerants. The performance of MR systems depends strongly on the optimum refrigerant composition, which is difficult to obtain. This paper proposes a simple and practical method for selecting the appropriate refrigerant composition, which was inspired by (i) knowledge of the boiling point difference in MR components, and (ii) their specific refrigeration effect in bringing a MR system close to reversible operation. A feasibility plot and composite curves were used for full enforcement of the approach temperature. The proposed knowledge-based optimization approach was described and applied to a single MR and a propane precooled MR system for natural gas liquefaction. Maximization of the heat exchanger exergy efficiency was considered as the optimization objective to achieve an energy efficient design goal. Several case studies on single MR and propane precooled MR processes were performed to show the effectiveness of the proposed method. The application of the proposed method is not restricted to liquefiers, and can be applied to any refrigerator and cryogenic cooler where a MR is involved

  16. Clustering based gene expression feature selection method: A computational approach to enrich the classifier efficiency of differentially expressed genes

    KAUST Repository

    Abusamra, Heba; Bajic, Vladimir B.

    2016-01-01

    decrease the computational time and cost, but also improve the classification performance. Among different approaches of feature selection methods, however most of them suffer from several problems such as lack of robustness, validation issues etc. Here, we

  17. Methodical bases of selection and evaluation of the effectiveness of the projects of the urban territory renovation

    Science.gov (United States)

    Sizova, Evgeniya; Zhutaeva, Evgeniya; Chugunov, Andrei

    2018-03-01

    The article highlights features of urban territory renovation processes from the perspective of a commercial entity participating in the implementation of a project. The requirements that high-rise construction projects impose on the entities carrying them out are considered. The advantages of large enterprises as participants in renovation projects, which contribute to their most efficient implementation, are systematized. The factors that influence the success of renovation projects are presented. A method is suggested for selecting projects for implementation based on criteria grouped by qualitative characteristics, contributing to the most complete and comprehensive evaluation of a project. Patterns for prioritizing and harmonizing renovation projects within the multi-project activity of the enterprise are considered.

  18. Inspection methods and their selection

    International Nuclear Information System (INIS)

    Maier, H.J.

    1980-01-01

    First, the nondestructive testing methods used in quality assurance (e.g. ultrasonics, radiography, magnetic particle testing, dye penetrant testing and eddy currents) are treated, and their capabilities and limitations are shown. Second, the selection of optimal testing methods with respect to defect recognition in different materials and components is shown. (orig./RW)

  19. MADM Technique Integrated with Grey- based Taguchi method for Selection of Alluminium alloys to minimize deburring cost during Drilling

    Directory of Open Access Journals (Sweden)

    Reddy Sreenivasulu

    2015-06-01

    Full Text Available Traditionally, burr problems have been considered unavoidable, so most efforts have been devoted to removing the burr as a post-process. Nowadays, the trend in manufacturing is integration of the whole production flow from design to end product, and manufacturing problems are handled at various stages, even from the design stage. Therefore, methods of describing the burr have received much attention in recent years as part of a systematic approach to resolving the burr problem at the various manufacturing stages. The main objective of this paper is to explore the basic concepts of MADM methods. In this study, five parameters, namely speed, feed, drill size, and the drill geometry angles (point angle and clearance angle), were identified as having the greatest influence on burr formation during drilling. An L18 orthogonal array was selected and experiments were conducted as per the Taguchi experimental plan for aluminium alloys of the 2014, 6061, 5035 and 7075 series. The experiments were performed on a CNC machining center with HSS twist drills. The burr size (height and thickness) was measured at the exit of each hole. An optimal combination of process parameters was obtained to minimize the burr size via grey relational analysis. The output from the grey-based Taguchi method was fed as input to the MADM. Apart from burr size, strength and temperature are also considered as attributes. Finally, the results generated by the MADM suggest the suitable alternative among the aluminium alloys, which results in lower deburring cost, high strength and high resistance at elevated temperatures.
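
    The grey relational analysis step — normalising the measured responses and combining them into a single grade per experimental run — can be sketched as follows. The response values, the smaller-the-better treatment of both burr height and thickness, and the distinguishing coefficient of 0.5 are illustrative assumptions rather than the paper's data.

        # Hedged sketch: grey relational analysis for smaller-the-better responses.
        import numpy as np

        # Rows: experimental runs; columns: responses (e.g. burr height, burr thickness).
        responses = np.array([
            [0.42, 0.10],
            [0.35, 0.08],
            [0.50, 0.14],
            [0.28, 0.09],
        ])

        # Smaller-the-better normalisation to [0, 1].
        norm = (responses.max(axis=0) - responses) / (responses.max(axis=0) - responses.min(axis=0))

        # Grey relational coefficients with distinguishing coefficient zeta = 0.5.
        delta = 1.0 - norm                            # deviation from the ideal sequence (all ones)
        zeta = 0.5
        coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

        # Grey relational grade: mean coefficient per run; the highest grade marks the best run.
        grade = coeff.mean(axis=1)
        print("grades:", np.round(grade, 3), "-> best run:", int(np.argmax(grade)) + 1)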

  20. Clustering based gene expression feature selection method: A computational approach to enrich the classifier efficiency of differentially expressed genes

    KAUST Repository

    Abusamra, Heba

    2016-07-20

    The high-dimension, low-sample-size nature of gene expression data makes the classification task more challenging. Therefore, feature (gene) selection becomes an apparent need. Selecting meaningful and relevant genes for the classifier not only decreases computational time and cost, but also improves classification performance. Among the different feature selection approaches, however, most suffer from several problems such as lack of robustness, validation issues, etc. Here, we present a new feature selection technique that takes advantage of clustering both samples and genes. Materials and methods: We used the leukemia gene expression dataset [1]. The effectiveness of the selected features was evaluated by four different classification methods: support vector machines, k-nearest neighbor, random forest, and linear discriminant analysis. The method evaluates the importance and relevance of each gene cluster by summing the expression levels of the genes belonging to that cluster. A gene cluster is considered important if it satisfies conditions depending on thresholds and percentages; otherwise it is eliminated. Results: Initial analysis identified 7120 differentially expressed genes of leukemia (Fig. 15a); after applying our feature selection methodology we ended up with 1117 specific genes discriminating the two classes of leukemia (Fig. 15b). Further applying the same method with a more stringent condition (a higher positive and a lower negative threshold), the number was reduced to 58 genes, which were tested to evaluate the effectiveness of the method (Fig. 15c). The results of the four classification methods are summarized in Table 11. Conclusions: The feature selection method gave good results with minimal classification error. Our heat-map result shows a distinct pattern of the refined genes discriminating between the two classes of leukemia.
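
    A minimal sketch of the underlying idea — cluster the genes, score each cluster by its summed expression, and keep only genes from clusters passing a threshold — is given below on synthetic data. The number of clusters and the thresholding rule are illustrative assumptions, not the study's actual settings.

        # Hedged sketch: cluster genes, then keep genes from clusters whose summed
        # expression separates the two classes strongly.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        n_samples, n_genes = 40, 500
        labels = np.array([0] * 20 + [1] * 20)
        X = rng.normal(size=(n_samples, n_genes))
        X[labels == 1, :50] += 2.0                        # 50 genes are truly differential

        # Cluster genes (not samples) by their expression profiles across samples.
        k = 20                                            # assumed number of gene clusters
        gene_clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X.T)

        selected_genes = []
        for c in range(k):
            genes = np.flatnonzero(gene_clusters == c)
            cluster_sum = X[:, genes].sum(axis=1)         # summed expression per sample
            separation = abs(cluster_sum[labels == 0].mean() - cluster_sum[labels == 1].mean())
            if separation > cluster_sum.std():            # assumed importance condition
                selected_genes.extend(genes.tolist())

        print(f"kept {len(selected_genes)} of {n_genes} genes")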

  1. Selection Method for COTS Systems

    DEFF Research Database (Denmark)

    Hedman, Jonas; Andersson, Bo

    2014-01-01

    feature behind the method is that improved understanding of organizational ‘ends’ or goals should govern the selection of a COTS system. This can also be expressed as a match or fit between ‘ends’ (e.g. improved organizational effectiveness) and ‘means’ (e.g. implementing COTS systems). This way...

  2. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)]

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as a feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  3. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as a feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
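
    As a rough illustration of mutual-information-driven descriptor selection with a collinearity check — the general idea behind MIMRCV, though not the authors' exact algorithm — the sketch below ranks descriptors by mutual information with the response and skips candidates that are highly correlated with descriptors already selected. The correlation cut-off, subset size, and data are illustrative assumptions.

        # Hedged sketch: greedy MI ranking that skips (replaces) collinear descriptors.
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.feature_selection import mutual_info_regression

        X, y = make_regression(n_samples=120, n_features=30, n_informative=8, random_state=0)

        mi = mutual_info_regression(X, y, random_state=0)      # MI of each descriptor with y
        order = np.argsort(mi)[::-1]                           # most informative first
        corr = np.abs(np.corrcoef(X, rowvar=False))

        selected, max_vars, corr_cutoff = [], 8, 0.9           # assumed settings
        for j in order:
            if all(corr[j, s] < corr_cutoff for s in selected):
                selected.append(int(j))                        # keep j; collinear candidates are skipped
            if len(selected) == max_vars:
                break

        print("selected descriptors:", selected)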

  4. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    Science.gov (United States)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.

  5. Highly selective ionic liquid-based microextraction method for sensitive trace cobalt determination in environmental and biological samples

    International Nuclear Information System (INIS)

    Berton, Paula; Wuilloud, Rodolfo G.

    2010-01-01

    A simple and rapid dispersive liquid-liquid microextraction procedure based on an ionic liquid (IL-DLLME) was developed for selective determination of cobalt (Co) with electrothermal atomic absorption spectrometry (ETAAS) detection. Cobalt was initially complexed with 1-nitroso-2-naphtol (1N2N) reagent at pH 4.0. The IL-DLLME procedure was then performed by using a few microliters of the room temperature ionic liquid (RTIL) 1-hexyl-3-methylimidazolium hexafluorophosphate [C₆mim][PF₆] as extractant while methanol was the dispersant solvent. After the microextraction procedure, the Co-enriched RTIL phase was solubilized in methanol and directly injected into the graphite furnace. The effect of several variables on Co-1N2N complex formation, extraction with the dispersed RTIL phase, and analyte detection with ETAAS, was carefully studied in this work. An enrichment factor of 120 was obtained with only 6 mL of sample solution and under optimal experimental conditions. The resultant limit of detection (LOD) was 3.8 ng L⁻¹, while the relative standard deviation (RSD) was 3.4% (at 1 μg L⁻¹ Co level and n = 10), calculated from the peak height of absorbance signals. The accuracy of the proposed methodology was tested by analysis of a certified reference material. The method was successfully applied for the determination of Co in environmental and biological samples.

  6. A Distributed Dynamic Super Peer Selection Method Based on Evolutionary Game for Heterogeneous P2P Streaming Systems

    Directory of Open Access Journals (Sweden)

    Jing Chen

    2013-01-01

    Full Text Available Due to its high efficiency and good scalability, hierarchical hybrid P2P architecture has recently drawn more and more attention in P2P streaming research and application fields. The problem of super peer selection, which is the key problem in hybrid heterogeneous P2P architecture, is becoming highly challenging because super peers must be selected from a huge and dynamically changing network. A distributed super peer selection (SPS) algorithm for hybrid heterogeneous P2P streaming systems based on evolutionary game theory is proposed in this paper. First, the super peer selection procedure is modeled within an evolutionary game framework, and its evolutionarily stable strategies (ESSs) are analyzed. Then a distributed Q-learning algorithm (ESS-SPS), based on the mixed strategies obtained from this analysis, is proposed so that the peers converge to the ESSs based on their own payoff histories. Compared to the traditional random super peer selection scheme, experimental results show that the proposed ESS-SPS algorithm achieves better performance in terms of social welfare and average upload rate of super peers and keeps the upload capacity of the P2P streaming system increasing steadily as the number of peers increases.

  7. Research on the Selection Strategy of Green Building Parts Supplier Based on the Catastrophe Theory and Kent Index Method

    Directory of Open Access Journals (Sweden)

    Zhenhua Luo

    2016-01-01

    Full Text Available At present, green building and housing industrialization are two mainstream directions in the real estate industry. The production of green building parts, which combines these two concepts, is to be vigorously developed. The key to quality assurance in assembly projects is choosing reliable and suitable green building parts suppliers. This paper analyzes the inherent requirements of green building, combined with the characteristics of housing industrialization, and puts forward an evaluation index system for supplier selection for green building parts, which includes a product index, enterprise index, green development index, and cooperation ability index. To reduce the influence of subjective factors, an improved method which merges the Kent index method and catastrophe theory is applied to green building parts supplier selection and evaluation. This paper takes the selection of unit bathroom suppliers as an example, uses the improved model to calculate and analyze the data of each supplier, and finally selects the optimal supplier. The results show that combining the Kent index with catastrophe theory can effectively reduce the subjectivity of the evaluation and provide a basis for the selection of green building parts suppliers.

  8. An improved selective sampling method

    International Nuclear Information System (INIS)

    Miyahara, Hiroshi; Iida, Nobuyuki; Watanabe, Tamaki

    1986-01-01

    The coincidence methods which are currently used for the accurate activity standardisation of radionuclides require dead time and resolving time corrections, which tend to become increasingly uncertain as count rates exceed about 10 K. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method using a fast multichannel analyser (50 ns ch⁻¹) for measuring the count rates. It is, in many ways, more convenient and potentially more reliable to replace the MCA with scalers, and a circuit employing five scalers is described, two of them serving to measure the background correction. Results of comparisons between our new method and the coincidence method for measuring the activity of ⁶⁰Co sources yielded agreement within statistical uncertainties. (author)

  9. TU-AB-202-10: How Effective Are Current Atlas Selection Methods for Atlas-Based Auto-Contouring in Radiotherapy Planning?

    Energy Technology Data Exchange (ETDEWEB)

    Peressutti, D; Schipaanboord, B; Kadir, T; Gooding, M [Mirada Medical Limited, Science and Medical Technology, Oxford (United Kingdom); Soest, J van; Lustberg, T; Elmpt, W van; Dekker, A [Maastricht University Medical Centre, Department of Radiation Oncology MAASTRO - GROW School for Oncology Developmental Biology, Maastricht (Netherlands)

    2016-06-15

    Purpose: To investigate the effectiveness of atlas selection methods for improving atlas-based auto-contouring in radiotherapy planning. Methods: 275 H&N clinically delineated cases were employed as an atlas database from which atlases would be selected. A further 40 previously contoured cases were used as test patients against which atlas selection could be performed and evaluated. 26 variations of selection methods proposed in the literature and used in commercial systems were investigated. Atlas selection methods comprised either global or local image similarity measures, computed after rigid or deformable registration, combined with direct atlas search or with an intermediate template image. Workflow Box (Mirada-Medical, Oxford, UK) was used for all auto-contouring. Results on brain, brainstem, parotids and spinal cord were compared to random selection, a fixed set of 10 “good” atlases, and optimal selection by an “oracle” with knowledge of the ground truth. The Dice score and the average ranking with respect to the “oracle” were employed to assess the performance of the top 10 atlases selected by each method. Results: The fixed set of “good” atlases outperformed all of the atlas-patient image similarity-based selection methods (mean Dice 0.715 cf. 0.603 to 0.677). In general, methods based on exhaustive comparison of local similarity measures showed better average Dice scores (0.658 to 0.677) compared to the use of either a template image (0.655 to 0.672) or global similarity measures (0.603 to 0.666). The performance of image-based selection methods was found to be only slightly better than random selection (0.645). The Dice scores given relate to the left parotid, but similar result patterns were observed for all organs. Conclusion: Intuitively, atlas selection based on the patient CT is expected to improve auto-contouring performance. However, it was found that published approaches performed marginally better than random and use of a fixed set of

  10. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    Science.gov (United States)

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the server parameter de/activated, altogether 12800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
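
    For reference, a compact implementation of one of the compared methods, TOPSIS, is sketched below; the decision matrix, weights, and criterion directions are invented for illustration and are not the study's benchmark scenario.

        # Hedged sketch: a compact TOPSIS ranking over an illustrative decision matrix.
        import numpy as np

        decision = np.array([          # rows: alternatives, columns: criteria
            [250.0, 16.0, 12.0, 5.0],
            [200.0, 16.0,  8.0, 3.0],
            [300.0, 32.0, 16.0, 4.0],
            [275.0, 32.0,  8.0, 4.0],
        ])
        weights = np.array([0.25, 0.25, 0.25, 0.25])
        benefit = np.array([False, True, True, True])     # False marks a cost criterion

        # Vector normalisation and weighting.
        V = weights * decision / np.linalg.norm(decision, axis=0)

        # Ideal and anti-ideal solutions depend on the criterion direction.
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)           # higher is better
        print("ranking (best first):", np.argsort(-closeness) + 1)

    The other compared methods follow the same normalise-weight-aggregate pattern, which is why their relative runtime in a large agent-based simulation is an empirical question of the kind the study measures.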

  11. A Selection Model to Logistic Centers Based on TOPSIS and MCGP Methods: The Case of Airline Industry

    Directory of Open Access Journals (Sweden)

    Kou-Huang Chen

    2014-01-01

    Full Text Available The location selection of a logistics center is a crucial decision relating to cost and benefit analysis in the airline industry. However, it is difficult to solve because location problems involve many conflicting objectives. To solve the problem, this paper integrates the fuzzy technique for order preference by similarity to an ideal solution (TOPSIS) and multichoice goal programming (MCGP) to obtain an appropriate logistics center from many alternative locations for the airline industry. The proposed method allows the decision makers (DMs) to set multiple aspiration levels for the decision criteria. A numerical example of its application is also presented.

  12. A review of methods supporting supplier selection

    NARCIS (Netherlands)

    de Boer, L.; Labro, Eva; Morlacchi, Pierangela

    2001-01-01

    In this paper we present a review of decision methods reported in the literature for supporting the supplier selection process. The review is based on an extensive search in the academic literature. We position the contributions in a framework that takes the diversity of procurement situations in terms

  13. Determining selection across heterogeneous landscapes: a perturbation-based method and its application to modeling evolution in space

    OpenAIRE

    Wickman, J.; Diehl, S.; Blasius, B.; Klausmeier, C.; Ryabov, A.; Brännström, Å.

    2017-01-01

    Spatial structure can decisively influence the way evolutionary processes unfold. Several methods have thus far been used to study evolution in spatial systems, including population genetics, quantitative genetics, moment-closure approximations, and individual-based models. Here we extend the study of spatial evolutionary dynamics to eco-evolutionary models based on reaction-diffusion equations and adaptive dynamics. Specifically, we derive expressions for the strength of directional and stabi...

  14. Selection and use of contraceptive methods among internal migrant workers in three large Chinese cities: a workplace-based survey.

    Science.gov (United States)

    Zhao, Hong-Xin; Wu, Jun-Qing; Li, Yu-Yan; Zhang, Yu-Feng; Ye, Jiang-Feng; Zhan, Shao-Kang; Zheng, Xiao-Ying; Yang, Ting-Zhong

    2011-08-01

    To describe the current status of the decision-making process with regard to the use of contraceptive methods among internal migrant workers in three large Chinese cities. A total of 4313 sexually active internal migrant workers were recruited in Beijing, Shanghai, and Chengdu. Information on contraceptive use was collected by means of questionnaires. Contraceptive prevalence was 86% among unmarried sexually active migrant workers and 91% among married workers. The main contraceptive methods used by married migrants were the intrauterine device (51%), condoms (25%) and female/male sterilisation (17%); the main methods resorted to by unmarried, sexually active migrants were condoms (74%) and oral contraceptives (11%). The contraceptive method applied by 20% of married respondents had been selected by other people, without they themselves having their share in an informed choice. Adopting the contraceptive decisions made by others was associated with being a married migrant, a construction or service worker, a rural-urban migrant, a migrant living in collective or rented rooms, or a migrant with more children. Many internal migrants in these large cities did not choose their contraceptive method on their own. Efforts enabling and encouraging migrants to make informed choices are needed.

  15. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    Full Text Available This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines for classification. The purpose of this paper is to test the effect of eliminating the unimportant and obsolete features of the datasets on the success of the classification, using the SVM classifier. The approach is developed for liver disease and diabetes diagnostics, conditions which are commonly observed and reduce the quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the help of the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications.

  16. System and method for controlling an engine based on ammonia storage in multiple selective catalytic reduction catalysts

    Science.gov (United States)

    Sun, MIn; Perry, Kevin L.

    2015-11-20

    A system according to the principles of the present disclosure includes a storage estimation module and an air/fuel ratio control module. The storage estimation module estimates a first amount of ammonia stored in a first selective catalytic reduction (SCR) catalyst and estimates a second amount of ammonia stored in a second SCR catalyst. The air/fuel ratio control module controls an air/fuel ratio of an engine based on the first amount, the second amount, and a temperature of a substrate disposed in the second SCR catalyst.

  17. [Research on K-means clustering segmentation method for MRI brain image based on selecting multi-peaks in gray histogram].

    Science.gov (United States)

    Chen, Zhaoxue; Yu, Haizhong; Chen, Hao

    2013-12-01

    To solve the problem of traditional K-means clustering, in which the initial clustering centers are selected randomly, we proposed a new K-means segmentation algorithm based on robustly selecting the 'peaks' standing for white matter, gray matter and cerebrospinal fluid in the multi-peak gray histogram of the MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means clustering centers and can segment the MRI brain image into the three tissue classes more effectively, accurately and stably. Extensive experiments have shown that the proposed algorithm can overcome many shortcomings of the traditional K-means clustering method, such as low efficiency, poor accuracy, poor robustness and long running time. The histogram 'peak' selection idea of the proposed segmentation method has wider applicability.
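
    The core idea — seed K-means with the gray-level histogram modes rather than random centers — can be sketched as follows. A synthetic one-dimensional intensity distribution stands in for the MRI data, and the peak-detection parameters are illustrative assumptions rather than the paper's robust peak-selection rule.

        # Hedged sketch: K-means segmentation initialised from gray-histogram peaks.
        import numpy as np
        from scipy.signal import find_peaks
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Synthetic intensities for three tissue-like classes (e.g. CSF, gray matter, white matter).
        image = np.concatenate([rng.normal(60, 8, 4000),
                                rng.normal(120, 10, 5000),
                                rng.normal(180, 8, 3000)]).clip(0, 255)

        hist, edges = np.histogram(image, bins=256, range=(0, 255))
        peaks, _ = find_peaks(hist, distance=20, prominence=50)        # assumed peak criteria
        top3 = peaks[np.argsort(hist[peaks])[-3:]]                     # three highest modes
        centers = np.sort(edges[top3]).reshape(-1, 1)                  # initial cluster centers

        km = KMeans(n_clusters=3, init=centers, n_init=1).fit(image.reshape(-1, 1))
        print("initial centers:", centers.ravel())
        print("final centers  :", np.sort(km.cluster_centers_.ravel()))

    Because the centers start at the histogram modes, repeated runs converge to the same segmentation, which is the stability benefit the abstract claims over random initialisation.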

  18. Simple method for clonal selection of hepatitis A virus based on recovery of virus from radioimmunofocus overlays

    Energy Technology Data Exchange (ETDEWEB)

    Lemon, S M; Jansen, R W

    1985-06-01

    Hepatitis A virus (HAV) has been quantitated in cell culture by autoradiographic detection of foci of viral replication developing beneath an agarose overlay following fixation and 'staining' of the cell sheet with radiolabelled antibody (radioimmunofocus assay). Using a modification of this basic technique, a clonal variant of HM-175 strain HAV was isolated from agarose overlying individual radioimmunofoci. Virus recovered from the agarose was amplified in small-volume cultures of BS-C-1 cells and identified in supernatant culture fluids by cDNA-RNA hybridization. No virus was recovered from agarose which did not overlie a focus of viral replication. This method offers a simple, yet relatively rapid and certain means of selecting clonal variants of non-plaquing viruses such as hepatitis A virus.

  19. A simple method for clonal selection of hepatitis A virus based on recovery of virus from radioimmunofocus overlays

    International Nuclear Information System (INIS)

    Lemon, S.M.; Jansen, R.W.

    1985-01-01

    Hepatitis A virus (HAV) has been quantitated in cell culture by autoradiographic detection of foci of viral replication developing beneath an agarose overlay following fixation and 'staining' of the cell sheet with radiolabelled antibody (radioimmunofocus assay). Using a modification of this basic technique, a clonal variant of HM-175 strain HAV was isolated from agarose overlying individual radioimmunofoci. Virus recovered from the agarose was amplified in small-volume cultures of BS-C-1 cells and identified in supernatant culture fluids by cDNA-RNA hybridization. No virus was recovered from agarose which did not overlie a focus of viral replication. This method offers a simple, yet relatively rapid and certain means of selecting clonal variants of non-plaquing viruses such as hepatitis A virus. (Auth.)

  20. Electronic Devices, Methods, and Computer Program Products for Selecting an Antenna Element Based on a Wireless Communication Performance Criterion

    DEFF Research Database (Denmark)

    2014-01-01

    A method of operating an electronic device includes providing a plurality of antenna elements, evaluating a wireless communication performance criterion to obtain a performance evaluation, and assigning a first one of the plurality of antenna elements to a main wireless signal reception and transmission path and a second one of the plurality of antenna elements to a diversity wireless signal reception path based on the performance evaluation.

  1. Studies of the fate of sulfur trioxide in coal-fired utility boilers based on modified selected condensation methods.

    Science.gov (United States)

    Cao, Yan; Zhou, Hongcang; Jiang, Wu; Chen, Chien-Wei; Pan, Wei-Ping

    2010-05-01

    The formation of sulfur trioxide (SO(3)) in coal-fired utility boilers can have negative effects on boiler performance and operation, such as fouling and corrosion of equipment, efficiency loss in the air preheater (APH), increase in stack opacity, and the formation of PM(2.5). Sulfur trioxide can also compete with mercury when bonding with injected activated carbons. Tests in a lab-scale reactor confirmed there are major interferences between fly ash and SO(3) during SO(3) sampling. A modified SO(3) procedure to maximize the elimination of measurement biases, based on the inertial-filter-sampling and the selective-condensation-collecting of SO(3), was applied in SO(3) tests in three full-scale utility boilers. For the two units burning bituminous coal, SO(3) levels, starting at 20 to 25 ppmv at the inlet to the selective catalytic reduction (SCR), increased slightly across the SCR owing to catalytic conversion of SO(2) to SO(3), and then declined in other air pollutant control device (APCD) modules downstream to approximately 5 ppmv and 15 ppmv at the two sites, respectively. In the unit burning sub-bituminous coal, the much lower initial concentration of SO(3), estimated to be approximately 1.5 ppmv at the inlet to the SCR, was reduced to about 0.8 ppmv across the SCR and to about 0.3 ppmv at the exit of the wet flue gas desulfurization (WFGD). The SO(3) removal efficiency across the WFGD scrubbers at the three sites was generally 35% or less. Reductions in SO(3) across either the APH or the dry electrostatic precipitator (ESP) in units burning high-sulfur bituminous coal were attributed to operating temperatures being below the dew point of SO(3).

  2. The effect of the synthesis method on the parameters of pore structure and selectivity of ferrocyanide sorbents based on natural minerals

    International Nuclear Information System (INIS)

    Voronina, A.V.; Gorbunova, T.V.; Semenishchev, V.S.

    2017-01-01

    Ferrocyanide sorbents were obtained via thin-layer and surface modification of natural clinoptilolite and marl. The effect of modification method on surface characteristics of these sorbents and their selectivity for cesium was studied. It was shown that the modification resulted in an increase of selectivity of modified ferrocyanide sorbents to cesium as compared with the natural clinoptilolite in presence of Na⁺, as well as in an increase of cesium distribution coefficients in presence of K⁺. The nickel-potassium ferrocyanide based on the clinoptilolite showed the highest selectivity for cesium at sodium concentrations of 10⁻⁴-2 mol L⁻¹: cesium distribution coefficient was lg Kd = 4.5 ± 0.4 L kg⁻¹ and cesium/sodium separation factor was α(Cs/Na) = 250. In the presence of NH₄⁺, all modified sorbents showed approximately equal selectivity for ¹³⁷Cs. Probable applications of the sorbents were suggested. (author)

  3. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    Science.gov (United States)

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three different fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR and (ii) the best alternative strategy to GoF-optimal was the one using a prediction base of 5 years. The GoF-optimal approach can be used as a selection criterion in order to find an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
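
    The general idea — pick, for each series, the length of recent history whose trend model fits best, then project 5 years ahead from that base — can be sketched as below. This is not the authors' exact GoF criterion or data; the simulated counts, the candidate base lengths, and the normalisation of the fit statistic are illustrative assumptions.

        # Hedged sketch of a GoF-driven choice of prediction base: for each candidate
        # base length, fit a log-linear trend to the most recent years and keep the
        # base whose (normalised) goodness of fit is best, then project 5 years ahead.
        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(1980, 2011)
        cases = np.round(np.exp(5.0 + 0.02 * (years - 1980)) + rng.normal(0, 8, years.size))

        def fit_loglinear(y, c):
            slope, intercept = np.polyfit(y, np.log(c), 1)
            fitted = np.exp(intercept + slope * y)
            gof = np.sum((c - fitted) ** 2 / fitted)      # Pearson-style fit statistic
            return slope, intercept, gof

        candidates = (5, 10, 15, 20, 25)                   # candidate base lengths in years
        best = min(candidates,
                   key=lambda L: fit_loglinear(years[-L:], cases[-L:])[2] / L)

        slope, intercept, _ = fit_loglinear(years[-best:], cases[-best:])
        future = np.arange(2011, 2016)
        print("chosen base length:", best)
        print("5-year predictions:", np.round(np.exp(intercept + slope * future)))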

  4. Development of a computer code system for selecting off-site protective action in radiological accidents based on the multiobjective optimization method

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu; Oyama, Kazuo

    1989-09-01

    This report presents a new method to support selection of off-site protective action in nuclear reactor accidents, and provides a user's manual of a computer code system, PRASMA, developed using the method. The PRASMA code system gives several candidates of protective action zones of evacuation, sheltering and no action based on the multiobjective optimization method, which requires objective functions and decision variables. We have assigned population risks of fatality, injury and cost as the objective functions, and distance from a nuclear power plant characterizing the above three protective action zones as the decision variables. (author)

  5. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm used for model standardization (also called model transfer) in near-infrared (NIR) spectroscopy. NIR data of corn samples for protein content analysis are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS and KS-PDS-KPLS are compared in terms of the prediction accuracy of protein content and the calculation speed of each algorithm. The conclusions include that SIMPLISMA-KPLS can be utilized as an alternative sample selection method for model transfer. Although it has similar accuracy to Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection procedure. This means that it ensures analyte information is involved in the analysis, and that the spectra (X) of the selected samples are interrelated with the concentration (y). It can also be used for outlier sample elimination simultaneously, by validation of the calibration. According to the running-time statistics, the sample selection process is faster when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.

  6. Optimization methods for activities selection problems

    Science.gov (United States)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curriculum activities must be joined by every student in Malaysia and these activities bring a lot of benefits to the students. By joining these activities, the students can learn about time management and develop many useful skills. This project focuses on the selection of co-curriculum activities in a secondary school using two optimization methods, the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight of each activity based on the three chosen criteria: soft skills, interesting activities and performance. The weights were calculated using AHP and the results showed that the most important criterion is soft skills. Then, the ZOGP model was analyzed using LINGO software version 15.0. Two priorities were considered. The first priority, which is to minimize the budget for the activities, is achieved since the total budget can be reduced by RM233.00. Therefore, the total budget to implement the selected activities is RM11,195.00. The second priority, which is to select the co-curriculum activities, is also achieved. The results showed that 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activity selection problems.
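
    The AHP weighting step can be illustrated with a small pairwise-comparison example; the judgment values below are invented, and the consistency check follows the usual CI/CR formulation rather than anything specific to this study.

        # Hedged sketch: AHP criterion weights from a pairwise comparison matrix,
        # using the principal eigenvector and a consistency-ratio check.
        import numpy as np

        # Illustrative judgments for three criteria (soft skills, interest, performance).
        A = np.array([
            [1.0, 3.0, 5.0],
            [1/3, 1.0, 2.0],
            [1/5, 1/2, 1.0],
        ])

        eigvals, eigvecs = np.linalg.eig(A)
        k = int(np.argmax(eigvals.real))
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
        cr = ci / 0.58                             # Saaty's random index for n = 3 is 0.58
        print("weights:", np.round(weights, 3), "CR:", round(cr, 3))

    The resulting weights would then enter the ZOGP model as coefficients of the goal constraints, which is the role the LINGO solver plays in the study.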

  7. Using an Integrated Group Decision Method Based on SVM, TFN-RS-AHP, and TOPSIS-CD for Cloud Service Supplier Selection

    Directory of Open Access Journals (Sweden)

    Lian-hui Li

    2017-01-01

    Full Text Available To solve the cloud service supplier selection problem against the background of the emergence of cloud computing, an integrated group decision method is proposed. The cloud service supplier selection index framework is built from the two perspectives of technology and technology management. A support vector machine (SVM)-based classification model is applied for preliminary screening to reduce the number of candidate suppliers. A triangular fuzzy number-rough sets-analytic hierarchy process (TFN-RS-AHP) method is designed to calculate each supplier’s index values from experts’ wisdom and experience. The index weights are determined by criteria importance through intercriteria correlation (CRITIC). The suppliers are evaluated by an improved TOPSIS replacing the Euclidean distance with the connection distance (TOPSIS-CD). An electric power enterprise’s case is given to illustrate the correctness and feasibility of the proposed method.

  8. Selection of logging-based TOC calculation methods for shale reservoirs: A case study of the Jiaoshiba shale gas field in the Sichuan Basin

    Directory of Open Access Journals (Sweden)

    Renchun Huang

    2015-03-01

    Full Text Available Various methods are available for calculating the TOC of shale reservoirs with logging data, and each method has its unique applicability and accuracy. So it is especially important to establish a regional experimental calculation model based on a thorough analysis of their applicability. With the Upper Ordovician Wufeng Fm-Lower Silurian Longmaxi Fm shale reservoirs as an example, TOC calculation models were built by use of the improved ΔlgR, bulk density, natural gamma spectroscopy, multi-fitting and volume model methods respectively, considering the previous research results and the geologic features of the area. These models were compared based on the core data. Finally, the bulk density method was selected as the regional experimental calculation model. Field practices demonstrated that the improved ΔlgR and natural gamma spectroscopy methods are poor in accuracy; although the multi-fitting method and bulk density method have relatively high accuracy, the bulk density method is simpler and wider in application. For further verifying its applicability, the bulk density method was applied to calculate the TOC of shale reservoirs in several key wells in the Jiaoshiba shale gas field, Sichuan Basin, and the calculation accuracy was clarified with the measured data of core samples, showing that the coincidence rate of logging-based TOC calculation is up to 90.5%–91.0%.

  9. Supplier Selection Using Weighted Utility Additive Method

    Science.gov (United States)

    Karande, Prasad; Chakraborty, Shankar

    2015-10-01

    Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria for choosing the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. The past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts preference disaggregation principle and addresses the decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives or a subset of these alternatives present in the process. The preferential preorder provided by the decision maker is used as a restriction of a LP problem, which has its own objective function, minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that WUTA method, having a sound mathematical background, can provide accurate ranking to the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real time examples are illustrated to prove

  10. An ecologically-based method for selecting ecological indicators for assessing risks to biological diversity from genetically-engineered plants

    DEFF Research Database (Denmark)

    Andow, D. A.; Lövei, Gabor L; Arpaia, Salvatore

    2013-01-01

    …adverse effects to biological diversity. The approach starts by (1) identifying the local environmental values so the ERA addresses specific concerns associated with local biological diversity. The model simplifies the indicator endpoint selection problem by (2) classifying biological diversity into ecological functional groups and selecting those that deliver the identified environmental values. (3) All of the species or ecosystem processes related to the selected functional groups are identified and (4) multi-criteria decision analysis (MCDA) is used to rank the indicator endpoint entities, which may be species or ecological processes. MCDA focuses on those species and processes that are critical for the identified ecological functions and are likely to be highly exposed to the GE organism. The highest ranked indicator entities are selected for the next step. (5) Relevant risk hypotheses are identified…

  11. Sustainable Supplier Selection with A Fuzzy Multi-Criteria Decision Making Method Based on Triple Bottom Line

    OpenAIRE

    Öztürk, Burcu Avcı; Özçelik , Funda

    2014-01-01

    To meet the demands of various stakeholders and to comply with environmental legislations, businesses started to look at their supply chain to enhance their overall sustainability profile. Supply chain operations with sustainability awareness have become an important issue in recent years and make the sustainable supplier performance evaluation and selection process as a central concept of sustainable supply chain management. In this study, supplier selection problem is modelled within the co...

  12. Novel feature selection method based on Stochastic Methods Coupled to Support Vector Machines using ¹H NMR data (data of olive and hazelnut oils)

    Directory of Open Access Journals (Sweden)

    Oscar Eduardo Gualdron

    2014-12-01

    Full Text Available One of the principal difficulties in data analysis and information processing is the representation of the dataset. Typically, one encounters a large number of samples, each with thousands of variables, often containing irrelevant information and noise. Therefore, in order to represent findings more clearly, it is necessary to reduce the number of variables. In this paper, a novel variable selection technique for multivariate data analysis, inspired by stochastic methods and designed to work with support vector machines (SVM), is described. The approach is demonstrated in a food application involving the detection of adulteration of olive oil (more expensive) with hazelnut oil (cheaper). Fingerprinting by ¹H NMR spectroscopy was used to analyze the different samples. Results show that it is possible to reduce the number of variables without affecting classification results.
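
    A minimal sketch of the general idea, assuming a wrapper-style stochastic search around an SVM classifier (the authors' specific stochastic operators are not reproduced); the data are synthetic stand-ins for an NMR fingerprint matrix.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Synthetic stand-in for a samples x variables spectral matrix
        X, y = make_classification(n_samples=60, n_features=200, n_informative=10, random_state=0)

        rng = np.random.default_rng(0)
        best_subset, best_score = None, -np.inf

        # Stochastic search: score random variable subsets with a cross-validated SVM
        for _ in range(200):
            subset = rng.choice(X.shape[1], size=20, replace=False)
            score = cross_val_score(SVC(kernel="linear"), X[:, subset], y, cv=5).mean()
            if score > best_score:
                best_subset, best_score = subset, score

        print(best_score, sorted(best_subset))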

  13. Laccase-catalyzed oxidation and intramolecular cyclization of dopamine: A new method for selective determination of dopamine with laccase/carbon nanotube-based electrochemical biosensors

    International Nuclear Information System (INIS)

    Xiang, Ling; Lin, Yuqing; Yu, Ping; Su, Lei; Mao, Lanqun

    2007-01-01

    This study demonstrates a new electrochemical method for the selective determination of dopamine (DA) with the coexistence of ascorbic acid (AA) and 3,4-dihydroxyphenylacetic acid (DOPAC) with laccase/multi-walled carbon nanotube (MWNT)-based biosensors prepared by cross-linking laccase into MWNT layer confined onto glassy carbon electrodes. The method described here is essentially based on the chemical reaction properties of DA including oxidation, intramolecular cyclization and disproportionation reactions to finally give 5,6-dihydroxyindoline quinone and on the uses of the two-electron and two-proton reduction of the formed 5,6-dihydroxyindoline quinone to constitute a method for the selective determination of DA at a negative potential that is totally separated from those for the redox processes of AA and DOPAC. Instead of the ECE reactions of DA with the first oxidation of DA being driven electrochemically, laccase is used here as the biocatalyst to drive the first oxidation of DA into its quinone form and thus initialize the sequential reactions of DA finally into 5,6-dihydroxyindoline quinone. In addition, laccase also catalyzes the oxidation of AA and DOPAC into electroinactive species with the concomitant reduction of O2. As a consequence, a combinational exploitation of the chemical properties inherent in DA and the multifunctional catalytic properties of laccase as well as the excellent electrochemical properties of carbon nanotubes substantially enables the prepared laccase/MWNT-based biosensors to be well competent for the selective determination of DA with the coexistence of physiological levels of AA and DOPAC. This demonstration offers a new method for the selective determination of DA, which could be potentially employed for the determination of DA in biological systems.

  14. A Phos-tag-based magnetic-bead method for rapid and selective separation of phosphorylated biomolecules.

    Science.gov (United States)

    Tsunehiro, Masaya; Meki, Yuma; Matsuoka, Kanako; Kinoshita-Kikuta, Emiko; Kinoshita, Eiji; Koike, Tohru

    2013-04-15

    A simple and efficient method based on magnetic-bead technology has been developed for the separation of phosphorylated and nonphosphorylated low-molecular-weight biomolecules, such as nucleotides, phosphorylated amino acids, or phosphopeptides. The phosphate-binding site on the bead is an alkoxide-bridged dinuclear zinc(II) complex with 1,3-bis(pyridin-2-ylmethylamino)propan-2-olate (Phos-tag), which is linked to a hydrophilic cross-linked agarose coating on a magnetic core particle. All steps for the phosphate-affinity separation are conducted in buffers of neutral pH with 50 μL of the magnetic beads in a 1.5-mL microtube. The entire separation protocol for phosphomonoester-type compounds, from addition to elution, requires less than 12 min per sample if the buffers and the zinc(II)-bound Phos-tag magnetic beads have been prepared in advance. The phosphate-affinity magnetic beads are reusable at least 15 times without a decrease in their phosphate-binding ability and they are stable for three months in propan-2-ol. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. A BAND SELECTION METHOD FOR SUB-PIXEL TARGET DETECTION IN HYPERSPECTRAL IMAGES BASED ON LABORATORY AND FIELD REFLECTANCE SPECTRAL COMPARISON

    Directory of Open Access Journals (Sweden)

    S. Sharifi hashjin

    2016-06-01

    Full Text Available In recent years, developing target detection algorithms has received growing interest in hyperspectral images. In comparison to the classification field, few studies have been done on dimension reduction or band selection for target detection in hyperspectral images. This study presents a simple method to remove bad bands from the images in a supervised manner for sub-pixel target detection. The proposed method is based on comparing field and laboratory spectra of the target of interest for detecting bad bands. For evaluation, the target detection blind test dataset is used in this study. Experimental results show that the proposed method can improve efficiency of the two well-known target detection methods, ACE and CEM.
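
    A minimal sketch of the band-flagging idea, assuming that bands where the field spectrum of the target departs strongly from its laboratory spectrum are treated as "bad" and removed; the spectra, the distorted bands and the threshold rule are all hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)
        n_bands = 50
        lab_spectrum = rng.uniform(0.1, 0.6, n_bands)            # laboratory reflectance
        field_spectrum = lab_spectrum + rng.normal(0, 0.02, n_bands)
        field_spectrum[[5, 17, 33]] += 0.3                       # bands distorted in the field

        # Flag bands where field and laboratory spectra disagree beyond a threshold
        diff = np.abs(field_spectrum - lab_spectrum)
        threshold = diff.mean() + 2 * diff.std()
        good_bands = np.where(diff <= threshold)[0]
        print(len(good_bands), "bands kept out of", n_bands)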

  16. A Selection Method for COTS Systems

    DEFF Research Database (Denmark)

    Hedman, Jonas

    …new skills and methods supporting the process of evaluating and selecting information systems. This paper presents a method for selecting COTS systems. The method includes the following phases: problem framing, requirements and appraisal, and selection of systems. The idea and distinguishing feature behind the method is that improved understanding of organizational 'ends' or goals should govern the selection of a COTS system. This can also be expressed as a match or fit between 'ends' (e.g. improved organizational effectiveness) and 'means' (e.g. implementing COTS systems). This way of approaching…

  17. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
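
    A minimal sketch of selecting projection rows with the Fisher criterion, assuming a two-class setting (the enrolled user's features versus impostor features); the feature vectors, projection matrix and number of retained rows are hypothetical, and the quantization step with the bimodal Gaussian mixture is omitted.

        import numpy as np

        def fisher_ratio(projected, labels):
            """Between-class over within-class variance for one projection (two classes)."""
            a, b = projected[labels == 0], projected[labels == 1]
            return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

        rng = np.random.default_rng(0)
        features = rng.normal(size=(100, 64))       # stand-in face feature vectors
        labels = np.repeat([0, 1], 50)              # 0 = enrolled user, 1 = others
        P = rng.normal(size=(32, 64))               # random projection matrix

        # Keep the projection rows that best separate the two classes
        scores = [fisher_ratio(features @ p, labels) for p in P]
        selected_rows = np.argsort(scores)[-8:]
        print(selected_rows)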

  18. A new laser vibrometry-based 2D selective intensity method for source identification in reverberant fields: part II. Application to an aircraft cabin

    International Nuclear Information System (INIS)

    Revel, G M; Martarelli, M; Chiariotti, P

    2010-01-01

    The selective intensity technique is a powerful tool for the localization of acoustic sources and for the identification of the structural contribution to the acoustic emission. In practice, the selective intensity method is based on simultaneous measurements of acoustic intensity, by means of a couple of matched microphones, and structural vibration of the emitting object. In this paper high spatial density multi-point vibration data, acquired by using a scanning laser Doppler vibrometer, have been used for the first time. Therefore, by applying the selective intensity algorithm, the contribution of a large number of structural sources to the acoustic field radiated by the vibrating object can be estimated. The selective intensity represents the distribution of the acoustic monopole sources on the emitting surface, as if each monopole acted separately from the others. This innovative selective intensity approach can be very helpful when the measurement is performed on large panels in highly reverberating environments, such as aircraft cabins. In this case the separation of the direct acoustic field (radiated by the vibrating panels of the fuselage) and the reverberant one is difficult by traditional techniques. The work shown in this paper is the application of part of the results of the European project CREDO (Cabin Noise Reduction by Experimental and Numerical Design Optimization) carried out within the framework of the EU. Therefore the aim of this paper is to illustrate a real application of the method to the interior acoustic characterization of an Alenia Aeronautica ATR42 ground test facility, Alenia Aeronautica being a partner of the CREDO project

  19. Evidence-based case selection: An innovative knowledge management method to cluster public technical and vocational education and training colleges in South Africa

    Directory of Open Access Journals (Sweden)

    Margaretha M. Visser

    2017-03-01

    Full Text Available Background: Case studies are core constructs used in information management research. A persistent challenge for business, information management and social science researchers is how to select a representative sample of cases among a population with diverse characteristics when convenient or purposive sampling is not considered rigorous enough. The context of the study is post-school education, and it involves an investigation of quantitative methods of clustering the population of public technical and vocational education and training (TVET colleges in South Africa into groups with a similar level of maturity in terms of their information systems. Objectives: The aim of the study was to propose an evidence-based quantitative method for the selection of cases for case study research and to demonstrate the use and usefulness thereof by clustering public TVET colleges. Method: The clustering method was based on the use of a representative characteristic of the context, as a proxy. In this context of management information systems (MISs, website maturity was used as a proxy and website maturity model theory was used in the development of an evaluation questionnaire. The questionnaire was used for capturing data on website characteristics, which was used to determine website maturity. The websites of the 50 public TVET colleges were evaluated by nine evaluators. Multiple statistical techniques were applied to establish inter-rater reliability and to produce clusters of colleges. Results: The analyses revealed three clusters of public TVET colleges based on their website maturity levels. The first cluster includes three colleges with no websites or websites at a low maturity level. The second cluster consists of 30 colleges with websites at an average maturity level. The third cluster contains 17 colleges with websites at a high maturity level. Conclusion: The main contribution to the knowledge domain is an innovative quantitative method employing a

  20. Selected Reaction Monitoring method to determine the species origin of blood-based binding agents in meats: a collaborative study

    NARCIS (Netherlands)

    Grundy, H.; Read, W.A.; Macarthur, R.; Alewijn, M.

    2013-01-01

    Binding products or food ‘glues’ are used throughout the food industry to increase the meat use rate or to augment economic efficiency. Some of these binders contain thrombin from bovine and porcine blood. The European parliament has recently banned thrombin-based additives and labelling legislation

  1. Application of Mean of Absolute Deviation Method for the Selection of Best Nonlinear Component Based on Video Encryption

    Science.gov (United States)

    Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar

    2013-07-01

    The aim of this work is to make use of the mean of absolute deviation (MAD) method for the evaluation process of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
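
    A minimal sketch of a mean-of-absolute-deviation calculation, applied here to the output of a hypothetical byte-substitution box; the exact quantity the authors evaluate for each S-box may differ.

        import numpy as np

        def mean_absolute_deviation(values):
            """Mean absolute distance of a sample from its own mean."""
            values = np.asarray(values, dtype=float)
            return np.mean(np.abs(values - values.mean()))

        # Hypothetical S-box: a random byte permutation applied to a block of pixel values
        sbox = np.random.default_rng(0).permutation(256)
        pixels = np.arange(256)
        substituted = sbox[pixels]

        # One possible MAD-style figure of merit: how strongly substitution perturbs the data
        print(mean_absolute_deviation(substituted - pixels))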

  2. A two-step leaching method designed based on chemical fraction distribution of the heavy metals for selective leaching of Cd, Zn, Cu, and Pb from metallurgical sludge.

    Science.gov (United States)

    Wang, Fen; Yu, Junxia; Xiong, Wanli; Xu, Yuanlai; Chi, Ru-An

    2018-01-01

    For selective leaching and highly effective recovery of heavy metals from a metallurgical sludge, a two-step leaching method was designed based on the distribution analysis of the chemical fractions of the loaded heavy metal. Hydrochloric acid (HCl) was used as a leaching agent in the first step to leach the relatively labile heavy metals and then ethylenediamine tetraacetic acid (EDTA) was applied to leach the residual metals according to their different fractional distribution. Using the two-step leaching method, 82.89% of Cd, 55.73% of Zn, 10.85% of Cu, and 0.25% of Pb were leached in the first step by 0.7 M HCl at a contact time of 240 min, and the leaching efficiencies for Cd, Zn, Cu, and Pb were elevated up to 99.76, 91.41, 71.85, and 94.06%, by subsequent treatment with 0.2 M EDTA at 480 min, respectively. Furthermore, HCl leaching induced fractional redistribution, which might increase the mobility of the remaining metals and then facilitate the following metal removal by EDTA. The facilitation was further confirmed by the comparison to the one-step leaching method with single HCl or single EDTA, respectively. These results suggested that the designed two-step leaching method by HCl and EDTA could be used for selective leaching and effective recovery of heavy metals from the metallurgical sludge or heavy metal-contaminated solid media.

  3. Selective formation of GaN-based nanorod heterostructures on soda-lime glass substrates by a local heating method

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Young Joon; Kim, Yong-Jin [Department of Materials Science and Engineering, POSTECH, Pohang, Gyeongbuk 790-784 (Korea, Republic of); Jeon, Jong-Myeong; Kim, Miyoung; Choi, Jun Hee [Department of Materials Science and Engineering, Seoul National University, Seoul 151-744 (Korea, Republic of); Baik, Chan Wook; Kim, Sun Il; Park, Sung Soo; Kim, Jong Min [Frontier Research Laboratory, Samsung Advanced Institute of Technology, PO Box 111, Kiheung 446-712 (Korea, Republic of); Yi, Gyu-Chul, E-mail: joonie.choi@samsung.com, E-mail: gcyi@snu.ac.kr [National Creative Research Initiative Center for Semiconductor Nanorods, Department of Physics and Astronomy, Seoul National University, Seoul 151-747 (Korea, Republic of)

    2011-05-20

    We report on the fabrication of high-quality GaN on soda-lime glass substrates, heretofore precluded by both the intolerance of soda-lime glass to the high temperatures required for III-nitride growth and the lack of an epitaxial relationship with amorphous glass. The difficulties were circumvented by heteroepitaxial coating of GaN on ZnO nanorods via a local microheating method. Metal-organic chemical vapor deposition of ZnO nanorods and GaN layers using the microheater arrays produced high-quality GaN/ZnO coaxial nanorod heterostructures at only the desired regions on the soda-lime glass substrates. High-resolution transmission electron microscopy examination of the coaxial nanorod heterostructures indicated the formation of an abrupt, semicoherent interface. Photoluminescence and cathodoluminescence spectroscopy was also applied to confirm the high optical quality of the coaxial nanorod heterostructures. Mg-doped GaN/ZnO coaxial nanorod heterostructure arrays, whose GaN shell layers were grown with various different magnesocene flow rates, were further investigated by using photoluminescence spectroscopy for the p-type doping characteristics. The suggested method for fabrication of III-nitrides on glass substrates signifies potentials for low-cost and large-size optoelectronic device applications.

  4. Selective formation of GaN-based nanorod heterostructures on soda-lime glass substrates by a local heating method.

    Science.gov (United States)

    Hong, Young Joon; Kim, Yong-Jin; Jeon, Jong-Myeong; Kim, Miyoung; Choi, Jun Hee; Baik, Chan Wook; Kim, Sun Il; Park, Sung Soo; Kim, Jong Min; Yi, Gyu-Chul

    2011-05-20

    We report on the fabrication of high-quality GaN on soda-lime glass substrates, heretofore precluded by both the intolerance of soda-lime glass to the high temperatures required for III-nitride growth and the lack of an epitaxial relationship with amorphous glass. The difficulties were circumvented by heteroepitaxial coating of GaN on ZnO nanorods via a local microheating method. Metal-organic chemical vapor deposition of ZnO nanorods and GaN layers using the microheater arrays produced high-quality GaN/ZnO coaxial nanorod heterostructures at only the desired regions on the soda-lime glass substrates. High-resolution transmission electron microscopy examination of the coaxial nanorod heterostructures indicated the formation of an abrupt, semicoherent interface. Photoluminescence and cathodoluminescence spectroscopy was also applied to confirm the high optical quality of the coaxial nanorod heterostructures. Mg-doped GaN/ZnO coaxial nanorod heterostructure arrays, whose GaN shell layers were grown with various different magnesocene flow rates, were further investigated by using photoluminescence spectroscopy for the p-type doping characteristics. The suggested method for fabrication of III-nitrides on glass substrates signifies potentials for low-cost and large-size optoelectronic device applications.

  5. Evidence based case selection: An innovative knowledge management method to cluster public technical and vocational education and training colleges in South Africa

    CSIR Research Space (South Africa)

    Visser, MM

    2017-03-01

    Full Text Available for MIS maturity. The method can be applied in quantitative, qualitative and mixed methods research, to group a population and thereby simplify the process of sample selection of cases for further in-depth investigation....

  6. Novel Selectivity-Based Forensic Toxicological Validation of a Paper Spray Mass Spectrometry Method for the Quantitative Determination of Eight Amphetamines in Whole Blood

    Science.gov (United States)

    Teunissen, Sebastiaan F.; Fedick, Patrick W.; Berendsen, Bjorn J. A.; Nielen, Michel W. F.; Eberlin, Marcos N.; Graham Cooks, R.; van Asten, Arian C.

    2017-12-01

    Paper spray tandem mass spectrometry is used to identify and quantify eight individual amphetamines in whole blood in 1.3 min. The method has been optimized and fully validated according to forensic toxicology guidelines, for the quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine (MDA), 3,4-methylenedioxy-N-methylamphetamine (MDMA), 3,4-methylenedioxy-N-ethylamphetamine (MDEA), para-methoxyamphetamine (PMA), para-methoxymethamphetamine (PMMA), and 4-fluoroamphetamine (4-FA). Additionally, a new concept of intrinsic and application-based selectivity is discussed, featuring increased confidence in the power to discriminate the amphetamines from other chemically similar compounds when applying an ambient mass spectrometric method without chromatographic separation. Accuracy was within ±15% and average precision was better than 15%, and better than 20% at the LLOQ. Detection limits between 15 and 50 ng/mL were obtained using only 12 μL of whole blood.

  7. Diffusion weighted imaging for differentiating benign from malignant orbital tumors: Diagnostic performance of the apparent diffusion coefficient based on region of interest selection method

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Xiao Quan; Hu, Hao Hu; Su, Guo Yi; Liu, Hu; Shi, Hai Bin; Wu, Fei Yun [First Affiliated Hospital of Nanjing Medical University, Nanjing (China)

    2016-09-15

    To evaluate the differences in the apparent diffusion coefficient (ADC) measurements based on three different region of interest (ROI) selection methods, and to compare their diagnostic performance in differentiating benign from malignant orbital tumors. Diffusion-weighted imaging data of sixty-four patients with orbital tumors (33 benign and 31 malignant) were retrospectively analyzed. Two readers independently measured the ADC values using three different ROI selection methods: whole-tumor (WT), single-slice (SS), and reader-defined small sample (RDSS). The differences in ADC values (ADC-ROI_WT, ADC-ROI_SS, and ADC-ROI_RDSS) between the benign and malignant groups were compared using the unpaired t test. Receiver operating characteristic curves were used to determine and compare their diagnostic ability. The ADC measurement time was compared using ANOVA, and the measurement reproducibility was assessed using the Bland-Altman method and the intra-class correlation coefficient (ICC). The malignant group showed significantly lower ADC-ROI_WT, ADC-ROI_SS, and ADC-ROI_RDSS than the benign group (all p < 0.05). The areas under the curve showed no significant difference when using ADC-ROI_WT, ADC-ROI_SS, and ADC-ROI_RDSS as the differentiating index, respectively (all p > 0.05). ROI_SS and ROI_RDSS required comparable measurement times (p > 0.05), which were significantly shorter than for ROI_WT (p < 0.05). ROI_SS showed the best reproducibility (mean difference ± limits of agreement between the two readers, 0.022 [-0.080 to 0.123] × 10⁻³ mm²/s; ICC, 0.997) among the three ROI methods. Apparent diffusion coefficient values based on the three different ROI selection methods can help to differentiate benign from malignant orbital tumors. The results for measurement time, reproducibility and diagnostic ability suggest that the ROI_SS method is potentially useful for clinical practice.

  8. Quantitative Methods for Software Selection and Evaluation

    National Research Council Canada - National Science Library

    Bandor, Michael S

    2006-01-01

    ... (the ability of the product to meet the need) and the cost. The method used for the analysis and selection activities can range from the use of basic intuition to counting the number of requirements fulfilled, or something...

  9. Mining method selection by integrated AHP and PROMETHEE method.

    Science.gov (United States)

    Bogdanovic, Dejan; Nikolic, Djordje; Ilic, Ivana

    2012-03-01

    Selecting the best mining method among many alternatives is a multicriteria decision making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The related problem includes five possible mining methods and eleven criteria to evaluate them. Criteria are accurately chosen in order to cover the most important parameters that impact on the mining method selection, such as geological and geotechnical properties, economic parameters and geographical factors. The AHP is used to analyze the structure of the mining method selection problem and to determine weights of the criteria, and PROMETHEE method is used to obtain the final ranking and to make a sensitivity analysis by changing the weights. The results have shown that the proposed integrated method can be successfully used in solving mining engineering problems.
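
    A minimal sketch of the AHP half of such an approach: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix and checked for consistency. The 3 x 3 matrix below is hypothetical, and the PROMETHEE ranking step is not shown.

        import numpy as np

        # Hypothetical pairwise comparison matrix on the Saaty 1-9 scale
        A = np.array([
            [1.0, 3.0, 5.0],
            [1/3, 1.0, 2.0],
            [1/5, 1/2, 1.0],
        ])

        eigvalues, eigvectors = np.linalg.eig(A)
        k = np.argmax(eigvalues.real)
        weights = eigvectors[:, k].real
        weights = weights / weights.sum()

        # Consistency: CI = (lambda_max - n) / (n - 1), CR = CI / RI with RI = 0.58 for n = 3
        n = A.shape[0]
        ci = (eigvalues.real[k] - n) / (n - 1)
        print(weights, ci / 0.58)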

  10. Location of airports - selected quantitative methods

    Directory of Open Access Journals (Sweden)

    Agnieszka Merkisz-Guranowska

    2016-09-01

    Full Text Available Background: The role of air transport in  the economic development of a country and its regions cannot be overestimated. The decision concerning an airport's location must be in line with the expectations of all the stakeholders involved. This article deals with the issues related to the choice of  sites where airports should be located. Methods: Two main quantitative approaches related to the issue of airport location are presented in this article, i.e. the question of optimizing such a choice and the issue of selecting the location from a predefined set. The former involves mathematical programming and formulating the problem as an optimization task, the latter, however, involves ranking the possible variations. Due to various methodological backgrounds, the authors present the advantages and disadvantages of both approaches and point to the one which currently has its own practical application. Results: Based on real-life examples, the authors present a multi-stage procedure, which renders it possible to solve the problem of airport location. Conclusions: Based on the overview of literature of the subject, the authors point to three types of approach to the issue of airport location which could enable further development of currently applied methods.

  11. Supplier selection an MCDA-based approach

    CERN Document Server

    Mukherjee, Krishnendu

    2017-01-01

    The purpose of this book is to present a comprehensive review of the latest research and development trends at the international level for modeling and optimization of the supplier selection process for different industrial sectors. It is targeted to serve two audiences: the MBA and PhD student interested in procurement, and the practitioner who wishes to gain a deeper understanding of procurement analysis with multi-criteria based decision tools to avoid upstream risks to get better supply chain visibility. The book is expected to serve as a ready reference for supplier selection criteria and various multi-criteria based supplier’s evaluation methods for forward, reverse and mass customized supply chain. This book encompasses several criteria, methods for supplier selection in a systematic way based on extensive literature review from 1998 to 2012. It provides several case studies and some useful links which can serve as a starting point for interested researchers. In the appendix several computer code wri...

  12. Equipment Selection by using Fuzzy TOPSIS Method

    Science.gov (United States)

    Yavuz, Mahmut

    2016-10-01

    In this study, Fuzzy TOPSIS method was performed for the selection of open pit truck and the optimal solution of the problem was investigated. Data from Turkish Coal Enterprises was used in the application of the method. This paper explains the Fuzzy TOPSIS approaches with group decision-making application in an open pit coal mine in Turkey. An algorithm of the multi-person multi-criteria decision making with fuzzy set approach was applied an equipment selection problem. It was found that Fuzzy TOPSIS with a group decision making is a method that may help decision-makers in solving different decision-making problems in mining.
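
    A minimal sketch of the underlying TOPSIS ranking in its crisp form (the fuzzy variant replaces the crisp ratings and distances with fuzzy numbers and fuzzy distances); the truck data, weights and criterion directions are hypothetical.

        import numpy as np

        X = np.array([            # trucks x criteria (e.g. capacity, cost, reliability)
            [35.0, 1.2, 0.90],
            [40.0, 1.5, 0.85],
            [30.0, 1.0, 0.95],
        ])
        weights = np.array([0.5, 0.3, 0.2])
        benefit = np.array([True, False, True])   # cost is a "smaller is better" criterion

        R = X / np.sqrt((X ** 2).sum(axis=0))     # vector normalisation
        V = R * weights
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
        d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
        closeness = d_neg / (d_pos + d_neg)
        print(np.argsort(closeness)[::-1])        # ranking, best alternative first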

  13. A comparative study between three stability indicating spectrophotometric methods for the determination of diatrizoate sodium in presence of its cytotoxic degradation product based on two-wavelength selection

    Science.gov (United States)

    Riad, Safaa M.; El-Rahman, Mohamed K. Abd; Fawaz, Esraa M.; Shehata, Mostafa A.

    2015-06-01

    Three sensitive, selective, and precise stability indicating spectrophotometric methods for the determination of the X-ray contrast agent, diatrizoate sodium (DTA) in the presence of its acidic degradation product (highly cytotoxic 3,5-diamino metabolite) and in pharmaceutical formulation, were developed and validated. The first method is ratio difference, the second one is the bivariate method, and the third one is the dual wavelength method. The calibration curves for the three proposed methods are linear over a concentration range of 2-24 μg/mL. The selectivity of the proposed methods was tested using laboratory prepared mixtures. The proposed methods have been successfully applied to the analysis of DTA in pharmaceutical dosage forms without interference from other dosage form additives. The results were statistically compared with the official US pharmacopeial method. No significant difference for either accuracy or precision was observed.

  14. An efficient strategy based on an individualized selection of registration methods. Application to the coregistration of MR and SPECT images in neuro-oncology

    International Nuclear Information System (INIS)

    Tacchella, Jean-Marc; Lefort, Muriel; Habert, Marie-Odile; Yeni, Nathanaëlle; Kas, Aurélie; Frouin, Frédérique; Roullot, Elodie; Cohen, Mike-Ely; Guillevin, Rémy; Petrirena, Grégorio; Delattre, Jean-Yves

    2014-01-01

    An efficient registration strategy is described that aims to help solve delicate medical imaging registration problems. It consists of running several registration methods for each dataset and selecting the best one for each specific dataset, according to an evaluation criterion. Finally, the quality of the registration results, obtained with the best method, is visually scored by an expert as excellent, correct or poor. The strategy was applied to coregister Technetium-99m Sestamibi SPECT and MRI data in the framework of a follow-up protocol in patients with high grade gliomas receiving antiangiogenic therapy. To adapt the strategy to this clinical context, a robust semi-automatic evaluation criterion based on the physiological uptake of the Sestamibi tracer was defined. A panel of eighteen multimodal registration algorithms issued from BrainVisa, SPM or AIR software environments was systematically applied to the clinical database composed of sixty-two datasets. According to the expert visual validation, this new strategy provides 85% excellent registrations, 12% correct ones and only 3% poor ones. These results compare favorably to the ones obtained by the globally most efficient registration method over the whole database, for which only 61% of excellent registration results have been reported. Thus the registration strategy in its current implementation proves to be suitable for clinical application. (paper)

  15. Leukemia and colon tumor detection based on microarray data classification using momentum backpropagation and genetic algorithm as a feature selection method

    Science.gov (United States)

    Wisesty, Untari N.; Warastri, Riris S.; Puspitasari, Shinta Y.

    2018-03-01

    Cancer is one of the major causes of morbidity and mortality worldwide. There is therefore a need for a system that can analyze microarray data derived from a patient's deoxyribonucleic acid (DNA) and identify whether that person is suffering from cancer. However, microarray data have thousands of attributes, which makes data processing challenging; this is often referred to as the curse of dimensionality. In this study, a system was therefore built that is capable of detecting whether or not a patient has cancer. The algorithms used are a Genetic Algorithm for feature selection and a Momentum Backpropagation Neural Network as the classification method, with data taken from the Kent Ridge Bio-medical Dataset. Based on system testing, the system can detect leukemia and colon tumors with a best accuracy of 98.33% for the colon tumor data and 100% for the leukemia data. The Genetic Algorithm as the feature selection algorithm improves system accuracy from 64.52% to 98.33% for the colon tumor data and from 65.28% to 100% for the leukemia data, and the use of the momentum parameter accelerates the convergence of the neural network during training.

  16. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    Science.gov (United States)

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One of the research areas is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased as the real observation conditions could differ from theoretical predictions or simulations. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes in the AR performance with increasing baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.

  17. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Full Text Available Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the performance of the adaptive lasso becomes identical to that of the lasso.
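
    A minimal sketch of the standard adaptive lasso recipe this record alludes to: compute adaptive weights from an initial estimate, rescale the predictors, run an ordinary lasso and map the coefficients back. The data, the weight exponent (gamma = 1) and the penalty level are hypothetical.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import Lasso, LinearRegression

        X, y = make_regression(n_samples=100, n_features=20, n_informative=5, noise=5.0, random_state=0)

        beta_init = LinearRegression().fit(X, y).coef_
        w = 1.0 / (np.abs(beta_init) + 1e-6)       # adaptive weights

        X_scaled = X / w                           # equivalent to penalising coefficient j by w_j
        lasso = Lasso(alpha=1.0).fit(X_scaled, y)
        beta_adaptive = lasso.coef_ / w

        print(np.nonzero(beta_adaptive)[0])        # indices of the selected variables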

  18. The co-feature ratio, a novel method for the measurement of chromatographic and signal selectivity in LC-MS-based metabolomics

    International Nuclear Information System (INIS)

    Elmsjö, Albert; Haglöf, Jakob; Engskog, Mikael K.R.; Nestor, Marika; Arvidsson, Torbjörn; Pettersson, Curt

    2017-01-01

    Evaluation of analytical procedures, especially in regards to measuring chromatographic and signal selectivity, is highly challenging in untargeted metabolomics. The aim of this study was to suggest a new straightforward approach for a systematic examination of chromatographic and signal selectivity in LC-MS-based metabolomics. By calculating the ratio between each feature and its co-eluting features (the co-features), a measurement of the chromatographic selectivity (i.e. extent of co-elution) as well as the signal selectivity (e.g. amount of adduct formation) of each feature could be acquired, the co-feature ratio. This approach was used to examine possible differences in chromatographic and signal selectivity present in samples exposed to three different sample preparation procedures. The capability of the co-feature ratio was evaluated both in a classical targeted setting using isotope labelled standards as well as without standards in an untargeted setting. For the targeted analysis, several metabolites showed a skewed quantitative signal due to poor chromatographic selectivity and/or poor signal selectivity. Moreover, evaluation of the untargeted approach through multivariate analysis of the co-feature ratios demonstrated the possibility to screen for metabolites displaying poor chromatographic and/or signal selectivity characteristics. We conclude that the co-feature ratio can be a useful tool in the development and evaluation of analytical procedures in LC-MS-based metabolomics investigations. Increased selectivity through proper choice of analytical procedures may decrease the false positive and false negative discovery rate and thereby increase the validity of any metabolomic investigation. - Highlights: • The co-feature ratio (CFR) is introduced. • CFR measures chromatographic and signal selectivity of a feature. • CFR can be used for evaluating experimental procedures in metabolomics. • CFR can aid in locating features with poor selectivity.
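
    A minimal sketch of one way to compute such a ratio from a peak-picked feature table, assuming co-features are defined by a retention-time window; the feature table, the window and the exact ratio definition are hypothetical and may differ from the authors' implementation.

        import numpy as np

        # Hypothetical feature table: retention time (min) and intensity per feature
        rt = np.array([1.02, 1.03, 1.05, 2.50, 2.51, 4.10])
        intensity = np.array([1e6, 2e5, 5e4, 8e5, 7e5, 3e5])
        rt_window = 0.05   # co-elution window (min)

        def co_feature_ratios(rt, intensity, window):
            ratios = []
            for i in range(len(rt)):
                co = np.abs(rt - rt[i]) <= window
                co[i] = False
                # ratio of a feature's intensity to the summed intensity of its co-features
                ratios.append(intensity[i] / intensity[co].sum() if co.any() else np.inf)
            return np.array(ratios)

        print(co_feature_ratios(rt, intensity, rt_window))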

  19. The co-feature ratio, a novel method for the measurement of chromatographic and signal selectivity in LC-MS-based metabolomics

    Energy Technology Data Exchange (ETDEWEB)

    Elmsjö, Albert, E-mail: Albert.Elmsjo@farmkemi.uu.se [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden); Haglöf, Jakob; Engskog, Mikael K.R. [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden); Nestor, Marika [Department of Immunology, Genetics and Pathology, Uppsala University (Sweden); Arvidsson, Torbjörn [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden); Medical Product Agency, Uppsala (Sweden); Pettersson, Curt [Department of Medicinal Chemistry, Division of Analytical Pharmaceutical Chemistry, Uppsala University (Sweden)

    2017-03-01

    Evaluation of analytical procedures, especially in regards to measuring chromatographic and signal selectivity, is highly challenging in untargeted metabolomics. The aim of this study was to suggest a new straightforward approach for a systematic examination of chromatographic and signal selectivity in LC-MS-based metabolomics. By calculating the ratio between each feature and its co-eluting features (the co-features), a measurement of the chromatographic selectivity (i.e. extent of co-elution) as well as the signal selectivity (e.g. amount of adduct formation) of each feature could be acquired, the co-feature ratio. This approach was used to examine possible differences in chromatographic and signal selectivity present in samples exposed to three different sample preparation procedures. The capability of the co-feature ratio was evaluated both in a classical targeted setting using isotope labelled standards as well as without standards in an untargeted setting. For the targeted analysis, several metabolites showed a skewed quantitative signal due to poor chromatographic selectivity and/or poor signal selectivity. Moreover, evaluation of the untargeted approach through multivariate analysis of the co-feature ratios demonstrated the possibility to screen for metabolites displaying poor chromatographic and/or signal selectivity characteristics. We conclude that the co-feature ratio can be a useful tool in the development and evaluation of analytical procedures in LC-MS-based metabolomics investigations. Increased selectivity through proper choice of analytical procedures may decrease the false positive and false negative discovery rate and thereby increase the validity of any metabolomic investigation. - Highlights: • The co-feature ratio (CFR) is introduced. • CFR measures chromatographic and signal selectivity of a feature. • CFR can be used for evaluating experimental procedures in metabolomics. • CFR can aid in locating features with poor selectivity.

  20. The Selection of Wagons for the Internal Transport of a Logistics Company: A Novel Approach Based on Rough BWM and Rough SAW Methods

    Directory of Open Access Journals (Sweden)

    Željko Stević

    2017-11-01

    Full Text Available The rationalization of logistics activities and processes is very important in the business and efficiency of every company. In this respect, transportation as a subsystem of logistics, whether internal or external, is potentially a huge area for achieving significant savings. In this paper, the emphasis is placed upon the internal transport logistics of a paper manufacturing company. It is necessary to rationalize the movement of vehicles in the company’s internal transport, that is, for the majority of the transport to be transferred to rail transport, because the company already has an industrial track installed in its premises. To do this, it is necessary to purchase at least two used wagons. The problem is formulated as a multi-criteria decision model with eight criteria and eight alternatives. The paper presents a new approach based on a combination of the Simple Additive Weighting (SAW method and rough numbers, which is used for ranking the potential solutions and selecting the most suitable one. The rough Best–Worst Method (BWM was used to determine the weight values of the criteria. The results obtained using a combination of these two methods in their rough form were verified by means of a sensitivity analysis consisting of a change in the weight criteria and comparison with the following methods in their conventional and rough forms: the Analytic Hierarchy Process (AHP, Technique for Ordering Preference by Similarity to Ideal Solution (TOPSIS and MultiAttributive Border Approximation area Comparison (MABAC. The results show very high stability of the model and ranks that are the same or similar in different scenarios.
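
    A minimal sketch of the conventional SAW aggregation underlying the approach (the paper's rough-number variant replaces the crisp ratings with rough intervals); the wagon data, weights and criterion directions below are hypothetical.

        import numpy as np

        X = np.array([             # wagons x criteria (e.g. capacity, price, condition)
            [22.0, 1.8, 7.0],
            [25.0, 2.1, 6.0],
            [20.0, 1.5, 8.0],
        ])
        weights = np.array([0.5, 0.2, 0.3])      # e.g. obtained from (rough) BWM
        benefit = np.array([True, False, True])  # price: lower is better

        # Linear normalisation followed by a weighted sum
        norm = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
        scores = (norm * weights).sum(axis=1)
        print(np.argsort(scores)[::-1])          # ranking, best wagon first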

  1. Improved methods in Agrobacterium-mediated transformation of almond using positive (mannose/pmi) or negative (kanamycin resistance) selection-based protocols.

    Science.gov (United States)

    Ramesh, Sunita A; Kaiser, Brent N; Franks, Tricia; Collins, Graham; Sedgley, Margaret

    2006-08-01

    A protocol for Agrobacterium-mediated transformation with either kanamycin or mannose selection was developed for leaf explants of the cultivar Prunus dulcis cv. Ne Plus Ultra. Regenerating shoots were selected on medium containing 15 μM kanamycin (negative selection), while in the positive selection strategy, shoots were selected on 2.5 g/l mannose supplemented with 15 g/l sucrose. Transformation efficiencies based on PCR analysis of individual putative transformed shoots from independent lines relative to the initial numbers of leaf explants tested were 5.6% for kanamycin/nptII and 6.8% for mannose/pmi selection, respectively. Southern blot analysis on six randomly chosen PCR-positive shoots confirmed the presence of the nptII transgene in each, and five randomly chosen lines identified to contain the pmi transgene by PCR showed positive hybridisation to a pmi DNA probe. The positive (mannose/pmi) and the negative (kanamycin) selection protocols used in this study have greatly improved transformation efficiency in almond, as confirmed by PCR and Southern blot. This study also demonstrates that in almond the mannose/pmi selection protocol is appropriate and can result in higher transformation efficiencies than kanamycin/nptII selection protocols.

  2. Highly selective and sensitive method for Cu2 + detection based on chiroptical activity of L-Cysteine mediated Au nanorod assemblies

    Science.gov (United States)

    Abbasi, Shahryar; Khani, Hamzeh

    2017-11-01

    Herein, we demonstrated a simple and efficient method to detect Cu2+ based on amplified optical activity in the chiral nanoassemblies of gold nanorods (Au NRs). L-Cysteine can induce side-by-side or end-to-end assembly of Au NRs with an evident plasmonic circular dichroism (PCD) response due to coupling between the surface plasmon resonances (SPR) of the Au NRs and the chiral signal of L-Cys. Because the side-by-side assembly gives an obviously stronger plasmonic circular dichroism (CD) response than the end-to-end assemblies, side-by-side assembled Au NRs were selected as a sensitive platform and used for Cu2+ detection. In the presence of Cu2+, Cu2+ can catalyze the O2 oxidation of cysteine to cystine. With an increase in Cu2+ concentration, the L-Cysteine-mediated assembly of Au NRs decreases because of the decrease in free cysteine thiol groups, and the PCD signal decreases. Taking advantage of this method, Cu2+ could be detected in the concentration range of 20 pM to 5 nM. Under optimal conditions, the calculated detection limit was found to be 7 pM.

  3. A Highly Sensitive and Selective Method for the Determination of an Iodate in Table-salt Samples Using Malachite Green-based Spectrophotometry.

    Science.gov (United States)

    Konkayan, Mongkol; Limchoowong, Nunticha; Sricharoen, Phitchan; Chanthai, Saksit

    2016-01-01

    A simple, rapid, and sensitive malachite green-based spectrophotometric method for the selective trace determination of iodate has been developed and presented for the first time. The reaction specifically involves the liberation of iodine in the presence of an excess of iodide under acidic conditions, followed by an instantaneous reaction between the liberated iodine and the malachite green dye. The optimum condition was obtained with a buffer solution of pH 5.2 in the presence of 40 mg L⁻¹ potassium iodide and 1.5 × 10⁻⁵ M malachite green for a 5-min incubation time. The iodate contents in some table-salt samples were in the range of 26 to 45 mg kg⁻¹, while those of drinking water, tap water, canal water, and seawater samples were not detectable (below the detection limit of 96 ng mL⁻¹), with satisfactory recoveries of between 93 and 108%. The results agreed with those obtained using ICP-OES for comparison.

  4. A novel method based on selective laser sintering for preparing high-performance carbon fibres/polyamide12/epoxy ternary composites

    Science.gov (United States)

    Zhu, Wei; Yan, Chunze; Shi, Yunsong; Wen, Shifeng; Liu, Jie; Wei, Qingsong; Shi, Yusheng

    2016-09-01

    A novel method based on selective laser sintering (SLS) process is proposed for the first time to prepare complex and high-performance carbon fibres/polyamide12/epoxy (CF/PA12/EP) ternary composites. The procedures are briefly described as follows: prepare polyamide12 (PA12) coated carbon fibre (CF) composite powder; build porous green parts by SLS; infiltrate the green parts with high-performance thermosetting epoxy (EP) resin; and finally cure the resin at high temperature. The obtained composites are a ternary composite system consisting of the matrix of novolac EP resin, the reinforcement of CFs and the transition thin layer of PA12 with a thickness of 595 nm. The SEM images and micro-CT analysis prove that the ternary system is a three-dimensional co-continuous structure and the reinforcement of CFs are well dispersed in the matrix of EP with the volume fraction of 31%. Mechanical tests show that the composites fabricated by this method yield an ultimate tensile strength of 101.03 MPa and a flexural strength of 153.43 MPa, which are higher than those of most of the previously reported SLS materials. Therefore, the process proposed in this paper shows great potential for manufacturing complex, lightweight and high-performance CF reinforced composite components in aerospace, automotive industries and other areas.

  5. Evidence-based selection of theories for designing behaviour change interventions: using methods based on theoretical construct domains to understand clinicians' blood transfusion behaviour.

    Science.gov (United States)

    Francis, Jill J; Stockton, Charlotte; Eccles, Martin P; Johnston, Marie; Cuthbertson, Brian H; Grimshaw, Jeremy M; Hyde, Chris; Tinmouth, Alan; Stanworth, Simon J

    2009-11-01

    Many theories of behaviour are potentially relevant to predictive and intervention studies but most studies investigate a narrow range of theories. Michie et al. (2005) agreed 12 'theoretical domains' from 33 theories that explain behaviour change. They developed a 'Theoretical Domains Interview' (TDI) for identifying relevant domains for specific clinical behaviours, but the framework has not been used for selecting theories for predictive studies. It was used here to investigate clinicians' transfusion behaviour in intensive care units (ICU). Evidence suggests that red blood cells transfusion could be reduced for some patients without reducing quality of care. (1) To identify the domains relevant to transfusion practice in ICUs and neonatal intensive care units (NICUs), using the TDI. (2) To use the identified domains to select appropriate theories for a study predicting transfusion behaviour. An adapted TDI about managing a patient with borderline haemoglobin by watching and waiting instead of transfusing red blood cells was used to conduct semi-structured, one-to-one interviews with 18 intensive care consultants and neonatologists across the UK. Relevant theoretical domains were: knowledge, beliefs about capabilities, beliefs about consequences, social influences, behavioural regulation. Further analysis at the construct level resulted in selection of seven theoretical approaches relevant to this context: Knowledge-Attitude-Behaviour Model, Theory of Planned Behaviour, Social Cognitive Theory, Operant Learning Theory, Control Theory, Normative Model of Work Team Effectiveness and Action Planning Approaches. This study illustrated, the use of the TDI to identify relevant domains in a complex area of inpatient care. This approach is potentially valuable for selecting theories relevant to predictive studies and resulted in greater breadth of potential explanations than would be achieved if a single theoretical model had been adopted.

  6. New decision criteria for selecting delta check methods based on the ratio of the delta difference to the width of the reference range can be generally applicable for each clinical chemistry test item.

    Science.gov (United States)

    Park, Sang Hyuk; Kim, So-Young; Lee, Woochang; Chun, Sail; Min, Won-Ki

    2012-09-01

    Many laboratories use 4 delta check methods: delta difference, delta percent change, rate difference, and rate percent change. However, guidelines regarding decision criteria for selecting delta check methods have not yet been provided. We present new decision criteria for selecting delta check methods for each clinical chemistry test item. We collected 811,920 and 669,750 paired (present and previous) test results for 27 clinical chemistry test items from inpatients and outpatients, respectively. We devised new decision criteria for the selection of delta check methods based on the ratio of the delta difference to the width of the reference range (DD/RR). Delta check methods based on these criteria were compared with those based on the CV% of the absolute delta difference (ADD) as well as those reported in 2 previous studies. The delta check methods suggested by new decision criteria based on the DD/RR ratio corresponded well with those based on the CV% of the ADD except for only 2 items each in inpatients and outpatients. Delta check methods based on the DD/RR ratio also corresponded with those suggested in the 2 previous studies, except for 1 and 7 items in inpatients and outpatients, respectively. The DD/RR method appears to yield more feasible and intuitive selection criteria and can easily explain changes in the results by reflecting both the biological variation of the test item and the clinical characteristics of patients in each laboratory. We suggest this as a measure to determine delta check methods.
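
    A minimal sketch of the DD/RR idea: the delta difference between consecutive results is compared with the width of the analyte's reference range. The reference ranges, results and flagging threshold here are hypothetical.

        # Hypothetical reference ranges (lower, upper)
        reference_range = {"sodium": (136.0, 145.0), "ALT": (0.0, 40.0)}

        def dd_rr(item, previous, current):
            low, high = reference_range[item]
            delta_difference = abs(current - previous)
            return delta_difference / (high - low)

        # A large ratio flags a change that is big relative to the reference interval
        print(dd_rr("sodium", 139.0, 151.0) > 1.0)   # 12 / 9  -> flagged
        print(dd_rr("ALT", 22.0, 30.0) > 1.0)        # 8 / 40  -> not flagged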

  7. Methods for selective functionalization and separation of carbon nanotubes

    Science.gov (United States)

    Strano, Michael S. (Inventor); Usrey, Monica (Inventor); Barone, Paul (Inventor); Dyke, Christopher A. (Inventor); Tour, James M. (Inventor); Kittrell, W. Carter (Inventor); Hauge, Robert H (Inventor); Smalley, Richard E. (Inventor); Marek, legal representative, Irene Marie (Inventor)

    2011-01-01

    The present invention is directed toward methods of selectively functionalizing carbon nanotubes of a specific type or range of types, based on their electronic properties, using diazonium chemistry. The present invention is also directed toward methods of separating carbon nanotubes into populations of specific types or range(s) of types via selective functionalization and electrophoresis, and also to the novel compositions generated by such separations.

  8. Comparing observer models and feature selection methods for a task-based statistical assessment of digital breast tomosynthesis in reconstruction space

    Science.gov (United States)

    Park, Subok; Zhang, George Z.; Zeng, Rongping; Myers, Kyle J.

    2014-03-01

    A task-based assessment of image quality [1] for digital breast tomosynthesis (DBT) can be done in either the projected or reconstructed data space. As the choice of observer models and feature selection methods can vary depending on the type of task and data statistics, we previously investigated the performance of two channelized-Hotelling observer models in conjunction with 2D Laguerre-Gauss (LG) and two implementations of partial least squares (PLS) channels along with that of the Hotelling observer in binary detection tasks involving DBT projections [2, 3]. The difference in these observers lies in how the spatial correlation in DBT angular projections is incorporated in the observer's strategy to perform the given task. In the current work, we extend our method to the reconstructed data space of DBT. We investigate how various model observers including the aforementioned compare for performing the binary detection of a spherical signal embedded in structured breast phantoms with the use of DBT slices reconstructed via filtered back projection. We explore how well the model observers incorporate the spatial correlation between different numbers of reconstructed DBT slices while varying the number of projections. For this, relatively small and large scan angles (24° and 96°) are used for comparison. Our results indicate that 1) given a particular scan angle, the number of projections needed to achieve the best performance for each observer is similar across all observer/channel combinations, i.e., Np = 25 for scan angle 96° and Np = 13 for scan angle 24°, and 2) given these sufficient numbers of projections, the number of slices for each observer to achieve the best performance differs depending on the channel/observer types, which is more pronounced in the narrow scan angle case.

  9. Method for hydrometallurgical recovery of selected metals

    International Nuclear Information System (INIS)

    Lorenz, G.; Schaefer, B.; Balzat, W.

    1988-01-01

    The method for hydrometallurgical recovery of selected metals refers to ore dressing by means of milling and alkaline leaching of metals, preferably uranium. By adding CaO during wet milling, Na+ or K+ ions of clayey ores are replaced by Ca2+ ions. Due to the ion exchange processes, the uranium bonded with clays becomes more accessible to the leaching solution. The uranium yield increases and the consumption of reagents decreases.

  10. The experiments and analysis of several selective video encryption methods

    Science.gov (United States)

    Zhang, Yue; Yang, Cheng; Wang, Lei

    2013-07-01

    This paper presents four methods for selective video encryption based on MPEG-2 video compression, targeting the slices, the I-frames, the motion vectors, and the DCT coefficients, respectively. We use AES encryption to implement the four methods in a simulation experiment on the VS2010 platform and compare the visual effect and the per-frame processing speed after the video is encrypted. The encryption depth can be selected arbitrarily; it is designed using the double limit counting method, so the accuracy can be increased.
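    The abstract names the four syntax elements that can be encrypted selectively. As a rough illustration of the underlying idea, the sketch below encrypts only the frames matching a predicate (here, I-frames) with AES in CTR mode via the Python cryptography package; the frame dictionaries and their "type" field are invented placeholders for real MPEG-2 bitstream parsing, and this is not the authors' VS2010 implementation.

```python
# Minimal sketch of selective video encryption: only frames matching a
# predicate (here: I-frames) are encrypted, the rest pass through unchanged.
# Frame dicts and the "type" field are hypothetical stand-ins for real
# MPEG-2 bitstream parsing.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ctr(data: bytes, key: bytes, nonce: bytes) -> bytes:
    """Encrypt (or decrypt) a byte string with AES in CTR mode."""
    cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))
    enc = cipher.encryptor()
    return enc.update(data) + enc.finalize()

def encrypt_selected(frames, key, select=lambda f: f["type"] == "I"):
    """Return a new frame list where only the selected frames are encrypted."""
    out = []
    for frame in frames:
        if select(frame):
            nonce = os.urandom(16)                      # per-frame nonce
            payload = aes_ctr(frame["data"], key, nonce)
            out.append({**frame, "data": payload, "nonce": nonce, "encrypted": True})
        else:
            out.append({**frame, "encrypted": False})
    return out

if __name__ == "__main__":
    key = os.urandom(16)
    demo = [{"type": "I", "data": b"intra-coded frame"},
            {"type": "P", "data": b"predicted frame"},
            {"type": "B", "data": b"bidirectional frame"}]
    for f in encrypt_selected(demo, key):
        print(f["type"], f["encrypted"])
```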

  11. Alternative microbial methods: An overview and selection criteria.

    Science.gov (United States)

    Jasson, Vicky; Jacxsens, Liesbeth; Luning, Pieternel; Rajkovic, Andreja; Uyttendaele, Mieke

    2010-09-01

    This study provides an overview and criteria for the selection of a method, other than the reference method, for microbial analysis of foods. In the first part, an overview of the general characteristics of the available rapid methods, both for enumeration and detection, is given with reference to the relevant bibliography. Perspectives on future development and the potential of rapid methods for routine application in food diagnostics are discussed. As various alternative "rapid" methods in different formats are available on the market, it can be very difficult for a food business operator or for a control authority to select the most appropriate method that fits its purpose. Validation of a method by a third party, according to an internationally accepted protocol based upon ISO 16140, may increase the confidence in the performance of a method. A list of currently validated methods for enumeration of both utility indicators (aerobic plate count) and hygiene indicators (Enterobacteriaceae, Escherichia coli, coagulase-positive Staphylococcus) as well as for detection of the four major pathogens (Salmonella spp., Listeria monocytogenes, E. coli O157 and Campylobacter spp.) is included, with reference to relevant websites to check for updates. In the second part of this study, selection criteria are introduced to underpin the choice of the appropriate method(s) for a defined application. The selection criteria link the definition of the context in which the user of the method functions - and thus the prospective use of the microbial test results - with the technical information on the method and its operational requirements and sustainability. The selection criteria can help the end user of the method to obtain a systematic insight into all relevant factors to be taken into account for selection of a method for microbial analysis. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. Proactive AP Selection Method Considering the Radio Interference Environment

    Science.gov (United States)

    Taenaka, Yuzo; Kashihara, Shigeru; Tsukamoto, Kazuya; Yamaguchi, Suguru; Oie, Yuji

    In the near future, wireless local area networks (WLANs) will overlap to provide continuous coverage over a wide area. In such ubiquitous WLANs, a mobile node (MN) moving freely between multiple access points (APs) requires not only permanent access to the Internet but also continuous communication quality during handover. In order to satisfy these requirements, an MN needs to (1) select an AP with better performance and (2) execute a handover seamlessly. To satisfy requirement (2), we proposed a seamless handover method in a previous study. Moreover, in order to achieve (1), the Received Signal Strength Indicator (RSSI) is usually employed to measure wireless link quality in a WLAN system. However, in a real environment, especially if APs are densely situated, it is difficult to always select an AP with better performance based on the RSSI alone, because the RSSI cannot detect the degradation of communication quality due to radio interference. Moreover, it is important that AP selection is completed solely on the MN, because we can assume that, in ubiquitous WLANs, various organizations or operators will manage the APs, and hence we cannot modify the APs for AP selection. To overcome these difficulties, in the present paper, we propose and implement a proactive AP selection method that considers the wireless link condition based on the number of frame retransmissions in addition to the RSSI. In the evaluation, we show that the proposed AP selection method can appropriately select an AP with good wireless link quality, i.e., high RSSI and low radio interference.
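    A toy sketch of such a selection rule is given below: each candidate AP is scored from its RSSI and its observed frame-retransmission ratio, and the highest-scoring AP is chosen. The normalization ranges and the weighting factor are illustrative assumptions, not the metric used in the paper.

```python
# Toy sketch of proactive AP selection that penalizes radio interference:
# candidate APs are ranked not by RSSI alone but by a score that also
# accounts for the observed frame-retransmission ratio.
from dataclasses import dataclass

@dataclass
class APStats:
    ssid: str
    rssi_dbm: float          # e.g. -40 (strong) .. -90 (weak)
    frames_sent: int
    frames_retransmitted: int

def ap_score(ap: APStats, alpha: float = 0.7) -> float:
    # Normalize RSSI to [0, 1] over a typical -90..-30 dBm range (assumed).
    rssi_norm = (ap.rssi_dbm + 90.0) / 60.0
    rssi_norm = max(0.0, min(1.0, rssi_norm))
    retx_ratio = ap.frames_retransmitted / max(ap.frames_sent, 1)
    return alpha * rssi_norm + (1.0 - alpha) * (1.0 - retx_ratio)

def select_ap(candidates):
    return max(candidates, key=ap_score)

if __name__ == "__main__":
    aps = [APStats("AP-1", -45, 1000, 400),   # strong but heavily interfered
           APStats("AP-2", -60, 1000, 30)]    # weaker but on a clean channel
    print(select_ap(aps).ssid)
```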

  13. LCIA selection methods for assessing toxic releases

    DEFF Research Database (Denmark)

    Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky

    2002-01-01

    Characterization of toxic emissions in life cycle impact assessment (LCIA) is in many cases severely limited by the lack of characterization factors for the emissions mapped in the inventory. The number of substances assigned characterization factors for (eco)toxicity in the dominating LCIA methods is small compared with the number of chemicals appearing in inventories, so selection methods are used to identify the substances in the inventory that contribute significantly to the impact categories on ecotoxicity and human toxicity, in order to focus the characterisation work. The reason why the selection methods are more important for the chemical-related impact categories than for other impact categories is the extremely high number of substances involved. The methods are evaluated against a set of pre-defined criteria (comprising consistency with characterization and data requirement) and applied to case studies and a test set of chemicals. The reported work is part of the EU-project OMNIITOX.

  14. Method for Selection of Solvents for Promotion of Organic Reactions

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Jiménez-González, Concepción; Constable, David J.C.

    2005-01-01

    A method to select appropriate green solvents for the promotion of a class of organic reactions has been developed. The method combines knowledge from industrial practice and physical insights with computer-aided property estimation tools for selection/design of solvents. In particular, it employs estimates of thermodynamic properties to generate a knowledge base of reaction, solvent and environment related properties that directly or indirectly influence the rate and/or conversion of a given reaction. Solvents are selected using a rules-based procedure whose objective is to produce, for a given reaction, a short list of chemicals that could be considered as potential solvents, to evaluate their performance in the reacting system, and, based on this, to rank them according to a scoring system. Several examples of application are given to illustrate the main features and steps of the method.

  15. Using MACBETH method for supplier selection in manufacturing environment

    Directory of Open Access Journals (Sweden)

    Prasad Karande

    2013-04-01

    Supplier selection is always found to be a complex decision-making problem in a manufacturing environment. The presence of several independent and conflicting evaluation criteria, either qualitative or quantitative, makes the supplier selection problem a candidate to be solved by multi-criteria decision-making (MCDM) methods. Even though several MCDM methods have already been proposed for solving supplier selection problems, the need for an efficient method that can deal with qualitative judgments related to supplier selection still persists. In this paper, the applicability and usefulness of measuring attractiveness by a categorical-based evaluation technique (MACBETH) is demonstrated as a decision support tool for solving two real-time supplier selection problems having qualitative performance measures. The ability of the MACBETH method to quantify the qualitative performance measures helps to provide a numerical judgment scale for ranking the alternative suppliers and selecting the best one. The results obtained from the MACBETH method exactly corroborate those derived by past researchers employing different mathematical approaches.

  16. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method.

    Science.gov (United States)

    Chaharsooghi, S K; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in the literature. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach in the supply chain by presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution of the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability in the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis.

  17. Personnel selection using group fuzzy AHP and SAW methods

    Directory of Open Access Journals (Sweden)

    Ali Reza Afshari

    2017-01-01

    Personnel evaluation and selection is a very important activity for enterprises. Different jobs require different abilities, and the criteria used to measure those abilities differ accordingly. A suitable and flexible method is therefore needed to evaluate the performance of each candidate against each criterion according to the requirements of different jobs. The Analytic Hierarchy Process (AHP) is a multi-criteria decision-making method derived from paired comparisons. Simple Additive Weighting (SAW) is the most frequently used multi-attribute decision technique; it is based on the weighted average. The combined approach successfully models the ambiguity and imprecision associated with the pairwise comparison process and reduces personal bias. This study analyzes the Analytic Hierarchy Process in order to make the recruitment process more reasonable, based on a fuzzy multiple-criteria decision-making model, to achieve the goal of personnel selection. Finally, an example is implemented to demonstrate the practicability of the proposed method.
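    As a rough illustration of the SAW step only (the fuzzy AHP part, which would supply the criterion weights, is not reproduced), the sketch below ranks hypothetical candidates by a weighted average of normalized scores; all names, weights and numbers are invented.

```python
# Minimal sketch of Simple Additive Weighting (SAW) for candidate ranking.
# Criterion weights would normally come from the (fuzzy) AHP step; here they
# are hard-coded, and the candidate scores are invented for illustration.
import numpy as np

def saw_rank(scores: np.ndarray, weights: np.ndarray, benefit: np.ndarray):
    """scores: candidates x criteria; weights sum to 1; benefit[j] marks
    benefit (True) vs. cost (False) criteria."""
    norm = np.empty_like(scores, dtype=float)
    for j in range(scores.shape[1]):
        col = scores[:, j].astype(float)
        norm[:, j] = col / col.max() if benefit[j] else col.min() / col
    totals = norm @ weights
    return totals, np.argsort(-totals)

if __name__ == "__main__":
    candidates = ["A", "B", "C"]
    scores = np.array([[80, 7, 3000],    # experience, interview, salary demand
                       [65, 9, 2500],
                       [90, 6, 4000]])
    weights = np.array([0.5, 0.3, 0.2])
    benefit = np.array([True, True, False])   # salary demand is a cost criterion
    totals, order = saw_rank(scores, weights, benefit)
    for i in order:
        print(candidates[i], round(totals[i], 3))
```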

  18. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Lars; Andersen, Søren; Damkilde, Lars

    2009-01-01

    The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.

  19. Multi-modal distribution crossover method based on two crossing segments bounded by selected parents applied to multi-objective design optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ariyarit, Atthaphon; Kanazaki, Masahiro [Tokyo Metropolitan University, Tokyo (Japan)

    2015-04-15

    This paper discusses airfoil design optimization using a genetic algorithm (GA) with multi-modal distribution crossover (MMDX). The proposed crossover method creates four segments from four parents, of which two segments are bounded by selected parents and two segments are bounded by one parent and another segment. After these segments are defined, four offspring are generated. This study applied the proposed optimization to a real-world, multi-objective airfoil design problem using class-shape function transformation parameterization, an airfoil representation that uses polynomial functions, to investigate the effectiveness of the algorithm. The results are compared with those of the blend crossover (BLX) and unimodal normal distribution crossover (UNDX) algorithms. The objective of these airfoil design problems is to find the optimal design. The proposed method outperforms the BLX and UNDX crossover methods because it maintains higher diversity; this advantage is desirable for real-world problems.

  20. Multi-modal distribution crossover method based on two crossing segments bounded by selected parents applied to multi-objective design optimization

    International Nuclear Information System (INIS)

    Ariyarit, Atthaphon; Kanazaki, Masahiro

    2015-01-01

    This paper discusses airfoil design optimization using a genetic algorithm (GA) with multi-modal distribution crossover (MMDX). The proposed crossover method creates four segments from four parents, of which two segments are bounded by selected parents and two segments are bounded by one parent and another segment. After these segments are defined, four offspring are generated. This study applied the proposed optimization to a real-world, multi-objective airfoil design problem using class-shape function transformation parameterization, an airfoil representation that uses polynomial functions, to investigate the effectiveness of the algorithm. The results are compared with those of the blend crossover (BLX) and unimodal normal distribution crossover (UNDX) algorithms. The objective of these airfoil design problems is to find the optimal design. The proposed method outperforms the BLX and UNDX crossover methods because it maintains higher diversity; this advantage is desirable for real-world problems.
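    The sketch below illustrates only the general idea behind segment-based crossover, namely sampling offspring along segments bounded by pairs of parents; it is not the exact MMDX operator described above, which defines four specific segments and a multi-modal distribution.

```python
# Illustrative sketch of segment-based crossover: offspring are sampled along
# line segments bounded by pairs of parents chosen from a four-parent pool.
# This is NOT the exact MMDX operator from the paper; it only shows how
# multiple parents can seed multiple crossing segments.
import numpy as np

rng = np.random.default_rng(0)

def segment_crossover(parents: np.ndarray, n_offspring: int = 4) -> np.ndarray:
    """parents: (4, n_vars) array; returns (n_offspring, n_vars)."""
    offspring = []
    for _ in range(n_offspring):
        i, j = rng.choice(len(parents), size=2, replace=False)
        t = rng.uniform(-0.1, 1.1)        # slight extrapolation beyond the segment
        offspring.append(parents[i] + t * (parents[j] - parents[i]))
    return np.asarray(offspring)

if __name__ == "__main__":
    parents = rng.uniform(-1.0, 1.0, size=(4, 5))   # 4 parents, 5 design variables
    print(segment_crossover(parents))
```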

  1. SELECTION METHOD FOR AUTOMOTIVE PARTS RECONDITIONING

    Directory of Open Access Journals (Sweden)

    Dan Florin NITOI

    2015-05-01

    The paper presents technological methods for metal deposition, together with cost calculations and a classification of the main processes that help, in automotive technologies, to repair parts or to improve their properties. The paper was constructed from many technological experiments that start from practitioners and return to them. The main aim is to help young engineers and practising engineers choose the proper reconditioning process, with the best available information, when repairing parts from the automotive industry.

  2. Application of a method based on the measurement of radiation reflectance when estimating the sensitivity of selected grain maize hybrids to the herbicide CALLISTO 480 SC + ATPLUS 463

    Directory of Open Access Journals (Sweden)

    Michal Vondra

    2009-01-01

    The application of methods based on measurements of photosynthesis efficiency is now more and more popular, used not only for the evaluation of the efficiency of herbicides but also for the estimation of their phytotoxicity to the cultivated crop. These methods also enable differences in the sensitivity of cultivars and/or hybrids to individual herbicides to be determined. The advantage of these methods consists above all in the speed and accuracy of measuring. In a field experiment, the sensitivity of several selected grain maize hybrids (EDENSTAR, NK AROBASE, NK LUGAN, LG 33.30 and NK THERMO) to the herbicide CALLISTO 480 SC + ATPLUS 463 was tested for a period of three years. The sensitivity to a registered dose of 0.25 l.ha−1 + 0.5 % was measured by means of the PS1 meter, an apparatus that measures reflected radiation. Measurements of the sensitivity of the hybrids were performed on the 2nd, 3rd, 4th, 5th and 8th day after the application of the tested herbicide, i.e. in the growing stage of the 3rd–5th leaf. Plant material was harvested using a small-plot combine harvester SAMPO 2010. Samples were weighed and converted to the yield at 15 % moisture in grain DM. The three-year results did not demonstrate differences in the sensitivity of the tested hybrids to the registered dose of the herbicide CALLISTO 480 SC + ATPLUS 463 (i.e. 0.25 l.ha−1 + 0.5 %). The recorded results indicated that for the majority of the tested hybrids the most critical were the 4th and the 5th day after the application; on these days the average PS1 values were the highest. In 2005 and 2007, none of the tested hybrids exceeded the limit value of 15 (which indicates a certain decrease in the efficiency of photosynthesis). Although in 2006 three of the tested hybrids showed a certain decrease in photosynthetic activity (i.e. EDENSTAR and NK AROBASE on the 3rd day and NK LUGAN on the 2nd–4th day after the application), no visual symptoms

  3. Toward optimal feature selection using ranking methods and classification algorithms

    Directory of Open Access Journals (Sweden)

    Novaković Jasmina

    2011-01-01

    We present a comparison between several feature ranking methods used on two real datasets. We considered six ranking methods that can be divided into two broad categories: statistical and entropy-based. Four supervised learning algorithms were adopted to build models, namely IB1, Naive Bayes, the C4.5 decision tree and the RBF network. We showed that the selection of ranking methods can be important for classification accuracy. In our experiments, ranking methods combined with different supervised learning algorithms give quite different results for balanced accuracy. Our cases confirm that, in order to be sure that a subset of features giving the highest accuracy has been selected, the use of many different indices is recommended.
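    As a minimal example of an entropy-based ranker of the kind compared above, the sketch below scores features by their mutual information with the class label using scikit-learn; the dataset and the top-k cutoff are placeholders, not the datasets used in the study.

```python
# Sketch of entropy-based feature ranking using mutual information.
# The dataset and the "keep top-k" rule are placeholders; scikit-learn is
# assumed to be available.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
scores = mutual_info_classif(X, y, random_state=0)

ranking = np.argsort(-scores)           # best feature first
top_k = ranking[:10]                    # keep the 10 highest-ranked features
print("top features:", top_k)
print("their scores:", np.round(scores[top_k], 3))
X_reduced = X[:, top_k]                 # feed this to IB1 / Naive Bayes / C4.5 / RBF
```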

  4. Polystyrene Based Silver Selective Electrodes

    Directory of Open Access Journals (Sweden)

    Shiva Agarwal

    2002-06-01

    Silver(I) selective sensors have been fabricated from polystyrene matrix membranes containing the macrocycle Me6(14)diene·2HClO4 as ionophore. The best performance was exhibited by the membrane having a macrocycle : polystyrene composition in the ratio 15:1. This membrane worked well over a wide concentration range of 5.0×10−6–1.0×10−1 M of Ag+ with a near-Nernstian slope of 53.0 ± 1.0 mV per decade of Ag+ activity. The response time of the sensor is <15 s and the membrane can be used over a period of four months with good reproducibility. The proposed electrode works well in a wide pH range of 2.5-9.0 and demonstrates good discriminating power over a number of mono-, di-, and trivalent cations. The sensor has also been used as an indicator electrode in the potentiometric titration of silver(I) ions against NaCl solution. The sensor can also be used in non-aqueous medium with no significant change in the value of the slope or working concentration range for the estimation of Ag+ in solutions having up to 25% (v/v) nonaqueous fraction.

  5. Pyrochemical and Dry Processing Methods Program. A selected bibliography

    International Nuclear Information System (INIS)

    McDuffie, H.F.; Smith, D.H.; Owen, P.T.

    1979-03-01

    This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles

  6. Pyrochemical and Dry Processing Methods Program. A selected bibliography

    Energy Technology Data Exchange (ETDEWEB)

    McDuffie, H.F.; Smith, D.H.; Owen, P.T.

    1979-03-01

    This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles.

  7. Evaluating the sustainable mining contractor selection problems: An imprecise last aggregation preference selection index method

    Directory of Open Access Journals (Sweden)

    Mohammad Panahi Borujeni

    2017-01-01

    The increasing complexity surrounding decision-making situations has made it inevitable for practitioners to apply ideas from a group of experts or decision makers (DMs) instead of individuals. In a large proportion of recent studies, not enough attention has been paid to considering uncertainty in practical ways. In this paper, a hesitant fuzzy preference selection index (HFPSI) method is proposed based on a new soft computing approach with risk preferences of DMs to deal with imprecise multi-criteria decision-making problems. Meanwhile, qualitative assessment criteria are considered in the process of the proposed method to help the DMs by providing suitable expressions of membership degrees for an element under a set. Moreover, the best alternative is selected based on considering the concepts of preference relation and hesitant fuzzy sets simultaneously. DMs' weights are determined according to the proposed hesitant fuzzy compromise solution technique to prevent judgment errors. Moreover, the proposed method has been extended based on the last aggregation method by aggregating the DMs' opinions during the last stage to avoid data loss. In this respect, a real case study about the mining contractor selection problem is provided to represent the effectiveness and efficiency of the proposed HFPSI method in practice. Then, a comparative analysis is performed to show the feasibility of the presented approach. Finally, sensitivity analysis is carried out to show the effect of considering the DMs' weights and the last aggregation approach on the dispersion of the alternatives' ranking values.

  8. A MITE-based genotyping method to reveal hundreds of DNA polymorphisms in an animal genome after a few generations of artificial selection

    Directory of Open Access Journals (Sweden)

    Tetreau Guillaume

    2008-10-01

    Background: For most organisms, developing hundreds of genetic markers spanning the whole genome still requires excessive if not unrealistic efforts. In this context, there is an obvious need for methodologies allowing the low-cost, fast and high-throughput genotyping of virtually any species, such as the Diversity Arrays Technology (DArT). One of the crucial steps of the DArT technique is the genome complexity reduction, which allows obtaining a genomic representation characteristic of the studied DNA sample and necessary for subsequent genotyping. In this article, using the mosquito Aedes aegypti as a study model, we describe a new genome complexity reduction method taking advantage of the abundance of miniature inverted repeat transposable elements (MITEs) in the genome of this species. Results: Ae. aegypti genomic representations were produced following a two-step procedure: (1) restriction digestion of the genomic DNA and simultaneous ligation of a specific adaptor to compatible ends, and (2) amplification of restriction fragments containing a particular MITE element called Pony using two primers, one annealing to the adaptor sequence and one annealing to a conserved sequence motif of the Pony element. Using this protocol, we constructed a library comprising more than 6,000 DArT clones, of which at least 5.70% were highly reliable polymorphic markers for two closely related mosquito strains separated by only a few generations of artificial selection. Within this dataset, linkage disequilibrium was low, and marker redundancy was evaluated at 2.86% only. Most of the detected genetic variability was observed between the two studied mosquito strains, but individuals of the same strain could still be clearly distinguished. Conclusion: The new complexity reduction method was particularly efficient in revealing genetic polymorphisms in Ae. aegypti. Overall, our results testify to the flexibility of the DArT genotyping technique and open new

  9. Determination of Selection Method in Genetic Algorithm for Land Suitability

    Directory of Open Access Journals (Sweden)

    Irfianti Asti Dwi

    2016-01-01

    The Genetic Algorithm is one alternative solution in the fields of optimization modelling, automatic programming and machine learning. The purpose of the study was to compare several types of selection methods in a Genetic Algorithm for land suitability. The contribution of this research is to apply the best method to develop region-based horticultural commodities. The testing is done by comparing three selection methods: Roulette Wheel, Tournament Selection and Stochastic Universal Sampling. The location parameters used in the first test scenario include Temperature = 27°C, Rainfall = 1200 mm, Humidity = 30%, Cluster fruit = 4, Crossover Probability (Pc) = 0.6, Mutation Probability (Pm) = 0.2 and Epoch = 10. The second test scenario includes location parameters consisting of Temperature = 30°C, Rainfall = 2000 mm, Humidity = 35%, Cluster fruit = 5, Crossover Probability (Pc) = 0.7, Mutation Probability (Pm) = 0.3 and Epoch = 10. The conclusion of this study is that the Roulette Wheel is the best method because it produces more stable and better fitness values than the other two methods.
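    For reference, minimal sketches of the three parent-selection operators compared in the study are given below. The fitness values are invented; in the land-suitability GA they would be computed from the location parameters listed above.

```python
# Sketches of the three parent-selection operators compared in the study:
# roulette wheel, tournament, and stochastic universal sampling (SUS).
import numpy as np

rng = np.random.default_rng(42)

def roulette_wheel(fitness, n):
    p = np.asarray(fitness, float)
    p = p / p.sum()
    return rng.choice(len(p), size=n, p=p)

def tournament(fitness, n, k=3):
    fitness = np.asarray(fitness, float)
    picks = []
    for _ in range(n):
        contenders = rng.choice(len(fitness), size=k, replace=False)
        picks.append(contenders[np.argmax(fitness[contenders])])
    return np.array(picks)

def stochastic_universal_sampling(fitness, n):
    p = np.asarray(fitness, float)
    cum = np.cumsum(p / p.sum())
    start = rng.uniform(0, 1.0 / n)
    pointers = start + np.arange(n) / n      # n equally spaced pointers
    return np.searchsorted(cum, pointers)

if __name__ == "__main__":
    fitness = [0.2, 0.9, 0.5, 0.7, 0.1]      # illustrative fitness values
    print("roulette  :", roulette_wheel(fitness, 4))
    print("tournament:", tournament(fitness, 4))
    print("SUS       :", stochastic_universal_sampling(fitness, 4))
```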

  10. Selective saturation method for EPR dosimetry with tooth enamel

    International Nuclear Information System (INIS)

    Ignatiev, E.A.; Romanyukha, A.A.; Koshta, A.A.; Wieser, A.

    1996-01-01

    The method of selective saturation is based on the difference in the microwave (mw) power dependence of the background and radiation-induced EPR components of the tooth enamel spectrum. The subtraction of the EPR spectrum recorded at low mw power from that recorded at higher mw power provides a considerable reduction of the background component in the spectrum. The resolution of the EPR spectrum could be improved 10-fold; however, at the same time the signal-to-noise ratio was found to be reduced by a factor of two. A detailed comparative study of reference samples with known absorbed doses was performed to demonstrate the advantage of the method. The application of the selective saturation method for EPR dosimetry with tooth enamel reduced the lower limit of EPR dosimetry to about 100 mGy. (author)

  11. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods for the area of the skin of a human foot and face. The full source code of the developed application is also provided as an attachment. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. A new DEA-GAHP method for supplier selection problem

    Directory of Open Access Journals (Sweden)

    Behrooz Ahadian

    2012-10-01

    Supplier selection is one of the most important decisions made in supply chain management, and the supplier evaluation problem has been at the center of supply chain researchers' attention in recent years. Managers regard some of these studies and methods as inappropriate because simple weighted-scoring methods generally rely on the subjective opinions and judgments of the decision-making units involved in the supplier evaluation process, yielding imprecise and even unreliable results. This paper proposes a methodology that integrates data envelopment analysis (DEA) and the group analytical hierarchy process (GAHP) for evaluating and selecting the most efficient supplier. We develop a methodology consisting of six steps, introduce them one by one, and finally demonstrate the applicability of the proposed method by assessing 12 suppliers in a numerical example.

  13. AMES: Towards an Agile Method for ERP Selection

    OpenAIRE

    Juell-Skielse, Gustaf; Nilsson, Anders G.; Nordqvist, Andreas; Westergren, Mattias

    2012-01-01

    Conventional on-premise installations of ERP are now rapidly being replaced by ERP as service. Although ERP becomes more accessible and no longer requires local infrastructure, current selection methods do not take full advantage of the provided agility. In this paper we present AMES (Agile Method for ERP Selection), a novel method for ERP selection which better utilizes the strengths of service oriented ERP. AMES is designed to shorten lead time for selection, support identification of essen...

  14. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel, genetic algorithm based methodology, for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
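    A minimal sketch of the per-dimension information gain on which the dynamic mutation probabilities are based is given below; the GA bookkeeping (chromosomes encoding dimension subsets, the mutation step itself) is omitted and the toy binary features are invented. In the methodology described above, a dimension with high information gain would presumably be mutated in a way that favours keeping it selected.

```python
# Sketch of the per-dimension information gain used to adapt mutation
# probabilities. Features are assumed to be discrete (e.g. term presence or
# absence in text classification).
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - sum_x P(X=x) * H(Y | X=x) for a discrete feature."""
    total = entropy(labels)
    values, counts = np.unique(feature, return_counts=True)
    cond = sum((c / len(feature)) * entropy(labels[feature == v])
               for v, c in zip(values, counts))
    return total - cond

if __name__ == "__main__":
    y = np.array([1, 1, 1, 0, 0, 0, 1, 0])
    x_informative = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # mirrors the label
    x_noise = np.array([0, 1, 0, 1, 0, 1, 0, 1])         # unrelated
    print(information_gain(x_informative, y))   # close to H(y)
    print(information_gain(x_noise, y))         # close to 0
```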

  15. Outranking methods in support of supplier selection

    NARCIS (Netherlands)

    de Boer, L.; van der Wegen, Leonardus L.M.; Telgen, Jan

    1998-01-01

    Initial purchasing decisions such as make-or-buy decisions and supplier selection are decisions of strategic importance to companies. The nature of these decisions usually is complex and unstructured. Management Science techniques might be helpful tools for this kind of decision-making problem. So

  16. Color selective photodetector and methods of making

    Science.gov (United States)

    Walker, Brian J.; Dorn, August; Bulovic, Vladimir; Bawendi, Moungi G.

    2013-03-19

    A photoelectric device, such as a photodetector, can include a semiconductor nanowire electrostatically associated with a J-aggregate. The J-aggregate can facilitate absorption of a desired wavelength of light, and the semiconductor nanowire can facilitate charge transport. The color of light detected by the device can be chosen by selecting a J-aggregate with a corresponding peak absorption wavelength.

  17. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Method for Business Process Management System Selection

    OpenAIRE

    Westelaken, van de, Thijs; Terwee, Bas; Ravesteijn, Pascal

    2013-01-01

    In recent years business process management (BPM) and specifically information systems that support the analysis, design and execution of processes (also called business process management systems (BPMS)) are getting more attention. This has led to an increase in research on BPM and BPMS. However the research on BPMS is mostly focused on the architecture of the system and how to implement such systems. How to select a BPM system that fits the strategy and goals of a specific organization is ...

  19. A fractured rock geophysical toolbox method selection tool

    Science.gov (United States)

    Day-Lewis, F. D.; Johnson, C.D.; Slater, L.D.; Robinson, J.L.; Williams, J.H.; Boyden, C.L.; Werkema, D.D.; Lane, J.W.

    2016-01-01

    Geophysical technologies have the potential to improve site characterization and monitoring in fractured rock, but the appropriate and effective application of geophysics at a particular site strongly depends on project goals (e.g., identifying discrete fractures) and site characteristics (e.g., lithology). No method works at every site or for every goal. New approaches are needed to identify a set of geophysical methods appropriate to specific project goals and site conditions while considering budget constraints. To this end, we present the Excel-based Fractured-Rock Geophysical Toolbox Method Selection Tool (FRGT-MST). We envision the FRGT-MST (1) equipping remediation professionals with a tool to understand what is likely to be realistic and cost-effective when contracting geophysical services, and (2) reducing applications of geophysics with unrealistic objectives or where methods are likely to fail.

  20. Attribute-Based Methods

    Science.gov (United States)

    Thomas P. Holmes; Wiktor L. Adamowicz

    2003-01-01

    Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...

  1. A Comparison of Item Selection Procedures Using Different Ability Estimation Methods in Computerized Adaptive Testing Based on the Generalized Partial Credit Model

    Science.gov (United States)

    Ho, Tsung-Han

    2010-01-01

    Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…

  2. Analysis of Criteria Influencing Contractor Selection Using TOPSIS Method

    Science.gov (United States)

    Alptekin, Orkun; Alptekin, Nesrin

    2017-10-01

    Selection of the most suitable contractor is an important process in public construction projects. This process is a major decision which may influence the progress and success of a construction project. Improper selection of contractors may lead to problems such as poor quality of work and delays in project duration. Especially in construction projects for public buildings, the proper choice of contractor is beneficial to the public institution. Public procurement processes have different characteristics owing to dissimilarities in the political, social and economic features of every country. In Turkey, the Turkish Public Procurement Law PPL 4734 is the main regulatory law for the procurement of public buildings. According to PPL 4734, public construction administrators have to contract with the lowest bidder who meets the minimum requirements according to the criteria in the prequalification process. Public administrators cannot always select the most suitable contractor because of the restrictive provisions of PPL 4734. The lowest-bid method does not enable public construction administrators to select the most qualified contractor, and they have realised that the selection of a contractor based on the lowest bid alone is inadequate and may lead to the failure of the project in terms of time delay and poor quality standards. In order to evaluate the overall efficiency of a project, it is necessary to identify selection criteria. This study aims to identify the importance of other criteria besides the lowest-bid criterion in the contractor selection process of PPL 4734. In this study, a survey was conducted among the staff of the Department of Construction Works of Eskisehir Osmangazi University. According to the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) analysis results, termination of construction work in previous tenders is the most important of the 12 determined criteria. The lowest-bid criterion is ranked fifth.
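    For readers unfamiliar with TOPSIS, a minimal sketch of the ranking procedure is given below; the decision matrix, weights and benefit/cost flags are invented placeholders rather than the survey data from the case study.

```python
# Minimal TOPSIS sketch for contractor ranking.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; returns closeness coefficients."""
    m = np.asarray(matrix, dtype=float)
    # 1. vector normalization, 2. weighting
    v = (m / np.linalg.norm(m, axis=0)) * np.asarray(weights, dtype=float)
    # 3. ideal and anti-ideal solutions per criterion
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    # 4. distances and closeness coefficient
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)

if __name__ == "__main__":
    # criteria: bid price (cost), past terminations (cost), experience (benefit)
    matrix = [[10.2, 0, 12],
              [ 9.8, 2,  8],
              [11.0, 0, 20]]
    weights = [0.3, 0.4, 0.3]
    benefit = np.array([False, False, True])
    cc = topsis(matrix, weights, benefit)
    print(np.round(cc, 3), "-> best alternative index:", int(np.argmax(cc)))
```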

  3. Importance of MS selectivity and chromatographic separation in LC-MS/MS-based methods when investigating pharmaceutical metabolites in water. Dipyrone as a case of study.

    Science.gov (United States)

    Ibáñez, M; Gracia-Lor, E; Sancho, J V; Hernández, F

    2012-08-01

    Pharmaceuticals are emerging contaminants of increasing concern because of their presence in the aquatic environment and their potential to reach drinking-water sources. After human and/or veterinary consumption, pharmaceuticals can be excreted in unchanged form, as the parent compound, and/or as free or conjugated metabolites. Determination of most pharmaceuticals and metabolites in the environment is commonly made by liquid chromatography (LC) coupled to mass spectrometry (MS). LC coupled to tandem MS is nowadays the technique of choice in this field. The acquisition of two selected reaction monitoring (SRM) transitions together with the retention time is the most widely accepted criterion for a safe quantification and confirmation assay. However, scarce attention is normally paid to the selectivity of the selected transitions as well as to the chromatographic separation. In this work, the importance of full-spectrum acquisition of high-resolution MS data using a hybrid quadrupole time-of-flight analyser and/or a suitable chromatographic separation (to reduce the possibility of co-eluting interferences) is highlighted when investigating pharmaceutical metabolites that share common fragment ions. For this purpose, the analytical challenge associated with the determination of metabolites of the widely used analgesic dipyrone (also known as metamizol) in urban wastewater is discussed. Examples are given of the possibility of reporting false positives of dipyrone metabolites by LC-MS/MS under SRM mode due to a wrong assignment of the identity of the compounds detected. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Location Selection of Waste Transfer Station Based on Combining Gravity Method and AHP Model

    Institute of Scientific and Technical Information of China (English)

    曹勇锋; 荣宏伟; 张可方; 张朝升

    2012-01-01

    The gravity method was combined with the AHP model to study the location selection of a rural waste transfer station. Based on an analysis of the quantity and distribution of rural household waste, the gravity method was used for the initial selection of candidate locations, and the AHP model was then used to screen these candidates and obtain the optimal location. Compared with traditional location-selection methods, the method takes into account not only costs but also the environmental benefits, basic conditions, and laws and regulations that influence the siting, so the result is closer to reality. The method was applied and verified in a rural area in southern China, yielding a satisfactory location selection plan.
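    A minimal sketch of the gravity (centre-of-gravity) step used for the initial siting is shown below; the village coordinates and waste volumes are invented, and the subsequent AHP screening of the candidate sites is not reproduced.

```python
# Sketch of the gravity (centre-of-gravity) step for initial siting:
# the candidate location is the waste-weighted centroid of the collection points.
import numpy as np

def gravity_location(points, weights):
    """points: (n, 2) array of (x, y); weights: waste generated at each point."""
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return (points * weights[:, None]).sum(axis=0) / weights.sum()

if __name__ == "__main__":
    villages = [(2.0, 3.5), (5.0, 1.0), (6.5, 4.0), (3.0, 6.0)]
    tons_per_day = [1.2, 0.8, 2.5, 0.6]
    print(gravity_location(villages, tons_per_day))   # initial transfer-station site
```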

  5. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact using an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets reveal the outperformance of the proposed algorithm over the others.

  6. Effect of cooking methods on the micronutrient profile of selected ...

    African Journals Online (AJOL)

    Effect of cooking methods on the micronutrient profile of selected vegetables: okra fruit (Abelmoschus esculentus), fluted pumpkin (Telfairia occidentalis), African spinach (Amaranthus viridis), and scent leaf (Ocimum gratissimum).

  7. Norm based Threshold Selection for Fault Detectors

    DEFF Research Database (Denmark)

    Rank, Mike Lind; Niemann, Henrik

    1998-01-01

    The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered from a norm-based point of view. An analysis of norm-based threshold selection is given based on different formulations of FDI problems. Both the nominal FDI problem as well as the uncertain FDI problem are considered. Based on this analysis, a performance index based on norms of the involved transfer functions is given. The performance index allows us also to optimize the structure of the fault detection filter directly...

  8. Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection

    Directory of Open Access Journals (Sweden)

    Jaesung Lee

    2016-11-01

    Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases according to the number of labels, the algorithm may suffer from a degradation in performance when processing very large datasets. In this study, we propose an efficient multi-label feature selection method based on an information-theoretic label selection strategy. By identifying a subset of labels that significantly influence the importance of features, the proposed method efficiently outputs a feature subset. Experimental results demonstrate that the proposed method can identify a feature subset much faster than conventional multi-label feature selection methods for large multi-label datasets.

  9. Primitive polynomials selection method for pseudo-random number generator

    Science.gov (United States)

    Anikin, I. V.; Alnajjar, Kh

    2018-01-01

    In this paper we suggest a method for selecting primitive polynomials of a special type. This kind of polynomial can be used efficiently as the characteristic polynomial of a linear feedback shift register in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree and applying primitivity tests to obtain the primitive ones. Finally, two primitive polynomials found by the proposed method were used in the pseudo-random number generator based on fuzzy logic (FRNG) which had been suggested earlier by the authors. The sequences generated by the new version of the FRNG have low correlation magnitude, high linear complexity and lower power consumption, are more balanced, and have better statistical properties.
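    To illustrate where such polynomials are used, the sketch below runs a Fibonacci LFSR whose characteristic polynomial, x^16 + x^14 + x^13 + x^11 + 1, is a well-known primitive example (not necessarily one produced by the paper's method); the fuzzy-logic post-processing of the FRNG is not shown.

```python
# Fibonacci LFSR for the primitive polynomial x^16 + x^14 + x^13 + x^11 + 1.
# With a primitive polynomial of degree n the register cycles through all
# 2^n - 1 nonzero states, i.e. the output sequence has maximum period.
def lfsr16(seed: int = 0xACE1):
    """Yield one pseudo-random bit per step."""
    state = seed & 0xFFFF
    assert state != 0, "seed must be nonzero"
    while True:
        # XOR of the tapped bits (taps 16, 14, 13, 11), fed back into bit 15.
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state & 1

if __name__ == "__main__":
    gen = lfsr16()
    bits = [next(gen) for _ in range(32)]
    print("".join(map(str, bits)))
```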

  10. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013

  11. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed

    2013-01-07

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013
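    A minimal sketch of the Benjamini-Hochberg cutoff used to decide how many candidate peaks to keep is given below; converting peak volumes or intensities into p-values is method-specific and is only stubbed here with invented numbers.

```python
# Sketch of the Benjamini-Hochberg step: given p-values for candidate peaks,
# keep the largest set whose sorted p-values stay below i*q/m.
import numpy as np

def benjamini_hochberg_select(p_values, q=0.05):
    """Return indices of the candidates accepted at FDR level q."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)                      # most significant first
    m = len(p)
    thresholds = q * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])           # largest i with p_(i) <= i*q/m
    return order[: k + 1]

if __name__ == "__main__":
    # Hypothetical p-values derived from sorted peak intensities.
    p_values = [0.0002, 0.001, 0.004, 0.02, 0.03, 0.2, 0.5, 0.8]
    keep = benjamini_hochberg_select(p_values, q=0.05)
    print("selected peak indices:", keep)
```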

  12. An objective method for High Dynamic Range source content selection

    DEFF Research Database (Denmark)

    Narwaria, Manish; Mantel, Claire; Da Silva, Matthieu Perreira

    2014-01-01

    With the aim of improving the immersive experience of the end user, High Dynamic Range (HDR) imaging has been gaining popularity. Therefore, proper validation and performance benchmarking of HDR processing algorithms is a key step towards standardization and commercial deployment. A crucial component of such validation studies is the selection of a challenging and balanced set of source (reference) HDR content. In order to facilitate this, we present an objective method based on the premise that a more challenging HDR scene encapsulates higher contrast, and as a result will show up more...

  13. Proteomics in pulmonary research: selected methodical aspects

    Directory of Open Access Journals (Sweden)

    Martin Petrek

    2007-10-01

    Full Text Available Recent years witness rapid expansion of applications of proteomics to clinical research including non-malignant lung disorders. These developments bring along the need for standardisation of proteomic experiments. This paper briefly reviews basic methodical aspects of appliedproteomic studies using SELDI-TOF mass spectrometry platform as example but also emphasizes general aspects of quality assurance in proteomics. Key-words: lung proteome, quality assurance, SELDI-TOF MS

  14. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

    The article highlights the important task of project management in the reprofiling of buildings. In construction project management it is expedient to pay attention to selecting effective engineering solutions that reduce duration and cost. This article presents a methodology for the selection of efficient organizational and technical solutions for the reconstruction and reprofiling of buildings. The method is based on a compilation of project variants in the program Microsoft Project and experimental statistical analysis using the program COMPEX. The introduction of this technique in the reprofiling of buildings allows efficient project models to be chosen, depending on the given constraints. The technique can also be used for various other construction projects.

  15. A systematic and practical method for selecting systems engineering tools

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2017-01-01

    The complexity of many types of systems has grown considerably over the last decades. Using appropriate systems engineering tools therefore becomes increasingly important. Starting the tool selection process can be intimidating because organizations often only have a vague idea about what they need. The method presented here is based on analyses of the actual needs and the available tools. Grouping needs into categories allows us to obtain a comprehensive set of requirements for the tools. The entire model-based systems engineering discipline was categorized for a modeling tool case to enable development of a tool specification; the selected tool has been in successful operation since 2013 at GN Hearing. We further utilized the method to select a set of tools that we used on pilot cases at GN Hearing for modeling, simulating and formally verifying embedded systems.

  16. A study on selective precipitation of U(VI) by hydrophilic cyclic urea derivatives for development of a reprocessing system based on precipitation method

    International Nuclear Information System (INIS)

    Suzuki, Tomoya; Takao, Koichiro; Kawasaki, Takeshi; Harada, Masayuki; Ikeda, Yasuhisa; Nogami, Masanobu

    2014-01-01

    The selective precipitation ability of 2-imidazolidone (EU) and tetrahydro-2-pyrimidinone (PU) for U(VI) species in HNO3 solutions containing U(VI), U(IV) (a simulant of Pu(IV)), and simulated fission products (FPs) was investigated. As a result, it was found that these compounds precipitate U(VI) almost quantitatively as UO2(NO3)2L2 (L = EU, PU) from 3.0 M HNO3 solution. In contrast, these urea derivatives form neither solid precipitates nor oily products with U(IV) in HNO3 solutions containing only U(IV) species, and even in the U(VI)-U(IV) admixture system. Therefore, the separation of U(VI) from U(IV) was demonstrated to be achievable using EU and PU. Furthermore, EU and PU are capable of removing most of the simulated FPs [Sr(II), Ru(III), Rh(III), Re(VII), La(III), Ce(III), Pr(III), Nd(III), and Sm(III)] from U(VI), giving decontamination factors (DFs) higher than 100, while the values for Zr(IV), Mo(VI), Pd(II), and Ba(II) need to be improved in both systems. From these results, it is expected that EU and PU are promising precipitants for the selective separation of U(VI) from HNO3 solutions dissolving spent FBR fuels. (author)

  17. Will genomic selection be a practical method for plant breeding?

    Science.gov (United States)

    Nakaya, Akihiro; Isobe, Sachiko N

    2012-11-01

    Genomic selection or genome-wide selection (GS) has been highlighted as a new approach for marker-assisted selection (MAS) in recent years. GS is a form of MAS that selects favourable individuals based on genomic estimated breeding values. Previous studies have suggested the utility of GS, especially for capturing small-effect quantitative trait loci, but GS has not become a popular methodology in the field of plant breeding, possibly because there is insufficient information available on GS for practical use. In this review, GS is discussed from a practical breeding viewpoint. Statistical approaches employed in GS are briefly described, before the recent progress in GS studies is surveyed. GS practices in plant breeding are then reviewed before future prospects are discussed. Statistical concepts used in GS are discussed with genetic models and variance decomposition, heritability, breeding value and linear model. Recent progress in GS studies is reviewed with a focus on empirical studies. For the practice of GS in plant breeding, several specific points are discussed including linkage disequilibrium, feature of populations and genotyped markers and breeding scheme. Currently, GS is not perfect, but it is a potent, attractive and valuable approach for plant breeding. This method will be integrated into many practical breeding programmes in the near future with further advances and the maturing of its theory.
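    As a rough illustration of how genomic estimated breeding values are obtained, the sketch below fits a ridge-regression (RR-BLUP-style) model to simulated marker data and ranks unphenotyped candidates by their GEBVs; real GS models discussed above additionally handle G×E interactions, pedigree information and cross-validation.

```python
# Minimal sketch of genomic prediction in the RR-BLUP / ridge-regression
# spirit: marker effects are shrunk jointly and genomic estimated breeding
# values (GEBVs) are computed for unphenotyped selection candidates.
# The marker data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)

n_train, n_cand, n_markers = 200, 50, 500
X = rng.integers(0, 3, size=(n_train + n_cand, n_markers)).astype(float)  # 0/1/2 genotypes
true_effects = rng.normal(0, 0.1, n_markers)
y_train = X[:n_train] @ true_effects + rng.normal(0, 1.0, n_train)

# Ridge (RR-BLUP-like) solution: beta = (X'X + lambda*I)^-1 X'y
lam = 10.0
Xt = X[:n_train]
beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_markers), Xt.T @ y_train)

gebv = X[n_train:] @ beta                 # GEBVs of the selection candidates
selected = np.argsort(-gebv)[:10]         # keep the 10 best candidates
print("selected candidates:", selected)
```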

  18. Sensor and method for measurement of select components of a material based on detection of radiation after interaction with the material

    International Nuclear Information System (INIS)

    Chase, L.M.; Anderson, L.M.; Norton, M.K.

    1993-01-01

    A sensor is described for measuring one or more select components of a sheet, comprising: a radiation source for emitting radiation toward the sheet; a plurality of detecting means, wherein at least one detecting means is offset from the source, for detecting radiation after interaction with the sheet; means for directing the radiation so that the radiation makes multiple interactions with the sheet in moving from the source to the detecting means, wherein the directing means includes a first reflector and second reflector defining a sheet space for the sheet to occupy; means for computing a ratio of the intensity of the detected radiation when the sheet is absent from the sheet space and the intensity of the detected radiation when the sheet occupies the sheet space; and means for computing the absorption power of the sheet from the intensity of the detected radiation

  19. Selection of industrial robots using the Polygons area method

    Directory of Open Access Journals (Sweden)

    Mortaza Honarmande Azimi

    2014-08-01

    Full Text Available Selection of robots from several proposed alternatives is a very important and tedious task. Decision makers are not limited to one method, and several methods have been proposed for solving this problem. This study presents the Polygons Area Method (PAM) as a multi attribute decision making method for the robot selection problem. In this method, the maximum polygon area obtained from the attributes of an alternative robot on a radar chart is introduced as a decision-making criterion. The results of this method are compared with other typical multiple attribute decision-making methods (SAW, WPM, TOPSIS, and VIKOR) by giving two examples. To find the similarity in rankings given by different methods, Spearman's rank correlation coefficients are obtained for different pairs of MADM methods. It was observed that the introduced method is in good agreement with other well-known MADM methods in the robot selection problem.
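
    As a rough illustration of the polygon-area idea described above, the sketch below computes the area spanned by a candidate's attribute scores placed at equal angles on a radar chart and ranks candidates by it. The attribute values, robot names and the assumption that scores are already normalized and ordered are all hypothetical, and the actual PAM may additionally consider attribute orderings.

```python
import numpy as np

def radar_polygon_area(scores):
    """Area of the polygon spanned by normalized attribute scores on a radar chart.

    Attributes sit at equal angular spacing; each score is the radial distance
    of the corresponding vertex from the centre.
    """
    r = np.asarray(scores, dtype=float)
    theta = 2.0 * np.pi / len(r)                 # angle between adjacent axes
    # Sum of the triangles formed by consecutive vertices and the centre.
    return 0.5 * np.sin(theta) * np.sum(r * np.roll(r, -1))

# Hypothetical normalized attribute scores (0..1) for three candidate robots.
robots = {
    "R1": [0.8, 0.6, 0.9, 0.7],
    "R2": [0.9, 0.5, 0.6, 0.8],
    "R3": [0.7, 0.7, 0.7, 0.7],
}
ranking = sorted(robots, key=lambda k: radar_polygon_area(robots[k]), reverse=True)
print(ranking)
```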

  20. Selected methods of rehabilitation in systemic sclerosis

    Directory of Open Access Journals (Sweden)

    Agnieszka Gerkowicz

    2017-09-01

    Full Text Available Systemic sclerosis is a chronic connective tissue disease characterized by microvascular abnormalities, immune disturbances and progressive fibrosis of the skin and internal organs. Skin involvement may result in contractures, leading to marked loss of hand mobility, adversely affecting the performance of daily activities and decreasing the quality of life. Face involvement not only causes functional loss, but also lowers the self-esteem of patients. Increasing attention has recently been focused on the need to rehabilitate patients with systemic sclerosis in order to prevent the development of joint contractures and loss of mobility. The study presents a review of the current literature on rehabilitation possibilities in patients with systemic sclerosis, with a special focus on physiotherapy methods.

  1. Diversified models for portfolio selection based on uncertain semivariance

    Science.gov (United States)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since the financial markets are complex, the future security returns are sometimes represented mainly on the basis of experts' estimations, due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given subject to experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
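
    The paper itself works with uncertain variables, the 99-method and a genetic algorithm; as a loose stand-in, the sketch below approximates the mean-semivariance trade-off with discretized expert-estimated return scenarios and a plain random search over allocations. All numbers, the risk-aversion parameter and the function names are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Hypothetical expert-estimated return scenarios (rows) for four securities (columns).
scenarios = np.array([
    [0.05, 0.02, 0.08, 0.01],
    [0.01, 0.03, -0.02, 0.02],
    [-0.03, 0.01, 0.04, 0.00],
    [0.07, 0.02, 0.10, 0.01],
])

def semivariance(weights):
    """Downside semivariance of portfolio returns relative to their mean."""
    port = scenarios @ weights
    downside = np.minimum(port - port.mean(), 0.0)
    return np.mean(downside ** 2)

def objective(weights, risk_aversion=2.0):
    """Expected return penalized by downside risk (to be maximized)."""
    return (scenarios @ weights).mean() - risk_aversion * semivariance(weights)

# Plain random search standing in for the genetic-algorithm allocation step.
rng = np.random.default_rng(0)
best_w, best_val = None, -np.inf
for _ in range(5000):
    w = rng.random(scenarios.shape[1])
    w /= w.sum()                                  # fully invested, no short selling
    val = objective(w)
    if val > best_val:
        best_w, best_val = w, val
print(best_w.round(3), round(best_val, 5))
```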

  2. Neutron shielding calculations in a proton therapy facility based on Monte Carlo simulations and analytical models: Criterion for selecting the method of choice

    International Nuclear Information System (INIS)

    Titt, U.; Newhauser, W. D.

    2005-01-01

    Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results for which solutions have not converged, e.g. owing to too few particle histories, can be excluded, and deterministic models are used at these locations. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of the Monte Carlo simulation to be taken into account, and reject all solutions with uncertainties larger than the design safety margins. In this study, an optimum rejection criterion of 10% was found. The mean ratio was 26; 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)

  3. Novel Selectivity-Based Forensic Toxicological Validation of a Paper Spray Mass Spectrometry Method for the Quantitative Determination of Eight Amphetamines in Whole Blood

    NARCIS (Netherlands)

    Teunissen, Sebastiaan F.; Fedick, Patrick W.; Berendsen, Bjorn J.A.; Nielen, Michel W.F.; Eberlin, Marcos N.; Graham Cooks, R.; Asten, van Arian C.

    2017-01-01

    Paper spray tandem mass spectrometry is used to identify and quantify eight individual amphetamines in whole blood in 1.3 min. The method has been optimized and fully validated according to forensic toxicology guidelines, for the quantification of amphetamine, methamphetamine,

  4. Duration and speed of speech events: A selection of methods

    Directory of Open Access Journals (Sweden)

    Gibbon Dafydd

    2015-07-01

    Full Text Available The study of speech timing, i.e. the duration and speed or tempo of speech events, has increased in importance over the past twenty years, in particular in connection with increased demands for accuracy, intelligibility and naturalness in speech technology, with applications in language teaching and testing, and with the study of speech timing patterns in language typology. However, the methods used in such studies are very diverse, and so far there is no accessible overview of these methods. Since the field is too broad for us to provide an exhaustive account, we have made two choices: first, to provide a framework of paradigmatic (classificatory), syntagmatic (compositional) and functional (discourse-oriented) dimensions for duration analysis; and second, to provide worked examples of a selection of methods associated primarily with these three dimensions. Some of the methods covered are established state-of-the-art approaches (e.g. the paradigmatic Classification and Regression Tree, CART, analysis), while others are discussed in a critical light (e.g. the so-called ‘rhythm metrics’). A set of syntagmatic approaches applies to the tokenisation and tree parsing of duration hierarchies, based on speech annotations, and a functional approach describes duration distributions with sociolinguistic variables. Several of the methods are supported by a new web-based software tool for analysing annotated speech data, the Time Group Analyser.

  5. Methods for producing thin film charge selective transport layers

    Science.gov (United States)

    Hammond, Scott Ryan; Olson, Dana C.; van Hest, Marinus Franciscus Antonius Maria

    2018-01-02

    Methods for producing thin film charge selective transport layers are provided. In one embodiment, a method for forming a thin film charge selective transport layer comprises: providing a precursor solution comprising a metal containing reactive precursor material dissolved into a complexing solvent; depositing the precursor solution onto a surface of a substrate to form a film; and forming a charge selective transport layer on the substrate by annealing the film.

  6. Will genomic selection be a practical method for plant breeding?

    OpenAIRE

    Nakaya, Akihiro; Isobe, Sachiko N.

    2012-01-01

    Background Genomic selection or genome-wide selection (GS) has been highlighted as a new approach for marker-assisted selection (MAS) in recent years. GS is a form of MAS that selects favourable individuals based on genomic estimated breeding values. Previous studies have suggested the utility of GS, especially for capturing small-effect quantitative trait loci, but GS has not become a popular methodology in the field of plant breeding, possibly because there is insufficient information avail...

  7. Feature Selection Based on Mutual Correlation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Somol, Petr; Ververidis, D.; Kotropoulos, C.

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 569-577 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : feature selection Subject RIV: BD - Theory of Information Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/haindl-feature selection based on mutual correlation.pdf

  8. Evolutionary dynamics on graphs: Efficient method for weak selection

    Science.gov (United States)

    Fu, Feng; Wang, Long; Nowak, Martin A.; Hauert, Christoph

    2009-04-01

    Investigating the evolutionary dynamics of game theoretical interactions in populations where individuals are arranged on a graph can be challenging in terms of computation time. Here, we propose an efficient method to study any type of game on arbitrary graph structures for weak selection. In this limit, evolutionary game dynamics represents a first-order correction to neutral evolution. Spatial correlations can be empirically determined under neutral evolution and provide the basis for formulating the game dynamics as a discrete Markov process by incorporating a detailed description of the microscopic dynamics based on the neutral correlations. This framework is then applied to one of the most intriguing questions in evolutionary biology: the evolution of cooperation. We demonstrate that the degree heterogeneity of a graph impedes cooperation and that the success of tit for tat depends not only on the number of rounds but also on the degree of the graph. Moreover, considering the mutation-selection equilibrium shows that the symmetry of the stationary distribution of states under weak selection is skewed in favor of defectors for larger selection strengths. In particular, degree heterogeneity—a prominent feature of scale-free networks—generally results in a more pronounced increase in the critical benefit-to-cost ratio required for evolution to favor cooperation as compared to regular graphs. This conclusion is corroborated by an analysis of the effects of population structures on the fixation probabilities of strategies in general 2×2 games for different types of graphs. Computer simulations confirm the predictive power of our method and illustrate the improved accuracy as compared to previous studies.

  9. Inventory of LCIA selection methods for assessing toxic releases. Methods and typology report part B

    DEFF Research Database (Denmark)

    Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky

    This report describes an inventory of Life Cycle Impact Assessment (LCIA) selection methods for assessing toxic releases. It consists of an inventory of current selection methods and other Chemical Ranking and Scoring (CRS) methods assessed to be relevant for the development of (a) new selection method(s) in Work package 8 (WP8) of the OMNIITOX project. The selection methods and the other CRS methods are described in detail, a set of evaluation criteria are developed and the methods are evaluated against these criteria. This report (Deliverable 11B (D11B)) gives the results from task 7.1d, 7.1e and 7.1f of WP 7 for selection methods. The other part of D11 (D11A) is reported in another report and deals with characterisation methods. A selection method is a method for prioritising chemical emissions to be included in an LCIA characterisation of toxic releases, i.e. calculating indicator scores...

  10. Relationship of Source Selection Methods to Contract Outcomes: an Analysis of Air Force Source Selection

    Science.gov (United States)

    2015-12-01

    ... on the contract management process, with special emphasis on the source selection methods of tradeoff and lowest price technically acceptable (LPTA) ... On some occasions, performance is terminated early; this can occur due to either mutual agreement or a breach of contract by one of the parties (Garrett ...

  11. A Fast Adaptive Receive Antenna Selection Method in MIMO System

    Directory of Open Access Journals (Sweden)

    Chaowei Wang

    2013-01-01

    Full Text Available Antenna selection has been regarded as an effective method to acquire the diversity benefits of multiple antennas while potentially reducing hardware costs. This paper focuses on receive antenna selection. According to the proportion between the numbers of total receive antennas and selected antennas and the influence of each antenna on system capacity, we propose a fast adaptive antenna selection algorithm for wireless multiple-input multiple-output (MIMO) systems. Mathematical analysis and numerical results show that our algorithm significantly reduces the computational complexity and memory requirement while achieving considerable system capacity gain compared with the optimal selection technique.
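
    The abstract does not give the algorithm in detail, so the sketch below only illustrates the general idea of capacity-driven receive antenna selection with a greedy pass over candidate antennas; the channel model, SNR value and function names are assumptions, and the authors' fast adaptive scheme will differ.

```python
import numpy as np

def capacity(H, snr):
    """log2 det(I + snr/Nt * H H^H) for the channel matrix of the selected antennas."""
    nr, nt = H.shape
    return float(np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real))

def greedy_receive_selection(H, n_select, snr=10.0):
    """Greedily pick receive antennas (rows of H) that maximize capacity."""
    chosen, remaining = [], list(range(H.shape[0]))
    for _ in range(n_select):
        best = max(remaining, key=lambda i: capacity(H[chosen + [i], :], snr))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

rng = np.random.default_rng(1)
# Rayleigh-like 6x4 channel: 6 receive antennas, 4 transmit antennas.
H = (rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))) / np.sqrt(2)
print(greedy_receive_selection(H, n_select=3))
```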

  12. Selection Method of Power Targets Based on Interval Number Grey Relation Decision-Making

    Institute of Scientific and Technical Information of China (English)

    吴畏; 赵文杰; 刘辉

    2011-01-01

    Aiming at target selection problems under uncertain information in battlefield decision-making, which traditional methods cannot solve well, the interval number grey relational decision-making method is applied to build a power target selection model, based on an analysis of the evaluation indexes that determine the importance of power targets. The weights of the indexes in the model are obtained by a combination weighting approach. An application example shows that the method is scientific and reasonable, and that it provides useful support for wartime command decision-making.
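
    As a simplified illustration of the grey relational core of such a model (the paper additionally handles interval numbers and combination weighting), the sketch below computes grey relational grades of candidate targets against an ideal reference; the indicator values, weights and the resolution coefficient rho = 0.5 are hypothetical.

```python
import numpy as np

def grey_relational_grades(decision_matrix, weights, rho=0.5):
    """Grey relational grade of each alternative against the ideal reference.

    decision_matrix: alternatives x criteria, all criteria benefit-oriented.
    """
    X = np.asarray(decision_matrix, dtype=float)
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))   # normalize to [0, 1]
    delta = np.abs(X - X.max(axis=0))                           # distance to the ideal
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff @ np.asarray(weights, dtype=float)

# Hypothetical importance indicators for three candidate power targets.
targets = [[0.9, 0.6, 0.7],
           [0.5, 0.8, 0.9],
           [0.7, 0.7, 0.6]]
print(grey_relational_grades(targets, weights=[0.5, 0.3, 0.2]).round(3))
```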

  13. Identification of novel inhibitors for Pim-1 kinase using pharmacophore modeling based on a novel method for selecting pharmacophore generation subsets

    Science.gov (United States)

    Shahin, Rand; Swellmeen, Lubna; Shaheen, Omar; Aboalhaija, Nour; Habash, Maha

    2016-01-01

    Targeting the Proviral integration site of murine Moloney leukemia virus 1 kinase, hereafter called Pim-1 kinase, is a promising strategy for treating different kinds of human cancer. To this end, a total of 328 previously reported Pim-1 kinase inhibitors were explored and divided, based on the pharmacophoric features of the most active molecules, into 10 subsets projected to represent potential active binding modes accessible to ligands within the binding pocket of Pim-1 kinase. Discovery Studio 4.1 (DS 4.1) was employed to detect potential pharmacophoric active binding modes anticipated by Pim-1 kinase inhibitors. The pharmacophoric models were then allowed to compete within a Quantitative Structure Activity Relationship (QSAR) framework with other 2D descriptors. Accordingly, a genetic algorithm and multiple linear regression analysis were employed to find the QSAR equation with the best predictive power (r²(262) = 0.70, F = 119.14, r²(LOO) = 0.693, r²(PRESS) against 66 external test inhibitors = 0.71, q² = 0.55). Three different pharmacophores appeared in the successful QSAR equation, representing three different binding modes for inhibitors within the Pim-1 kinase binding pocket. The pharmacophoric models were later used to screen compounds within the National Cancer Institute database. Several low micromolar Pim-1 kinase inhibitors were captured. The most potent hits show IC50 values of 0.77 and 1.03 µM. Also, upon analyzing the successful QSAR equation, we found that some polycyclic aromatic electron-rich structures, namely 6-chloro-2-methoxy-acridine, can be considered as putative hits for Pim-1 kinase inhibition.

  14. Methods in Logic Based Control

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg

    1999-01-01

    Design and theory of Logic Based Control systems. Boolean algebra, Karnaugh maps, the Quine-McCluskey algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL, and PLC and Soft-PLC implementation. PLC...
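
    For the Boolean-minimization part of such a course, a tiny hedged example of Karnaugh-map / Quine-McCluskey style simplification can be obtained with sympy's SOPform; the minterms and don't-cares below are arbitrary and the call assumes a reasonably recent sympy.

```python
from sympy import symbols
from sympy.logic import SOPform

# Minimal sum-of-products form of a 4-variable function given by its minterms,
# in the spirit of Karnaugh-map / Quine-McCluskey simplification.
w, x, y, z = symbols("w x y z")
minterms = [1, 3, 7, 11, 15]          # truth-table rows where f = 1
dontcares = [0, 2, 5]                 # rows we are free to choose
print(SOPform([w, x, y, z], minterms, dontcares))
```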

  15. Activity based costing (ABC Method

    Directory of Open Access Journals (Sweden)

    Prof. Ph.D. Saveta Tudorache

    2008-05-01

    Full Text Available In the present paper, the need for and the advantages of using the Activity Based Costing method are presented, the need arising from the problem of information pertinence. This issue has occurred due to the limitations of classic methods in this field, limitations also reflected in the disadvantages of such classic methods in establishing complete costs.

  16. Reference satellite selection method for GNSS high-precision relative positioning

    Directory of Open Access Journals (Sweden)

    Xiao Gao

    2017-03-01

    Full Text Available Selecting the optimal reference satellite is an important component of high-precision relative positioning because the reference satellite directly influences the strength of the normal equation. Reference satellite selection methods based on elevation and on the positional dilution of precision (PDOP) value were compared. The results show that neither of these methods always selects the optimal reference satellite. We therefore introduce the condition number of the design matrix into the reference satellite selection method to improve the structure of the normal equation, because the condition number indicates how ill-conditioned the normal equation is. The experimental results show that the new method can improve positioning accuracy and reliability in precise relative positioning.
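
    A minimal numerical sketch of the idea, assuming the design matrices for each candidate reference satellite have already been built (the matrices and PRN labels below are random placeholders): pick the candidate whose design matrix has the smallest condition number.

```python
import numpy as np

def pick_reference_satellite(design_matrices):
    """Choose the candidate whose design matrix has the smallest condition number."""
    conds = {prn: np.linalg.cond(A) for prn, A in design_matrices.items()}
    return min(conds, key=conds.get), conds

# Random placeholders for the design matrices built with each candidate as reference.
rng = np.random.default_rng(2)
candidates = {"G05": rng.standard_normal((8, 4)), "G12": rng.standard_normal((8, 4))}
best, conds = pick_reference_satellite(candidates)
print(best, {prn: round(c, 2) for prn, c in conds.items()})
```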

  17. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    Science.gov (United States)

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnoses and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Secondly, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation abilities. Then, discrete biogeography based optimization, which we call DBBO, is obtained by integrating the discrete migration model and the discrete mutation model. Finally, the DBBO method is used for feature selection, and three classifiers are used with the 10-fold cross-validation method. In order to show the effectiveness and efficiency of the algorithm, the proposed algorithm is tested on four breast cancer dataset benchmarks. In comparison with the genetic algorithm, particle swarm optimization, the differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than or at least comparable to previous methods from the literature when considering the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Classification and Target Group Selection Based Upon Frequent Patterns

    NARCIS (Netherlands)

    W.H.L.M. Pijls (Wim); R. Potharst (Rob)

    2000-01-01

    In this technical report, two new algorithms based upon frequent patterns are proposed. One algorithm is a classification method; the other is an algorithm for target group selection. In both algorithms, first of all, the collection of frequent patterns in the training set is

  19. SELECTING A MANAGEMENT SYSTEM HOSPITAL BY A METHOD MULTICRITERIA

    Directory of Open Access Journals (Sweden)

    Vitorino, Sidney L.

    2016-12-01

    Full Text Available The objective of this report is to assess how the multi-criteria method Analytic Hierarchy Process (AHP) can help a hospital complex to choose a suitable management system, known as Enterprise Resource Planning (ERP). The choice is very complex due to the novelty of the selection process and the conflicts generated between areas that did not have a single view of organizational needs, generating a lot of pressure on the department responsible for implementing systems. To assist in this process, a consultant expert in decision-making and AHP was hired, who, in the role of facilitator, contributed to defining the criteria for system selection and to making the choice within a consensual process. We used a single case study, based on two in-depth interviews with the consultant and the project manager, and on documents generated by the advisory work and the tool that supported the method. The results of this analysis showed that the method could effectively support the system acquisition process, but, despite knowledge of the employees' problems and senior management support, it was not used in new decisions of the organization. We conclude that this method contributed to the consensus in the procurement process, team commitment and engagement of those involved.

  20. Alternative microbial methods: An overview and selection criteria.

    NARCIS (Netherlands)

    Jasson, V.; Jacxsens, L.; Luning, P.A.; Rajkovic, A.; Uyttendaele, M.

    2010-01-01

    This study provides an overview and criteria for the selection of a method, other than the reference method, for microbial analysis of foods. In a first part an overview of the general characteristics of rapid methods available, both for enumeration and detection, is given with reference to relevant

  1. The Hull Method for Selecting the Number of Common Factors

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…

  2. Orthogonal feature selection method. [For preprocessing of mass spectral data]

    Energy Technology Data Exchange (ETDEWEB)

    Kowalski, B R [Univ. of Washington, Seattle; Bender, C F

    1976-01-01

    A new method of preprocessing spectral data for extraction of molecular structural information is desired. This SELECT method generates orthogonal features that are important for classification purposes and that also retain their identity to the original measurements. A brief introduction to chemical pattern recognition is presented. A brief description of the method and an application to mass spectral data analysis follow. (BLM)

  3. A new screening method for selection of desired recombinant ...

    African Journals Online (AJOL)

    A new screening method for selection of desired recombinant plasmids in molecular cloning. ... African Journal of Biotechnology ... Regarding the facts of this study, after the digestion process, the products were directly subjected to ligation. Due to ...

  4. NetProt: Complex-based Feature Selection.

    Science.gov (United States)

    Goh, Wilson Wen Bin; Wong, Limsoon

    2017-08-04

    Protein complex-based feature selection (PCBFS) provides unparalleled reproducibility with high phenotypic relevance on proteomics data. Currently, there are five PCBFS paradigms, but not all representative methods have been implemented or made readily available. To allow general users to take advantage of these methods, we developed the R-package NetProt, which provides implementations of representative feature-selection methods. NetProt also provides methods for generating simulated differential data and generating pseudocomplexes for complex-based performance benchmarking. The NetProt open source R package is available for download from https://github.com/gohwils/NetProt/releases/ , and online documentation is available at http://rpubs.com/gohwils/204259 .

  5. Development, validation and application of a micro-liquid chromatography-tandem mass spectrometry based method for simultaneous quantification of selected protein biomarkers of endothelial dysfunction in murine plasma.

    Science.gov (United States)

    Suraj, Joanna; Kurpińska, Anna; Olkowicz, Mariola; Niedzielska-Andres, Ewa; Smolik, Magdalena; Zakrzewska, Agnieszka; Jasztal, Agnieszka; Sitek, Barbara; Chlopicki, Stefan; Walczak, Maria

    2018-02-05

    The objective of this study was to develop and validate the method based on micro-liquid chromatography-tandem mass spectrometry (microLC/MS-MRM) for simultaneous determination of adiponectin (ADN), von Willebrand factor (vWF), soluble form of vascular cell adhesion molecule 1 (sVCAM-1), soluble form of intercellular adhesion molecule 1 (sICAM-1) and syndecan-1 (SDC-1) in mouse plasma. The calibration range was established from 2.5pmol/mL to 5000pmol/mL for ADN; 5pmol/mL to 5000pmol/mL for vWF; 0.375pmol/mL to 250pmol/mL for sVCAM-1 and sICAM-1; and 0.25pmol/mL to 250pmol/mL for SDC-1. The method was applied to measure the plasma concentration of selected proteins in mice fed high-fat diet (HFD), and revealed the pro-thrombotic status by increased concentration of vWF (1.31±0.17 nmol/mL (Control) vs 1.98±0.09 nmol/mL (HFD), p <0.05) and the dysregulation of adipose tissue metabolism by decreased concentration of ADN (0.62±0.08 nmol/mL (Control) vs 0.37±0.06 nmol/mL (HFD), p <0.05). In conclusion, the microLC/MS-MRM-based method allows for reliable measurements of selected protein biomarkers of endothelial dysfunction in mouse plasma. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. The Choice Method of Selected Material has influence single evaporation flash method

    International Nuclear Information System (INIS)

    Sunaryo, Geni Rina; Sumijanto; Nurul L, Siti

    2000-01-01

    The final objective of this research is to design a mini-scale desalination installation. It was started in 1997/1998 and has been running for three years. The assessment of various desalination systems was carried out in the first year and the thermodynamic study in the second year. In this third year, a literature study on material resistance to external pressure has been done. The pressure for the single evaporation flash method depends mainly on the temperature applied in the system. In this paper, the configuration stage and the method of selecting materials for the main evaporator vessel, tubes, tube plates, water boxes, pipework, and valves for multistage flash distillation are described. The selection of materials for MSF is based on economic considerations: low cost, high resistance and ease of maintenance.

  7. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    Science.gov (United States)

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

    In order to improve the accuracy of classification with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analysis method that automatically selects the characteristic parameters based on correlation coefficient analysis. For the five sample data sets of dataset IVa from the 2005 BCI Competition, we utilized the short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram data, then introduced feature extraction based on the common spatial pattern (CSP) and classified by linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy was higher with the correlation coefficient feature selection method than without it. Compared with a support vector machine (SVM) feature optimization algorithm, the correlation coefficient analysis leads to better selection parameters that improve the accuracy of classification.
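
    The sketch below shows only the correlation-based screening step in isolation (the paper combines it with STFT, CSP and LDA); the trial matrix, labels and the number of retained features are arbitrary placeholders.

```python
import numpy as np

def select_by_correlation(features, labels, n_keep):
    """Keep the n_keep feature columns most correlated (in magnitude) with the labels."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels, dtype=float)
    corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    order = np.argsort(-np.abs(corr))
    return order[:n_keep], corr

# Placeholder trial-by-feature matrix (e.g. per-channel band powers) and class labels.
rng = np.random.default_rng(3)
X = rng.standard_normal((40, 10))
y = rng.integers(0, 2, size=40)
keep, corr = select_by_correlation(X, y, n_keep=4)
print(keep, corr[keep].round(2))
```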

  8. An Identification Key for Selecting Methods for Sustainability Assessments

    Directory of Open Access Journals (Sweden)

    Michiel C. Zijp

    2015-03-01

    Full Text Available Sustainability assessments can play an important role in decision making. This role starts with selecting appropriate methods for a given situation. We observed that scientists, consultants, and decision-makers often do not systematically perform a problem analysis that guides the choice of the method, partly related to a lack of systematic, though sufficiently versatile approaches to do so. Therefore, we developed and propose a new step towards method selection on the basis of question articulation: the Sustainability Assessment Identification Key. The identification key was designed to lead its user through all important choices needed for comprehensive question articulation. Subsequently, methods that fit the resulting specific questions are suggested by the key. The key consists of five domains, of which three determine method selection and two the design or use of the method. Each domain consists of four or more criteria that need specification. For example, in the domain “system boundaries”, amongst others, the spatial and temporal scales are specified. The key was tested (retrospectively) on a set of thirty case studies. Using the key appeared to contribute to improved: (i) transparency in the link between the question and method selection; (ii) consistency between questions asked and answers provided; and (iii) internal consistency in methodological design. There is latitude to develop the current initial key further, not only for selecting methods pertinent to a problem definition, but also as a principle for associated opportunities such as stakeholder identification.

  9. Development of a method based on on-line reversed phase liquid chromatography and gas chromatography coupled by means of an adsorption-desorption interface for the analysis of selected chiral volatile compounds in methyl jasmonate treated strawberries.

    Science.gov (United States)

    de la Peña Moreno, Fernando; Blanch, Gracia Patricia; Flores, Gema; Ruiz Del Castillo, Maria Luisa

    2010-02-12

    A method based on the use of the through oven transfer adsorption-desorption (TOTAD) interface in the on-line coupling of reversed phase liquid chromatography and gas chromatography (RPLC-GC) for the determination of chiral volatile compounds was developed. In particular, the method was applied to the study of the influence of methyl jasmonate (MJ) treatment on the production and enantiomeric composition of selected aroma compounds in strawberry. The compounds studied were ethyl 2-methylbutanoate, linalool and 4-hydroxy-2,5-dimethyl-3(2H)-furanone (i.e. furaneol), which were examined on days 3, 6 and 9 after treatment. The method developed resulted in relative standard deviations (RSDs) of 21.6%, 8.1% and 9.8% and limits of detection (LD) of 0.04, 0.07 and 0.02 mg/l for ethyl 2-methylbutanoate, linalool and furaneol, respectively. The application of the RPLC-TOTAD-GC method allowed higher levels of ethyl 2-methylbutanoate, linalool and furaneol to be detected, particularly after 9 days of treatment. Moreover, MJ was shown to affect the enantiomeric distribution of ethyl 2-methylbutanoate. On the contrary, the enantiomeric composition of linalool and furaneol remained constant in both control and MJ-treated strawberries throughout the study. These results are discussed. Copyright 2009 Elsevier B.V. All rights reserved.

  10. Wavelength selection method with standard deviation: application to pulse oximetry.

    Science.gov (United States)

    Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija

    2011-07-01

    Near-infrared spectroscopy provides useful biological information after the radiation has penetrated through the tissue, within the therapeutic window. One of the significant shortcomings of the current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to his health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains with standard deviation minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
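
    As a simplified sketch of the standard-deviation-map idea, the code below ranks wavelengths by the temporal standard deviation of repeated spectra and keeps the least noise-sensitive ones; the wavelength grid, noise model and function names are assumptions, and the paper's actual figure of merit over wavelength domains is richer.

```python
import numpy as np

def low_noise_wavelengths(spectra, wavelengths, n_pick=2):
    """Rank wavelengths by the temporal standard deviation of repeated spectra."""
    std_map = np.std(np.asarray(spectra, dtype=float), axis=0)   # per-wavelength std
    order = np.argsort(std_map)
    return [wavelengths[i] for i in order[:n_pick]], std_map

# Placeholder: 30 repeated transmission spectra on a 600-1000 nm grid, with noise
# that grows towards longer wavelengths.
wl = list(range(600, 1001, 50))
rng = np.random.default_rng(4)
spectra = 1.0 + 0.01 * rng.standard_normal((30, len(wl))) * np.linspace(1, 3, len(wl))
picked, _ = low_noise_wavelengths(spectra, wl)
print(picked)
```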

  11. Supplier selection based on improved MOGA and its application in nuclear power equipment procurement

    International Nuclear Information System (INIS)

    Yan Zhaojun; Wang Dezhong; Zhou Lei

    2007-01-01

    Considering the fact that there are few objective and available methods supporting the supplier selection in nuclear power equipment purchasing process, a supplier selection method based on improved multi-objective genetic algorithm (MOGA) is proposed. The simulation results demonstrate the effectiveness and efficiency of this method for the supplier selection in nuclear power equipment procurement process. (authors)

  12. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    Science.gov (United States)

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    Concept of composite road safety index is a popular and relatively new concept among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.) there is need to choose an adequate method which will make comparison fair to all compared units. Usually comparisons using one specific indicator (parameter which describes safety or unsafety) can end up with totally different ranking of compared units which is quite complicated for decision maker to determine "real best performers". Need for composite road safety index is becoming dominant since road safety presents a complex system where more and more indicators are constantly being developed to describe it. Among wide variety of models and developed composite indexes, a decision maker can come to even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models and have recently been increasingly used for risk evaluation in road safety, we used efficiencies (composite indexes) obtained by different models, based on DEA and TOPSIS, to present PROMETHEE-RS model for selection of optimal method for composite index. Method for selection of optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into a PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Method selection for mercury removal from hard coal

    Directory of Open Access Journals (Sweden)

    Dziok Tadeusz

    2017-01-01

    Full Text Available Mercury is commonly found in coal, and coal utilization processes constitute one of the main sources of mercury emission to the environment. This issue is particularly important for Poland, because the Polish energy production sector is based on brown and hard coal. The forecasts show that this trend in energy production will continue in the coming years. Once emission limits are introduced, methods of reducing mercury emission will have to be implemented in Poland. Mercury emission can be reduced by using coal with a relatively low mercury content. In the absence of such coals, methods of mercury removal from coal can be implemented. The currently used and developing methods include the coal cleaning process (both coal washing and dry deshaling) as well as the thermal pretreatment of coal (mild pyrolysis). The effectiveness of these methods varies for different coals, which is caused by the diversity of coal origin, the various characteristics of coal and, especially, the various modes of mercury occurrence in coal. It should be mentioned that the coal cleaning process allows for the removal of mercury occurring in mineral matter, mainly in pyrite. The thermal pretreatment of coal allows for the removal of mercury occurring in organic matter as well as in the inorganic constituents characterized by a low temperature of mercury release. In this paper, guidelines for the selection of a mercury removal method for hard coal are presented. The guidelines were developed taking into consideration: the effectiveness of mercury removal from coal in the coal cleaning and thermal pretreatment processes, the synergy effect resulting from the combination of these processes, the direction of coal utilization, as well as the influence of these processes on coal properties.

  14. Construction Tender Subcontract Selection using Case-based Reasoning

    Directory of Open Access Journals (Sweden)

    Due Luu

    2012-11-01

    Full Text Available Obtaining competitive quotations from suitably qualified subcontractors at tender time can significantly increase the chance of winning a construction project. Amidst an increasingly growing trend to subcontracting in Australia, selecting appropriate subcontractors for a construction project can be a daunting task requiring the analysis of complex and dynamic criteria such as past performance, suitable experience, track record of competitive pricing, financial stability and so on. Subcontractor selection is plagued with uncertainty and vagueness and these conditions are difficult to represent in generalised sets of rules. Decisions pertaining to the selection of subcontractors at tender time are usually based on the intuition and past experience of construction estimators. Case-based reasoning (CBR) may be an appropriate method of addressing the challenges of selecting subcontractors because CBR is able to harness the experiential knowledge of practitioners. This paper reviews the practicality and suitability of a CBR approach for subcontractor tender selection through the development of a prototype CBR procurement advisory system. In this system, subcontractor selection cases are represented by a set of attributes elicited from experienced construction estimators. The results indicate that CBR can enhance the appropriateness of the selection of subcontractors for construction projects.
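
    A minimal sketch of the retrieval step of such a CBR advisory system, assuming subcontractor cases are already encoded as normalized attribute vectors with expert-chosen weights; the case names, attributes and weights below are invented for illustration.

```python
import numpy as np

def retrieve_similar_cases(query, case_base, weights, k=3):
    """Return the k past subcontractor cases most similar to the query.

    Similarity is a weighted Euclidean distance over normalized attribute vectors
    (e.g. past performance, experience, pricing record, financial stability).
    """
    names = list(case_base)
    X = np.asarray([case_base[n] for n in names], dtype=float)
    q = np.asarray(query, dtype=float)
    w = np.asarray(weights, dtype=float)
    dist = np.sqrt((((X - q) ** 2) * w).sum(axis=1))
    order = np.argsort(dist)
    return [(names[i], round(float(dist[i]), 3)) for i in order[:k]]

# Invented normalized attributes for previously used subcontractors.
cases = {"SubA": [0.9, 0.8, 0.6, 0.7],
         "SubB": [0.6, 0.9, 0.8, 0.5],
         "SubC": [0.7, 0.6, 0.9, 0.9]}
print(retrieve_similar_cases([0.8, 0.8, 0.7, 0.7], cases, weights=[0.4, 0.3, 0.2, 0.1]))
```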

  15. Maintenance of the selected infant feeding methods amongst ...

    African Journals Online (AJOL)

    The focus of this study was to explore and describe influences on decision making related to infant feeding methods in the context of HIV and AIDS. Study objectives were: (1) to explore and describe the influences on decision making related to infant feeding methods selected by the mother during the antenatal period and ...

  16. The sequence relay selection strategy based on stochastic dynamic programming

    Science.gov (United States)

    Zhu, Rui; Chen, Xihao; Huang, Yangchao

    2017-07-01

    Relay-assisted (RA) networks with relay node selection are an effective way to improve channel capacity and convergence performance. However, most of the existing research on relay selection does not consider the statistical channel state information and the selection cost. This shortcoming limits the performance and application of RA networks in practical scenarios. In order to overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and its performance upper bound is also analyzed in this paper. Furthermore, in order to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are also presented to exhibit the performance of SRSS with SDP.
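
    The paper's SDP formulation is not given in the abstract, so the sketch below only illustrates the flavour of threshold determination by backward induction for a sequential probe-or-accept decision with a per-probe cost; the discretized channel distribution and cost value are assumptions.

```python
import numpy as np

def sdp_thresholds(n_relays, value_grid, probs, cost_per_probe):
    """Backward-induction acceptance thresholds for sequentially probing relays.

    At each step the source either accepts the current relay's channel value or
    pays a probing cost and moves on; the threshold at a step equals the expected
    value of continuing optimally from the next relay.
    """
    values = np.asarray(value_grid, dtype=float)
    probs = np.asarray(probs, dtype=float)
    continue_value = values @ probs - cost_per_probe      # value of probing the last relay
    thresholds = []
    for _ in range(n_relays - 1):
        thresholds.append(continue_value)
        continue_value = np.maximum(values, continue_value) @ probs - cost_per_probe
    return thresholds[::-1]                               # thresholds for steps 1 .. n-1

# Placeholder discretized channel-quality distribution shared by all relays.
grid = np.linspace(0.0, 1.0, 11)
probs = np.full(11, 1 / 11)
print([round(t, 3) for t in sdp_thresholds(4, grid, probs, cost_per_probe=0.02)])
```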

  17. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
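
    As a small concrete example of classical model selection of the kind contrasted in the report (not its decision-theoretic method), the sketch below picks a polynomial order by the Akaike information criterion under a Gaussian-error assumption; the data and the criterion's constant terms are illustrative.

```python
import numpy as np

def aic_select_poly_order(x, y, max_order=6):
    """Pick a polynomial order by the Akaike information criterion (Gaussian errors)."""
    n = len(x)
    best_order, best_aic = None, np.inf
    for p in range(max_order + 1):
        coeffs = np.polyfit(x, y, p)
        resid = y - np.polyval(coeffs, x)
        aic = n * np.log(np.mean(resid ** 2)) + 2 * (p + 2)   # p+1 coefficients + noise variance
        if aic < best_aic:
            best_order, best_aic = p, aic
    return best_order

rng = np.random.default_rng(8)
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x - 1.5 * x ** 2 + 0.1 * rng.standard_normal(x.size)
print(aic_select_poly_order(x, y))          # expected to recover order 2
```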

  18. Utilization of Selected Data Mining Methods for Communication Network Analysis

    Directory of Open Access Journals (Sweden)

    V. Ondryhal

    2011-06-01

    Full Text Available The aim of the project was to analyze the behavior of military communication networks based on work with real data collected continuously since 2005. With regard to the nature and amount of the data, data mining methods were selected for the purpose of analyses and experiments. The quality of real data is often insufficient for an immediate analysis. The article presents the data cleaning operations which have been carried out with the aim to improve the input data sample to obtain reliable models. Gradually, by means of properly chosen SW, network models were developed to verify generally valid patterns of network behavior as a bulk service. Furthermore, unlike the commercially available communication networks simulators, the models designed allowed us to capture nonstandard models of network behavior under an increased load, verify the correct sizing of the network to the increased load, and thus test its reliability. Finally, based on previous experience, the models enabled us to predict emergency situations with a reasonable accuracy.

  19. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G.; Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-01-01

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
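
    A minimal sketch of the learning step described above, assuming candidate beam angles have already been summarized by anatomical features and scored from approved plans; the feature dimensions, angle grid and scores below are synthetic placeholders, and the published framework additionally models interbeam dependencies during optimization.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic training data: each row is one candidate beam angle for a past patient,
# described by anatomical features (e.g. target depth, lung overlap, cord distance);
# y is the "goodness" score of that angle derived from clinically approved plans.
rng = np.random.default_rng(5)
X_train = rng.standard_normal((500, 6))
y_train = rng.random(500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score candidate angles for a new patient and keep the best few; adjusting for
# interbeam dependencies would follow in a separate optimization step.
candidate_features = rng.standard_normal((36, 6))         # e.g. one row per 10-degree step
scores = model.predict(candidate_features)
best_idx = np.argsort(-scores)[:7]
print(best_idx * 10)                                       # hypothetical angles in degrees
```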

  20. Combining AHP and DEA Methods for Selecting a Project Manager

    Directory of Open Access Journals (Sweden)

    Baruch Keren

    2014-07-01

    Full Text Available A project manager has a major influence on the success or failure of the project. A good project manager can match the strategy and objectives of the organization with the goals of the project. Therefore, the selection of the appropriate project manager is a key factor for the success of the project. A potential project manager is judged by his or her proven performance and personal qualifications. This paper proposes a method to calculate the weighted scores and the full rank of candidates for managing a project, and to select the best of those candidates. The proposed method combines specific methodologies: Data Envelopment Analysis (DEA) and the Analytical Hierarchical Process (AHP), and uses DEA Ranking Methods to enhance selection.
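
    As an illustration of the AHP part of such a combined approach (the DEA ranking step is not shown), the sketch below derives criterion weights from a pairwise comparison matrix via its principal eigenvector and reports a consistency index; the criteria and judgments are hypothetical.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix (principal eigenvector)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    principal = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
    weights = principal / principal.sum()
    # Consistency index: (lambda_max - n) / (n - 1), to be checked against a random index.
    ci = (eigvals.real.max() - A.shape[0]) / (A.shape[0] - 1)
    return weights, ci

# Hypothetical pairwise comparisons of three criteria
# (experience, qualifications, past performance).
pairwise = [[1, 3, 5],
            [1 / 3, 1, 2],
            [1 / 5, 1 / 2, 1]]
weights, ci = ahp_weights(pairwise)
print(weights.round(3), round(ci, 3))
```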

  1. Index Fund Selections with GAs and Classifications Based on Turnover

    Science.gov (United States)

    Orito, Yukiko; Motoyama, Takaaki; Yamazaki, Genji

    It is well known that index fund selection is important for hedging the risk of investment in a stock market. The 'selection' means that for 'stock index futures', n companies out of all those in the market are selected. For index fund selection, Orito et al. (6) proposed a method consisting of the following two steps: Step 1 is to select N companies in the market with a heuristic rule based on the coefficient of determination between the return rate of each company in the market and the increasing rate of the stock price index. Step 2 is to construct a group of n companies by applying genetic algorithms to the set of N companies. We note that the rule of Step 1 is not unique. The accuracy of the results using their method depends on the length of the time data (price data) in the experiments. The main purpose of this paper is to introduce a more 'effective rule' for Step 1. The rule is based on turnover. The method consisting of Step 1 based on turnover and Step 2 is examined with numerical experiments for the 1st Section of the Tokyo Stock Exchange. The results show that with our method it is possible to construct a more effective index fund than with the method of Orito et al. (6). The accuracy of the results using our method depends little on the length of the time data (turnover data). The method works especially well when the increasing rate of the stock price index over a period can be viewed as linear time series data.

  2. Selection method of terrain matching area for TERCOM algorithm

    Science.gov (United States)

    Zhang, Qieqie; Zhao, Long

    2017-10-01

    The performance of terrain aided navigation is closely related to the selection of the terrain matching area. Different matching algorithms have different adaptability to terrain. This paper mainly studies the adaptability to terrain of the TERCOM algorithm, analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. After that, we propose a selection method of the terrain matching area for the TERCOM algorithm, and verify the correctness of the method with real terrain data by simulation experiments. Experimental results show that the matching area obtained by the method in this paper gives good navigation performance and that the matching probability of the TERCOM algorithm is greater than 90%.
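
    A rough sketch of how terrain characteristic parameters might be computed from a digital elevation model when screening candidate matching areas; the chosen parameters (elevation standard deviation, mean slope magnitude) are common choices, but the thresholds and the synthetic DEM below are assumptions, not the paper's values.

```python
import numpy as np

def terrain_parameters(dem):
    """Two common terrain characteristic parameters for a digital elevation model."""
    z = np.asarray(dem, dtype=float)
    gy, gx = np.gradient(z)
    return {"elevation_std": z.std(),                     # overall relief
            "roughness": np.mean(np.hypot(gx, gy))}       # mean slope magnitude

def suitable_for_matching(dem, std_threshold=15.0, rough_threshold=2.0):
    """Crude suitability test: flat, featureless areas give poor TERCOM fixes."""
    p = terrain_parameters(dem)
    ok = p["elevation_std"] > std_threshold and p["roughness"] > rough_threshold
    return ok, p

# Synthetic DEM standing in for a candidate matching area.
rng = np.random.default_rng(7)
dem = 50 * rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1) / 64
print(suitable_for_matching(dem))
```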

  3. A Selective Review of Multimodal Fusion Methods in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Jing eSui

    2012-02-01

    Full Text Available Schizophrenia (SZ) is one of the most cryptic and costly mental disorders in terms of human suffering and societal expenditure (van Os and Kapur, 2009). Though strong evidence for functional, structural and genetic abnormalities associated with this disease exists, there is yet no replicable finding which has proven accurate enough to be useful in clinical decision making (Fornito et al., 2009), and its diagnosis relies primarily upon symptom assessment (Williams et al., 2010a). It is likely in part that the lack of consistent neuroimaging findings is because most models favor only one data type or do not combine data from different imaging modalities effectively, thus missing potentially important differences which are only partially detected by each modality (Calhoun et al., 2006a). It is becoming increasingly clear that multimodal fusion, a technique which takes advantage of the fact that each modality provides a limited view of the brain/gene and may uncover hidden relationships, is an important tool to help unravel the black box of schizophrenia. In this review paper, we survey a number of multimodal fusion applications which enable us to study the schizophrenia macro-connectome, including brain functional, structural and genetic aspects, and may help us understand the disorder in a more comprehensive and integrated manner. We also provide a table that characterizes these applications by the methods used and compare these methods in detail, especially for multivariate models, which may serve as a valuable reference to help readers select an appropriate method for a given research question.

  4. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on the model with optimal hyperparameters. The computational cost of SVM model selection results in application difficulty in face recognition. In order to overcome this shortcoming, we utilize the advantage of uniform design (space-filling designs and uniformly scattering theory) to seek optimal SVM hyperparameters. Then we propose a face recognition scheme based on SVM with the optimal model obtained by replacing the grid and gradient-based method with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
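
    A loose sketch of the model-selection idea, assuming a small set of well-spread (C, gamma) candidates standing in for a uniform design table and scoring each by cross-validated accuracy; the candidate points and toy data are placeholders, and real face-recognition features would replace them.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pick_svm_model(X, y, candidates):
    """Return the (C, gamma) pair with the best 5-fold cross-validated accuracy."""
    return max(candidates,
               key=lambda cg: cross_val_score(SVC(C=cg[0], gamma=cg[1]), X, y, cv=5).mean())

# A handful of well-spread (C, gamma) points standing in for a uniform design table;
# a grid search would instead evaluate the full Cartesian product.
candidates = [(0.1, 1e-3), (1, 1e-2), (10, 1e-1), (100, 1e-3), (1000, 1e-2)]

# Toy stand-in for face-recognition feature vectors and class labels.
rng = np.random.default_rng(6)
X = rng.standard_normal((120, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
print(pick_svm_model(X, y, candidates))
```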

  5. Development of a thermodynamic data base for selected heavy metals

    International Nuclear Information System (INIS)

    Hageman, Sven; Scharge, Tina; Willms, Thomas

    2015-07-01

    The report on the development of a thermodynamic data base for selected heavy metals covers the description of experimental methods, the thermodynamic model for chromate, the thermodynamic model for dichromate, the thermodynamic model for manganese (II), the thermodynamic model for cobalt, the thermodynamic model for nickel, the thermodynamic model for copper (I), the thermodynamic model for copper(II), the thermodynamic model for mercury (0) and mercury (I), the thermodynamic model for mercury (III), the thermodynamic model for arsenate.

  6. MIS-based sensors with hydrogen selectivity

    Energy Technology Data Exchange (ETDEWEB)

    Li,; Dongmei, [Boulder, CO; Medlin, J William [Boulder, CO; McDaniel, Anthony H [Livermore, CA; Bastasz, Robert J [Livermore, CA

    2008-03-11

    The invention provides hydrogen selective metal-insulator-semiconductor sensors which include a layer of hydrogen selective material. The hydrogen selective material can be a polyimide layer having a thickness between 200 and 800 nm. Suitable polyimide materials include reaction products of benzophenone tetracarboxylic dianhydride, 4,4-oxydianiline, m-phenylene diamine and other structurally similar materials.

  7. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth

  8. An iterative method for selecting degenerate multiplex PCR primers.

    Science.gov (United States)

    Souvenir, Richard; Buhler, Jeremy; Stormo, Gary; Zhang, Weixiong

    2007-01-01

    Single-nucleotide polymorphism (SNP) genotyping is an important molecular genetics process, which can produce results that will be useful in the medical field. Because of inherent complexities in DNA manipulation and analysis, many different methods have been proposed for a standard assay. One of the proposed techniques for performing SNP genotyping requires amplifying regions of DNA surrounding a large number of SNP loci. To automate a portion of this particular method, it is necessary to select a set of primers for the experiment. Selecting these primers can be formulated as the Multiple Degenerate Primer Design (MDPD) problem. The Multiple, Iterative Primer Selector (MIPS) is an iterative beam-search algorithm for MDPD. Theoretical and experimental analyses show that this algorithm performs well compared with the limits of degenerate primer design. Furthermore, MIPS outperforms an existing algorithm that was designed for a related degenerate primer selection problem.

  9. Investigation of the paired-gear method in selectivity studies

    DEFF Research Database (Denmark)

    Sistiaga, Manu; Herrmann, Bent; Larsen, R.B.

    2009-01-01

    was repeated throughout the eight cases in this investigation. When using the paired-gear method, the distribution of the estimated L50 and SR is wider; the distribution of the estimated split parameter has a higher variability than the true split; the estimated mean L50 and SR can be biased; the estimated...... recommend that the methodology used to obtain selectivity estimates using the paired-gear method be reviewed....

  10. Standard methods for rearing and selection of Apis mellifera queens

    DEFF Research Database (Denmark)

    Büchler, Ralph; Andonov, Sreten; Bienefeld, Kaspar

    2013-01-01

    Here we cover a wide range of methods currently in use and recommended in modern queen rearing, selection and breeding. The recommendations are meant to equally serve as standards for both scientific and practical beekeeping purposes. The basic conditions and different management techniques for q...

  11. Preparation of Iron Nanoparticles by Selective Leaching Method

    Czech Academy of Sciences Publication Activity Database

    Michalcová, A.; Vojtěch, D.; Kubatík, Tomáš František; Stehlíková, K.; Brabec, F.; Marek, I.

    2015-01-01

    Roč. 128, č. 4 (2015), s. 640-642 ISSN 0587-4246. [International Symposium on Physics of Materials (ISPMA) /13./. Prague, 31.08.2014-04.09.2014] Institutional support: RVO:61389021 Keywords : Iron nanoparticles * selective leaching method Subject RIV: JK - Corrosion ; Surface Treatment of Materials Impact factor: 0.525, year: 2015

  12. Sampling point selection for energy estimation in the quasicontinuum method

    NARCIS (Netherlands)

    Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.

    2010-01-01

    The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of

  13. Selecting and Using Mathematics Methods Texts: Nontrivial Tasks

    Science.gov (United States)

    Harkness, Shelly Sheats; Brass, Amy

    2017-01-01

    Mathematics methods textbooks/texts are important components of many courses for preservice teachers. Researchers should explore how these texts are selected and used. Within this paper we report the findings of a survey administered electronically to 132 members of the Association of Mathematics Teacher Educators (AMTE) in order to answer the…

  14. Underground Mining Method Selection Using WPM and PROMETHEE

    Science.gov (United States)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. This is achieved by using two multi-attribute decision making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). In this paper, the analytic hierarchy process is used to calculate the weights of the attributes (i.e. the parameters used in this paper). Mining method selection depends on physical, mechanical, economical and technical parameters. The WPM and PROMETHEE techniques have the ability to consider the relationships between the parameters and the mining methods. The proposed techniques give higher accuracy and faster computation when compared with other decision making techniques. The proposed techniques are applied to determine the effective mining method for a bauxite mine. The results of these techniques are compared with the methods used in earlier research works. The results show that the conventional cut-and-fill method is the most suitable mining method.
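
    As a rough illustration of how WPM turns AHP weights and a decision matrix into a ranking, the sketch below scores a few hypothetical mining methods against hypothetical criteria; all numbers are invented for demonstration and are not taken from the paper.

```python
import numpy as np

# Hypothetical decision matrix: rows = mining methods, columns = criteria.
# All criteria here are treated as "benefit" criteria (higher is better).
alternatives = ["cut and fill", "sublevel stoping", "shrinkage stoping"]
scores = np.array([[7.0, 8.0, 6.0, 9.0],
                   [6.0, 7.0, 8.0, 6.0],
                   [5.0, 6.0, 7.0, 7.0]])
weights = np.array([0.4, 0.3, 0.2, 0.1])   # e.g. obtained from AHP, summing to 1

# Weighted product method: normalise each column, then take the weighted
# product of the normalised ratings for each alternative.
normalised = scores / scores.max(axis=0)
wpm_score = np.prod(normalised ** weights, axis=1)

ranking = sorted(zip(wpm_score, alternatives), reverse=True)
for s, name in ranking:
    print(f"{name}: {s:.4f}")
```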

  15. Enhanced individual selection for selecting fast growing fish: the "PROSPER" method, with application on brown trout (Salmo trutta fario)

    Directory of Open Access Journals (Sweden)

    Vandeputte Marc

    2004-11-01

    Full Text Available Abstract Growth rate is the main breeding goal of fish breeders, but individual selection has often shown poor responses in fish species. The PROSPER method was developed to overcome possible factors that may contribute to this low success, using (1) a variable base population and a high number of breeders (Ne > 100), (2) selection within groups with low non-genetic effects and (3) repeated growth challenges. Using calculations, we show that individual selection within groups, with appropriate management of maternal effects, can be superior to mass selection as soon as the maternal effect ratio exceeds 0.15, when heritability is 0.25. Practically, brown trout were selected on length at the age of one year with the PROSPER method. The genetic gain was evaluated against an unselected control line. After four generations, the mean response per generation in length at one year was 6.2% of the control mean, while the mean correlated response in weight was 21.5% of the control mean per generation. At the 4th generation, selected fish also appeared to be leaner than control fish when compared at the same size, and the response on weight was maximal (≈130% of the control mean) between 386 and 470 days post-fertilisation. This high response is promising; however, the key points of the method have to be investigated in more detail.

  16. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....

  17. Core Business Selection Based on Ant Colony Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lan

    2014-01-01

    Full Text Available Core business is the most important business to an enterprise with diversified businesses. In this paper, we first introduce the definition and characteristics of the core business and then describe the ant colony clustering algorithm. In order to test the effectiveness of the proposed method, Tianjin Port Logistics Development Co., Ltd. is selected as the research object. Based on the current situation of the company's development, the core business of the company can be acquired by the ant colony clustering algorithm. The results indicate that the proposed method is an effective way to determine the core business of a company.

  18. Selection method and characterization of neutron monochromator natural crystals

    International Nuclear Information System (INIS)

    Stasiulevicius, R.; Kastner, G.F.

    2000-01-01

    Thermal neutrons are important analytical tools for probing materials at the microscopic scale. These neutrons can be selected by a diffraction technique using a monocrystal, usually an artificial one. A crystal selection process was implemented and the characteristics of natural specimens were studied by the k0 method of activation analysis. The 120 representative samples, including the 21 best types, were irradiated in the IPR-R1 reactor and measured with a neutron diffractometer at the IEA-R1m reactor, both Brazilian reactors. These results are useful for building up a database and ease the choice of an appropriate natural crystal, with some advantageous options: higher diffracted intensity, a wider operational energy interval, and optimal performance in special applications. (author)

  19. Synthesis of trans-4,5-epoxy-(E)-2-decenal and its deuterated analog used for the development of a sensitive and selective quantification method based on isotope dilution assay with negative chemical ionization.

    Science.gov (United States)

    Lin, J; Fay, L B; Welti, D H; Blank, I

    1999-10-01

    The volatile compound trans-4,5-epoxy-(E)-2-decenal (1) was synthesized in two steps with good overall yields. The newly developed method is based on trans-epoxidation of (E)-2-octenal with alkaline hydrogen peroxide followed by a Wittig-type chain elongation with the ylide formylmethylene triphenylphosphorane. For the synthesis of [4,5-2H2]-trans-4,5-epoxy-(E)-2-decenal (d-1), [2,3-2H2]-(E)-2-octenal was prepared by reduction of 2-octyn-1-ol with lithium aluminum deuteride and subsequent oxidation of [2,3-2H2]-(E)-2-octen-1-ol with manganese oxide. Compound d-1 was used as the internal standard for the quantification of 1 by isotope dilution assay. Among the various mass spectrometry (MS) ionization techniques tested, negative chemical ionization with ammonia as reagent gas gave the best results with respect to both sensitivity and selectivity. The detection limit was found to be about 1 pg of the analyte introduced into the gas chromatography-MS system.

  20. Factors of Selection of the Stock Allocation Method

    Directory of Open Access Journals (Sweden)

    Rohov Heorhii K.

    2014-03-01

    Full Text Available The article describes the results of the author's study of the factors behind strategic decisions on the choice of stock allocation method by public joint stock companies in Ukraine. The author used the Random Forest mathematical apparatus for building classification trees as well as informal methods. The article analyses the reasons that restrain public allocation of stock. It shows the significant influence on the choice of stock allocation method of factors such as capital concentration, the balance value of corporate rights, the sector of the economy, and significant participation of collective investment institutions or the state in the authorised capital. The hierarchical model of classification of factors of the issuing policy of joint stock companies finds logical justification in specific features of the institutional environment; however, it does not fit into the framework of the classical concept of the market economy. The model could be used both for setting the goals of corporate financial strategies and for improving state regulation of the activity of securities issuers. A prospect for further studies in this direction is identifying how the factors behind the choice of stock allocation method transform under conditions of a revival of the stock market.

  1. Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.

    Science.gov (United States)

    Reyes Santos, Joost; Haimes, Yacov Y

    2004-06-01

    The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1990 Nobel Prize in Economics. A typical approach to measuring a portfolio's expected return is based on the historical returns of the assets included in the portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001, which led to a four-day suspension of trading on the New York Stock Exchange (NYSE), are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model.
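
    The extreme-risk measure f(4) described above is a conditional expectation over the lower tail of the return distribution. A minimal numerical sketch, using synthetic returns and an assumed partitioning point at the 5th percentile (the actual partitioning in the PMRM is an analyst's choice), is:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic daily portfolio returns with heavy-ish tails (Student t); illustrative only.
returns = 0.0005 + 0.01 * rng.standard_t(df=4, size=10_000)

# Partitioning point: here assumed to be the 5th percentile of returns.
beta = np.percentile(returns, 5)

# f(4): conditional expectation of returns given that they fall in the lower tail.
f4 = returns[returns <= beta].mean()

expected_return = returns.mean()
print(f"expected return      : {expected_return:.5f}")
print(f"partition point (5%) : {beta:.5f}")
print(f"f(4) lower-tail mean : {f4:.5f}")
```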

  2. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for updating the target template. In general, the holistic bounding boxes that contain the tracked results are selected as the positive samples. However, when the object is occluded, this simple strategy easily introduces noise into the training data set and the target template and then leads the tracker to drift away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. The experimental results on the object tracking benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
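
    The representative-sample step can be illustrated with a small non-negative least squares fit: candidate sample patches (columns of A) are combined to reconstruct the current template, and the candidates receiving the largest non-negative coefficients are kept for the template update. This is a simplified sketch with random data, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_pixels, n_candidates = 256, 20

# Columns of A are vectorised candidate sample patches; b is the current template.
A = rng.random((n_pixels, n_candidates))
b = rng.random(n_pixels)

# Solve min ||A x - b||_2 subject to x >= 0.
coeffs, residual = nnls(A, b)

# Keep the candidates with the largest non-negative coefficients as
# "representative samples" for updating the target template.
k = 5
representative = np.argsort(coeffs)[::-1][:k]
print("selected candidate indices:", representative)
print("their coefficients       :", np.round(coeffs[representative], 3))
```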

  3. Road Network Selection Based on Road Hierarchical Structure Control

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2015-04-01

    Full Text Available A new road network selection method based on hierarchical structure is studied. Firstly, the road network is built as strokes, which are then classified into hierarchical collections according to the criterion of betweenness centrality value (BC value). Secondly, the hierarchical structure of the strokes is enhanced using a structural characteristic identification technique. Thirdly, an importance calculation model is established according to the relationships within the hierarchical structure of the strokes. Finally, the importance values of the strokes are obtained through the model's hierarchical calculation, and the road network is selected accordingly. Tests were done to verify the advantages of this method by comparing it with other common stroke-oriented methods on three kinds of typical road network data. Comparison of the results shows that this method has little need for semantic data and can effectively eliminate the negative influence of edge strokes caused by the BC-value criterion. It is therefore better at maintaining the global hierarchical structure of the road network and is suitable for the selection of various kinds of road networks.
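
    A minimal sketch of the first step, computing betweenness centrality for road segments and binning them into hierarchical levels, might look like the following; the toy graph and the quantile thresholds are assumptions made for illustration, and the paper itself works on stroke-based representations of real road networks.

```python
import networkx as nx
import numpy as np

# Toy road graph: nodes are junctions, edges are road segments (stand-ins for strokes).
G = nx.grid_2d_graph(5, 5)

# Betweenness centrality of each edge; segments with high BC carry more
# through-traffic and sit higher in the road hierarchy.
edge_bc = nx.edge_betweenness_centrality(G)

# Classify edges into three hierarchical levels by BC quantiles (thresholds assumed).
values = np.array(list(edge_bc.values()))
q1, q2 = np.quantile(values, [0.5, 0.85])
hierarchy = {e: ("major" if bc >= q2 else "collector" if bc >= q1 else "local")
             for e, bc in edge_bc.items()}

# Selection: keep only edges above a chosen level.
selected = [e for e, level in hierarchy.items() if level != "local"]
print(f"kept {len(selected)} of {G.number_of_edges()} segments")
```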

  4. Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliyev

    2015-01-01

    Full Text Available Personnel evaluation is an important process in human resource management. Its multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we have selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. After that, a modified fuzzy VIKOR is proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of the results of the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method. Firstly, from a computational complexity point of view, the presented model is effective. Secondly, it has a higher acceptable advantage compared to the fuzzy VIKOR method.
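
    For readers unfamiliar with the compromise-ranking step, a crisp (non-fuzzy) VIKOR sketch is shown below; the modified fuzzy variant in the paper operates on triangular fuzzy numbers, but the S, R and Q aggregation follows the same pattern. The candidate scores and weights here are invented.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidates, columns = information-culture criteria.
F = np.array([[7.0, 8.0, 6.0, 7.0, 9.0],
              [8.0, 6.0, 7.0, 8.0, 7.0],
              [6.0, 7.0, 8.0, 6.0, 8.0]])
w = np.array([0.3, 0.2, 0.2, 0.15, 0.15])   # criteria weights (assumed)
v = 0.5                                      # weight of the "majority" strategy

f_best, f_worst = F.max(axis=0), F.min(axis=0)      # benefit criteria assumed
norm = (f_best - F) / (f_best - f_worst)             # normalised regret per criterion

S = (w * norm).sum(axis=1)       # group utility
R = (w * norm).max(axis=1)       # individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

# Lower Q is better; rank candidates accordingly.
for i in np.argsort(Q):
    print(f"candidate {i + 1}: S={S[i]:.3f} R={R[i]:.3f} Q={Q[i]:.3f}")
```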

  5. On a selection method of imaging condition in scintigraphy

    International Nuclear Information System (INIS)

    Ikeda, Hozumi; Kishimoto, Kenji; Shimonishi, Yoshihiro; Ohmura, Masahiro; Kosakai, Kazuhisa; Ochi, Hironobu

    1992-01-01

    Selection of imaging conditions in scintigraphy was evaluated using the analytic hierarchy process. First, a selection method was derived by considering image quality and imaging time. The influence of image quality was thought to depend on changes in system resolution, count density, image size, and image density, and the influence of imaging time on changes in system sensitivity and data acquisition time. A phantom study was done for paired comparison of these selection factors and of the relations between sample data and the factors: Rollo phantom images were taken while changing count density, image size, and image density. Image quality was expressed by calculating a visual-evaluation score, obtained by comparing pairs of images with respect to the clearer cold lesion on the scintigrams. Imaging time was expressed by relative values for changes in count density. System resolution and system sensitivity were kept constant in this study. Next, using these values, the analytic hierarchy process was applied to this selection of imaging conditions. We conclude that this selection of imaging conditions can be analyzed quantitatively using the analytic hierarchy process and that this analysis supports theoretical consideration of the imaging technique. (author)
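
    The analytic hierarchy process referred to above derives weights from a pairwise comparison matrix, typically via its principal eigenvector. A generic sketch (the comparison values are placeholders, not those of the study) is:

```python
import numpy as np

# Pairwise comparison matrix for three selection factors
# (placeholder values on Saaty's 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# The principal eigenvector gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check (random index RI = 0.58 for a 3x3 matrix).
lam_max = eigvals.real[k]
CI = (lam_max - len(A)) / (len(A) - 1)
CR = CI / 0.58
print("weights:", np.round(weights, 3), " consistency ratio: %.3f" % CR)
```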

  6. Selection of the signal synchronization method in software GPS receivers

    Directory of Open Access Journals (Sweden)

    Vlada S. Sokolović

    2011-04-01

    Full Text Available Introduction This paper presents a critical analysis of the signal processing flow carried out in a software GPS receiver and a critical comparison of different architectures for signal processing within the GPS receiver. A model of the software receiver is shown. Based on this model, a receiver has been realized in the MATLAB software package, in which simulations of the signal processing were carried out. The aim of this paper is to demonstrate the advantages and disadvantages of different methods of signal synchronization in the receiver, and to propose a solution acceptable for possible implementation. The signal processing flow was observed from the input circuit to the extraction of the bits of the navigation message. The entire signal processing was performed on the L1 signal and the data collected by the input circuit SE4110. A radio signal from the satellite was accepted by the input circuit, filtered and translated into digital form. The input circuit ends the hardware part of the receiver. The digital signal from the input circuit is brought into a PC Pentium 4 (AMD 3000+), where the receiver is realized in Matlab. Model of software GPS receiver The first level of processing is signal acquisition. Signal acquisition was realized using cyclic convolution. The acquisition process was carried out by measuring the parameters of the satellite signals, and these parameters are passed to the next level of processing. The next level consists of tracking the synchronization of the signal and extracting the navigation message bits. On the basis of the detected navigation message, the receiver calculates the position of a satellite and then, based on the position of the satellite, its own position. Tracking of GPS signal synchronization In order to select the most acceptable method of signal synchronization in the receiver, different methods of signal synchronization are compared: the early-late DLL (Delay Lock Loop), TDL (Tau Dither Loop)...
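
    The acquisition step described above (cyclic convolution, i.e. circular correlation of the received samples with a local code replica) can be sketched with FFTs as below; the signal is synthetic and the code is a random ±1 sequence rather than a true C/A Gold code, so this only illustrates the mechanics.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1023                                   # one C/A-code period (chips)

# Stand-in for a C/A code: random +/-1 chips (a real receiver uses Gold codes).
code = rng.choice([-1.0, 1.0], size=N)

# Received signal: the code circularly shifted by an unknown delay, plus noise.
true_delay = 350
received = np.roll(code, true_delay) + 0.8 * rng.standard_normal(N)

# Circular correlation via FFT: corr[k] = sum_n received[n] * code[n - k].
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real

estimated_delay = int(np.argmax(np.abs(corr)))
print("estimated code delay:", estimated_delay, " (true:", true_delay, ")")
```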

  7. Essay on Methods in Futures Studies and a Selective Bibliography

    DEFF Research Database (Denmark)

    Poulsen, Claus

    2005-01-01

    Futures studies is often conflated with science fiction or pop-futurism. Consequently there is a need for demarcation of what is futures studies and what is not. For the same reason the essay stresses the need for quality control to focus on futures research and its methods: Publications in futu...... programme are (only) partly reduced by applying Causal Layered Analysis as an internal quality control. The following selective bibliography is focussed on these methodological issues...

  8. Activity – based costing method

    Directory of Open Access Journals (Sweden)

    Èuchranová Katarína

    2001-06-01

    Full Text Available Activity based costing is a method of identifying and tracking the operating costs directly associated with processing items. It is the practice of focusing on some unit of output, such as a purchase order or an assembled automobile, and attempting to determine its total cost as precisely as possible based on the fixed and variable costs of the inputs. You use ABC to identify, quantify and analyze the various cost drivers (such as labor, materials, administrative overhead, and rework) and to determine which ones are candidates for reduction. A process is any activity that accepts inputs, adds value to these inputs for customers and produces outputs for these customers. The customer may be either internal or external to the organization. Every activity within an organization comprises one or more processes. Inputs, controls and resources are all supplied to the process. A process owner is the person responsible for performing and/or controlling the activity. Tracing costs through their connection to individual activities and processes is a modern theme today. The introduction of this method is connected with very important changes in the firm's processes. The ABC method is an instrument that brings a competitive advantage to the firm.
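
    A toy numerical sketch of the cost-driver logic described above (activity rates computed from budgeted activity costs and driver volumes, then applied to a cost object) is shown below; all figures and activity names are invented.

```python
# Activity-based costing sketch: assign overhead to a cost object via cost drivers.
activities = {
    #  activity         budgeted cost   driver             driver volume
    "order handling": (40_000.0,       "purchase orders",  2_000),
    "machine setup":  (90_000.0,       "setups",           600),
    "quality checks": (30_000.0,       "inspections",      1_500),
}

# Activity rate = budgeted activity cost / total driver volume.
rates = {name: cost / volume for name, (cost, _, volume) in activities.items()}

# Driver consumption of one product batch (illustrative figures).
consumption = {"order handling": 12, "machine setup": 3, "quality checks": 20}

total = 0.0
for name, used in consumption.items():
    assigned = rates[name] * used
    total += assigned
    print(f"{name:15s}: rate {rates[name]:8.2f} x {used:3d} = {assigned:9.2f}")
print(f"overhead assigned to the batch: {total:.2f}")
```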

  9. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  10. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  11. Computational intelligence-based polymerase chain reaction primer selection based on a novel teaching-learning-based optimisation.

    Science.gov (United States)

    Cheng, Yu-Huei

    2014-12-01

    Specific primers play an important role in polymerase chain reaction (PCR) experiments, and therefore it is essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, Teaching-Learning-Based Optimisation, to select specific and feasible primers. Experiments were performed with specified PCR product lengths of 150-300 bp and 500-800 bp and with three melting temperature formulae: Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula. The authors calculated the optimal frequency to estimate the quality of primer selection based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then fairly compared with the genetic algorithm (GA) and memetic algorithm (MA) for primer selection in the literature. The results show that the method easily found suitable primers satisfying the set primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the common tool Primer3 in terms of method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In the future, the use of the primers in the wet lab needs to be validated carefully to increase the reliability of the method.
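
    Of the three melting-temperature formulae mentioned, Wallace's rule is the simplest: Tm ≈ 2(A+T) + 4(G+C) °C for short oligonucleotides. A small sketch of screening candidate primers against a Tm and GC-content window with this rule is given below; the constraint values are illustrative, not the paper's exact settings.

```python
def wallace_tm(primer: str) -> float:
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C), valid for short oligos (~14-20 nt)."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2.0 * at + 4.0 * gc

def passes_constraints(primer: str, tm_range=(50.0, 62.0), gc_range=(0.4, 0.6)) -> bool:
    """Illustrative primer constraints: Tm window and GC-content window."""
    tm = wallace_tm(primer)
    gc = (primer.upper().count("G") + primer.upper().count("C")) / len(primer)
    return tm_range[0] <= tm <= tm_range[1] and gc_range[0] <= gc <= gc_range[1]

candidates = ["ATGCCGTAGCTAGGCATT", "ATATATATATATATAT", "GCGCGGCCGCGGCCGC"]
for c in candidates:
    print(c, wallace_tm(c), passes_constraints(c))
```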

  12. Evaluation of Stress Loaded Steel Samples Using Selected Electromagnetic Methods

    International Nuclear Information System (INIS)

    Chady, T.

    2004-01-01

    In this paper the magnetic leakage flux and eddy current methods were used to evaluate changes in material properties caused by stress. Seven samples made of ferromagnetic material with different levels of applied stress were prepared. First, the leakage magnetic fields were measured by scanning the surface of the specimens with a GMR gradiometer. Next, the same samples were evaluated using an eddy current sensor. A comparison between the results obtained from both methods was carried out. Finally, selected parameters of the measured signal were calculated and utilized to evaluate the level of the applied stress. A strong correspondence between the amount of applied stress and the maximum amplitude of the derivative was confirmed.

  13. The selective dynamical downscaling method for extreme-wind atlases

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Badger, Jake; Hahmann, Andrea N.

    2012-01-01

    A selective dynamical downscaling method is developed to obtain extreme-wind atlases for large areas. The method is general, efficient and flexible. The method consists of three steps: (i) identifying storm episodes for a particular area, (ii) downscaling of the storms using mesoscale modelling...... and (iii) post-processing. The post-processing generalizes the winds from the mesoscale modelling to standard conditions, i.e. 10-m height over a homogeneous surface with roughness length of 5 cm. The generalized winds are then used to calculate the 50-year wind using the annual maximum method for each...... mesoscale grid point. The generalization of the mesoscale winds through the post-processing provides a framework for data validation and for applying further the mesoscale extreme winds at specific places using microscale modelling. The results are compared with measurements from two areas with different...

  14. Predictive Active Set Selection Methods for Gaussian Processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2012-01-01

    We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two step alternating procedure of active set update rules and hyperparameter optimization based upon marginal...... high impact to the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria, the first one prefers a model with interpretable active set parameters whereas the second puts computational complexity first, thus a model...... with active set parameters that directly control its complexity. We also provide both theoretical and empirical support for our active set selection strategy being a good approximation of a full Gaussian process classifier. Our extensive experiments show that our approach can compete with state...

  15. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm-1) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic...
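
    Among the listed algorithms, interval PLS (iPLS) is conceptually the simplest: the spectrum is split into contiguous wavelength windows and a PLS model is cross-validated on each window separately, keeping the best-performing interval. A schematic sketch on synthetic spectra (not the biodiesel data of the paper) is:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 80, 200

# Synthetic "spectra": only wavelength columns 120-139 actually carry the property signal.
X = rng.standard_normal((n_samples, n_wavelengths))
y = X[:, 120:140].sum(axis=1) + 0.1 * rng.standard_normal(n_samples)

n_intervals = 10
width = n_wavelengths // n_intervals
results = []
for i in range(n_intervals):
    cols = slice(i * width, (i + 1) * width)
    pls = PLSRegression(n_components=3)
    mse = -cross_val_score(pls, X[:, cols], y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    results.append((mse, i))

best_mse, best_interval = min(results)
print(f"best interval: {best_interval} (columns {best_interval * width}-"
      f"{(best_interval + 1) * width - 1}), CV MSE {best_mse:.3f}")
```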

  16. Selection of components based on their importance

    International Nuclear Information System (INIS)

    Stvan, F.

    2004-12-01

    A proposal is presented for sorting components of the Dukovany nuclear power plant with respect to their importance. The classification scheme includes property priority, property criticality and property structure. Each area has its criteria with weight coefficients used to calculate the importance of each component by the Risk Priority Number method. The aim of the process is to generate a list of components in order of operating and safety importance, which will help spend funds to ensure operation and safety in an optimal manner. This proposal is linked to a proposal for a simple database which should serve to enter information and perform assessments. The present stage focused on a safety assessment of components categorized in safety classes BT1, BT2 and BT3 pursuant to Decree No. 76. The assessment was performed based on a PSE study for Level 1. The database includes inputs for entering financial data, which are represented by the potential damage resulting from a given failure and by the loss of MWh in financial terms. As further inputs, the failure incidence intensity and time of correction can be entered. Information regarding the property structure, represented by the degree of backup and reparability of the component, is the last input available.
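
    The weighted-importance ranking described above can be illustrated with a small scoring sketch; the criteria, weights and component scores below are invented and only show how such a list might be assembled, not the plant's actual scheme.

```python
# Illustrative weighted importance ranking of components.
# Each component is scored 1-10 on priority, criticality and structure criteria.
weights = {"priority": 0.5, "criticality": 0.3, "structure": 0.2}   # assumed weights

components = {
    "feedwater pump":       {"priority": 9, "criticality": 8, "structure": 6},
    "isolation valve":      {"priority": 7, "criticality": 9, "structure": 4},
    "instrumentation rack": {"priority": 5, "criticality": 6, "structure": 7},
}

def importance(scores: dict) -> float:
    """Weighted sum of the criterion scores for one component."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in sorted(components.items(), key=lambda kv: -importance(kv[1])):
    print(f"{name:22s} importance {importance(scores):.2f}")
```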

  17. Selective adsorption-desorption method for the enrichment of krypton

    International Nuclear Information System (INIS)

    Yuasa, Y.; Ohta, M.; Watanabe, A.; Tani, A.; Takashima, N.

    1975-01-01

    The selective adsorption-desorption method has been developed as an effective means of enriching krypton and xenon gases. A series of laboratory-scale tests was performed to provide some basic data for the method when applied to off-gas streams of nuclear power plants. In the first step of the enrichment process, krypton was adsorbed on solid adsorbents from dilute mixtures with air at temperatures ranging from -50 °C to -170 °C. After complete breakthrough was obtained, the adsorption bed was evacuated at low temperature by a vacuum pump. By combining these two steps, krypton was highly enriched on the adsorbents, and the enrichment factor for krypton was calculated as the product of the individual enrichment factors of each step. Two types of adsorbents, coconut charcoal and molecular sieve 5A, were used. Experimental results showed that the present method gave a greater enrichment factor than the conventional method, which uses the selective adsorption step only. (U.S.)

  18. Selection criteria for oxidation method in total organic carbon measurement.

    Science.gov (United States)

    Yoon, GeunSeok; Park, Sang-Min; Yang, Heuiwon; Tsang, Daniel C W; Alessi, Daniel S; Baek, Kitae

    2018-05-01

    During the measurement of total organic carbon (TOC), dissolved organic carbon is converted into CO2 by using high temperature combustion (HTC) or wet chemical oxidation (WCO). However, the criteria for selecting between the oxidation methods are not clear. In this study, the chemical structures of organic materials were considered as a key factor in selecting the oxidation method used. Most non-degradable organic compounds showed a similar oxidation efficiency with both methods, including natural organic compounds, dyes, and pharmaceuticals, and thus both methods are appropriate for measuring TOC in waters containing these compounds. However, only a fraction of the carbon in the halogenated compounds (perfluorooctanoic acid and trifluoroacetic acid) was oxidized using WCO, resulting in measured TOC values that are considerably lower than those determined by HTC. This result is likely due to the electronegativity of halogen elements, which inhibits the approach of electron-rich sulfate radicals in the WCO, and the higher bond strength of carbon-halogen pairs as compared to carbon-hydrogen bonds, which results in a lower degree of oxidation of the compounds. Our results indicate that WCO can be used to oxidize most organic compounds, but may not be appropriate to quantify TOC in organic carbon pools that contain certain halogenated compounds. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. SADA: Ecological Risk Based Decision Support System for Selective Remediation

    Science.gov (United States)

    Spatial Analysis and Decision Assistance (SADA) is freeware that implements terrestrial ecological risk assessment and yields a selective remediation design using its integral geographical information system, based on ecological and risk assessment inputs. Selective remediation ...

  20. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data

    KAUST Repository

    Abusamra, Heba

    2013-05-01

    Microarray technology has enriched the study of gene expression in such a way that scientists are now able to measure the expression levels of thousands of genes in a single experiment. Microarray gene expression data gained great importance in recent years due to its role in disease diagnoses and prognoses, which help to choose the appropriate treatment plan for patients. This technology has ushered in a new era in molecular classification. Interpreting gene expression data remains a difficult problem and an active research area due to its native nature of “high dimensional low sample size”. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed in this case to aid in correctly classifying different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This thesis aims at a comparative study of state-of-the-art feature selection methods, classification methods, and the combination of them, based on gene expression data. We compared the efficiency of three different classification methods, including support vector machines, k-nearest neighbor and random forest, and eight different feature selection methods, including information gain, twoing rule, sum minority, max minority, gini index, sum of variances, t-statistics, and one-dimension support vector machine. Five-fold cross validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used for this study. Different experiments have been applied to compare the performance of the classification methods with and without performing feature selection. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in...
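
    The kind of comparison described (several feature-selection scores crossed with several classifiers, evaluated by five-fold cross-validation) can be sketched compactly with generic scikit-learn components on synthetic data; the actual study uses glioma microarray data and selection criteria not all available in scikit-learn.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic "high dimensional, low sample size" stand-in for microarray data.
X, y = make_classification(n_samples=60, n_features=2000, n_informative=30,
                           random_state=0)

selectors = {"f_classif": f_classif, "mutual_info": mutual_info_classif}
classifiers = {"SVM": SVC(kernel="linear"),
               "kNN": KNeighborsClassifier(n_neighbors=3),
               "RF": RandomForestClassifier(n_estimators=200, random_state=0)}

# Feature selection is done inside the pipeline, so each CV fold selects its own genes.
for s_name, score_fn in selectors.items():
    for c_name, clf in classifiers.items():
        pipe = make_pipeline(SelectKBest(score_fn, k=50), clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{s_name:12s} + {c_name:3s}: 5-fold CV accuracy {acc:.3f}")
```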

  1. Selected Tools and Methods from Quality Management Field

    Directory of Open Access Journals (Sweden)

    Kateřina BRODECKÁ

    2009-06-01

    Full Text Available The following paper describes selected tools and methods from the Quality Management field and their practical application to defined examples. The solved examples were elaborated in the form of electronic support. This electronic support, elaborated in detail, gives students the opportunity to thoroughly practice specific issues, helps them prepare for exams and consequently leads to an improvement in education. Students of the combined study form in particular will appreciate this support. The paper specifies the project objectives, the subjects that will be covered by the mentioned support, the target groups, the structure and the way the electronic exercise book is elaborated. The emphasis is not only on the manual solution of selected examples, which may help students understand the principles and relationships, but also on solving selected examples and interpreting their results using software support. The statistical software Statgraphics Plus v 5.0 is used in the support because it is free to use for all students of the faculty. An exemplary problem from the subject Basic Statistical Methods of Quality Management is also part of this paper.

  2. Selecting device for processing method of radioactive gaseous wastes

    International Nuclear Information System (INIS)

    Sasaki, Ryoichi; Komoda, Norihisa.

    1976-01-01

    Object: To extend the replacement interval of a filter for adsorbing radioactive material, by discharging waste gas containing radioactive material produced from atomic power equipment after treating it by a method selected on the basis of wind direction measurements. Structure: Exhaust gas containing radioactive material produced from atomic power equipment is discharged after it is treated by a method selected on the basis of the results of wind direction measurement. For instance, in the case of a sea wind, the waste gas passes through a route selected for this case and is discharged through the waste gas outlet. When the sea wind disappears (that is, when a land wind or calm sets in), the exhaust gas is switched to the route for cases other than a sea wind, so that it passes through a filter consisting of activated carbon, where the radioactive material is removed through adsorption. The waste gas, now free from radioactive material, is discharged through the waste gas outlet. (Moriyama, K.)

  3. Method selection for sustainability assessments: The case of recovery of resources from waste water.

    Science.gov (United States)

    Zijp, M C; Waaijers-van der Loop, S L; Heijungs, R; Broeren, M L M; Peeters, R; Van Nieuwenhuijzen, A; Shen, L; Heugens, E H W; Posthuma, L

    2017-07-15

    Sustainability assessments provide scientific support in decision procedures towards sustainable solutions. However, in order to contribute in identifying and choosing sustainable solutions, the sustainability assessment has to fit the decision context. Two complicating factors exist. First, different stakeholders tend to have different views on what a sustainability assessment should encompass. Second, a plethora of sustainability assessment methods exist, due to the multi-dimensional characteristic of the concept. Different methods provide other representations of sustainability. Based on a literature review, we present a protocol to facilitate method selection together with stakeholders. The protocol guides the exploration of i) the decision context, ii) the different views of stakeholders and iii) the selection of pertinent assessment methods. In addition, we present an online tool for method selection. This tool identifies assessment methods that meet the specifications obtained with the protocol, and currently contains characteristics of 30 sustainability assessment methods. The utility of the protocol and the tool are tested in a case study on the recovery of resources from domestic waste water. In several iterations, a combination of methods was selected, followed by execution of the selected sustainability assessment methods. The assessment results can be used in the first phase of the decision procedure that leads to a strategic choice for sustainable resource recovery from waste water in the Netherlands. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A multicore based parallel image registration method.

    Science.gov (United States)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search which is often computationally expensive. We introduce a non-regular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform.
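
    A small sketch of the landmark-partitioning idea, grouping landmark points with K-means so that each spatially compact group can be dispatched to one processing core, is shown below; the landmark coordinates are random and the core count is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
landmarks = rng.random((500, 2)) * 512          # (x, y) landmark points in a 512x512 image
n_cores = 8                                     # assumed number of processing cores

# Non-regular data partition: one spatially compact group of landmarks per core,
# so each core works on nearby correspondences and data transfer stays local.
km = KMeans(n_clusters=n_cores, n_init=10, random_state=0).fit(landmarks)

for core in range(n_cores):
    group = landmarks[km.labels_ == core]
    print(f"core {core}: {len(group):3d} landmarks, "
          f"centroid {km.cluster_centers_[core].round(1)}")
```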

  5. Manual of selected physico-chemical analytical methods. IV

    International Nuclear Information System (INIS)

    Beran, M.; Klosova, E.; Krtil, J.; Sus, F.; Kuvik, V.; Vrbova, L.; Hamplova, M.; Lengyel, J.; Kelnar, L.; Zakouril, K.

    1990-11-01

    The Central Testing Laboratory of the Nuclear Research Institute at Rez has for a decade been participating in the development of analytical procedures and has been providing analyses of samples of different types and origins. The analytical procedures developed have been published in specialized journals, and a number of them in the Manuals of analytical methods, in three parts. The 4th part of the Manual contains selected physico-chemical methods developed or modified by the Laboratory in the years 1986-1990 within the project ''Development of physico-chemical analytical methods''. In most cases, the techniques are intended for non-nuclear applications. Some can find wider application, especially in analyses of environmental samples. Others have been developed for specific cases of sample analysis or require special instrumentation (a mass spectrometer), which partly restricts their applicability at other institutions. (author)

  6. Structuring AHP-based maintenance policy selection

    NARCIS (Netherlands)

    Goossens, Adriaan; Basten, Robertus Johannes Ida; Hummel, J. Marjan; van der Wegen, Leonardus L.M.

    2015-01-01

    We aim to structure the maintenance policy selection process for ships, using the Analytic Hierarchy Process (AHP). Maintenance is an important contributor to reach the intended life-time of capital technical assets, and it is gaining increasing interest and relevance. A maintenance policy is a

  7. Research on Big Data Attribute Selection Method in Submarine Optical Fiber Network Fault Diagnosis Database

    Directory of Open Access Journals (Sweden)

    Chen Ganlang

    2017-11-01

    Full Text Available At present, in the fault diagnosis database of submarine optical fiber networks, the attribute selection of big data is completed by detecting the attributes of the data, so the accuracy of big data attribute selection cannot be guaranteed. In this paper, a big data attribute selection method based on support vector machines (SVM) for the fault diagnosis database of submarine optical fiber networks is proposed. Big data in the optical fiber network fault diagnosis database are mined and their attribute weights calculated; attribute classification is then completed according to attribute weight, so as to complete the attribute selection of big data. Experimental results prove that the proposed method can improve the accuracy of big data attribute selection in the fault diagnosis database of submarine optical fiber networks and has high practical value.
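
    One common way to realise the "calculate its attribute weight" step with an SVM is to train a linear SVM and read the magnitudes of its coefficients as attribute weights, then keep the highest-weighted attributes. The sketch below shows that pattern on synthetic data; it is one interpretation of the approach, not the paper's code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in for fault-diagnosis records with many attributes.
X, y = make_classification(n_samples=400, n_features=30, n_informative=6,
                           random_state=0)
X = StandardScaler().fit_transform(X)

svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)

# Attribute weight = absolute value of the linear SVM coefficient.
weights = np.abs(svm.coef_).ravel()
top = np.argsort(weights)[::-1][:8]              # keep the 8 highest-weighted attributes
print("selected attribute indices:", top)
print("their weights:", np.round(weights[top], 3))
```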

  8. Selectively Encrypted Pull-Up Based Watermarking of Biometric data

    Science.gov (United States)

    Shinde, S. A.; Patel, Kushal S.

    2012-10-01

    Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. thumb impressions) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking the data are embedded in an image container and are only retrieved if the secret key is available. This container image is encrypted to provide more security against attack. As wireless devices are equipped with a battery as their power supply, they have limited computational capabilities; therefore, to reduce energy consumption we use the method of selective encryption of the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. By using a selective encryption mechanism, we expect better time efficiency during encryption as well as decryption. A significant reduction in the error rate is expected to be achieved by the bit pull-up method.

  9. Variable selection methods in PLS regression - a comparison study on metabolomics data

    DEFF Research Database (Denmark)

    Karaman, İbrahim; Hedemann, Mette Skou; Knudsen, Knud Erik Bach

    The aim of the metabolomics study was to investigate the metabolic profile in pigs fed various cereal fractions, with special attention to the metabolism of lignans, using an LC-MS-based metabolomics approach. References 1. Lê Cao KA, Rossouw D, Robert-Granié C, Besse P: A Sparse PLS for Variable Selection when...... integrated approach. Due to the high number of variables in the data sets (both raw data and after peak picking), the selection of important variables in an explorative analysis is difficult, especially when different sets of metabolomics data need to be related. Variable selection (or removal of irrelevant...... different strategies for variable selection with the PLSR method were considered and compared with respect to the selected subset of variables and the possibility for biological validation. Sparse PLSR [1] as well as PLSR with Jack-knifing [2] was applied to the data in order to achieve variable selection prior...

  10. SORIOS – A method for evaluating and selecting environmental certificates and labels

    DEFF Research Database (Denmark)

    Kikkenborg Pedersen, Dennis; Dukovska-Popovska, Iskra; Ola Strandhagen, Jan

    2012-01-01

    This paper presents a general method for evaluating and selecting environmental certificates and labels for companies to use on products and services. The method was developed based on a case study using a Grounded Theory approach. The result is a generalized six-step method that features an initial...... searching strategy and an evaluation model that weighs the prerequisites, rewards and organization of a certificate or label against the strategic needs of a company.

  11. Analysis of Various Frequency Selective Shielding Glass by FDTD method

    OpenAIRE

    笠嶋, 善憲; Kasashima, Yoshinori

    2012-01-01

    A frequency selective shielding (FSS) glass is a sheet of glass printed with many antennas of the same size, and it has high shielding properties at one specific frequency. In this work, the author analyzed the characteristics of various FSSs with different antenna types by the FDTD method. The antenna types are cross dipole, circular loop, square loop, circular patch, and square patch. As a result, the FSSs can be composed of the various types of antennas, and the FSSs have broad-band shielding c...

  12. Feature selection gait-based gender classification under different circumstances

    Science.gov (United States)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes a gender classification method based on human gait features and investigates the problem of two variations, clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different feature sets are proposed in this method. The first is the spatio-temporal distance set, which deals with the distances between different parts of the human body (such as the feet, knees, hands, height and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets we divided the human body into two parts, the upper and lower body, based on the golden ratio proportion. In this paper, we have adopted a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize the discriminating significance of the features. Finally, k-Nearest Neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
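
    The Fisher score used above for feature selection can be computed per feature as the ratio of between-class scatter to within-class scatter. A minimal sketch on synthetic two-class data standing in for the gait features is:

```python
import numpy as np

def fisher_score(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fisher score per feature: sum_c n_c (mu_c - mu)^2 / sum_c n_c sigma_c^2."""
    scores = np.zeros(X.shape[1])
    mu = X.mean(axis=0)
    for feat in range(X.shape[1]):
        num, den = 0.0, 0.0
        for c in np.unique(y):
            xc = X[y == c, feat]
            num += len(xc) * (xc.mean() - mu[feat]) ** 2
            den += len(xc) * xc.var()
        scores[feat] = num / den if den > 0 else 0.0
    return scores

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 12))
y = rng.integers(0, 2, size=200)          # two classes (labels illustrative)
X[y == 1, :3] += 1.0                      # make the first 3 features discriminative

scores = fisher_score(X, y)
selected = np.argsort(scores)[::-1][:5]   # keep the 5 highest-scoring features
print("selected feature indices:", selected)
```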

  13. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.

  14. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    Science.gov (United States)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function and in turn how they may be optimized are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as its mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with high rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission-error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process...
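
    A sketch of the tile-clustering step, summarising tiles by simple spectral statistics and grouping them with affinity propagation so that one representative ("exemplar") tile per cluster can seed new training samples, is given below; the tile statistics are synthetic and the band choice is an assumption.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Per-tile spectral summary for tiles with high commission-error rates:
# e.g. mean and std of red, green, blue, NIR bands (8 statistics per tile).
tile_stats = rng.random((300, 8))

X = StandardScaler().fit_transform(tile_stats)
ap = AffinityPropagation(random_state=0).fit(X)

exemplars = ap.cluster_centers_indices_          # indices of representative tiles
print(f"{len(exemplars)} clusters found")
print("representative tile indices:", exemplars[:10], "...")
```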

  15. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data of Glioma

    KAUST Repository

    Abusamra, Heba

    2013-11-01

    Microarray gene expression data gained great importance in recent years due to its role in disease diagnoses and prognoses which help to choose the appropriate treatment plan for patients. This technology has ushered in a new era in molecular classification. Interpreting gene expression data remains a difficult problem and an active research area due to its inherent nature of “high dimensional, low sample size”. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed in this case to help correctly classify different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This paper presents a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three different classification methods including: support vector machines, k-nearest neighbor and random forest, and eight different feature selection methods, including: information gain, twoing rule, sum minority, max minority, Gini index, sum of variances, t-statistics, and one-dimensional support vector machine. Five-fold cross validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used in the experiments. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship among the features selected by the different feature selection methods is investigated, and the most frequently selected features in each fold among all methods for both datasets are evaluated.
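
    A compact sketch of the comparison pipeline on synthetic data (the study's glioma datasets and its eight scoring functions are not reproduced here): a univariate filter stands in for the feature selection step, and three classifiers are evaluated with five-fold cross-validation.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # synthetic "high dimensional, low sample size" stand-in for microarray data
    X, y = make_classification(n_samples=80, n_features=2000, n_informative=30,
                               random_state=0)

    classifiers = {
        "SVM": SVC(kernel="linear"),
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    }

    for name, clf in classifiers.items():
        # doing feature selection inside the pipeline avoids selection bias
        pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=50), clf)
        scores = cross_val_score(pipe, X, y, cv=5)
        print(f"{name}: mean accuracy = {scores.mean():.3f}")
    ```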

  16. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data of Glioma

    KAUST Repository

    Abusamra, Heba

    2013-01-01

    Microarray gene expression data gained great importance in recent years due to its role in disease diagnoses and prognoses which help to choose the appropriate treatment plan for patients. This technology has ushered in a new era in molecular classification. Interpreting gene expression data remains a difficult problem and an active research area due to its inherent nature of “high dimensional, low sample size”. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed in this case to help correctly classify different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This paper presents a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three different classification methods including: support vector machines, k-nearest neighbor and random forest, and eight different feature selection methods, including: information gain, twoing rule, sum minority, max minority, Gini index, sum of variances, t-statistics, and one-dimensional support vector machine. Five-fold cross validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used in the experiments. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship among the features selected by the different feature selection methods is investigated, and the most frequently selected features in each fold among all methods for both datasets are evaluated.

  17. Development of an optimal velocity selection method with velocity obstacle

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)

    2015-08-15

    The velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic obstacles and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area. A robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concepts of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized available velocity map for a robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. From the output of the cost function, even a velocity component that will cause a collision in the future can be chosen as a final velocity if the pass-time is sufficiently long.
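
    An illustrative sketch only, with a made-up cost function rather than the authors' formulation: velocities on a lattice are scored by deviation from the preferred velocity plus, for colliding velocities, a penalty that shrinks as the estimated pass-time (time to closest approach) grows, so a colliding velocity can still win if the collision lies far enough in the future.

    ```python
    import numpy as np

    def choose_velocity(v_pref, obstacles, v_max=1.0, n=41,
                        w_goal=1.0, w_time=0.5, horizon=10.0):
        """Pick a velocity from a lattice; `obstacles` holds hypothetical
        (relative_position, obstacle_velocity, combined_radius) tuples."""
        vx, vy = np.meshgrid(np.linspace(-v_max, v_max, n),
                             np.linspace(-v_max, v_max, n))
        # base cost: deviation from the preferred, goal-directed velocity
        cost = w_goal * np.hypot(vx - v_pref[0], vy - v_pref[1])
        for p, v_obs, r in obstacles:
            rvx, rvy = vx - v_obs[0], vy - v_obs[1]   # velocity relative to obstacle
            speed2 = rvx**2 + rvy**2 + 1e-9
            t_cpa = np.clip((p[0]*rvx + p[1]*rvy) / speed2, 0.0, horizon)
            d_cpa = np.hypot(p[0] - rvx*t_cpa, p[1] - rvy*t_cpa)
            # colliding velocities are penalised by how soon the collision occurs
            cost += np.where(d_cpa < r, w_time * (horizon - t_cpa), 0.0)
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        return vx[i, j], vy[i, j]

    print(choose_velocity(v_pref=(0.8, 0.0),
                          obstacles=[((2.0, 0.1), (-0.5, 0.0), 0.6)]))
    ```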

  18. Selective Osmotic Shock (SOS)-Based Islet Isolation for Microencapsulation.

    Science.gov (United States)

    Enck, Kevin; McQuilling, John Patrick; Orlando, Giuseppe; Tamburrini, Riccardo; Sivanandane, Sittadjody; Opara, Emmanuel C

    2017-01-01

    Islet transplantation (IT) has recently been shown to be a promising alternative to pancreas transplantation for reversing diabetes. IT requires the isolation of the islets from the pancreas, and these islets can be used to fabricate a bio-artificial pancreas. Enzymatic digestion is the current gold standard procedure for islet isolation but has lingering concerns. One such concern is that it has been shown to damage the islets due to nonselective tissue digestion. This chapter provides a detailed description of a nonenzymatic method that we are exploring in our lab as an alternative to current enzymatic digestion procedures for islet isolation from human and nonhuman pancreatic tissues. This method is based on selective destruction and protection of specific cell types and has been shown to leave the extracellular matrix (ECM) of islets intact, which may thus enhance islet viability and functionality. We also show that these SOS-isolated islets can be microencapsulated for transplantation.

  19. Selection of suitable NDT methods for building inspection

    Science.gov (United States)

    Pauzi Ismail, Mohamad

    2017-11-01

    Construction of modern structures requires good quality concrete with adequate strength and durability. Several accidents occurred in civil construction and were reported in the media. Such accidents were due to poor workmanship and a lack of systematic monitoring during construction. In addition, water leakage and cracking in residential houses are commonly reported. Based on these facts, monitoring the quality of concrete in structures is becoming an increasingly important subject. This paper describes major Non-destructive Testing (NDT) methods for evaluating the structural integrity of concrete buildings. Some interesting findings during actual NDT inspections on site are presented. The NDT methods used are explained, compared and discussed. Suitable methods are suggested as the minimum set of NDT methods to cover the parameters required in the inspection.

  20. An Ensemble Method with Integration of Feature Selection and Classifier Selection to Detect the Landslides

    Science.gov (United States)

    Zhongqin, G.; Chen, Y.

    2017-12-01

    Quickly and automatically identifying the spatial distribution of landslides is essential for the prevention, mitigation and assessment of landslide hazards. It remains a challenging job owing to the complicated characteristics and vague boundaries of landslide areas in the image. High resolution remote sensing images have multiple scales, complex spatial distributions and abundant features; object-oriented image classification methods can make full use of this information and thus effectively detect landslides after the hazard has happened. In this research we present a new semi-supervised workflow, taking advantage of recent object-oriented image analysis and machine learning algorithms, to quickly locate landslides of different origins in some areas of southwest China. Besides a sequence of image segmentation, feature selection, object classification and error testing, this workflow ensembles feature selection and classifier selection. The features utilized in this study were normalized difference vegetation index (NDVI) change, textural features derived from the gray level co-occurrence matrices (GLCM), spectral features, etc. The improvement in this study is that the algorithm significantly removes redundant features and the classifiers are fully used. All these improvements lead to higher accuracy in determining the shape of landslides in high resolution remote sensing images, and in particular to flexibility aimed at different kinds of landslides.
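
    A small sketch of one of the listed features, the NDVI change between pre- and post-event images (the band arrays are hypothetical); a strong NDVI drop indicating vegetation loss is one cue the classifiers can combine with GLCM texture and spectral features.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        """Normalized difference vegetation index for float band arrays."""
        return (nir - red) / (nir + red + eps)

    def ndvi_change(nir_pre, red_pre, nir_post, red_post):
        """Per-pixel NDVI difference; strongly negative values suggest vegetation loss."""
        return ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)

    # toy 3x3 example: vegetation (high NIR) turning into bare ground
    nir_pre, red_pre = np.full((3, 3), 0.6), np.full((3, 3), 0.1)
    nir_post, red_post = np.full((3, 3), 0.25), np.full((3, 3), 0.2)
    print(ndvi_change(nir_pre, red_pre, nir_post, red_post))
    ```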

  1. Technique for Increasing the Selectivity of the Method of Laser Fragmentation/Laser-Induced Fluorescence

    Science.gov (United States)

    Bobrovnikov, S. M.; Gorlov, E. V.; Zharkov, V. I.

    2018-05-01

    A technique for increasing the selectivity of the method of detecting high-energy materials (HEMs) based on laser fragmentation of HEM molecules with subsequent laser excitation of fluorescence of the characteristic NO fragments from the first vibrational level of the ground state is suggested.

  2. Method for fluoride routine determination in urine of personnel exposed, by ion selective electrode

    International Nuclear Information System (INIS)

    Pires, M.A.F.; Bellintani, S.A.

    1986-01-01

    A simple, fast and sensitive method is outlined for the determination of fluoride in the urine of workers who handle fluorine compounds. The determination is based on the measurement of fluoride by ion selective electrode. Cationic interferents such as Ca²⁺, Mg²⁺, Fe³⁺ and Al³⁺ are complexed by EDTA and citric acid. (Author) [pt

  3. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  4. Selection of candidate plus phenotypes of Jatropha curcas L. using method of paired comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, D.K. [Silviculture Division, Arid Forest Research Institute, P.O. Krishi Mandi, New Pali Road, Jodhpur 342005, Rajasthan (India)

    2009-03-15

    Jatropha curcas L. (Euphorbiaceae) is an oil bearing species with multiple uses and considerable potential as a biodiesel crop. The present communication deals with the method of selecting plus phenotypes of J. curcas for exploiting genetic variability for further improvement. Candidate plus tree selection is the first and most important stage in any tree improvement programme. The selection of candidate plus plants (CPPs) is based upon various important attributes associated with the species and their relative ranking. Relative preference between various traits and scoring for each trait has been worked out by using the method of paired comparisons for the selection of CPP in J. curcas L. The most important ones are seed and oil yields. (author)
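
    A brief sketch of the method of paired comparisons with hypothetical traits and judgements: a preference matrix records, for every pair of traits, which one is judged more important, and the win totals give the relative trait weights used to score candidate plus plants.

    ```python
    import numpy as np

    traits = ["seed yield", "oil yield", "branching", "seed weight"]

    # prefer[i, j] = 1 if trait i is judged more important than trait j
    # (hypothetical judgements; the diagonal is ignored)
    prefer = np.array([
        [0, 1, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 1],
        [0, 0, 0, 0],
    ])

    wins = prefer.sum(axis=1)        # number of times each trait was preferred
    weights = wins / wins.sum()      # normalised trait weights
    for trait, w in sorted(zip(traits, weights), key=lambda x: -x[1]):
        print(f"{trait}: {w:.2f}")
    ```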

  5. Content-based image retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Broek, E.L. van den; Vuurpijl, L.G.; Kisters, P. M. F.; Schmid, J.C.M. von; Moens, M.F.; Busser, R. de; Hiemstra, D.; Kraaij, W.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged as non-intuitive and difficult to use. Our interface copes with these usability problems. It is based on 11

  6. Content-Based Image Retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Moens, Marie-Francine; van den Broek, Egon; Vuurpijl, L.G.; de Brusser, Rik; Kisters, P.M.F.; Hiemstra, Djoerd; Kraaij, Wessel; von Schmid, J.C.M.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged as non-intuitive and difficult to use. Our interface copes with these usability problems. It is based on 11

  7. Tungsten based catalysts for selective deoxygenation

    NARCIS (Netherlands)

    Gosselink, R.W.|info:eu-repo/dai/nl/326164081; Stellwagen, D.R.; Bitter, J.H.|info:eu-repo/dai/nl/160581435

    2013-01-01

    Over the past decades, impending oil shortages combined with petroleum market instability have prompted a search for a new source of both transportation fuels and bulk chemicals. Renewable bio-based feedstocks such as sugars, grains, and seeds are assumed to be capable of contributing to a

  8. Breast cancer tumor classification using LASSO method selection approach

    International Nuclear Information System (INIS)

    Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M.

    2016-10-01

    Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed at automatically classifying tumor lesions as malign or benign as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile-rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with a high probability of malignant findings, aiding in the detection of potential malign cases as a second opinion to the radiologist. (Author)
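
    A hedged sketch of the modelling step on synthetic data (the sample size matches the study, but the features, preprocessing details and regularisation strength are placeholders, not the BCDR pipeline): an L1-penalised logistic regression performs LASSO-style feature selection, and leave-one-out predictions give the cross-validated AUC.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import QuantileTransformer

    # synthetic stand-in for 143 segmented lesions with extracted image features
    X, y = make_classification(n_samples=143, n_features=50, n_informative=8,
                               weights=[0.6, 0.4], random_state=1)

    # quantile scaling loosely mimics the percentile-rank standardisation;
    # the L1 penalty drives the weights of irrelevant features to zero
    model = make_pipeline(
        QuantileTransformer(n_quantiles=100, output_distribution="uniform"),
        LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    )

    proba = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")
    print("LOOCV AUC:", round(roc_auc_score(y, proba[:, 1]), 3))
    ```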

  9. Selective methods for polyphenols and sulphur dioxide determination in wines.

    Science.gov (United States)

    García-Guzmán, Juan J; Hernández-Artiga, María P; Palacios-Ponce de León, Lourdes; Bellido-Milla, Dolores

    2015-09-01

    A critical review of the methods recommended by international bodies and widely used in the winery industry and research studies was performed. A laccase biosensor was applied to the selective determination of polyphenols in wines. The biosensor response was characterised; it responds mainly to o-diphenols, which are the principal polyphenols responsible for the stability and sensory qualities of wines. The spectrophotometric method to determine free and total sulphur dioxide recommended for beers was applied directly to wines. A sampling of 14 red and white wines was performed and they were analysed for the biosensor polyphenol index (IBP) and sulphur dioxide concentration (SO2). The antioxidant capacity by the ABTS(+) spectrophotometric method was also determined. A correlation study was performed to elucidate the influence of the polyphenols and SO2 on the stability of the wines. High correlations were found between IBP and antioxidant capacity and a low correlation between SO2 and antioxidant capacity. To evaluate the benefits of wine drinking a new parameter (IBP/SO2) is proposed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Breast cancer tumor classification using LASSO method selection approach

    Energy Technology Data Exchange (ETDEWEB)

    Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed at automatically classifying tumor lesions as malign or benign as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile-rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with a high probability of malignant findings, aiding in the detection of potential malign cases as a second opinion to the radiologist. (Author)

  11. Selection of disposal contractor by multi criteria decision making methods

    Directory of Open Access Journals (Sweden)

    Cenker Korkmazer

    2016-08-01

    Full Text Available Hazardous waste is a substance that threatens people and the environment in case of improper storage, disposal and transport, due to its concentration and physical and chemical properties. Companies producing hazardous waste as a result of their activities mostly do not have their own disposal facilities. In addition, they do not pay enough attention to determining the right contractor as a disposal facility. On the other hand, there are various qualitative and quantitative criteria affecting the selection of the contractor and conflicting with each other. The aim of the performed study is to assist one of these companies producing hazardous waste in the selection of the best contractor that eliminates hazardous waste in an economical and harmless way. In the study, contractor weights in percentage are calculated by using the Analytic Network Process (ANP), one of the multi-criteria decision making (MCDM) methods widely used in the literature, which considers both qualitative and quantitative criteria. In the next step, with the help of the mathematical model, the type of hazardous waste to be given to each contractor is identified. This integrated approach can be used as a guide for similar firms.

  12. A Fourier transform method for the selection of a smoothing interval

    International Nuclear Information System (INIS)

    Kekre, H.B.; Madan, V.K.; Bairi, B.R.

    1989-01-01

    A novel method for the selection of a smoothing interval for the widely used Savitzky and Golay's smoothing filter is proposed. Complementary bandwidths for the nuclear spectral data and the smoothing filter are defined. The criterion for the selection of smoothing interval is based on matching the bandwidths of the spectral data to the filter. Using the above method five real observed spectral peaks of different full width at half maximum, viz. 23.5, 19.5, 17, 8.5 and 6.5 channels, were smoothed and the results are presented. (orig.)
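
    A rough sketch of the bandwidth-matching idea, not the paper's exact criterion: the spectrum's effective bandwidth is estimated from its Fourier magnitude, each candidate Savitzky-Golay window's bandwidth from the FFT of its impulse response, and the widest window whose passband still covers the signal bandwidth is kept. All thresholds below are assumptions.

    ```python
    import numpy as np
    from scipy.signal import savgol_coeffs

    def bandwidth_index(mag, energy_frac=0.99):
        """Frequency-bin index below which `energy_frac` of the energy lies."""
        e = np.cumsum(mag ** 2)
        return np.searchsorted(e, energy_frac * e[-1])

    def pick_window(y, polyorder=2, windows=range(5, 41, 2), n_fft=512):
        freqs = np.fft.rfftfreq(n_fft)
        sig_bw = freqs[bandwidth_index(np.abs(np.fft.rfft(y, n_fft)))]
        best = windows[0]
        for w in windows:
            resp = np.abs(np.fft.rfft(savgol_coeffs(w, polyorder), n_fft))
            # passband edge: where the filter response first falls to half amplitude
            edge = freqs[np.argmax(resp < 0.5)] if (resp < 0.5).any() else freqs[-1]
            if edge >= sig_bw:   # wider windows smooth more, so keep the widest valid one
                best = w
        return best

    # toy peak: a Gaussian of FWHM of roughly 12 channels plus a little noise
    x = np.arange(256)
    y = np.exp(-0.5 * ((x - 128) / 5.0) ** 2) \
        + 0.01 * np.random.default_rng(0).normal(size=256)
    print("selected smoothing window:", pick_window(y))
    ```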

  13. Decision Support System For Approval New Student And Majoring Selection Based On Student’s Interest And Talent By Fuzzy Multiple Decision Making, Simple Additive Weighting And Buble Sort Method In SMK Telekomunikasi Tunas Harapan

    Directory of Open Access Journals (Sweden)

    Dewi Nurdiyah

    2016-11-01

    Full Text Available The Decision Support System for New Student Acceptance aims to help the decision maker, the New Student Acceptance Committee, select prospective new students based on eight criteria: registration number, average National Examination score, medical test, interview, achievements, parents' monthly salary, number of siblings still in school, and administration; it also gives recommendations for the major of accepted students based on their interests and talents. There are four majors in SMK Telekomunikasi Tunas Harapan: Rekayasa Perangkat Lunak (RPL), Teknik Komputer dan Jaringan (TKJ), Multimedia and Teknik Kendaraan Ringan (TKR). Talents are measured by mathematics, electronics, drawing and physics tests.
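
    A small sketch of the Simple Additive Weighting (SAW) step with made-up criteria weights and already-numeric scores: benefit criteria are normalised against the column maximum, weighted, and summed, and sorting the totals ranks the applicants (the fuzzy scoring and the bubble-sort step of the original system are omitted here).

    ```python
    import numpy as np

    applicants = ["A", "B", "C"]
    # rows = applicants, columns = criteria scores (hypothetical values)
    scores = np.array([
        [85.0, 7.5, 8.0, 9.0],
        [78.0, 9.0, 6.5, 8.5],
        [90.0, 6.0, 9.0, 7.0],
    ])
    weights = np.array([0.4, 0.2, 0.2, 0.2])   # criterion weights, summing to 1

    normalised = scores / scores.max(axis=0)    # benefit-type normalisation
    totals = normalised @ weights
    for name, value in sorted(zip(applicants, totals), key=lambda x: -x[1]):
        print(f"{name}: {value:.3f}")
    ```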

  14. Selected Methods For Increases Reliability The Of Electronic Systems Security

    Directory of Open Access Journals (Sweden)

    Paś Jacek

    2015-11-01

    Full Text Available The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), for example a fire alarm system (SSP). Reliability of the SSP, in the descriptive sense, is its capacity to preserve and carry out the preset function (e.g. fire protection of an airport, a port, a logistics base, etc.) at a certain time and under certain conditions (e.g. environmental), despite possible non-compliance by a specific subset of elements of this system. An analysis of the available literature on ESS-SSP shows that studies on methods of increasing reliability are not available (several works address similar topics, but with respect to burglary and robbery, i.e. intrusion). The analysis is based on the set of all paths in the system that keep the SSP suitable for the mentioned fire-event scenario (devices critical for security).

  15. Method of selecting optimum cross arm lengths for a 750 kV transmission line

    Energy Technology Data Exchange (ETDEWEB)

    Aleksandrov, G N; Olorokov, V P

    1965-01-01

    A method is presented, based on both technical and economic considerations, for selecting cross arm lengths for intermediate poles of power transmission lines according to the effects of internal overvoltages, employing methods from probability theory and mathematical statistics. The problem of optimum pole size is considered in terms of the effect of internal overvoltages for a prescribed maximum level of 2.1 p.u. currently used in the USSR for the design of 750 kV lines.

  16. Method for routine determination of fluoride in urine by selective ion- electrode

    International Nuclear Information System (INIS)

    Pires, M.A.F.; Bellintani, S.A.

    1985-01-01

    A simple, fast and sensitive method is outlined for determining fluoride in urine of workers who handle fluoride compounds. The determination is based on the measurement of fluoride by ion selective electrode. Cationic interferents like Ca²⁺, Mg²⁺, Fe³⁺ and Al³⁺ are complexed by EDTA and citric acid. Common anions present in urine, such as Cl⁻, PO₄³⁻ and SO₄²⁻, do not interfere in the method. (Author) [pt

  17. Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree

    Science.gov (United States)

    Kim, Jong Kyu; Kim, Nam Soo

    In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. In order to reduce computation while maintaining good performance, a decision tree classifier is adopted, with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not increase the memory requirement significantly. Through an evaluation test on a database covering both speech and music materials, the proposed method is found to achieve a much better mode selection accuracy compared with the open-loop mode selection module in the AMR-WB+.
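
    A hedged sketch of the classifier set-up (features and labels are synthetic stand-ins for the encoder's per-frame features and closed-loop mode decisions): a cost-complexity-pruned decision tree keeps the model small while approximating the closed-loop selection.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # synthetic frame features with closed-loop mode labels as the four classes
    X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                               n_classes=4, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # ccp_alpha prunes the tree, trading a little accuracy for a much smaller model
    tree = DecisionTreeClassifier(ccp_alpha=1e-3, random_state=0).fit(X_tr, y_tr)
    print("leaves:", tree.get_n_leaves(),
          " accuracy:", round(tree.score(X_te, y_te), 3))
    ```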

  18. Cermet based solar selective absorbers : further selectivity improvement and developing new fabrication technique

    OpenAIRE

    Nejati, Mohammadreza

    2008-01-01

    The spectral selectivity of cermet-based selective absorbers was increased by inducing roughness on the surface of the cermet layer using a roughening technique (deposition on hot substrates) or by micro-structuring the metallic substrates before deposition of the absorber coating using laser and imprint structuring techniques. Cu-Al2O3 cermet absorbers with very rough surfaces and excellent selectivity were obtained by employing a roughness template layer under the infrared reflective l...

  19. Fuzzy decision-making: a new method in model selection via various validity criteria

    International Nuclear Information System (INIS)

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered the first step in scientific investigations. Several alternative models may be candidates to express a phenomenon. Scientists use various criteria to select one model among the competing models. Based on the solution of a fuzzy decision-making problem, this paper proposes a new method for model selection. The method enables the scientist to apply all desired validity criteria systematically by defining a proper possibility distribution function for each criterion. Finally, minimization of a utility function composed of the possibility distribution functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran

  20. ARTIFICIAL NEURAL NETWORKS BASED GEARS MATERIAL SELECTION HYBRID INTELLIGENT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    X.C. Li; W.X. Zhu; G. Chen; D.S. Mei; J. Zhang; K.M. Chen

    2003-01-01

    An artificial neural network (ANN) based gear material selection hybrid intelligent system is established by analyzing the individual advantages and weaknesses of expert systems (ES) and ANNs and their applications in material selection. The system mainly consists of two parts: the ES and the ANN. By being trained with many data samples, the back propagation (BP) ANN acquires the knowledge of gear material selection and is able to make inferences according to user input. The system realizes the complementarity of ANNs and ES. Using this system, engineers without materials selection experience can conveniently deal with gear materials selection.

  1. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data

    KAUST Repository

    Abusamra, Heba

    2013-01-01

    Different experiments have been applied to compare the performance of the classification methods with and without performing feature selection. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in different feature selection methods is investigated and the most frequent features selected in each fold among all methods for both datasets are evaluated.

  2. Selection of Vendor Based on Intuitionistic Fuzzy Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2014-01-01

    Full Text Available The business environment is characterized by a stronger domestic and international competitive position in the global market. Vendors play a key role in achieving so-called corporate competitiveness. It is not easy, however, to identify good vendors because evaluation is based on multiple criteria. In practice, for the vendor selection problem (VSP), most of the input information about the criteria is not known precisely. The intuitionistic fuzzy set is an extension of classical fuzzy set theory (FST), which is a suitable way to deal with impreciseness. In other words, the application of intuitionistic fuzzy sets instead of fuzzy sets means the introduction of another degree of freedom, called the nonmembership function, into the set description. In this paper, we propose a triangular intuitionistic fuzzy number based approach for the vendor selection problem using the analytical hierarchy process. The crisp data of the vendors are represented in the form of triangular intuitionistic fuzzy numbers. By applying AHP, which involves decomposition, pairwise comparison, and deriving priorities for the various levels of the hierarchy, an overall crisp priority is obtained for ranking the best vendor. A numerical example illustrates our method. Lastly, a sensitivity analysis is performed to find the most critical criterion on the basis of which the vendor is selected.

  3. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader; Pinnau, Ingo; Swaidan, Raja

    2015-01-01

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene- based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  4. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader

    2015-12-30

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene- based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  5. Features of mechanical snubbers and the method of selection

    International Nuclear Information System (INIS)

    Sunakoda, Katsuaki

    1978-01-01

    In the oil snubbers used in the high radiation environment of nuclear power stations, gas generation from oil and deterioration of the rubber sealing material occur due to radiation damage; therefore, periodic inspection and replacement are required during operation. The mechanical snubbers developed as aseismatic supports in place of oil snubbers have entered the stage of practical use, and are made by two companies in the USA and one company in Japan. Their features as compared with oil snubbers are as follows. The cost and time required for maintenance are kept as small as possible because a longer service life of the mechanical components can be expected. The temperature dependence of mechanical snubbers is small. The matters demanding attention in maintenance are the secular change of the lubricating oil, the effect of radiation, and the rust prevention of the ball screw bearings. These problems are being studied by the Power Reactor and Nuclear Fuel Development Corp. for the prototype fast reactor Monju. The structural feature is the conversion of the thrust movement of equipment and piping, caused by thermal expansion and contraction or earthquakes, into rotating motion using ball screws. The features and construction of SMS type mechanical snubbers, the tests and inspections prior to shipping, the method of selection, and the method of handling them on site are explained. (Kako, I.)

  6. A nodal method based on matrix-response method

    International Nuclear Information System (INIS)

    Rocamora Junior, F.D.; Menezes, A.

    1982-01-01

    A nodal method based on the matrix-response method is presented, and its application to spatial gradient problems, such as those that exist in fast reactors near the core-blanket interface, is investigated. (E.G.) [pt

  7. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

    Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve the efficacy of image feature selection and classifier optimization in developing a CAD scheme of mammographic masses. An image dataset including 1600 regions of interest (ROIs) in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions) was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, and local topological features, as well as features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Among these four methods, SFFS has the highest efficacy, taking only 3%-5% of the computational time of the GA approach and yielding the highest performance level, with an area under the receiver operating characteristic curve (AUC) of 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized
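
    A sketch of the plain sequential forward selection (SFS) wrapper using scikit-learn; the floating SFFS variant additionally tries conditional removals after each inclusion, which is not shown. The estimator and data are placeholders rather than the study's ANN and mammographic features.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # wrapper-style forward selection scored by cross-validated AUC
    base = make_pipeline(StandardScaler(), LogisticRegression(max_iter=500))
    sfs = SequentialFeatureSelector(base, n_features_to_select=8,
                                    direction="forward", scoring="roc_auc", cv=5)
    sfs.fit(X_tr, y_tr)

    selected = sfs.get_support()
    model = base.fit(X_tr[:, selected], y_tr)
    proba = model.predict_proba(X_te[:, selected])[:, 1]
    print("selected features:", sfs.get_support(indices=True))
    print("holdout AUC:", round(roc_auc_score(y_te, proba), 3))
    ```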

  8. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    Science.gov (United States)

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

    In this study, paper-based microfluidic devices (μPADs) capable of K⁺ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K⁺ in the presence of Na⁺, Li⁺, and Mg²⁺ ions. Successful addition of a suspended lipophilic phase to a wax printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K⁺ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K⁺. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K⁺ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.

  9. Multispectral iris recognition based on group selection and game theory

    Science.gov (United States)

    Ahmad, Foysal; Roy, Kaushik

    2017-05-01

    A commercially available iris recognition system uses only a narrow band of the near infrared spectrum (700-900 nm) while iris images captured in the wide range of 405 nm to 1550 nm offer potential benefits to enhance recognition performance of an iris biometric system. The novelty of this research is that a group selection algorithm based on coalition game theory is explored to select the best patch subsets. In this algorithm, patches are divided into several groups based on their maximum contribution in different groups. Shapley values are used to evaluate the contribution of patches in different groups. Results show that this group selection based iris recognition
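
    A toy sketch of the Shapley-value scoring used in coalition games: with a small number of patch groups the exact values can be obtained by enumerating all coalitions. The characteristic function here (recognition accuracy of each coalition) is entirely hypothetical.

    ```python
    from itertools import combinations
    from math import factorial

    def shapley_values(players, value):
        """Exact Shapley values for a characteristic function `value(frozenset)`."""
        n = len(players)
        phi = {p: 0.0 for p in players}
        for p in players:
            others = [q for q in players if q != p]
            for k in range(n):
                for coalition in combinations(others, k):
                    s = frozenset(coalition)
                    weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                    phi[p] += weight * (value(s | {p}) - value(s))
        return phi

    # hypothetical recognition accuracy achieved by each patch-group coalition
    acc = {
        frozenset(): 0.0,
        frozenset({"A"}): 0.70, frozenset({"B"}): 0.65, frozenset({"C"}): 0.60,
        frozenset({"A", "B"}): 0.82, frozenset({"A", "C"}): 0.78,
        frozenset({"B", "C"}): 0.72, frozenset({"A", "B", "C"}): 0.90,
    }
    print(shapley_values(["A", "B", "C"], lambda s: acc[s]))
    ```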

  10. [Bases and methods of suturing].

    Science.gov (United States)

    Vogt, P M; Altintas, M A; Radtke, C; Meyer-Marcotty, M

    2009-05-01

    If pharmaceutic modulation of scar formation does not improve the quality of the healing process over conventional healing, the surgeon must rely on personal skill and experience. Therefore a profound knowledge of wound healing based on experimental and clinical studies supplemented by postsurgical means of scar management and basic techniques of planning incisions, careful tissue handling, and thorough knowledge of suturing remain the most important ways to avoid abnormal scarring. This review summarizes the current experimental and clinical bases of surgical scar management.

  11. Prototype selection based on FCM and its application in discrimination between nuclear explosion and earthquake

    International Nuclear Information System (INIS)

    Han Shaoqing; Li Xihai; Song Zibiao; Liu Daizhi

    2007-01-01

    Synergetic pattern recognition is a new approach to pattern recognition with many excellent features such as noise resistance and deformation resistance. However, when it is used for discrimination between nuclear explosions and earthquakes with existing methods of prototype selection, the results are not satisfactory. A new method of prototype selection based on FCM is proposed in this paper. First, each group of training samples is clustered into c groups using FCM; then the c barycenters or centers are chosen as prototypes. Experimental results show that, compared with existing methods of prototype selection, this new method is effective and greatly increases the recognition ratio. (authors)
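
    A compact numpy sketch of the proposed prototype selection step: for each class, a basic fuzzy c-means loop clusters the training samples and the resulting cluster centres serve as that class's prototypes. The feature vectors below are random placeholders for the seismic features.

    ```python
    import numpy as np

    def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
        """Basic fuzzy c-means; returns (centres, membership matrix)."""
        rng = np.random.default_rng(seed)
        u = rng.random((c, len(X)))
        u /= u.sum(axis=0)
        for _ in range(iters):
            um = u ** m
            centres = um @ X / um.sum(axis=1, keepdims=True)
            d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2) + 1e-12
            u = 1.0 / d ** (2 / (m - 1))
            u /= u.sum(axis=0)
        return centres, u

    def class_prototypes(X, y, c=3):
        """Cluster each class separately and return its c centres as prototypes."""
        return {label: fuzzy_cmeans(X[y == label], c)[0] for label in np.unique(y)}

    # toy stand-in for explosion vs. earthquake feature vectors
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(3, 1, (60, 4))])
    y = np.array([0] * 60 + [1] * 60)
    print({k: v.shape for k, v in class_prototypes(X, y).items()})
    ```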

  12. Evaluation and selection of decision-making methods to assess landfill mining projects.

    Science.gov (United States)

    Hermann, Robert; Baumgartner, Rupert J; Vorbach, Stefan; Ragossnig, Arne; Pomberger, Roland

    2015-09-01

    For the first time in Austria, fundamental technological and economic studies on recovering secondary raw materials from large landfills have been carried out, based on the 'LAMIS - Landfill Mining Austria' pilot project. A main focus of the research - and the subject of this article - was to develop an assessment or decision-making procedure that allows landfill owners to thoroughly examine the feasibility of a landfill mining project in advance. Currently there are no standard procedures that would sufficiently cover all the multiple-criteria requirements. The basic structure of the multiple attribute decision making process was used to narrow down on selection, conceptual design and assessment of suitable procedures. Along with a breakdown into preliminary and main assessment, the entire foundation required was created, such as definitions of requirements to an assessment method, selection and accurate description of the various assessment criteria and classification of the target system for the present 'landfill mining' vs. 'retaining the landfill in after-care' decision-making problem. Based on these studies, cost-utility analysis and the analytical-hierarchy process were selected from the range of multiple attribute decision-making procedures and examined in detail. Overall, both methods have their pros and cons with regard to their use for assessing landfill mining projects. Merging these methods or connecting them with single-criteria decision-making methods (like the net present value method) may turn out to be reasonable and constitute an appropriate assessment method. © The Author(s) 2015.

  13. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    Science.gov (United States)

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is made definite, and it is the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.

  14. 44 CFR 321.2 - Selection of the mobilization base.

    Science.gov (United States)

    2010-10-01

    Title 44, Emergency Management and Assistance; Federal Emergency Management Agency, Department of Homeland Security; Preparedness; Maintenance of the Mobilization Base (Department of Defense, Department of Energy, Maritime Administration); § 321.2 Selection of the mobilization base. (a) The Department...

  15. Innovation in values based public health nursing student selection: A qualitative evaluation of candidate and selection panel member perspectives.

    Science.gov (United States)

    McGraw, Caroline; Abbott, Stephen; Brook, Judy

    2018-02-19

    Values based recruitment emerges from the premise that a high degree of value congruence, or the extent to which an individual's values are similar to those of the health organization in which they work, leads to organizational effectiveness. The aim of this evaluation was to explore how candidates and selection panel members experienced and perceived innovative methods of values based public health nursing student selection. The evaluation was framed by a qualitative exploratory design involving semi-structured interviews and a group exercise. Data were thematically analyzed. Eight semi-structured interviews were conducted with selection panel members. Twenty-two successful candidates took part in a group exercise. The use of photo elicitation interviews and situational judgment questions in the context of selection to a university-run public health nursing educational program was explored. While candidates were ambivalent about the use of photo elicitation interviews, with some misunderstanding the task, selection panel members saw the benefits for improving candidate expression and reducing gaming and deception. Situational interview questions were endorsed by candidates and selection panel members due to their fidelity to real-life problems and the ability of panel members to discern value congruence from candidates' responses. Both techniques offered innovative solutions to candidate selection for entry to the public health nursing education program. © 2018 Wiley Periodicals, Inc.

  16. Sol-gel based sensor for selective formaldehyde determination

    Energy Technology Data Exchange (ETDEWEB)

    Bunkoed, Opas [Trace Analysis and Biosensor Research Center, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Department of Chemistry and Center for Innovation in Chemistry, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Davis, Frank [Cranfield Health, Cranfield University, Bedford MK43 0AL (United Kingdom); Kanatharana, Proespichaya, E-mail: proespichaya.K@psu.ac.th [Trace Analysis and Biosensor Research Center, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Department of Chemistry and Center for Innovation in Chemistry, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Thavarungkul, Panote [Trace Analysis and Biosensor Research Center, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Department of Physics, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90112 (Thailand); Higson, Seamus P.J., E-mail: s.p.j.higson@cranfield.ac.uk [Cranfield Health, Cranfield University, Bedford MK43 0AL (United Kingdom)

    2010-02-05

    We report the development of transparent sol-gels with entrapped sensitive and selective reagents for the detection of formaldehyde. The sampling method is based on the adsorption of formaldehyde from the air and reaction with β-diketones (for example acetylacetone) in a sol-gel matrix to produce a yellow product, lutidine, which was detected directly. The proposed method does not require preparation of samples prior to analysis and allows both screening by visual detection and quantitative measurement by simple spectrophotometry. The detection limit of 0.03 ppmv formaldehyde is reported which is lower than the maximum exposure concentrations recommended by both the World Health Organisation (WHO) and the Occupational Safety and Health Administration (OSHA). This sampling method was found to give good reproducibility, the relative standard deviation at 0.2 and 1 ppmv being 6.3% and 4.6%, respectively. Other carbonyl compounds i.e. acetaldehyde, benzaldehyde, acetone and butanone do not interfere with this analytical approach. Results are provided for the determination of formaldehyde in indoor air.

  17. Sol-gel based sensor for selective formaldehyde determination

    International Nuclear Information System (INIS)

    Bunkoed, Opas; Davis, Frank; Kanatharana, Proespichaya; Thavarungkul, Panote; Higson, Seamus P.J.

    2010-01-01

    We report the development of transparent sol-gels with entrapped sensitive and selective reagents for the detection of formaldehyde. The sampling method is based on the adsorption of formaldehyde from the air and reaction with β-diketones (for example acetylacetone) in a sol-gel matrix to produce a yellow product, lutidine, which was detected directly. The proposed method does not require preparation of samples prior to analysis and allows both screening by visual detection and quantitative measurement by simple spectrophotometry. The detection limit of 0.03 ppmv formaldehyde is reported which is lower than the maximum exposure concentrations recommended by both the World Health Organisation (WHO) and the Occupational Safety and Health Administration (OSHA). This sampling method was found to give good reproducibility, the relative standard deviation at 0.2 and 1 ppmv being 6.3% and 4.6%, respectively. Other carbonyl compounds i.e. acetaldehyde, benzaldehyde, acetone and butanone do not interfere with this analytical approach. Results are provided for the determination of formaldehyde in indoor air.

  18. Based on Penalty Function Method

    Directory of Open Access Journals (Sweden)

    Ishaq Baba

    2015-01-01

    Full Text Available The dual response surface for simultaneously optimizing the mean and variance models as separate functions suffers some deficiencies in handling the tradeoffs between bias and variance components of mean squared error (MSE). In this paper, the accuracy of the predicted response is given serious attention in the determination of the optimum setting conditions. We consider four different objective functions for the dual response surface optimization approach. The essence of the proposed method is to reduce the influence of the variance of the predicted response by minimizing the variability relative to the quality characteristics of interest while at the same time achieving the specific target output. The basic idea is to convert the constrained optimization function into an unconstrained problem by adding the constraint to the original objective function. Numerical examples and a simulation study are carried out to compare the performance of the proposed method with some existing procedures. Numerical results show that the performance of the proposed method is encouraging and exhibits clear improvement over the existing approaches.
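
    A hedged numerical sketch of the penalty idea with made-up fitted response surfaces: minimising the predicted standard deviation while forcing the predicted mean onto a target is converted into an unconstrained problem by adding a squared-deviation penalty to the objective, which is then handed to a standard optimiser.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    TARGET = 500.0     # desired mean response
    RHO = 1e3          # penalty weight on missing the target

    def mean_hat(x):   # hypothetical fitted mean response surface
        x1, x2 = x
        return 480 + 20 * x1 - 10 * x2 + 5 * x1 * x2

    def std_hat(x):    # hypothetical fitted standard-deviation surface
        x1, x2 = x
        return 8 + 3 * x1 ** 2 + 2 * x2 ** 2 - 1.5 * x1

    def penalised(x):
        # unconstrained objective: variability plus penalty for missing the target
        return std_hat(x) + RHO * (mean_hat(x) - TARGET) ** 2

    res = minimize(penalised, x0=np.zeros(2), method="Nelder-Mead")
    print("settings:", res.x.round(3),
          " mean:", round(mean_hat(res.x), 2),
          " std:", round(std_hat(res.x), 2))
    ```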

  19. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Eggen, Berry; van den Broek, Egon; van der Veer, Gerrit C.; Kisters, Peter M.F.; Willems, Rob; Vuurpijl, Louis G.

    2004-01-01

    In Content-Based Image Retrieval (CBIR) two query-methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define his query. Hitherto, research on CBIR

  20. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual processes. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, which showed prediction performance comparable to GA and SPA.

  1. COMPANY VALUATION METHODS BASED ON PATRIMONY

    Directory of Open Access Journals (Sweden)

    SUCIU GHEORGHE

    2013-02-01

    Full Text Available The methods used for the company valuation can be divided into 3 main groups: methods based on patrimony, methods based on financial performance, methods based both on patrimony and on performance. The company valuation methods based on patrimony are implemented taking into account the balance sheet or the financial statement. The financial statement refers to that type of balance in which the assets are arranged according to liquidity, and the liabilities according to their financial maturity date. The patrimonial methods are based on the principle that the value of the company equals that of the patrimony it owns. From a legal point of view, the patrimony refers to all the rights and obligations of a company. The valuation of companies based on their financial performance can be done in 3 ways: the return value, the yield value, the present value of the cash flows. The mixed methods depend both on patrimony and on financial performance or can make use of other methods.

  2. A pragmatic pairwise group-decision method for selection of sites for nuclear power plants

    International Nuclear Information System (INIS)

    Kutbi, I.I.

    1987-01-01

    A pragmatic pairwise group-decision approach is applied to compare two regions in order to select the more suitable one for construction of nuclear power plants in the Kingdom of Saudi Arabia. The selection methodology is based on pairwise comparison by forced choice. The method facilitates rating of the regions or sites using simple calculations. Two regions, one close to Dhahran on the Arabian Gulf and another close to Jeddah on the Red Sea, are evaluated. No specific site in either region is considered at this stage. The comparison is based on a set of selection criteria which include (i) topography, (ii) geology, (iii) seismology, (iv) meteorology, (v) oceanography, (vi) hydrology and (vii) proximity to oil and gas fields. The comparison shows that the Jeddah region is more suitable than the Dhahran region. (orig.)

  3. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    Science.gov (United States)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  4. Method for selecting FBR development strategies in the presence of uncertainty

    International Nuclear Information System (INIS)

    Fraley, D.W.; Burnham, J.B.

    1981-12-01

    This report describes the methods used to probabilistically analyze data related to the uranium supply, the FBR's competitive dates, the development strategies' time and costs, and the economic benefits. It also describes the econometric methods used to calculate the economic risks of mistiming the development. Seven strategies for developing the FBR are analyzed. The various measures of a strategy's performance - timing, costs, benefits, and risks - are combined into several criteria which are used to evaluate the seven strategies. Methods are described for selecting a strategy based on a number of alternative criteria

  5. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    Anderson, Ryan B.; Bell, James F.; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-01-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically
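
    As an illustration of the clustering-based idea in method (2) above, the sketch below fits one PLS2 sub-model per k-means cluster and predicts each test spectrum with the model of its nearest cluster. It is a minimal stand-in using scikit-learn on synthetic data, not the ChemCam processing pipeline; all sizes and parameter values are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

def cluster_pls_predict(train_spectra, train_comp, test_spectra,
                        n_clusters=5, n_components=10):
    """Fit one PLS2 model per k-means cluster of the training spectra and
    predict each test spectrum with the model of its nearest cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(train_spectra)
    models = {}
    for c in range(n_clusters):
        idx = labels == c
        nc = max(1, min(n_components, int(idx.sum()) - 1))
        models[c] = PLSRegression(n_components=nc).fit(train_spectra[idx],
                                                       train_comp[idx])
    test_labels = km.predict(test_spectra)
    return np.vstack([models[c].predict(x[None, :])
                      for c, x in zip(test_labels, test_spectra)])

# Synthetic example: 100 training spectra, 3 target concentrations, 10 unknowns
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 50)), rng.normal(size=(100, 3))
X_test = rng.normal(size=(10, 50))
print(cluster_pls_predict(X_train, y_train, X_test).shape)   # (10, 3)
```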

  6. Selective carbon monoxide oxidation over Ag-based composite oxides

    Energy Technology Data Exchange (ETDEWEB)

    Guldur, C. [Gazi University, Ankara (Turkey). Chemical Engineering Department; Balikci, F. [Gazi University, Ankara (Turkey). Institute of Science and Technology, Environmental Science Department

    2002-02-01

    We report our results of the synthesis of 1:1 molar ratio silver-cobalt and silver-manganese composite oxide catalysts to remove carbon monoxide from hydrogen-rich fuels by the catalytic oxidation reaction. Catalysts were synthesized by the co-precipitation method. XRD, BET, TGA, catalytic activity and catalyst deactivation studies were used to identify active catalysts. Both CO oxidation and selective CO oxidation were carried out in a microreactor using a reaction gas mixture of 1 vol% CO in air; another gas mixture was prepared by mixing 1 vol% CO, 2 vol% O2 and 84 vol% H2, the balance being He. 15 vol% CO2 was added to the reactant gas mixture in order to determine the effect of CO2, and the reaction gases were passed through a humidifier to determine the effect of water vapor on the oxidation reaction. It was demonstrated that the metal oxide base was decomposed to the metallic phase and the surface areas of the catalysts decreased when the calcination temperature increased from 200°C to 500°C. The Ag/Co composite oxide catalyst calcined at 200°C gave good activity at low temperatures, and 90% CO conversion at 180°C was obtained for the selective CO oxidation reaction. The addition of impurities (CO2 or H2O) decreased the activity of the catalyst for selective CO oxidation aimed at obtaining highly hydrogen-rich fuels. (author)

  7. Application of two-dimensional binary fingerprinting methods for the design of selective Tankyrase I inhibitors.

    Science.gov (United States)

    Muddukrishna, B S; Pai, Vasudev; Lobo, Richard; Pai, Aravinda

    2017-11-22

    In the present study, five important binary fingerprinting techniques were used to model novel flavones for the selective inhibition of Tankyrase I. Of the fingerprints used, the fingerprint atom pairs resulted in a statistically significant 2D QSAR model using a kernel-based partial least squares regression method. This model indicates that the presence of electron-donating groups positively contributes to activity, whereas the presence of electron-withdrawing groups negatively contributes to activity. This model could be used to develop more potent as well as selective analogues for the inhibition of Tankyrase I. Schematic representation of 2D QSAR work flow.

  8. A comparison of U.S. and European methods for accident scenario identification, selection and quantification

    International Nuclear Information System (INIS)

    Cadwallader, L.C.; Djerassi, H.; Lampin, I.

    1989-10-01

    This paper presents a comparison of the varying methods used to identify and select accident-initiating events for safety analysis and probabilistic risk assessment (PRA). Initiating events are important in that they define the extent of a given safety analysis or PRA. Comprehensiveness in identification and selection of initiating events is necessary to ensure that a thorough analysis is being performed. While total completeness cannot ever be realized, inclusion of all safety significant events can be attained. The European approach to initiating event identification and selection arises from within a newly developed Safety Analysis methodology framework. This is a functional approach, with accident initiators based on events that will cause a system or facility loss of function. The US method divides accident initiators into two groups, internal and external events. Since traditional US PRA techniques are applied to fusion facilities, the recommended PRA-based approach is a review of historical safety documents coupled with a facility-level Master Logic Diagram. The US and European methods are described, and both are applied to a proposed International Thermonuclear Experimental Reactor (ITER) Magnet System in a sample problem. Contrasts in the US and European methods are discussed. Within their respective frameworks, each method can provide the comprehensiveness of safety-significant events needed for a thorough analysis. 4 refs., 8 figs., 11 tabs

  9. Improving the time efficiency of the Fourier synthesis method for slice selection in magnetic resonance imaging.

    Science.gov (United States)

    Tahayori, B; Khaneja, N; Johnston, L A; Farrell, P M; Mareels, I M Y

    2016-01-01

    The design of slice selective pulses for magnetic resonance imaging can be cast as an optimal control problem. The Fourier synthesis method is an existing approach to solve these optimal control problems. In this method the gradient field as well as the excitation field are switched rapidly and their amplitudes are calculated based on a Fourier series expansion. Here, we provide a novel insight into the Fourier synthesis method via representing the Bloch equation in spherical coordinates. Based on the spherical Bloch equation, we propose an alternative sequence of pulses that can be used for slice selection which is more time efficient compared to the original method. Simulation results demonstrate that while the performance of both methods is approximately the same, the required time for the proposed sequence of pulses is half of the original sequence of pulses. Furthermore, the slice selectivity of both sequences of pulses changes with radio frequency field inhomogeneities in a similar way. We also introduce a measure, referred to as gradient complexity, to compare the performance of both sequences of pulses. This measure indicates that for a desired level of uniformity in the excited slice, the gradient complexity for the proposed sequence of pulses is less than the original sequence. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  10. Detection of biomarkers for Hepatocellular Carcinoma using hybrid univariate gene selection methods

    Directory of Open Access Journals (Sweden)

    Abdel Samee Nagwan M

    2012-08-01

    Full Text Available Abstract Background Discovering new biomarkers has a great role in improving early diagnosis of Hepatocellular carcinoma (HCC). The experimental determination of biomarkers needs a lot of time and money. This motivates this work to use in-silico prediction of biomarkers to reduce the number of experiments required for detecting new ones. This is achieved by extracting the most representative genes in microarrays of HCC. Results In this work, we provide a method for extracting the differentially expressed genes, up-regulated ones, that can be considered candidate biomarkers in high throughput microarrays of HCC. We examine the power of several gene selection methods (such as Pearson’s correlation coefficient, Cosine coefficient, Euclidean distance, Mutual information and Entropy with different estimators) in selecting informative genes. A biological interpretation of the highly ranked genes is done using KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways, ENTREZ and DAVID (Database for Annotation, Visualization, and Integrated Discovery) databases. The top ten genes selected using Pearson’s correlation coefficient and Cosine coefficient contained six genes that have been implicated in cancer (often multiple cancers) genesis in previous studies. Fewer genes were obtained by the other methods (4 genes using Mutual information, 3 genes using Euclidean distance and only one gene using Entropy). A better result was obtained by the utilization of a hybrid approach based on intersecting the highly ranked genes in the output of all investigated methods. This hybrid combination yielded seven genes (2 genes for HCC and 5 genes in different types of cancer) in the top ten genes of the list of intersected genes. Conclusions To strengthen the effectiveness of the univariate selection methods, we propose a hybrid approach by intersecting several of these methods in a cascaded manner. This approach surpasses all of the univariate selection methods when
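
    The hybrid intersection idea can be sketched in a few lines: rank genes by several univariate scores and keep the genes that appear near the top of every ranking. The snippet below is illustrative only; it uses absolute Pearson correlation and mutual information as stand-ins for the scores examined in the study, and the data are synthetic.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import mutual_info_classif

def top_k(scores, k):
    return set(np.argsort(scores)[::-1][:k])

def hybrid_gene_selection(X, y, k=10):
    """X: samples x genes expression matrix, y: binary class labels.
    Returns genes ranked in the top k by every univariate score."""
    pearson = np.array([abs(pearsonr(X[:, j], y)[0]) for j in range(X.shape[1])])
    mi = mutual_info_classif(X, y, random_state=0)
    return sorted(top_k(pearson, k) & top_k(mi, k))

# Synthetic example with one gene made informative on purpose
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 200))
y = rng.integers(0, 2, size=40)
X[:, 5] += 2.0 * y
print(hybrid_gene_selection(X, y))   # gene 5 should survive the intersection
```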

  11. Accuracy of multi-trait genomic selection using different methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Veerkamp, R.F.

    2011-01-01

    Background Genomic selection has become a very important tool in animal genetics and is rapidly emerging in plant genetics. It holds the promise to be particularly beneficial to select for traits that are difficult or expensive to measure, such as traits that are measured in one environment and

  12. A high order regularisation method for solving the Poisson equation and selected applications using vortex methods

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm

    ring dynamics is presented based on the alignment of the vorticity vector with the principal axis of the strain rate tensor. A novel iterative implementation of the Brinkman penalisation method is introduced for the enforcement of a fluid-solid interface in re-meshed vortex methods. The iterative scheme ... is included to explicitly fulfil the kinematic constraints of the flow field. The high order, unbounded particle-mesh based vortex method is used to simulate the instability, transition to turbulence and eventual destruction of a single vortex ring. From the simulation data, a novel analysis on the vortex...

  13. ADM guidance-Ceramics: Fracture toughness testing and method selection.

    Science.gov (United States)

    Cesar, Paulo Francisco; Della Bona, Alvaro; Scherrer, Susanne S; Tholey, Michael; van Noort, Richard; Vichi, Alessandro; Kelly, Robert; Lohbauer, Ulrich

    2017-06-01

    The objective is within the scope of the Academy of Dental Materials Guidance Project, which is to provide dental materials researchers with a critical analysis of fracture toughness (FT) tests such that the assessment of the FT of dental ceramics is conducted in a reliable, repeatable and reproducible way. Fracture mechanics theory and FT methodologies were critically reviewed to introduce basic fracture principles and determine the main advantages and disadvantages of existing FT methods from the standpoint of the dental researcher. The recommended methods for FT determination of dental ceramics were the Single Edge "V" Notch Beam (SEVNB), Single Edge Precracked Beam (SEPB), Chevron Notch Beam (CNB), and Surface Crack in Flexure (SCF). SEVNB's main advantage is the ease of producing the notch via a cutting disk, SEPB allows for production of an atomically sharp crack generated by a specific precracking device, CNB is technically difficult, but based on solid fracture mechanics solutions, and SCF involves fracture from a clinically sized precrack. The IF test should be avoided due to heavy criticism that has arisen in the engineering field regarding the empirical nature of the calculations used for FT determination. Dental researchers interested in FT measurement of dental ceramics should start with a broad review of fracture mechanics theory to understand the underlying principles involved in fast fracture of ceramics. The choice of FT methodology should be based on the pros and cons of each test, as described in this literature review. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  14. METHODICAL BASES OF MANAGEMENT OF INSURANCE PORTFOLIO

    Directory of Open Access Journals (Sweden)

    Serdechna Yulia

    2018-01-01

    Full Text Available Introduction. Despite the considerable arsenal of existing developments, the issue of assessing the management of the insurance portfolio remains unresolved. In order to detail, specify and further systematize the indicators for this evaluation, the publications of scientists are analyzed. The purpose of the study is to analyze existing methods by which it is possible to form and manage the insurance portfolio in order to achieve its balance, which will contribute to ensuring the financial reliability of the insurance company. Results. The essence of the concept of “management of insurance portfolio” is described as the application of actuarial methods and techniques to the combination of various insurance risks offered for insurance or already part of the insurance portfolio, allowing the size and structure of the portfolio to be adjusted in order to ensure its financial stability, achieve the maximum level of income of an insurance organization, preserve the value of its equity and secure its insurance liabilities. It is determined that the main methods by which the insurer’s insurance portfolio can be formed and managed are the selection of risks; reinsurance operations that ensure diversification of risks; and the formation and placement of insurance reserves, which form the financial basis of insurance activities. The method of managing an insurance portfolio, which can be both active and passive, is considered. Conclusions. It is determined that the insurance portfolio is the basis on which all the activities of the insurer are built and which determines its financial stability. The combination of methods and technologies applied to the insurance portfolio is a management approach that can be both active and passive and has a number of specific methods through which the insurer’s insurance portfolio can be formed and managed. It is substantiated that each insurance company aims to form an efficient and

  15. Evaluation and selection of in-situ leaching mining method using analytic hierarchy process

    International Nuclear Information System (INIS)

    Zhao Heyong; Tan Kaixuan; Liu Huizhen

    2007-01-01

    According to the complicated conditions and main influencing factors of in-situ leaching mining, a hierarchical model and analysis process are established for the evaluation and selection of in-situ leaching mining methods based on the analytic hierarchy process. Taking a uranium mine in Xinjiang, China as an example, the application of this model is presented. The results of the analyses and calculations indicate that acid leaching is the optimum option. (authors)
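
    A minimal sketch of the AHP weighting step is given below: criterion weights are taken from the principal eigenvector of a reciprocal pairwise comparison matrix and the consistency ratio is checked. The comparison values are invented and the random-index table is truncated to small matrix sizes.

```python
import numpy as np

def ahp_weights(A):
    """A: reciprocal pairwise comparison matrix on Saaty's 1-9 scale."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # criterion weights
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n)   # Saaty's random index (partial table)
    cr = ci / ri if ri else None              # consistency ratio, < 0.10 is acceptable
    return w, cr

# Invented comparison of three site-selection criteria
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print(weights.round(3), round(cr, 3))
```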

  16. Supplier selection criteria and methods: past, present and future

    OpenAIRE

    MUKHERJEE, KRISHNENDU

    2014-01-01

    The purpose of supplier selection is not limited to obtaining supply at low cost and at the right time. Supplier selection is a strategic decision to fulfil a company’s goals over a long period of time at low risk. To accomplish this objective, companies are moving from reactive buying to proactive buying, giving more priority to the co-creation of wealth with suppliers. Considering this issue, an attempt has been made in this paper to give a systematic review of the supplier selection and evaluation process from 2005...

  17. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is deciding on an appropriate stock exchange for investing and selecting an optimal portfolio. This process is done through the assessment of risk and expected return. On the other hand, in the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures. But the expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper, by introducing conditional value at risk (CVaR) as a measure of risk in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange during the winter of 1392, which is considered from April of 1388 to June of 1393. The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster than the linear programming method.
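
    A minimal sketch of the nonparametric idea follows: the empirical CVaR of a candidate portfolio is computed directly from historical returns, and a simple random search over long-only weights keeps the lowest-CVaR portfolio that meets a target expected return. This is an illustration under invented data, not the optimization procedure used in the cited study.

```python
import numpy as np

def cvar(portfolio_returns, beta=0.95):
    """Empirical conditional value at risk of the loss distribution."""
    losses = -portfolio_returns
    var = np.quantile(losses, beta)
    return losses[losses >= var].mean()

def min_cvar_portfolio(returns, target_return, n_trials=20000, beta=0.95, seed=0):
    """returns: months x assets matrix. Random search over long-only weights,
    keeping the lowest-CVaR portfolio that meets the target mean return."""
    rng = np.random.default_rng(seed)
    best_w, best_risk = None, np.inf
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(returns.shape[1]))    # weights sum to 1, w >= 0
        port = returns @ w
        if port.mean() >= target_return:
            risk = cvar(port, beta)
            if risk < best_risk:
                best_w, best_risk = w, risk
    return best_w, best_risk

# Invented monthly returns for 15 assets over 60 months
rng = np.random.default_rng(2)
monthly = rng.normal(0.01, 0.05, size=(60, 15))
w, risk = min_cvar_portfolio(monthly, target_return=0.01)
print(w.round(3), round(risk, 4))
```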

  18. The effects of predictor method factors on selection outcomes: A modular approach to personnel selection procedures.

    Science.gov (United States)

    Lievens, Filip; Sackett, Paul R

    2017-01-01

    Past reviews and meta-analyses typically conceptualized and examined selection procedures as holistic entities. We draw on the product design literature to propose a modular approach as a complementary perspective to conceptualizing selection procedures. A modular approach means that a product is broken down into its key underlying components. Therefore, we start by presenting a modular framework that identifies the important measurement components of selection procedures. Next, we adopt this modular lens for reviewing the available evidence regarding each of these components in terms of affecting validity, subgroup differences, and applicant perceptions, as well as for identifying new research directions. As a complement to the historical focus on holistic selection procedures, we posit that the theoretical contributions of a modular approach include improved insight into the isolated workings of the different components underlying selection procedures and greater theoretical connectivity among different selection procedures and their literatures. We also outline how organizations can put a modular approach into operation to increase the variety in selection procedures and to enhance the flexibility in designing them. Overall, we believe that a modular perspective on selection procedures will provide the impetus for programmatic and theory-driven research on the different measurement components of selection procedures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Aptamer Selection Express: A Novel Method for Rapid Single-Step Selection and Sensing of Aptamers

    National Research Council Canada - National Science Library

    Fan, Maomian; Roper, Shelly; Andrews, Carrie; Allman, Amity; Bruno, John; Kiel, Jonathan

    2008-01-01

    ...). This process has been used to select aptamers against different types of targets (Bacillus anthracis spores, Bacillus thuringiensis spores, MS-2 bacteriophage, ovalbumin, and botulinum neurotoxin...

  20. Performance Measurement Model for the Supplier Selection Based on AHP

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2015-10-01

    Full Text Available The performance of the supplier is a crucial factor for the success or failure of any company. Rational and effective decision making in terms of the supplier selection process can help the organization to optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP. Experience and studies have shown that there is no best way for evaluating and selecting a specific supplier process, but that it varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied for the supplier selection process.

  1. Assessment of different quit smoking methods selected by patients in tobacco cessation centers in Iran

    Directory of Open Access Journals (Sweden)

    Gholamreza Heydari

    2015-01-01

    Full Text Available Background: Health systems play key roles in identifying tobacco users and providing evidence-based care to help them quit. This treatment includes different methods such as simple medical consultation, medication, and telephone counseling. The aim was to assess different quit smoking methods selected by patients in tobacco cessation centers in Iran in order to identify those most appropriate for the country's health system. Methods: In this cross-sectional and descriptive study, a random sample of all quit centers at the country level was used to obtain a representative sample. Patients completed a self-administered questionnaire which contained 10 questions regarding the quality, cost, effect, side effects and results of quitting methods, using a 5-point Likert-type scale. Percentages, frequencies, means, t-tests, and variance analyses were computed for all study variables. Results: A total of 1063 smokers returned completed survey questionnaires. The most frequently used methods were Nicotine Replacement Therapy (NRT) and combination therapy (NRT and counseling), with 228 and 163 individuals reporting these, respectively. The least used methods were hypnotism (n = 8) and quit-and-win (n = 17). The methods which gained the maximum scores were, respectively, the combined method, personal methods and Champix, with means of 21.4, 20.4 and 18.4. The minimum scores were for e-cigarettes, hypnotism and education, with means of 12.8, 11 and 10.8, respectively. There were significant differences in mean scores between different cities and different methods. Conclusions: According to smokers' selection, combined therapy, personal methods and Champix are the most effective quit smoking methods, and these methods could be given much more consideration in the country's health system.

  2. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration

    Directory of Open Access Journals (Sweden)

    Haitao Chang

    2016-06-01

    Full Text Available One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build the multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL are measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity.

  3. Methods Dealing with Complexity in Selecting Joint Venture Contractors for Large-Scale Infrastructure Projects

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-01-01

    Full Text Available The magnitude of business dynamics has increased rapidly due to the increased complexity, uncertainty, and risk of large-scale infrastructure projects. This fact has made it increasingly tough for a contractor to "go it alone". As a consequence, joint venture contractors with diverse strengths and weaknesses cooperatively bid for projects. Understanding project complexity and making a decision on the optimal joint venture contractor is challenging. This paper studies how to select joint venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem. One is based on ideal points and the other is based on balanced ideal advantages. Both methods consider individual differences in expert judgment and contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China is used to demonstrate how to apply these two methods and their advantages.

  4. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed, the output of this site optimization gives the selected number of base station sites ...

  5. An active learning representative subset selection method using net analyte signal

    Science.gov (United States)

    He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi

    2018-05-01

    To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.
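
    The sequential step described above can be sketched as a greedy max-min selection on the scalar NAS values: at each step the candidate farthest (in NAS norm) from every already selected sample is added. Computing the NAS values themselves is assumed to have been done beforehand; the numbers below are invented.

```python
import numpy as np

def select_by_nas_distance(nas_values, initial_idx, n_select):
    """Greedy selection: repeatedly add the candidate whose NAS norm is
    farthest from every already selected sample."""
    selected = list(initial_idx)
    candidates = [i for i in range(len(nas_values)) if i not in selected]
    for _ in range(n_select):
        dists = [min(abs(nas_values[i] - nas_values[j]) for j in selected)
                 for i in candidates]
        selected.append(candidates.pop(int(np.argmax(dists))))
    return selected

nas = np.array([0.12, 0.95, 0.33, 0.80, 0.05, 0.55])   # invented NAS norms
print(select_by_nas_distance(nas, initial_idx=[0], n_select=3))
```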

  6. Cooperative Technique Based on Sensor Selection in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    ISLAM, M. R.

    2009-02-01

    Full Text Available An energy efficient cooperative technique is proposed for IEEE 1451 based Wireless Sensor Networks. Selected numbers of Wireless Transducer Interface Modules (WTIMs) are used to form a Multiple Input Single Output (MISO) structure wirelessly connected with a Network Capable Application Processor (NCAP). Energy efficiency and delay of the proposed architecture are derived for different combinations of cluster size and selected number of WTIMs. Optimized constellation parameters are used for evaluating the derived parameters. The results show that the selected MISO structure outperforms the unselected MISO structure and shows more energy-efficient performance than the SISO structure beyond a certain distance.

  7. Feature Selection Methods for Zero-Shot Learning of Neural Activity

    Directory of Open Access Journals (Sweden)

    Carlos A. Caceres

    2017-06-01

    Full Text Available Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.

  8. Application of AHP method in partner's selection process for supply chain development

    Directory of Open Access Journals (Sweden)

    Barac Nada

    2012-06-01

    Full Text Available The process of developing a supply chain is long and complex, with many restrictions and obstacles that accompany it. In this paper the authors focus on the first stage in developing the supply chain: the process of selecting partners. This phase of the development significantly affects the competitive position of the supply chain and the value created for the consumer. Selected partners or 'links' of the supply chain influence the future performance of the chain, which points to the necessity of full commitment to this process. The process of selection and choice of partner is conditioned by the key criteria that are used on that occasion. The use of inadequate criteria may endanger the whole process of building a supply chain, since partner selection may then fail to match future supply chain needs. This paper is an analysis of partner selection based on key criteria used by managers in Serbia. For this purpose we used the AHP method. The results show which criteria managers rank highest.

  9. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review.

    Science.gov (United States)

    Boulkedid, Rym; Abdoul, Hendy; Loustau, Marine; Sibony, Olivier; Alberti, Corinne

    2011-01-01

    Delphi technique is a structured process commonly used to develop healthcare quality indicators, but there is little guidance for researchers who wish to use it. This study aimed 1) to describe reporting of the Delphi method to develop quality indicators, 2) to discuss specific methodological skills for quality indicator selection, and 3) to give guidance about this practice. Three electronic databases were searched over a 30-year period (1978-2009). All articles that used the Delphi method to select quality indicators were identified. A standardized data extraction form was developed. Four domains (questionnaire preparation, expert panel, progress of the survey and Delphi results) were assessed. Of 80 included studies, quality of reporting varied significantly between items (9% for the experts' years of experience to 98% for the type of Delphi used). Reporting of methodological aspects needed to evaluate the reliability of the survey was insufficient: only 39% (31/80) of studies reported response rates for all rounds, 60% (48/80) that feedback was given between rounds, 77% (62/80) the method used to achieve consensus and 57% (48/80) listed quality indicators selected at the end of the survey. A modified Delphi procedure was used in 49/78 (63%) with a physical meeting of the panel members, usually between Delphi rounds. Median number of panel members was 17 (Q1: 11; Q3: 31). In 40/70 (57%) studies, the panel included multiple stakeholders, who were healthcare professionals in 95% (38/40) of cases. Among 75 studies describing criteria to select quality indicators, 28 (37%) used validity and 17 (23%) feasibility. The use and reporting of the Delphi method for quality indicator selection need to be improved. We provide some guidance to investigators to improve the use and reporting of the method in future surveys.

  10. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review.

    Directory of Open Access Journals (Sweden)

    Rym Boulkedid

    Full Text Available OBJECTIVE: Delphi technique is a structured process commonly used to develop healthcare quality indicators, but there is little guidance for researchers who wish to use it. This study aimed 1) to describe reporting of the Delphi method to develop quality indicators, 2) to discuss specific methodological skills for quality indicator selection, and 3) to give guidance about this practice. METHODOLOGY AND MAIN FINDING: Three electronic databases were searched over a 30-year period (1978-2009). All articles that used the Delphi method to select quality indicators were identified. A standardized data extraction form was developed. Four domains (questionnaire preparation, expert panel, progress of the survey and Delphi results) were assessed. Of 80 included studies, quality of reporting varied significantly between items (9% for the experts' years of experience to 98% for the type of Delphi used). Reporting of methodological aspects needed to evaluate the reliability of the survey was insufficient: only 39% (31/80) of studies reported response rates for all rounds, 60% (48/80) that feedback was given between rounds, 77% (62/80) the method used to achieve consensus and 57% (48/80) listed quality indicators selected at the end of the survey. A modified Delphi procedure was used in 49/78 (63%) with a physical meeting of the panel members, usually between Delphi rounds. Median number of panel members was 17 (Q1: 11; Q3: 31). In 40/70 (57%) studies, the panel included multiple stakeholders, who were healthcare professionals in 95% (38/40) of cases. Among 75 studies describing criteria to select quality indicators, 28 (37%) used validity and 17 (23%) feasibility. CONCLUSION: The use and reporting of the Delphi method for quality indicator selection need to be improved. We provide some guidance to investigators to improve the use and reporting of the method in future surveys.

  11. Enhancements to Graph based methods for Multi Document Summarization

    Directory of Open Access Journals (Sweden)

    Rengaramanujam Srinivasan

    2009-01-01

    Full Text Available This paper focuses its attention on extractive summarization using popular graph based approaches. Graph based methods can be broadly classified into two categories: non-PageRank type and PageRank type methods. Of the methods already proposed, the Centrality Degree method belongs to the former category while the LexRank and Continuous LexRank methods belong to the latter category. The paper goes on to suggest two enhancements to both PageRank type and non-PageRank type methods. The first modification is that of recursively discounting the selected sentences, i.e. if a sentence is selected it is removed from further consideration and the next sentence is selected based upon the contributions of the remaining sentences only. Next the paper suggests a method of incorporating position weight to these schemes. In all, 14 methods – six of non-PageRank type and eight of PageRank type – have been investigated. To clearly distinguish between various schemes, we call the methods incorporating the discounting and position weight enhancements over Lexical Rank schemes as Sentence Rank (SR) methods. Intrinsic evaluation of all the 14 graph based methods was done using the conventional Precision metric and metrics earlier proposed by us – Effectiveness1 (E1) and Effectiveness2 (E2). Experimental study brings out that the proposed SR methods are superior to all the other methods.
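
    A minimal sketch of the discounting enhancement follows: sentences are scored by degree centrality on a cosine-similarity graph, and once a sentence is selected it is removed before centralities are recomputed from the remaining sentences. The vectorization, threshold and example sentences are invented stand-ins, not the configuration used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def discounted_degree_summary(sentences, n_pick=2, threshold=0.1):
    sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
    np.fill_diagonal(sim, 0.0)
    remaining = list(range(len(sentences)))
    summary = []
    for _ in range(n_pick):
        sub = sim[np.ix_(remaining, remaining)]
        degree = (sub > threshold).sum(axis=1)          # degree centrality
        best = remaining[int(np.argmax(degree))]
        summary.append(sentences[best])
        remaining.remove(best)                          # discount the selected sentence
    return summary

docs = ["The committee approved the budget.",
        "The budget was approved after a long debate.",
        "Rain is expected over the weekend.",
        "Weekend weather will be wet and windy."]
print(discounted_degree_summary(docs))
```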

  12. A simple method for finding explicit analytic transition densities of diffusion processes with general diploid selection.

    Science.gov (United States)

    Song, Yun S; Steinrücken, Matthias

    2012-03-01

    The transition density function of the Wright-Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright-Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation-selection balance.
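
    For orientation, a spectral representation of a diffusion transition density built from the eigenvalues and eigenfunctions of the generator has the generic form below; the notation is schematic and not taken from the cited article.

```latex
p(t; x, y) \;=\; \sum_{n=0}^{\infty} e^{-\lambda_n t}\,
\frac{\phi_n(x)\,\phi_n(y)}{\langle \phi_n,\phi_n\rangle_{\pi}}\,\pi(y),
\qquad \mathcal{L}\,\phi_n = -\lambda_n\,\phi_n,
```

    where \(\mathcal{L}\) is the diffusion generator and \(\pi\) its stationary (speed) density.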

  13. A method for selecting cis-acting regulatory sequences that respond to small molecule effectors

    Directory of Open Access Journals (Sweden)

    Allas Ülar

    2010-08-01

    Full Text Available Abstract Background Several cis-acting regulatory sequences functioning at the level of mRNA or nascent peptide and specifically influencing transcription or translation have been described. These regulatory elements often respond to specific chemicals. Results We have developed a method that allows us to select cis-acting regulatory sequences that respond to diverse chemicals. The method is based on the β-lactamase gene containing a random sequence inserted into the beginning of the ORF. Several rounds of selection are used to isolate sequences that suppress β-lactamase expression in response to the compound under study. We have isolated sequences that respond to erythromycin, troleandomycin, chloramphenicol, meta-toluate and homoserine lactone. By introducing synonymous and non-synonymous mutations we have shown that at least in the case of erythromycin the sequences act at the peptide level. We have also tested the cross-activities of the constructs and found that in most cases the sequences respond most strongly to the compound on which they were isolated. Conclusions Several selected peptides showed ligand-specific changes in amino acid frequencies, but no consensus motif could be identified. This is consistent with previous observations on natural cis-acting peptides, showing that it is often impossible to demonstrate a consensus. Applying the currently developed method on a larger scale, by selecting and comparing an extended set of sequences, might allow the sequence rules underlying the activity of cis-acting regulatory peptides to be identified.

  14. Nuclear site selection and environmental protection. The decision making methods

    International Nuclear Information System (INIS)

    Bresson, G.; Lacourly, G.; Fitoussi, L.

    1975-01-01

    The selection of the site of a nuclear plant most often comes down to seeking a compromise between two trends: that of the operator, who will try to reduce the cost price of his product as much as possible, and that of the protectionist, who will try to reduce to a minimum the hazards resulting from plant operation. Such a compromise is the result of a more or less empirical choice made within the frame of a cost-benefit analysis, in which, theoretically, the choice between several possible solutions falls on the one giving the greatest advantage [fr

  15. Out of the box selection and application of UX evaluation methods and practical cases

    DEFF Research Database (Denmark)

    Obrist, Marianna; Knoche, Hendrik; Basapur, Santosh

    2013-01-01

    The scope of user experience supersedes the concept of usability and other performance-oriented measures by including, for example, users' emotions, motivations and a strong focus on the context of use. The purpose of this tutorial is to motivate researchers and practitioners to think about ... the challenging questions around how to select and apply UX evaluation methods for different usage contexts, in particular for the "home" and "mobile" context, relevant for TV-based services. Next to a general understanding of UX evaluation and available methods, we will provide concrete UX evaluation case...

  16. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.

  17. Knowledge based expert system approach to instrumentation selection (INSEL

    Directory of Open Access Journals (Sweden)

    S. Barai

    2004-08-01

    Full Text Available The selection of appropriate instrumentation for any structural measurement of a civil engineering structure is a complex task. Recent developments in Artificial Intelligence (AI) can help in an organized use of experiential knowledge available on instrumentation for laboratory and in-situ measurement. Usually, the instrumentation decision is based on the experience and judgment of experimentalists. The heuristic knowledge available for different types of measurement is domain dependent and the information is scattered in varied knowledge sources. The knowledge engineering techniques can help in capturing the experiential knowledge. This paper demonstrates a prototype knowledge based system for INstrument SELection (INSEL) assistant, where the experiential knowledge for various structural domains can be captured and utilized for making instrumentation decisions. In particular, this Knowledge Based Expert System (KBES) encodes the heuristics on measurement and demonstrates the instrument selection process with reference to steel bridges. INSEL runs on a microcomputer and uses an INSIGHT 2+ environment.

  18. SELECT NUMERICAL METHODS FOR MODELING THE DYNAMICS SYSTEMS

    Directory of Open Access Journals (Sweden)

    Tetiana D. Panchenko

    2016-07-01

    Full Text Available The article deals with the creation of methodical support for the mathematical modeling of dynamic processes in elements of systems and complexes. Ordinary differential equations are used as the mathematical models; the coefficients of the model equations can be nonlinear functions of the process. The projection-grid method is used as the main tool. Iterative algorithms are described that take an approximate solution into account prior to the first iteration, and adaptive control of the computing process is proposed. An original method for estimating the error of the computed solutions is offered, together with a technique for choosing the configuration parameters of the adaptive solution method for a given error level. The proposed method can be used for distributed computing.

  19. Selecting The Best Initial Method For A Transportation Problem ...

    African Journals Online (AJOL)

    This paper is concerned with determining the best initial method for a transportation problem. Seven initial methods are considered and compared. One is a new method that has not been reported in the literature. Comparison is done on the basis of the number of iterations required to reach the final solution if the concerned ...
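
    As one concrete example of an initial method for the transportation problem, the sketch below implements the classical north-west corner rule; whether this is among the seven methods compared in the paper is not stated here, and the supply and demand figures are invented.

```python
import numpy as np

def northwest_corner(supply, demand):
    """Classical north-west corner rule for an initial basic feasible solution."""
    supply, demand = list(supply), list(demand)
    alloc = np.zeros((len(supply), len(demand)))
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])
        alloc[i, j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1
        else:
            j += 1
    return alloc

print(northwest_corner(supply=[20, 30, 25], demand=[10, 35, 30]))
```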

  20. On dynamic selection of households for direct marketing based on Markov chain models with memory

    NARCIS (Netherlands)

    Otter, Pieter W.

    A simple, dynamic selection procedure is proposed, based on conditional, expected profits using Markov chain models with memory. The method is easy to apply, only frequencies and mean values have to be calculated or estimated. The method is empirically illustrated using a data set from a charitable
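
    A first-order sketch of the idea is given below (the cited procedure uses Markov chain models with memory): estimated transition frequencies between response states give a one-step-ahead expected profit per household, and only households with positive expected profit are selected. All numbers are invented.

```python
import numpy as np

# States: 0 = did not respond to the last campaign, 1 = responded
transition = np.array([[0.85, 0.15],     # transition frequencies (invented)
                       [0.40, 0.60]])
mean_gift = np.array([0.0, 25.0])        # mean donation value per state (invented)
mailing_cost = 1.5

def expected_profit(state):
    """One-step-ahead expected profit of mailing a household in this state."""
    return transition[state] @ mean_gift - mailing_cost

households = np.array([0, 1, 1, 0, 1])   # current state of each household
profits = np.array([expected_profit(s) for s in households])
print(profits.round(2), np.where(profits > 0)[0])   # mail only profitable households
```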

  1. Selection of robust methods. Numerical examples and results

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2005-01-01

    Roč. 21, č. 11 (2005), s. 1-58 ISSN 1212-074X R&D Projects: GA ČR(CZ) GA402/03/0084 Institutional research plan: CEZ:AV0Z10750506 Keywords : robust regression * model selection * uniform consistency of M-estimators Subject RIV: BA - General Mathematics

  2. GMDH Method with Genetic Selection Algorithm and Cloning

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2013-01-01

    Roč. 23, č. 5 (2013), s. 451-464 ISSN 1210-0552 Institutional support: RVO:67985807 Keywords : multivariate data * GMDH * linear regression * Gauss-Markov conditions * cloning * genetic selection * classification Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.412, year: 2013

  3. A holistic method for selecting tidal stream energy hotspots under technical, economic and functional constraints

    International Nuclear Information System (INIS)

    Vazquez, A.; Iglesias, G.

    2016-01-01

    Highlights: • A method for selecting the most suitable sites for tidal stream farms was presented. • The selection was based on relevant technical, economic and functional aspects. • As a case study, a model of the Bristol Channel was implemented and validated. - Abstract: Although a number of prospective locations for tidal stream farms have been identified, the development of a unified approach for selecting the optimum site in a region remains a current research topic. The objective of this work is to develop and apply a methodology for determining the most suitable sites for tidal stream farms, i.e. sites whose characteristics maximise power performance, minimise cost and avoid conflicts with competing uses of the marine space. Illustrated through a case study in the Bristol Channel, the method uses a validated hydrodynamics model to identify highly energetic areas and a geospatial Matlab-based program (designed ad hoc) to estimate the energy output that a tidal farm at the site with a given technology would have. This output is then used to obtain the spatial distribution of the levelised cost of energy and, on this basis, to preselect certain areas. Subsequently, potential conflicts with other functions of the marine space (e.g. fishing, shipping) are considered. The result is a selection of areas for tidal stream energy development based on a holistic approach, encompassing the relevant technical, economic and functional aspects. This methodology can lead to a significant improvement in the selection of tidal sites, thereby increasing the possibilities of project acceptance and development.

  4. Performance-Based Technology Selection Filter description report

    International Nuclear Information System (INIS)

    O'Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL)

  5. Performance-Based Technology Selection Filter description report

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL).

  6. Objective Method for Selecting Outdoor Reporting Conditions for Photovoltaic Performance

    International Nuclear Information System (INIS)

    Maish, A.

    1999-01-01

    Outdoor performance of photovoltaic modules and systems depends on prevailing conditions at the time of measurement. Outdoor test conditions must be relevant to device performance and readily attainable. Flat-plate, nonconcentrator PV device performance is reported with respect to fixed conditions referred to as Standard Reporting Conditions (SRC) of 1 kW/m² plane-of-array total irradiance, 25 C device temperature, and a reference spectral distribution at air mass 1.5 under certain atmospheric conditions. We report a method of analyzing historical meteorological and irradiance data to determine the range of outdoor environmental parameters and solar irradiance components that affect solar collector performance when the SRC 1 kW/m² total irradiance value occurs outdoors. We used data from the 30-year U.S. National Solar Radiation Data Base (NSRDB), restricting irradiance conditions to within ±25 W/m² of 1 kW/m² on a solar-tracking flat-plate collector. The distributions of environmental parameter values under these conditions are non-Gaussian and site dependent. Therefore the median, as opposed to the mean, of the observed distributions is chosen to represent appropriate outdoor reporting conditions. We found the average medians for the direct beam component (834 W/m²), ambient temperature (24.4 C), total column water vapor (1.4 cm), and air mass (1.43) are near commonly used SRC values. Average median wind speed (4.4 m/s) and broadband aerosol optical depth (0.08) were significantly different from commonly used values

  7. Proposed Project Selection Method for Human Support Research and Technology Development (HSR&TD)

    Science.gov (United States)

    Jones, Harry

    2005-01-01

    The purpose of HSR&TD is to deliver human support technologies to the Exploration Systems Mission Directorate (ESMD) that will be selected for future missions. This requires identifying promising candidate technologies and advancing them in technology readiness until they are acceptable. HSR&TD must select an array of technology development projects, guide them, and either terminate or continue them, so as to maximize the resulting number of usable advanced human support technologies. This paper proposes an effective project scoring methodology to support managing the HSR&TD project portfolio. Researchers strongly disagree as to what are the best technology project selection methods, or even if there are any proven ones. Technology development is risky and outstanding achievements are rare and unpredictable. There is no simple formula for success. Organizations that are satisfied with their project selection approach typically use a mix of financial, strategic, and scoring methods in an open, established, explicit, formal process. This approach helps to build consensus and develop management insight. It encourages better project proposals by clarifying the desired project attributes. We propose a project scoring technique based on a method previously used in a federal laboratory and supported by recent research. Projects are ranked by their perceived relevance, risk, and return - a new 3 R's. Relevance is the degree to which the project objective supports the HSR&TD goal of developing usable advanced human support technologies. Risk is the estimated probability that the project will achieve its specific objective. Return is the reduction in mission life cycle cost obtained if the project is successful. If the project objective technology performs a new function with no current cost, its return is the estimated cash value of performing the new function. The proposed project selection scoring method includes definitions of the criteria, a project evaluation

  8. Emotion of Physiological Signals Classification Based on TS Feature Selection

    Institute of Scientific and Technical Information of China (English)

    Wang Yujing; Mo Jianlin

    2015-01-01

    This paper proposes a TS-MLP method for emotion recognition from physiological signals. It recognizes emotion by using Tabu search to select features of the emotion's physiological signals and a multilayer perceptron to classify the emotion. Simulation shows that it achieves good emotion classification performance.

  9. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
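
    The progressive-sampling idea can be illustrated with a simple successive-halving stand-in: candidate algorithm/hyper-parameter configurations are scored on growing subsets of the data and only the better half is kept at each stage. This is not the authors' Bayesian optimization procedure; the candidate set and sizes are invented.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
candidates = [LogisticRegression(C=c, max_iter=1000) for c in (0.01, 0.1, 1.0, 10.0)] + \
             [DecisionTreeClassifier(max_depth=d, random_state=0) for d in (3, 5, 10, None)]

rng = np.random.default_rng(0)
sample_size = 500
while len(candidates) > 1 and sample_size <= len(X):
    idx = rng.choice(len(X), size=sample_size, replace=False)
    scores = [cross_val_score(m, X[idx], y[idx], cv=3).mean() for m in candidates]
    candidates = [candidates[i] for i in np.argsort(scores)[len(candidates) // 2:]]
    sample_size *= 2                      # keep the better half, double the sample

print(candidates[0])                      # surviving configuration
```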

  10. Route Selection with Unspecified Sites Using Knowledge Based Genetic Algorithm

    Science.gov (United States)

    Kanoh, Hitoshi; Nakamura, Nobuaki; Nakamura, Tomohiro

    This paper addresses the problem of selecting a route to a given destination that traverses several non-specific sites (e.g. a bank, a gas station) as requested by a driver. The proposed solution uses a genetic algorithm that includes viral infection. The method is to generate two populations of viruses as domain specific knowledge in addition to a population of routes. A part of an arterial road is regarded as a main virus, and a road that includes a site is regarded as a site virus. An infection occurs between two points common to a candidate route and the virus, and involves the substitution of the intersections carried by the virus for those on the existing candidate route. Crossover and infection determine the easiest-to-drive and quasi-shortest route through the objective landmarks. Experiments using actual road maps show that this infection-based mechanism is an effective way of solving the problem. Our strategy is general, and can be effectively used in other optimization problems.

  11. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method

    OpenAIRE

    Chaharsooghi, S. K.; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in the supply chain management and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in researches. In recent years sustainability has received more attention in the supply chain management literature with triple bottom line (TBL) describing the sustainability in supply chain management with social, environmental, and economic initiatives. This paper explores sustainability in supply chain...

  12. Comparison of selected methods of prediction of wine exports and imports

    Directory of Open Access Journals (Sweden)

    Radka Šperková

    2008-01-01

    Full Text Available For the prediction of future events, there exist a number of methods usable in managerial practice. The decision on which of them should be used in a particular situation depends not only on the amount and quality of input information, but also on subjective managerial judgement. The paper performs a practical application and subsequent comparison of the results of two selected methods, a statistical method and a deductive method. Both methods were used for predicting wine exports and imports in (from) the Czech Republic. The prediction was made in 2003 and related to the economic years 2003/2004, 2004/2005, 2005/2006, and 2006/2007, within which it was compared with the real values of the given indicators. Within the deductive method, the most important factors of the external environment were characterized, including what the authors consider the most important influence: the integration of the Czech Republic into the EU on 1st May, 2004. On the contrary, the statistical method of time-series analysis did not take the integration into account, which follows from its principle. Statistics only calculates from data of the past and cannot incorporate the influence of irregular future conditions, such as the EU integration. Because of this, the prediction based on the deductive method was more optimistic and more precise in terms of its difference from the real development in the given field.

  13. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    Science.gov (United States)

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.
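
    The sketch below illustrates only the Pareto-domination relation and external-archive filtering that the abstract relies on, with two minimized objectives (classification error and feature cost); the PSO search itself, the probability-based encoding and the hybrid operator are not reproduced. The numeric values are hypothetical.

```python
# Pareto-domination check and nondominated filtering for (error, cost) pairs (minimization).
def dominates(a, b):
    """True if solution a is no worse than b in all objectives and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    """Return the Pareto front of a list of (error, cost) tuples."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

archive = [(0.12, 5.0), (0.10, 8.0), (0.15, 3.0), (0.12, 9.0), (0.09, 12.0)]
print(nondominated(archive))   # (0.12, 9.0) is dominated by (0.12, 5.0) and dropped
```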

  14. Attribute based selection of thermoplastic resin for vacuum infusion process

    DEFF Research Database (Denmark)

    Prabhakaran, R.T. Durai; Lystrup, Aage; Løgstrup Andersen, Tom

    2011-01-01

    The composite industry looks toward a new material system (resins) based on thermoplastic polymers for the vacuum infusion process, similar to the infusion process using thermosetting polymers. A large number of thermoplastics are available in the market with a variety of properties suitable...... for different engineering applications, and few of those are available in a not yet polymerised form suitable for resin infusion. The proper selection of a new resin system among these thermoplastic polymers is a concern for manufactures in the current scenario and a special mathematical tool would...... be beneficial. In this paper, the authors introduce a new decision making tool for resin selection based on significant attributes. This article provides a broad overview of suitable thermoplastic material systems for vacuum infusion process available in today’s market. An illustrative example—resin selection...

  15. An ontological knowledge based system for selection of process monitoring and analysis tools

    DEFF Research Database (Denmark)

    Singh, Ravendra; Gernaey, Krist; Gani, Rafiqul

    2010-01-01

    monitoring and analysis tools for a wide range of operations has made their selection a difficult, time consuming and challenging task. Therefore, an efficient and systematic knowledge base coupled with an inference system is necessary to support the optimal selection of process monitoring and analysis tools......, satisfying the process and user constraints. A knowledge base consisting of the process knowledge as well as knowledge on measurement methods and tools has been developed. An ontology has been designed for knowledge representation and management. The developed knowledge base has a dual feature. On the one...... procedures has been developed to retrieve the data/information stored in the knowledge base....

  16. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground target detection performance of space-time adaptive processing (STAP) decreases when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts the training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance so as to reject the contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value and the training samples with the larger values are preferentially selected to realize dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
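
    The following sketch shows one plausible reading of the mean-Hausdorff similarity used above: a symmetric average of directed mean nearest-neighbour distances between two point sets. The random point sets are placeholders; the paper's samples are complex-valued space-time snapshots, and the projection and sorting steps are not reproduced.

```python
# Symmetric mean-Hausdorff distance between two point sets (illustrative only).
import numpy as np
from scipy.spatial.distance import cdist

def mean_hausdorff(A, B):
    """Average of the directed mean Hausdorff distances between point sets A and B."""
    d = cdist(A, B)                      # pairwise Euclidean distances
    forward = d.min(axis=1).mean()       # mean over A of distance to nearest point in B
    backward = d.min(axis=0).mean()      # mean over B of distance to nearest point in A
    return 0.5 * (forward + backward)

rng = np.random.default_rng(0)
sample_1 = rng.normal(size=(64, 2))              # hypothetical training sample as a point set
sample_2 = rng.normal(loc=0.5, size=(64, 2))
print(mean_hausdorff(sample_1, sample_2))
```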

  17. Dose selection based on physiologically based pharmacokinetic (PBPK) approaches.

    Science.gov (United States)

    Jones, Hannah M; Mayawala, Kapil; Poulin, Patrick

    2013-04-01

    Physiologically based pharmacokinetic (PBPK) models are built using differential equations to describe the physiology/anatomy of different biological systems. Readily available in vitro and in vivo preclinical data can be incorporated into these models to not only estimate pharmacokinetic (PK) parameters and plasma concentration-time profiles, but also to gain mechanistic insight into compound properties. They provide a mechanistic framework to understand and extrapolate PK and dose across in vitro and in vivo systems and across different species, populations and disease states. Using small molecule and large molecule examples from the literature and our own company, we have shown how PBPK techniques can be utilised for human PK and dose prediction. Such approaches have the potential to increase efficiency, reduce the need for animal studies, replace clinical trials and increase PK understanding. Given the mechanistic nature of these models, the future use of PBPK modelling in drug discovery and development is promising, however some limitations need to be addressed to realise its application and utility more broadly.

  18. Robot soccer action selection based on Q learning

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper studies robot soccer action selection based on Q-learning. The robots learn to activate particular behaviors given their current situation and reward signal. We adopt a neural network implementation of Q-learning for its generalization properties and limited computer memory requirements.
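
    A minimal tabular Q-learning sketch with epsilon-greedy action selection is given below to make the update rule explicit; the paper itself approximates the Q function with a neural network, and the action set and reward shown here are hypothetical.

```python
# Tabular Q-learning: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
import random
from collections import defaultdict

N_ACTIONS = 4                      # e.g. shoot, pass, dribble, defend (hypothetical action set)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(lambda: [0.0] * N_ACTIONS)

def select_action(state):
    if random.random() < EPSILON:                                  # explore
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])        # exploit

def update(state, action, reward, next_state):
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])

# one hypothetical transition
s, s_next = "ball_near_goal", "goal_scored"
a = select_action(s)
update(s, a, reward=1.0, next_state=s_next)
print(Q[s])
```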

  19. Evaporation rate-based selection of supramolecular chirality.

    Science.gov (United States)

    Hattori, Shingo; Vandendriessche, Stefaan; Koeckelberghs, Guy; Verbiest, Thierry; Ishii, Kazuyuki

    2017-03-09

    We demonstrate the evaporation rate-based selection of supramolecular chirality for the first time. P-type aggregates prepared by fast evaporation, and M-type aggregates prepared by slow evaporation are kinetic and thermodynamic products under dynamic reaction conditions, respectively. These findings provide a novel solution reaction chemistry under the dynamic reaction conditions.

  20. Validity of selected cardiovascular field-based test among Malaysian ...

    African Journals Online (AJOL)

    Based on the emerging obesity problem among Malaysians, this research was formulated to validate published tests among healthy female adults. Selected tests, namely the 20 meter multi-stage shuttle run, 2.4 km run test, 1 mile walk test and Harvard Step test, were correlated with a laboratory test (Bruce protocol) to find the criterion validity ...

  1. Solar Thermal AIR Collector Based on New Type Selective Coating

    Directory of Open Access Journals (Sweden)

    Musiy, R.Y.

    2014-01-01

    Full Text Available A solar thermal air collector based on a new selective coating with the best optical performance is designed; it operates on solar power on the principle of simultaneous ventilation and heating of facilities. It can be used for vacation homes, museums, wooden churches, warehouses, garages, houses, greenhouses etc.

  2. Exposure level from selected base station tower around Kuala Nerus

    African Journals Online (AJOL)

    Health risks due to RF radiation exposure from base station towers (BST) have been debated for years, leading to public concerns. Thus, this preliminary study aims to measure, evaluate and analyze the exposure level at three selected BSTs around Kuala Nerus. The measurement of exposure level in terms of voltage ...

  3. Evaluation of methods for selecting the midventilation bin in 4DCT scans of lung cancer patients

    DEFF Research Database (Denmark)

    Nygaard, Ditte Eklund; Persson, Gitte Fredberg; Brink, Carsten

    2013-01-01

    based on: 1) visual evaluation of tumour displacement; 2) rigid registration of tumour position; 3) diaphragm displacement in the CC direction; and 4) carina displacement in the CC direction. Determination of the MidV bin based on the displacement of the manually delineated gross tumour volume (GTV.......4-5.4) mm, 1.9 (0.5-6.9) mm, 2.0 (0.5-12.3) mm and 1.1 (0.4-5.4) mm for the visual, rigid registration, diaphragm, carina, and reference method. Median (range) absolute difference between geometric MidV error for the evaluated methods and the reference method was 0.0 (0.0-1.2) mm, 0.0 (0.0-1.7) mm, 0.7 (0.......0-3.9) mm and 1.0 (0.0-6.9) mm for the visual, rigid registration, diaphragm and carina method. Conclusion. The visual and semi-automatic rigid registration methods were equivalent in accuracy for selecting the MidV bin of a 4DCT scan. The methods based on diaphragm and carina displacement cannot...

  4. Infrared face recognition based on LBP histogram and KW feature selection

    Science.gov (United States)

    Xie, Zhihua

    2014-07-01

    The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvements. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract locally robust features from infrared face images, LBP is chosen to obtain the composition of micro-patterns of sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns which are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms the traditional methods based on the LBP histogram, discrete cosine transform (DCT) or principal component analysis (PCA).
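
    The sketch below illustrates the KW selection step on synthetic data: each LBP histogram bin is scored with the Kruskal-Wallis H statistic across subject classes and only the highest-scoring bins are retained. The 59-bin histogram size and the number of retained bins are assumptions for illustration, not values from the paper.

```python
# Kruskal-Wallis scoring of histogram bins; random data stands in for LBP histograms.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
n_per_class, n_bins, n_classes = 20, 59, 3          # assumed 59-bin uniform LBP histogram
X = rng.random((n_per_class * n_classes, n_bins))
y = np.repeat(np.arange(n_classes), n_per_class)

def kw_scores(X, y):
    """Kruskal-Wallis H statistic of each feature (histogram bin) across classes."""
    groups = [X[y == c] for c in np.unique(y)]
    return np.array([kruskal(*[g[:, j] for g in groups]).statistic
                     for j in range(X.shape[1])])

scores = kw_scores(X, y)
top_bins = np.argsort(scores)[::-1][:20]             # keep the 20 most discriminative bins
X_reduced = X[:, top_bins]
print(X_reduced.shape)
```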

  5. Use of magnesium silicate as a selective absorbent in radioimmunological method of determination of insulin level in human serum

    Energy Technology Data Exchange (ETDEWEB)

    Bogoniowska, Z; Stelmasiak, T [Wojskowy Instytut Higieny i Epidemiologii, Warsaw (Poland)

    1974-01-01

    The authors present a radioimmunological method for determination of insulin (IRI) level in the human serum using magnesium silicate (talc) as adsorbent. The method is based on the phenomenon of selective adsorption of the free radioactive hormone. The optimal parameters for the method were determined. The serum level of IRI in clinically healthy subjects after oral glucose loading was established. The obtained results were compared with the results obtained by the radioimmunological method of double antibodies in stochastically grouped samples.

  6. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    Science.gov (United States)

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. To address this, we first design a similarity measure between the context information to take word cooccurrences and phrase chunks around the features into account. Then we introduce the similarity of context information into the importance measure of the features to substitute the document and term frequency. In this way we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.

  7. A New Manufacturing Service Selection and Composition Method Using Improved Flower Pollination Algorithm

    Directory of Open Access Journals (Sweden)

    Wenyu Zhang

    2016-01-01

    Full Text Available With an increasing number of manufacturing services, the means by which to select and compose these manufacturing services have become a challenging problem. It can be regarded as a multiobjective optimization problem that involves a variety of conflicting quality of service (QoS) attributes. In this study, a multiobjective optimization model of manufacturing service composition is presented that is based on QoS and an environmental index. Next, the skyline operator is applied to reduce the solution space. And then a new method called improved Flower Pollination Algorithm (FPA) is proposed for solving the problem of manufacturing service selection and composition. The improved FPA enhances the performance of basic FPA by combining the latter with crossover and mutation operators of the Differential Evolution (DE) algorithm. Finally, a case study is conducted to compare the proposed method with other evolutionary algorithms, including the Genetic Algorithm, DE, basic FPA, and extended FPA. The experimental results reveal that the proposed method performs best at solving the problem of manufacturing service selection and composition.

  8. A Parameter Selection Method for Wind Turbine Health Management through SCADA Data

    Directory of Open Access Journals (Sweden)

    Mian Du

    2017-02-01

    Full Text Available Wind turbine anomaly or failure detection using machine learning techniques through the supervisory control and data acquisition (SCADA) system is drawing wide attention from academia and industry. While parameter selection is important for modelling a wind turbine's condition, only a few papers have been published focusing on this issue, and in those papers interconnections among sub-components in a wind turbine are used to address this problem. However, relying merely on the interconnections for decision making is sometimes too general to provide a parameter list that considers the differences of each SCADA dataset. In this paper, a method is proposed to provide more detailed suggestions on parameter selection based on mutual information. First, the copula is proven to be capable of simplifying the estimation of mutual information. Then an empirical copula-based mutual information estimation method (ECMI) is introduced for application. After that, a real SCADA dataset is adopted to test the method, and the results show the effectiveness of the ECMI in providing parameter selection suggestions when physical knowledge is not accurate enough.

  9. A nodal method based on the response-matrix method

    International Nuclear Information System (INIS)

    Cunha Menezes Filho, A. da; Rocamora Junior, F.D.

    1983-02-01

    A nodal approach based on the Response-Matrix method is presented with the purpose of investigating the possibility of mixing two different allocations in the same problem. It is found that the use of allocation of albedo combined with allocation of direct reflection produces good results for homogeneous fast reactor configurations. (Author) [pt

  10. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan

    2011-05-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, single transmit antenna which has the highest shadowing coefficient is selected. By the proposed technique, usage of the feedback channel and channel estimation complexity at the receiver can be reduced. We consider independent but not identically distributed Generalized-K composite fading model, which is a general composite fading & shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, theoretical performance results are validated by Monte Carlo simulations. © 2011 IEEE.

  11. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan; Yilmaz, Ahmet Oǧuz; Alouini, Mohamed-Slim; Kucur, Oǧuz

    2011-01-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, single transmit antenna which has the highest shadowing coefficient is selected. By the proposed technique, usage of the feedback channel and channel estimation complexity at the receiver can be reduced. We consider independent but not identically distributed Generalized-K composite fading model, which is a general composite fading & shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, theoretical performance results are validated by Monte Carlo simulations. © 2011 IEEE.

  12. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach applied in major fields such as biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks to find the best subset of features describing data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods which are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information based criteria. Significant improvement in performance, computational time, and selection of minimally sized feature subsets is achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution of desirable characteristics.

  13. Statistical methods and applications from a historical perspective selected issues

    CERN Document Server

    Mignani, Stefania

    2014-01-01

    The book showcases a selection of peer-reviewed papers, the preliminary versions of which were presented at a conference held 11-13 June 2011 in Bologna and organized jointly by the Italian Statistical Society (SIS), the National Institute of Statistics (ISTAT) and the Bank of Italy. The theme of the conference was "Statistics in the 150 years of the Unification of Italy." The celebration of the anniversary of Italian unification provided the opportunity to examine and discuss the methodological aspects and applications from a historical perspective and both from a national and international point of view. The critical discussion on the issues of the past has made it possible to focus on recent advances, considering the studies of socio-economic and demographic changes in European countries.

  14. Multi criteria decision making methods for location selection of distribution centers

    Directory of Open Access Journals (Sweden)

    Romita Chakraborty

    2013-10-01

    Full Text Available In recent years, owing to major challenges such as increasingly inflexible consumer demands and the need to improve competitive advantage, it has become necessary for industrial organizations all over the world to focus on strategies that will help them achieve cost reduction, continual quality improvement, increased customer satisfaction and on-time delivery performance. As a result, selection of the most suitable and optimal facility location for a new organization, or expansion of an existing location, is one of the most important strategic issues required to fulfill all of the above mentioned objectives. In order to survive in the global competitive market of the 21st century, many industrial organizations have begun to concentrate on the proper selection of the plant site or best facility location. The best location is the one which results in higher economic benefits through increased productivity and a good distribution network. When a choice is to be made from among several alternative facility locations, it is necessary to compare their performance characteristics in a decisive way. As the facility location selection problem involves multiple conflicting criteria and a finite set of potential candidate alternatives, different multi-criteria decision-making (MCDM) methods can be effectively applied to solve this type of problem. In this paper, four well known MCDM methods have been applied to a facility location selection problem and their relative ranking performances are compared. Because of disagreement in the ranks obtained by the four different MCDM methods, a final ranking method based on REGIME has been proposed by the authors to facilitate the decision making process.

  15. Comparative studies of praseodymium(III) selective sensors based on newly synthesized Schiff's bases

    International Nuclear Information System (INIS)

    Gupta, Vinod K.; Goyal, Rajendra N.; Pal, Manoj K.; Sharma, Ram A.

    2009-01-01

    Praseodymium ion selective polyvinyl chloride (PVC) membrane sensors, based on two new Schiff's bases, 1,3-diphenylpropane-1,3-diylidenebis(azan-1-ylidene)diphenol (M1) and N,N'-bis(pyridoxylideneiminato) ethylene (M2), have been developed and studied. The sensor having a membrane composition of PVC: o-NPOE: ionophore (M1): NaTPB (w/w; mg) of 150: 300: 8: 5 showed the best performance in comparison to M2 based membranes. The sensor based on (M1) exhibits a working concentration range of 1.0 × 10⁻⁸ to 1.0 × 10⁻² M with a detection limit of 5.0 × 10⁻⁹ M and a Nernstian slope of 20.0 ± 0.3 mV decade⁻¹ of activity. It exhibited a quick response time of <8 s and its potential responses were pH independent across the range of 3.5-8.5. The influence of the membrane composition and possible interfering ions on the response properties of the electrode has also been investigated. The sensor has been found to work satisfactorily in partially non-aqueous media up to 15% (v/v) content of methanol, ethanol or acetonitrile and could be used for a period of 3 months. The selectivity coefficients determined by using the fixed interference method (FIM) indicate high selectivity for praseodymium(III) ions over a wide variety of other cations. To assess its analytical applicability, the prepared sensor was successfully applied for determination of praseodymium(III) in spiked water samples.

  16. Application of the selected physical methods in biological research

    Directory of Open Access Journals (Sweden)

    Jaromír Tlačbaba

    2013-01-01

    Full Text Available This paper deals with the application of acoustic emission (AE), one of the non-destructive methods currently having extensive application. The method is used for measuring internal defects of materials. AE has a high potential in further research and development to extend its application even to the field of process engineering. In this respect, acoustic emission monitoring is most elaborate under laboratory conditions with regard to external stimuli. The aim of the project is to apply acoustic emission to record the activity of bees in different seasons. The mission is to apply a new perspective on the behavior of colonies by means of acoustic emission, which records sound propagation in the material. Vibration is an integral part of communication in the colony. Sensing colonies with the support of this method is used for understanding the biological behavior of colonies in response to stimuli, brood (clutches), colony development etc. Simulated conditions supported by the acoustic emission monitoring system illustrate colony activity. The collected information will be used to give a comprehensive view of the life cycle and behavior of honey bees (Apis mellifera). Use of information about the activities of bees gives a comprehensive perspective on the use of acoustic emission in the field of biological research.

  17. Assessment of four different methods for selecting biosurfactant ...

    African Journals Online (AJOL)

    ... and ease of use to screen biosurfactant producing six extremely halophilic bacteria isolated from saline soil of Chott El Hodna-M'sila (Algeria), which is considered as a thalassohaline environment. Results from screening methods revealed that, CH2 and CH5 strains are potential candidates for biosurfactant production.

  18. A rapid and highly selective method for the estimation of pyro-, tri- and orthophosphates.

    Science.gov (United States)

    Kamat, D R; Savant, V V; Sathyanarayana, D N

    1995-03-01

    A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)₂HP₂O₇]·4H₂O and [Co(en)₂H₂P₃O₁₀]·2H₂O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving 99.5% gravimetric yield in both cases. There are no interferences from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining pyrophosphate and triphosphate contents in various matrices. In the case of orthophosphate, the proposed method differs from the available methods such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate, followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.

  19. FiGS: a filter-based gene selection workbench for microarray data

    Directory of Open Access Journals (Sweden)

    Yun Taegyun

    2010-01-01

    Full Text Available Abstract Background The selection of genes that discriminate disease classes from microarray data is widely used for the identification of diagnostic biomarkers. Although various gene selection methods are currently available and some of them have shown excellent performance, no single method can retain the best performance for all types of microarray datasets. It is desirable to use a comparative approach to find the best gene selection result after rigorous testing of different methodological strategies for a given microarray dataset. Results FiGS is a web-based workbench that automatically compares various gene selection procedures and provides the optimal gene selection result for an input microarray dataset. FiGS builds up diverse gene selection procedures by aligning different feature selection techniques and classifiers. In addition to the highly reputed techniques, FiGS diversifies the gene selection procedures by incorporating gene clustering options in the feature selection step and different data pre-processing options in the classifier training step. All candidate gene selection procedures are evaluated by the .632+ bootstrap errors and listed with their classification accuracies and selected gene sets. FiGS runs on parallelized computing nodes that support heavy computations. FiGS is freely accessible at http://gexp.kaist.ac.kr/figs. Conclusion FiGS is a web-based application that automates an extensive search for the optimized gene selection analysis for a microarray dataset in a parallel computing environment. FiGS will provide both an efficient and comprehensive means of acquiring optimal gene sets that discriminate disease states from microarray datasets.

  20. Methodical features of selection of radiation-resistant semiconductor devices on the base of initial informative parameters

    Energy Technology Data Exchange (ETDEWEB)

    Zhukov, Yu N [and others]

    1994-12-31

    A method is proposed for evaluating the statistical interrelation of the initial values of informative parameters with the radiation resistance of semiconductor devices, using an information content factor which is invariant with respect to the selection scope and confidence probability.

  1. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    Science.gov (United States)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model

  2. On the selection of optimized carbon nano tube synthesis method using analytic hierarchy process

    International Nuclear Information System (INIS)

    Besharati, M. K.; Afaghi Khatibi, A.; Akbari, M.

    2008-01-01

    Evidence from the early and late industrializers shows that technology, as the commercial application of scientific knowledge, has been a major driver of industrial and economic development. International technology transfer is now being recognized as having played an important role in the development of the most successful late industrializers of the second half of the twentieth century. Our society stands to be significantly influenced by carbon nano tubes, shaped by nano tube applications in every aspect, just as silicon-based technology still shapes society today. Nano tubes can be formed in various structures using several different processing methods. In this paper, the synthesis methods used to produce nano tubes at industrial or laboratory scales are discussed and a comparison is made. A technical feasibility study is conducted by using a multi-criteria decision-making model, namely the Analytic Hierarchy Process. The article ends with a discussion of selecting the best method for technology transfer of carbon nano tubes to Iran

  3. A Novel Extension Decision-Making Method for Selecting Solar Power Systems

    Directory of Open Access Journals (Sweden)

    Meng-Hui Wang

    2013-01-01

    Full Text Available Due to the complex parameters of a solar power system, the designer not only must think about the load demand but also needs to consider the price, weight, annual power generating capacity (APGC) and maximum power of the solar system. Finding the optimal solar power system among many parameters is an important task. Therefore, this paper presents a novel decision-making method based on extension theory, called the extension decision-making method (EDMM). Using the EDMM makes it quick to select the optimal solar power system. The paper proposes this method not only to provide a useful estimation tool for solar system engineers but also to supply an important reference for consumers when installing solar systems.

  4. Hot-spot selection and evaluation methods for whole slice images of meningiomas and oligodendrogliomas.

    Science.gov (United States)

    Swiderska, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Slodkowska, Janina

    2015-01-01

    The paper presents a combined method for automatic hot-spot area selection in whole slide images, based on a penalty factor, to support the pathomorphological diagnostic procedure. The studied slides represent meningioma and oligodendroglioma tumours stained with the Ki-67/MIB-1 immunohistochemical reaction. This allows the tumour proliferation index to be determined and gives an indication for medical treatment and prognosis. A combined method based on mathematical morphology, thresholding, texture analysis and classification is proposed and verified. The presented algorithm includes building a specimen map, elimination of hemorrhages from it, and two methods for detecting hot-spot fields with respect to an introduced penalty factor. Furthermore, we propose a localization concordance measure to evaluate the localization of hot spots selected by the algorithms with respect to the expert's results. The results of the influence of the penalty factor are presented and discussed; it was found that the best results are obtained for a penalty factor value of 0.2. They confirm the effectiveness of the applied approach.

  5. New evaluation methods for conceptual design selection using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai [University of Electronic Science and Technology of China, Chengdu (China); Xue, Lihua [Higher Education Press, Beijing (China)

    2013-03-15

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  6. New evaluation methods for conceptual design selection using computational intelligence techniques

    International Nuclear Information System (INIS)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai; Xue, Lihua

    2013-01-01

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  7. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    Full Text Available This paper addresses the problem of a scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  8. Agile Methods: Selected DoD Management and Acquisition Concerns

    Science.gov (United States)

    2011-10-01

    PreProcessor (PHP)/MySQL-based forum type website that already exists in the .com and simply move it to the .mil, it could take $3-5 million and a year to...

  9. Rough sets selected methods and applications in management and engineering

    CERN Document Server

    Peters, Georg; Ślęzak, Dominik; Yao, Yiyu

    2012-01-01

    Introduced in the early 1980s, Rough Set Theory has become an important part of soft computing in the last 25 years. This book provides a practical, context-based analysis of rough set theory, with each chapter exploring a real-world application of Rough Sets.

  10. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Ryan B., E-mail: randerson@astro.cornell.edu [Cornell University Department of Astronomy, 406 Space Sciences Building, Ithaca, NY 14853 (United States); Bell, James F., E-mail: Jim.Bell@asu.edu [Arizona State University School of Earth and Space Exploration, Bldg.: INTDS-A, Room: 115B, Box 871404, Tempe, AZ 85287 (United States); Wiens, Roger C., E-mail: rwiens@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States); Morris, Richard V., E-mail: richard.v.morris@nasa.gov [NASA Johnson Space Center, 2101 NASA Parkway, Houston, TX 77058 (United States); Clegg, Samuel M., E-mail: sclegg@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO₂ at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by approximately 3 wt.%. The statistical significance of these improvements was approximately 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and
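
    A minimal sketch of the second strategy listed above (k-means clustering with one PLS2 model per cluster) is given below, using scikit-learn and synthetic arrays in place of LIBS spectra and geostandard compositions; the number of clusters and PLS components are illustrative assumptions.

```python
# k-means training-set partitioning with a separate PLS2 model per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.random((200, 500))            # placeholder training spectra
compositions = rng.random((200, 9))         # placeholder major-oxide compositions

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(spectra)

# one PLS2 model per cluster, trained only on the spectra assigned to that cluster
models = {}
for c in range(k):
    Xc, Yc = spectra[km.labels_ == c], compositions[km.labels_ == c]
    models[c] = PLSRegression(n_components=min(8, max(1, len(Xc) - 1))).fit(Xc, Yc)

# a test spectrum is predicted by the model of the cluster it falls into
test_spectrum = rng.random((1, 500))
cluster = km.predict(test_spectrum)[0]
print(models[cluster].predict(test_spectrum))
```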

  11. Achievement of extreme resolution for the selective by depth Moessbauer method on conversion electrons

    International Nuclear Information System (INIS)

    Babenkov, M.I.; Zhdanov, V.S.; Ryzhikh, V.Yu.; Chubisov, M.A.

    2001-01-01

    At the Institute of Nuclear Physics of the National Nuclear Center of the Republic of Kazakhstan, the depth selective conversion electron Moessbauer spectroscopy (DSCEMS) method was realized on a facility based on a magnetic sector beta-spectrometer with dual focusing, equipped with a non-equipotential electron source in the multi-ribbon variant and a position-sensitive detector. In this work, model statistical calculations of the energy and angular distributions of electrons that have experienced only a few inelastic scattering acts were carried out

  12. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. Firstly, the VGG16 net is used as the feature extractor to extract 4,096-dimensional features of the images; then the extracted features and labeled images are used to train a BP neural network, finally achieving color image definition evaluation. The method is tested using images from the CSIQ database. The images are blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each category representing one blur level. 300 out of 400 high-dimensional feature vectors are used to train the VGG16 net and BP neural network, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and characterization capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test data set; the accuracy rate is 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
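
    The pipeline described above can be sketched as follows: 4,096-dimensional features are taken from the fc2 layer of a pre-trained VGG16 and fed to a small back-propagation (MLP) classifier that predicts the blur level. Random arrays stand in for the CSIQ images, and the classifier size, training split and availability of ImageNet weights are assumptions.

```python
# VGG16 fc2 features + MLP (back-propagation) classifier for blur-level prediction.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from sklearn.neural_network import MLPClassifier

base = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)   # 4096-d features

rng = np.random.default_rng(0)
images = rng.integers(0, 255, size=(30, 224, 224, 3)).astype("float32")      # placeholder images
labels = np.repeat([0, 1, 2], 10)                                            # three blur levels

features = extractor.predict(preprocess_input(images), verbose=0)
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500).fit(features, labels)
print(clf.score(features, labels))
```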

  13. Method of reducing tungsten selectivity to a contact sidewall

    International Nuclear Information System (INIS)

    Smith, G.C.

    1990-01-01

    This patent describes a method for forming a contact plug on a surface of a semiconductor body. It comprises: forming a dielectric layer over the surface of the semiconductor body, the dielectric layer having an aperture therethrough with sidewalls comprising silicon nitride; depositing a metal into the aperture in such a manner that the metal deposits upon the silicon nitride of the sidewalls of the aperture at a substantially greater rate than upon the surface of the dielectric layer

  14. SELECTION OF NON-CONVENTIONAL MACHINING PROCESSES USING THE OCRA METHOD

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-04-01

    Full Text Available Selection of the most suitable nonconventional machining process (NCMP) for a given machining application can be viewed as a multi-criteria decision making (MCDM) problem with many conflicting and diverse criteria. To aid these selection processes, different MCDM methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, i.e. the operational competitiveness ratings analysis (OCRA) method, for solving NCMP selection problems. The applicability, suitability and computational procedure of the OCRA method have been demonstrated while solving three case studies dealing with selection of the most suitable NCMP. In each case study the obtained rankings were compared with those derived by the past researchers using different MCDM methods. The results obtained using the OCRA method have good correlation with those derived by the past researchers, which validates the usefulness of this method while solving complex NCMP selection problems.

  15. THIRD PARTY LOGISTIC SERVICE PROVIDER SELECTION USING FUZZY AHP AND TOPSIS METHOD

    Directory of Open Access Journals (Sweden)

    Golam Kabir

    2012-03-01

    Full Text Available The use of third party logistics (3PL) service providers is increasing globally to accomplish strategic objectives. In the increasingly competitive environment, logistics strategic management requires a systematic and structured approach to have a cutting edge over rivals. Logistics service provider selection is a complex multi-criteria decision making process in which decision makers have to deal with the optimization of conflicting objectives such as quality, cost, and delivery time. In this paper, a fuzzy analytic hierarchy process (FAHP) approach based on the technique for order preference by similarity to ideal solution (TOPSIS) method is proposed for evaluating and selecting an appropriate logistics service provider, where the ratings of each alternative and the importance weight of each criterion are expressed as triangular fuzzy numbers.
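
    For orientation, the sketch below runs a crisp (non-fuzzy) TOPSIS ranking over three hypothetical 3PL providers and three criteria; the paper itself uses triangular fuzzy numbers and FAHP-derived weights, which are replaced here by made-up crisp values.

```python
# Crisp TOPSIS: normalize, weight, measure distances to ideal/anti-ideal, rank by closeness.
import numpy as np

# rows: candidate providers, columns: criteria (quality, cost, delivery time) -- hypothetical scores
D = np.array([[7.0, 4.0, 6.0],
              [9.0, 6.0, 5.0],
              [6.0, 3.0, 8.0]])
weights = np.array([0.5, 0.3, 0.2])          # hypothetical criterion weights
benefit = np.array([True, False, False])     # cost and delivery time are minimized

R = D / np.sqrt((D ** 2).sum(axis=0))        # vector normalization
V = R * weights                              # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("ranking (best first):", np.argsort(closeness)[::-1])
```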

  16. Project evaluation and selection using fuzzy Delphi method and zero - one goal programming

    Science.gov (United States)

    Alias, Suriana; Adna, Nofarziah; Arsad, Roslah; Soid, Siti Khuzaimah; Ali, Zaileha Md

    2014-12-01

    Project evaluation and selection is a factor affecting the importance of the board of directors, which is trying to maximize all the possible goals. Assessment of the problems occurring in the organization's plan is the first phase of the decision making process. The company needs a group of experts to evaluate the problems. The Fuzzy Delphi Method (FDM) is a systematic procedure to elicit the group's opinion in order to obtain the best result for evaluating project performance. This paper proposes evaluation and selection of the best alternative project based on a combination of the FDM and Zero-One Goal Programming (ZOGP) formulation. ZOGP is used to solve the multi-criteria decision making in the final decision part by using the optimization software LINDO 6.1. An empirical example of an ongoing decision making project in Johor, Malaysia is implemented as a case study.

  17. Improved targeted immunization strategies based on two rounds of selection

    Science.gov (United States)

    Xia, Ling-Ling; Song, Yu-Rong; Li, Chan-Chan; Jiang, Guo-Ping

    2018-04-01

    In the case of high degree targeted immunization where the number of vaccines is limited, when more than one node with the same degree meets the requirement of high degree centrality, how can we choose a certain number of nodes from those nodes, so that the number of immunized nodes will not exceed the limit? In this paper, we introduce a new idea derived from the selection process of a second-round exam to solve this problem and then propose three improved targeted immunization strategies. In these proposed strategies, the immunized nodes are selected through two rounds of selection, where we increase the quotas of the first-round selection according to the evaluation criterion of degree centrality and then consider another characteristic parameter of the nodes, such as the node's clustering coefficient, betweenness or closeness, to help choose targeted nodes in the second-round selection. To validate the effectiveness of the proposed strategies, we compare them with the degree immunizations including the high degree targeted and the high degree adaptive immunizations using two metrics: the size of the largest connected component of the immunized network and the number of infected nodes. Simulation results demonstrate that the proposed strategies based on two rounds of sorting are effective for heterogeneous networks and their immunization effects are better than that of the degree immunizations.
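
    A minimal sketch of the two-round idea on a synthetic network is given below: the first round over-selects candidates by degree, and the second round chooses among them using a second node property (clustering coefficient here). The graph, quota sizes and the tie-breaking direction are assumptions for illustration, not the paper's exact strategies.

```python
# Two-round targeted immunization sketch: degree first, then a second node property.
import networkx as nx

G = nx.barabasi_albert_graph(1000, 3, seed=0)   # synthetic heterogeneous network
budget = 20          # number of vaccine doses available
pool_size = 40       # enlarged first-round quota

# round 1: candidate pool of the highest-degree nodes
pool = sorted(G.nodes, key=G.degree, reverse=True)[:pool_size]

# round 2: within the pool, prefer nodes with the lowest clustering coefficient
# (an assumed tie-breaking rule: their neighbours are less interconnected)
clust = nx.clustering(G)
immunized = sorted(pool, key=lambda n: clust[n])[:budget]
print(immunized[:5])
```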

  18. Indicators for Monitoring Water, Sanitation, and Hygiene: A Systematic Review of Indicator Selection Methods

    Directory of Open Access Journals (Sweden)

    Stefanie Schwemlein

    2016-03-01

    Full Text Available Monitoring water, sanitation, and hygiene (WaSH is important to track progress, improve accountability, and demonstrate impacts of efforts to improve conditions and services, especially in low- and middle-income countries. Indicator selection methods enable robust monitoring of WaSH projects and conditions. However, selection methods are not always used and there are no commonly-used methods for selecting WaSH indicators. To address this gap, we conducted a systematic review of indicator selection methods used in WaSH-related fields. We present a summary of indicator selection methods for environment, international development, and water. We identified six methodological stages for selecting indicators for WaSH: define the purpose and scope; select a conceptual framework; search for candidate indicators; determine selection criteria; score indicators against criteria; and select a final suite of indicators. This summary of indicator selection methods provides a foundation for the critical assessment of existing methods. It can be used to inform future efforts to construct indicator sets in WaSH and related fields.

  19. Robust gene selection methods using weighting schemes for microarray data analysis.

    Science.gov (United States)

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.

  20. Mutual information based feature selection for medical image retrieval

    Science.gov (United States)

    Zhi, Lijia; Zhang, Shaomin; Li, Yan

    2018-04-01

    In this paper, the authors propose a mutual information based method for lung CT image retrieval. The method is designed to adapt to different datasets and different retrieval tasks. For practical applicability, the method avoids using a large amount of training data. Instead, with a well-designed training process and robust fundamental features and measurements, the method achieves promising performance while keeping the training computation economical. Experimental results show that the method has potential practical value for clinical routine application.
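
    A minimal sketch of mutual-information-based feature ranking is shown below; the paper's image features and training protocol are not described in the abstract, so a generic scikit-learn filter on labelled feature vectors stands in for them.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      def select_by_mutual_info(X, y, k=10):
          """X: n_images x n_features matrix, y: class/relevance labels."""
          mi = mutual_info_classif(X, y, random_state=0)
          return np.argsort(-mi)[:k], mi

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 50))
      y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)   # labels depend on features 3 and 7
      keep, mi = select_by_mutual_info(X, y, k=5)
      print(keep)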

  1. Application of Methods of Management Education in the Selected Organization

    Directory of Open Access Journals (Sweden)

    Ema Milistenferová

    2017-07-01

    Full Text Available The article deals with the issue of educating managers. The first part of the article discusses the requirements for managers' skills, managerial competencies, a successful manager profile, and the importance of manager training. The second part of the article evaluates a questionnaire survey conducted at VSE Holding a.s., focused on the effectiveness of the education of the company's managers. Based on the survey, individual areas of company manager education are proposed within the programs.

  2. History based batch method preserving tally means

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Choi, Sung Hoon

    2012-01-01

    In Monte Carlo (MC) eigenvalue calculations, the sample variance of a tally mean calculated from its cycle-wise estimates is biased because of the inter-cycle correlations of the fission source distribution (FSD). Recently, we proposed a new real variance estimation method, named the history-based batch method, in which an MC run is treated as multiple runs with a small number of histories per cycle to generate independent tally estimates. In this paper, the history-based batch method based on the weight correction is presented to preserve the tally mean from the original MC run. The effectiveness of the new method is examined for the weakly coupled fissile array problem as a function of the dominance ratio and the batch size, in comparison with other available schemes.
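
    A minimal sketch of the underlying batch-means idea is given below: cycle-wise tally estimates are grouped into batches and the sample variance of the batch means estimates the real variance of the overall mean. The weight-correction step that preserves the tally mean is not reproduced here.

      import numpy as np

      def batch_variance_of_mean(cycle_tallies, batch_size):
          x = np.asarray(cycle_tallies, dtype=float)
          n_batches = len(x) // batch_size
          batches = x[: n_batches * batch_size].reshape(n_batches, batch_size)
          batch_means = batches.mean(axis=1)
          # Variance of the overall mean, estimated from nearly independent batch means.
          return batch_means.var(ddof=1) / n_batches

      rng = np.random.default_rng(0)
      tallies = 1.0 + 0.05 * rng.standard_normal(500)   # toy cycle-wise estimates
      print(batch_variance_of_mean(tallies, batch_size=25))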

  3. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    Science.gov (United States)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  4. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. These methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts from a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).

  5. An improved culture method for selective isolation of Campylobacter jejuni from wastewater

    Directory of Open Access Journals (Sweden)

    Jinyong Kim

    2016-08-01

    Full Text Available Campylobacter jejuni is one of the leading foodborne pathogens worldwide. C. jejuni is isolated from a wide range of foods, domestic animals, wildlife, and environmental sources. The currently available culture-based isolation methods are not highly effective for wastewater samples due to the low number of C. jejuni in the midst of competing bacteria. To detect and isolate C. jejuni from wastewater samples, in this study we evaluated several different enrichment conditions using five different antibiotics (i.e., cefoperazone, vancomycin, trimethoprim, polymyxin B, and rifampicin), to which C. jejuni is intrinsically resistant. The selectivity of each enrichment condition was measured with the Ct value using quantitative real-time PCR (qRT-PCR), and multiplex PCR was used to determine Campylobacter species. In addition, the efficacy of Campylobacter isolation on different culture media after selective enrichment was examined by growing on Bolton and Preston agar plates. The addition of polymyxin B, rifampicin, or both to the Bolton selective supplements enhanced the selective isolation of C. jejuni. In particular, rifampicin supplementation and an increased culture temperature (i.e., 42°C) had a decisive effect on the selective enrichment of C. jejuni from wastewater. The results of 16S rDNA sequencing also revealed that Enterococcus spp. and Pseudomonas aeruginosa are major competing bacteria under the enrichment conditions. Although it is known to be difficult to isolate Campylobacter from samples with heavy contamination, this study demonstrated that the manipulation of antibiotic selective pressure improves the isolation efficiency of fastidious Campylobacter from wastewater.

  6. Ranking and selection of commercial off-the-shelf using fuzzy distance based approach

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2015-06-01

    Full Text Available There has been tremendous growth in the use of the component based software engineering (CBSE) approach for the development of software systems. The selection of the best suited COTS components, which fulfil the necessary requirements for the development of the software, has become a major challenge for software developers. The complexity of the optimal selection problem increases with an increase in the number of alternative potential COTS components and the corresponding selection criteria. In this research paper, the problem of ranking and selection of Data Base Management System (DBMS) components is modeled as a multi-criteria decision making problem. A 'Fuzzy Distance Based Approach' (FDBA) method is proposed for the optimal ranking and selection of DBMS COTS components of an e-payment system based on 14 selection criteria grouped under three major categories, i.e. 'Vendor Capabilities', 'Business Issues' and 'Cost'. The results of this method are compared with those of the Analytical Hierarchy Process (AHP), a typical multi-criteria decision making approach. The proposed methodology is explained with an illustrative example.
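
    A minimal sketch of distance-to-ideal ranking with fuzzy ratings is given below; the exact FDBA aggregation and the paper's 14 criteria are not reproduced, and the triangular fuzzy scores and weights are made-up illustrative numbers.

      import numpy as np

      def fuzzy_distance(a, b):
          """Euclidean-style distance between two triangular fuzzy numbers (l, m, u)."""
          return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

      # Each alternative gets one triangular fuzzy score per criterion.
      ideal = [(7, 9, 10), (7, 9, 10), (7, 9, 10)]
      alternatives = {
          "COTS-A": [(5, 7, 9), (7, 9, 10), (3, 5, 7)],
          "COTS-B": [(7, 9, 10), (5, 7, 9), (5, 7, 9)],
      }
      weights = np.array([0.5, 0.3, 0.2])

      scores = {
          name: float(np.dot(weights, [fuzzy_distance(s, i) for s, i in zip(ratings, ideal)]))
          for name, ratings in alternatives.items()
      }
      # A smaller weighted distance to the ideal means a better rank.
      print(sorted(scores.items(), key=lambda kv: kv[1]))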

  7. Effects of the control method (Goč variety) in selection forest management in Western Serbia

    Directory of Open Access Journals (Sweden)

    Medarević M.

    2010-01-01

    Full Text Available The control method, one of the most reliable methods of selection forest management, has been applied in selection forests of western Serbia in a somewhat modified form (Goč variety) for fifty years. This paper analyzes the effects of the control method, i.e. its Goč variety, in the period from 1960/70 to 2000. It is based on the data of five successive complete inventories of the Forest Management Unit (FMU) 'Tara', whose high selection forests of spruce, fir and beech (Piceo-Abieti-Fagetum subass. typicum) trees on diluvium, brown and illimerised soil on limestone, and on limestone in formation with hornfels, are the best quality and the most spacious forests in the Management Class MC 491/1. The effects were monitored through the changes in the distribution of the number of trees and volume per diameter class, separately for fir as the protagonist of the selection structure, and collectively at the level of a compartment, a typical representative of MC 491/1. Also, the analysis included the changes in the number of trees, volume, current volume increment, yield, and number of recruited trees per unit area (1 ha) by tree species in MC 491/1, occupying an area of 2,648.78 ha. The study results show that in the study period the average volume in MC 491/1 increased by 18.8%, the percentage of conifers increased from 66.0% to 78.5%, and the bearer of the changes was fir. The volume of the mean fir tree increased by 35.9%, reaching 1.086 m3. The volume increment increased by 15.7%. The selection structure of conifers was satisfactory, but there were problems with beech regeneration, in its stable presence and in its achievement of the targeted structure. The number of trees per unit area (1 ha) decreased, which in the long run could have detrimental consequences, but the sustainability in general was satisfactory. The levels of regeneration and recruitment were satisfactory. The health of the trees was improved; the stands were healthy, vital

  8. Selected asymptotic methods with applications to electromagnetics and antennas

    CERN Document Server

    Fikioris, George; Bakas, Odysseas N

    2013-01-01

    This book describes and illustrates the application of several asymptotic methods that have proved useful in the authors' research in electromagnetics and antennas. We first define asymptotic approximations and expansions and explain these concepts in detail. We then develop certain prerequisites from complex analysis such as power series, multivalued functions (including the concepts of branch points and branch cuts), and the all-important gamma function. Of particular importance is the idea of analytic continuation (of functions of a single complex variable); our discussions here include som

  9. A Selected Reaction Monitoring (SRM)-Based Method for Absolute Quantification of Aβ38, Aβ40, and Aβ42 in Cerebrospinal Fluid of Alzheimer's Disease Patients and Healthy Controls

    DEFF Research Database (Denmark)

    Pannee, Josef; Portelius, Erik; Oppermann, Madalina

    2013-01-01

    Samples were prepared by solid-phase extraction and quantification was performed using stable-isotope labeled Aβ peptides as internal standards. The diagnostic performance of the method was evaluated on two independent clinical materials with research volunteers who were cognitively normal and AD patients with mild to moderate dementia. Analytical characteristics of the method include a lower limit of quantification of 62.5 pg/mL for Aβ42 and coefficients of variation below 10%. In a pilot study on AD patients and controls, we verified disease-association with decreased levels of Aβ42 similar...

  10. Selected methods of waste monitoring using modern analytical techniques

    International Nuclear Information System (INIS)

    Hlavacek, I.; Hlavackova, I.

    1993-11-01

    Issues of the inspection and control of bituminized and cemented waste are discussed, and some methods of their nondestructive testing are described. Attention is paid to the inspection techniques, non-nuclear spectral techniques in particular, as employed for quality control of the wastes, waste concentrates, spent waste leaching solutions, as well as for the examination of environmental samples (waters and soils) from the surroundings of nuclear power plants. Some leaching tests used abroad for this purpose and practical analyses by the ICP-AES technique are given by way of example. The ICP-MS technique, which is unavailable in the Czech Republic, is routinely employed abroad for alpha nuclide measurements; examples of such analyses are also given. The next topic discussed includes the monitoring of organic acids and complexants to determine the degree of their thermal decomposition during the bituminization of wastes on an industrial line. All of the methods and procedures highlighted can be used as technical support during the monitoring of radioactive waste properties in industrial conditions, in the chemical and radiochemical analyses of wastes and related matter, in the calibration of nondestructive testing instrumentation, in the monitoring of contamination of the surroundings of nuclear facilities, and in trace analysis. (author). 10 tabs., 1 fig., 14 refs

  11. Attention-based Memory Selection Recurrent Network for Language Modeling

    OpenAIRE

    Liu, Da-Rong; Chuang, Shun-Po; Lee, Hung-yi

    2016-01-01

    Recurrent neural networks (RNNs) have achieved great success in language modeling. However, since RNNs have a fixed memory size, they cannot store all the information about the words they have seen before in the sentence, and thus useful long-term information may be ignored when predicting the next words. In this paper, we propose the Attention-based Memory Selection Recurrent Network (AMSRN), in which the model can review the information stored in the memory at each previous time ...

  12. Risk-based audit selection of dairy farms.

    Science.gov (United States)

    van Asseldonk, M A P M; Velthuis, A G J

    2014-02-01

    Dairy farms are audited in the Netherlands on numerous process standards. Each farm is audited once every 2 years. Increasing demands for cost-effectiveness in farm audits can be met by introducing risk-based principles. This implies targeting subpopulations with a higher risk of poor process standards. To select farms for an audit that present higher risks, a statistical analysis was conducted to test the relationship between the outcome of farm audits and bulk milk laboratory results before the audit. The analysis comprised 28,358 farm audits and all conducted laboratory tests of bulk milk samples 12 mo before the audit. The overall outcome of each farm audit was classified as approved or rejected. Laboratory results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acids (FFA), and cleanliness of the milk (CLN). The bulk milk laboratory results were significantly related to audit outcomes. Rejected audits are likely to occur on dairy farms with higher mean levels of SCC, TBC, ADR, and BAB. Moreover, in a multivariable model, maxima for TBC, SCC, and FPD as well as standard deviations for TBC and FPD are risk factors for negative audit outcomes. The efficiency curve of a risk-based selection approach, on the basis of the derived regression results, dominated the current random selection approach. To capture 25, 50, or 75% of the population with poor process standards (i.e., audit outcome of rejected), respectively, only 8, 20, or 47% of the population had to be sampled based on a risk-based selection approach. Milk quality information can thus be used to preselect high-risk farms to be audited more frequently. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are first obtained; they include the power spectrum, time-domain statistics, AR model, and wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses a logistic regression model with a Sparse Group Lasso penalty function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent methods. The best parameters and feature subset are selected using 10-fold cross-validation. Finally, the test data are classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on the data used from international BCI Competition IV reached 84.72%.
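
    The core building block of such a penalized model is the sparse-group-lasso proximal operator (elementwise soft-thresholding followed by group soft-thresholding). The sketch below illustrates only that operator; the paper's blockwise/coordinate descent solver and cross-validation loop are not reproduced.

      import numpy as np

      def prox_sparse_group_lasso(w, groups, lam1, lam2, step):
          """groups: list of index arrays, one per channel/feature group."""
          # L1 part: elementwise soft-thresholding.
          w = np.sign(w) * np.maximum(np.abs(w) - step * lam1, 0.0)
          # Group part: shrink each group's norm; groups whose norm hits zero
          # are dropped entirely, which is what performs channel selection.
          for g in groups:
              norm = np.linalg.norm(w[g])
              if norm > 0:
                  w[g] *= max(0.0, 1.0 - step * lam2 / norm)
          return w

      w = np.array([0.9, -0.2, 0.05, 0.4, -0.6, 0.01])
      groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
      print(prox_sparse_group_lasso(w, groups, lam1=0.1, lam2=0.5, step=0.1))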

  14. Selection of heat disposal methods for a Hanford Nuclear Energy Center

    International Nuclear Information System (INIS)

    Young, J.R.; Kannberg, L.D.; Ramsdell, J.V.; Rickard, W.H.; Watson, D.G.

    1976-06-01

    Selection of the best method for disposal of the waste heat from a large power generation center requires a comprehensive comparison of the costs and environmental effects. The objective is to identify the heat dissipation method with the minimum total economic and environmental cost. A 20 reactor HNEC will dissipate about 50,000 MWt of waste heat; a 40 reactor HNEC would release about 100,000 MWt. This is a much larger discharge of heat than has occurred from other concentrated industrial facilities and consequently a special analysis is required to determine the permissibility of such a large heat disposal and the best methods of disposal. It is possible that some methods of disposal will not be permissible because of excessive environmental effects or that the optimum disposal method may include a combination of several methods. A preliminary analysis is presented of the Hanford Nuclear Energy Center heat disposal problem to determine the best methods for disposal and any obvious limitations on the amount of heat that can be released. The analysis is based, in part, on information from an interim conceptual study, a heat sink management analysis, and a meteorological analysis

  15. On the Feature Selection and Classification Based on Information Gain for Document Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Asriyanti Indah Pratiwi

    2018-01-01

    Full Text Available Sentiment analysis of movie reviews is a common need today. Unfortunately, the enormous number of features makes sentiment analysis slow and less sensitive. Finding the optimum feature selection and classification is still a challenge. In order to handle an enormous number of features and provide better sentiment classification, an information-gain-based feature selection and classification scheme is proposed. The proposed method removes more than 90% of the unnecessary features, while the proposed classification scheme achieves 96% sentiment classification accuracy. From the experimental results, it can be concluded that the combination of the proposed feature selection and classification achieves the best performance so far.
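
    A minimal sketch of information-gain scoring for binary term-presence features is shown below; the paper's preprocessing and classifier are not reproduced, and the toy document-term matrix is made up for illustration.

      import numpy as np

      def entropy(p):
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      def information_gain(x, y):
          """x: binary presence of one term per document, y: sentiment labels."""
          h_y = entropy(np.bincount(y) / len(y))
          ig = h_y
          for v in (0, 1):
              mask = x == v
              if mask.any():
                  ig -= mask.mean() * entropy(np.bincount(y[mask]) / mask.sum())
          return ig

      rng = np.random.default_rng(0)
      y = rng.integers(0, 2, size=500)
      X = rng.integers(0, 2, size=(500, 20))
      X[:, 0] = y                       # term 0 is perfectly informative
      gains = [information_gain(X[:, j], y) for j in range(X.shape[1])]
      print(np.argsort(gains)[::-1][:5])   # top-5 terms by information gain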

  16. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
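
    As a generic illustration of what such variable selection can look like (not the specific procedures evaluated in the review), the sketch below ranks predictors by permutation importance and keeps those above a simple threshold.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.inspection import permutation_importance

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 10))
      y = 2 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=300)   # only 2 real predictors

      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
      imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
      keep = np.where(imp.importances_mean > imp.importances_mean.mean())[0]
      print(keep)   # indices of variables retained for a refit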

  17. Investigation of Methods for Selectively Reinforcing Aluminum and Aluminum-Lithium Materials

    Science.gov (United States)

    Bird, R. Keith; Alexa, Joel A.; Messick, Peter L.; Domack, Marcia S.; Wagner, John A.

    2013-01-01

    Several studies have indicated that selective reinforcement offers the potential to significantly improve the performance of metallic structures for aerospace applications. Applying high-strength, high-stiffness fibers to the high-stress regions of aluminum-based structures can increase the structural load-carrying capability and inhibit fatigue crack initiation and growth. This paper discusses an investigation into potential methods for applying reinforcing fibers onto the surface of aluminum and aluminum-lithium plate. Commercially-available alumina-fiber reinforced aluminum alloy tapes were used as the reinforcing material. Vacuum hot pressing was used to bond the reinforcing tape to aluminum alloy 2219 and aluminum-lithium alloy 2195 base plates. Static and cyclic three-point bend testing and metallurgical analysis were used to evaluate the enhancement of mechanical performance and the integrity of the bond between the tape and the base plate. The tests demonstrated an increase in specific bending stiffness. In addition, no issues with debonding of the reinforcing tape from the base plate during bend testing were observed. The increase in specific stiffness indicates that selectively-reinforced structures could be designed with the same performance capabilities as a conventional unreinforced structure but with lower mass.

  18. Long-term response to genomic selection: effects of estimation method and reference population structure for different genetic architectures.

    Science.gov (United States)

    Bastiaansen, John W M; Coster, Albart; Calus, Mario P L; van Arendonk, Johan A M; Bovenhuis, Henk

    2012-01-24

    Genomic selection has become an important tool in the genetic improvement of animals and plants. The objective of this study was to investigate the impacts of the breeding value estimation method, reference population structure, and trait genetic architecture on long-term response to genomic selection without updating marker effects. Three methods were used to estimate genomic breeding values: a BLUP method with relationships estimated from genome-wide markers (GBLUP), a Bayesian method, and a partial least squares regression method (PLSR). A shallow (individuals from one generation) or deep reference population (individuals from five generations) was used with each method. The effects of the different selection approaches were compared under four different genetic architectures for the trait under selection. Selection was based on one of the three genomic breeding values, on pedigree BLUP breeding values, or performed at random. Selection continued for ten generations. Differences in long-term selection response were small. For a genetic architecture with a very small number of three to four quantitative trait loci (QTL), the Bayesian method achieved a response that was 0.05 to 0.1 genetic standard deviations higher than other methods in generation 10. For genetic architectures with approximately 30 to 300 QTL, PLSR (shallow reference) or GBLUP (deep reference) had an average advantage of 0.2 genetic standard deviations over the Bayesian method in generation 10. GBLUP resulted in 0.6% and 0.9% less inbreeding than PLSR and the Bayesian method, and on average a one-third smaller reduction of genetic variance. Responses in early generations were greater with the shallow reference population, while long-term response was not affected by reference population structure. The ranking of estimation methods was different with selection than without it. Under selection, applying GBLUP led to lower inbreeding and a smaller reduction of genetic variance while a similar response to selection was

  19. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method was given and the linear property of the marginal spectrum was demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum were further analyzed. Then the Hilbert spectrum estimation algorithm was discussed in detail, and simulation results were given. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
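
    A minimal sketch of how a marginal spectrum can be accumulated is given below, assuming the intrinsic mode functions (IMFs) from an EMD step are already available; the EMD itself and the paper's exact estimation algorithm are not reproduced.

      import numpy as np
      from scipy.signal import hilbert

      def marginal_spectrum(imfs, fs, n_bins=128):
          edges = np.linspace(0.0, fs / 2, n_bins + 1)
          spectrum = np.zeros(n_bins)
          for imf in imfs:
              analytic = hilbert(imf)
              amp = np.abs(analytic)
              phase = np.unwrap(np.angle(analytic))
              inst_freq = np.diff(phase) * fs / (2 * np.pi)
              # Accumulate instantaneous amplitude into frequency bins.
              idx = np.clip(np.digitize(inst_freq, edges) - 1, 0, n_bins - 1)
              np.add.at(spectrum, idx, amp[:-1])
          return 0.5 * (edges[:-1] + edges[1:]), spectrum

      fs = 1000.0
      t = np.arange(0, 1, 1 / fs)
      imfs = [np.sin(2 * np.pi * 50 * t), 0.5 * np.sin(2 * np.pi * 120 * t)]  # toy IMFs
      freqs, ms = marginal_spectrum(imfs, fs)
      print(freqs[np.argmax(ms)])   # dominant frequency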

  20. Rank-based model selection for multiple ions quantum tomography

    International Nuclear Information System (INIS)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and datasets such as are currently produced in multiple ions experiments. We test the performance of AIC and BIC on randomly chosen low rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one ion states. We then apply the methods to real data from a four ions experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)
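
    A minimal sketch of rank selection by information criteria is shown below: models of increasing rank are fitted (the fitting routine is assumed and not shown), and the rank with the smallest AIC or BIC is kept. The log-likelihoods and parameter counts are toy numbers for illustration.

      import numpy as np

      def aic(ll, k):
          return 2 * k - 2 * ll

      def bic(ll, k, n):
          return k * np.log(n) - 2 * ll

      # Toy numbers: log-likelihood improves with rank but saturates after rank 3.
      lls = np.array([-5200.0, -5050.0, -5005.0, -5002.0, -5001.0])
      ks = np.array([30, 60, 90, 120, 150])     # simplified parameter counts per rank
      n_obs = 10000

      best_rank_aic = int(np.argmin(aic(lls, ks))) + 1
      best_rank_bic = int(np.argmin(bic(lls, ks, n_obs))) + 1
      print(best_rank_aic, best_rank_bic)      # BIC tends to pick the smaller rank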

  1. Probabilistic structural analysis methods for select space propulsion system components

    Science.gov (United States)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.

  2.  Methods of detection of selected respiratory viruses

    Directory of Open Access Journals (Sweden)

    Ilona Stefańska

    2012-06-01

    Full Text Available Respiratory viruses contribute to significant morbidity and mortality in healthy and immunocompromised individuals and are considered a significant economic burden in the healthcare system. The similar clinical symptoms in the course of different viral and bacterial respiratory infections make proper diagnosis difficult. Accurate and prompt diagnostics are crucial for infection control and patient management decisions, especially regarding the use of antibacterial or antiviral therapy and hospitalization. Moreover, the identification of the causative agent eliminates inappropriate use of antibiotics and may reduce the cost of healthcare. A wide variety of diagnostic procedures is applied for the detection of viral agents responsible for respiratory tract infections. For many years, viral antigen detection and the standard isolation technique in cell culture were the main methods used in routine diagnostics. However, in recent years nucleic acid amplification techniques have become widely used and have significantly improved the sensitivity of viral detection in clinical specimens. Molecular diagnostic assays have contributed to revealing high rates of co-infection (multiplex reactions) and allow identification of agents that are difficult to culture. This paper discusses a number of technical aspects of the currently most commonly used techniques, their general principles, main benefits and diagnostic value, but also some of their limitations.

  3. Feature selection based on SVM significance maps for classification of dementia

    NARCIS (Netherlands)

    E.E. Bron (Esther); M. Smits (Marion); J.C. van Swieten (John); W.J. Niessen (Wiro); S. Klein (Stefan)

    2014-01-01

    Support vector machine significance maps (SVM p-maps) previously showed clusters of significantly different voxels in dementia-related brain regions. We propose a novel feature selection method for classification of dementia based on these p-maps. In our approach, the SVM p-maps are

  4. Residue-based Coordinated Selection and Parameter Design of Multiple Power System Stabilizers (PSSs)

    DEFF Research Database (Denmark)

    Su, Chi; Hu, Weihao; Fang, Jiakun

    2013-01-01

    data from time domain simulations. Then a coordinated approach for multiple PSS selection and parameter design based on residue method is proposed and realized in MATLAB m-files. Particle swarm optimization (PSO) is adopted in the coordination process. The IEEE 39-bus New England system model...

  5. An entropy-based improved k-top scoring pairs (TSP) method for ...

    African Journals Online (AJOL)

    An entropy-based improved k-top scoring pairs (TSP) (Ik-TSP) method was presented in this study for the classification and prediction of human cancers based on gene-expression data. We compared Ik-TSP classifiers with 5 different machine learning methods and the k-TSP method based on 3 different feature selection ...

  6. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information

    Directory of Open Access Journals (Sweden)

    Atiyeh Mortazavi

    2016-01-01

    Full Text Available High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to the high dimension of microarray data sets, the features are reduced using one of two filter-based feature selection methods, namely mutual information and Fisher ratio. In the second phase, the Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. The idea of Qualitative Mutual Information causes the selected features to have more stability, and this stability helps to deal with the problem of data imbalance and scarcity. In the third phase, a forward selection scheme is applied which uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous works on cooperative game based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches.

  7. MCDM based evaluation and ranking of commercial off-the-shelf using fuzzy based matrix method

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2017-04-01

    Full Text Available In today's scenario, software has become an essential component in all kinds of systems. The size and complexity of software increase with a corresponding increase in its functionality, which has led to the development of modular software systems. Software developers emphasize the concept of component based software engineering (CBSE) for the development of modular software systems. The CBSE concept consists of dividing the software into a number of modules; selecting Commercial Off-the-Shelf (COTS) components for each module; and finally integrating the modules to develop the final software system. The selection of COTS components for any module plays a vital role in software development. To address the problem of selection of COTS components, a framework for ranking and selection of various COTS components for any software system, based on expert opinion elicitation and a fuzzy-based matrix methodology, is proposed in this research paper. The selection problem is modeled as a multi-criteria decision making (MCDM) problem. The evaluation criteria are identified through an extensive literature study, and the COTS components are ranked based on these selected evaluation criteria using the proposed method, according to the value of the permanent function of their criteria matrices. The methodology is explained through an example and is validated by comparing with an existing method.

  8. ALIS-FLP: Amplified ligation selected fragment-length polymorphism method for microbial genotyping

    DEFF Research Database (Denmark)

    Brillowska-Dabrowska, A.; Wianecka, M.; Dabrowski, Slawomir

    2008-01-01

    A DNA fingerprinting method known as ALIS-FLP (amplified ligation selected fragment-length polymorphism) has been developed for selective and specific amplification of restriction fragments from TspRI restriction endonuclease digested genomic DNA. The method is similar to AFLP, but differs...

  9. Validity of a structured method of selecting abstracts for a plastic surgical scientific meeting

    NARCIS (Netherlands)

    van der Steen, LPE; Hage, JJ; Kon, M; Monstrey, SJ

    In 1999, the European Association of Plastic Surgeons accepted a structured method to assess and select the abstracts that are submitted for its yearly scientific meeting. The two criteria used to evaluate whether such a selection method is accurate were reliability and validity. The authors

  10. [New methods of patient selection for improved anticholinergic therapy].

    Science.gov (United States)

    Neuhaus, J; Schwalenberg, T; Schlichting, N; Schulze, M; Horn, L-C; Stolzenburg, J-U

    2007-09-01

    M3-specific inhibitors are currently preferred for anticholinergic therapy of OAB. However, not all patients profit from this regimen. This might reflect heterogeneity of the patient group. The aim of this work is to define subgroups of patients with specific alterations of receptor expression and to profile the receptor expression individually. These receptor profiles might be used for the development of evidence-based "tailored" therapies. Detrusor probes from bladder carcinoma patients (BCa, n=9 female, n=7 male) and interstitial cystitis patients (IC, n=9 female) were examined using confocal immunofluorescence and PCR. M2, M3, P2X1-3, and H1-3 mRNAs were demonstrated in detrusor tissue. As revealed by immunofluorescence, the M2 receptor expression was significantly higher in female compared to male BCa tissues. In addition, the M2 receptor was further upregulated in IC vs BCa in female detrusor. IC patients showed specific alterations of their receptor profile. Individual receptor profiles might be used to optimize medicinal therapies.

  11. Risk-based selection of SSCs at Peach Bottom

    International Nuclear Information System (INIS)

    Krueger, G.A.; Marie, A.J.

    1993-01-01

    The purpose of identifying risk-significant systems, structures, and components (SSCs) that are within the scope of the maintenance rule is to bring a higher level of attention to a subset of those SSCs. These risk-significant SSCs will have specific performance criteria established for them, and failure to meet these performance criteria will result in establishing goals to ensure the necessary improvement in performance. The Peach Bottom individual plant examination (IPE) results were used to provide insights for the verification of the proposed probabilistic risk assessment (PRA) methods set forth in the Industry Maintenance Guidelines for Implementation of the Maintenance Rule. The objective of reviewing the methods for selection of SSCs that are considered risk significant was to ensure that the methods used are logical, reproducible, and can be consistently applied.

  12. Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Lyu Kehong

    2014-06-01

    Full Text Available In helicopter transmission systems, it is important to monitor and track the tooth damage evolution using multiple sensors and detection methods. This paper develops a novel approach for sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann–Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen's slope estimator. Then, the sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified by simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and it is effective in reducing the test cost and improving the system's reliability.
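
    The trend screening step can be illustrated with a small sketch of the Mann–Kendall S statistic and Sen's slope, shown below; the physical model and the paper's exact indicator definitions are not reproduced.

      import numpy as np

      def mann_kendall_s(x):
          """Mann-Kendall S statistic: count of increasing minus decreasing pairs."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          return sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))

      def sens_slope(x):
          """Median of pairwise slopes; a robust estimate of the trend rate."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
          return float(np.median(slopes))

      rng = np.random.default_rng(0)
      ci = 0.02 * np.arange(100) + 0.1 * rng.standard_normal(100)   # toy indicator with drift
      print(mann_kendall_s(ci), sens_slope(ci))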

  13. AHP-Based Optimal Selection of Garment Sizes for Online Shopping

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Garment online shopping has been accepted by more and more consumers in recent years. In online shopping, a buyer chooses the garment size judged only by his or her own experience, without trying it on, so the selected garment may not be the best fitting one for the buyer due to the variety of body figures. Thus, we propose a method for the optimal selection of garment sizes for online shopping based on the Analytic Hierarchy Process (AHP). The hierarchical structure model for optimal selection of garment sizes is constructed, and the best fitting garment for a buyer is found by calculating the matching degrees between the individual's measurements and the corresponding key-part values of ready-to-wear clothing sizes. In order to demonstrate its feasibility, we provide an example of selecting the best fitting sizes of men's bottoms. The result shows that the proposed method is useful in online clothing sales applications.
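
    A minimal sketch of the AHP step that turns a pairwise-comparison matrix into criterion weights (via the geometric-mean approximation of the principal eigenvector), followed by a toy matching-degree calculation, is given below; the comparison values, key parts, and measurements are illustrative, not the paper's actual hierarchy.

      import numpy as np

      def ahp_weights(pairwise):
          """pairwise[i, j]: how much more important criterion i is than j (Saaty scale)."""
          gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
          return gm / gm.sum()

      # Example criteria: waist, hip, inseam length.
      A = np.array([
          [1.0, 3.0, 5.0],
          [1 / 3.0, 1.0, 3.0],
          [1 / 5.0, 1 / 3.0, 1.0],
      ])
      w = ahp_weights(A)
      # Matching degree of each ready-to-wear size: weighted closeness between body
      # measurements and the size's key-part values (toy numbers, in cm).
      body = np.array([80.0, 98.0, 78.0])
      sizes = {"M": np.array([78.0, 96.0, 76.0]), "L": np.array([82.0, 100.0, 79.0])}
      match = {s: 1.0 / (1.0 + np.dot(w, np.abs(body - v))) for s, v in sizes.items()}
      print(w, max(match, key=match.get))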

  14. Characterization of Catalytic Fast Pyrolysis Oils: The Importance of Solvent Selection for Analytical Method Development

    Energy Technology Data Exchange (ETDEWEB)

    Ferrell, Jack R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ware, Anne E [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-25

    Two catalytic fast pyrolysis (CFP) oils (bottom/heavy fraction) were analyzed in various solvents that are used in common analytical methods (nuclear magnetic resonance - NMR, gas chromatography - GC, gel permeation chromatography - GPC, thermogravimetric analysis - TGA) for oil characterization and speciation. A more accurate analysis of the CFP oils can be obtained by identification and exploitation of solvent miscibility characteristics. Acetone and tetrahydrofuran can be used to completely solubilize CFP oils for analysis by GC and tetrahydrofuran can be used for traditional organic GPC analysis of the oils. DMSO-d6 can be used to solubilize CFP oils for analysis by 13C NMR. The fractionation of oils into solvents that did not completely solubilize the whole oils showed that miscibility can be related to the oil properties. This allows for solvent selection based on physico-chemical properties of the oils. However, based on semi-quantitative comparisons of the GC chromatograms, the organic solvent fractionation schemes did not speciate the oils based on specific analyte type. On the other hand, chlorinated solvents did fractionate the oils based on analyte size to a certain degree. Unfortunately, like raw pyrolysis oil, the matrix of the CFP oils is complicated and is not amenable to simple liquid-liquid extraction (LLE) or solvent fractionation to separate the oils based on the chemical and/or physical properties of individual components. For reliable analyses, for each analytical method used, it is critical that the bio-oil sample is both completely soluble and also not likely to react with the chosen solvent. The adoption of the standardized solvent selection protocols presented here will allow for greater reproducibility of analysis across different users and facilities.

  15. Interchange Recognition Method Based on CNN

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2018-03-01

    Full Text Available The identification and classification of interchange structures in OSM data can provide important information for the construction of multi-scale models, navigation and location services, congestion analysis, etc. The traditional method of interchange identification relies on hand-designed low-level characteristics and cannot effectively distinguish complex interchange structures with interfering sections. In this paper, a new method based on a convolutional neural network for identification of interchanges is proposed. The method combines vector data with raster images and uses the neural network to learn the fuzzy characteristics of interchanges, classifying the complex interchange structures in OSM. Experiments show that this method is robust to interference and has achieved good results in the classification of complex interchange shapes, and there is room for further improvement with the expansion of the case base and the optimization of the neural network model.

  16. Recommendation advertising method based on behavior retargeting

    Science.gov (United States)

    Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce. Ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting, which can avoid the loss of advertising clicks due to objective reasons and can observe changes in the user's interests in time. Experiments show that the new method has a significant effect and can be further applied to online systems.

  17. Ion-selective electrodes in potentiometric titrations; a new method for processing and evaluating titration data.

    Science.gov (United States)

    Granholm, Kim; Sokalski, Tomasz; Lewenstam, Andrzej; Ivaska, Ari

    2015-08-12

    A new method to convert the potential of an ion-selective electrode to concentration or activity in potentiometric titration is proposed. The advantage of this method is that the electrode standard potential and the slope of the calibration curve do not have to be known. Instead, two activities on the titration curve have to be estimated, e.g. the starting activity before the titration begins and the activity at the end of the titration in the presence of a large excess of titrant. This new method is beneficial when the analyte is in a complex matrix or in a harsh environment which affects the properties of the electrode, so that the traditional calibration procedure with standard solutions cannot be used. The new method was implemented both in a method of linearization based on the Gran plot and in the determination of the stability constant of a complex and the concentration of the complexing ligand in the sample. The new method gave accurate results when using titration data from experiments with samples of known composition and with a real industrial harsh black liquor sample. A complexometric titration model was also developed. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. New visible and selective DNA staining method in gels with tetrazolium salts.

    Science.gov (United States)

    Paredes, Aaron J; Naranjo-Palma, Tatiana; Alfaro-Valdés, Hilda M; Barriga, Andrés; Babul, Jorge; Wilson, Christian A M

    2017-01-15

    DNA staining in gels has historically been carried out using silver staining and fluorescent dyes like ethidium bromide and SYBR Green I (SGI). Using fluorescent dyes allows recovery of the analyte, but requires instruments such as a transilluminator or fluorimeter to visualize the DNA. Here we described a new and simple method that allows DNA visualization to the naked eye by generating a colored precipitate. It works by soaking the acrylamide or agarose DNA gel in SGI and nitro blue tetrazolium (NBT) solution that, when exposed to sunlight, produces a purple insoluble formazan precipitate that remains in the gel after exposure to light. A calibration curve made with a DNA standard established a detection limit of approximately 180 pg/band at 500 bp. Selectivity of this assay was determined using different biomolecules, demonstrating a high selectivity for DNA. Integrity and functionality of the DNA recovered from gels was determined by enzymatic cutting with a restriction enzyme and by transforming competent cells after the different staining methods, respectively. Our method showed the best performance among the dyes employed. Based on its specificity, low cost and its adequacy for field work, this new methodology has enormous potential benefits to research and industry. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Study on the selection method of feed water heater safety valves in nuclear power plants

    International Nuclear Information System (INIS)

    Shi Jianzhong; Huang Chao; Hu Youqing

    2014-01-01

    The selection of the high-pressure feedwater heater's safety valve usually follows the principle recommended by the HEI standards in thermal power plants. However, the heaters in nuclear power plants generally need to accept a large amount of drain from the moisture separator reheater (MSR). When the drain regulating valve fails in the fully open position, a large amount of high-pressure steam flows directly into the heater, putting the high-pressure heater at risk of overpressure. Therefore, the safety valve selection for the heaters of nuclear power plants not only needs to follow the HEI standards, but also needs to check the valve capacity under certain special conditions. This paper establishes a calculation method to determine the static operating point of the heaters based on the characteristic equations of the feed water heater, drain regulating valve and steam extraction piping, and the energy balance principle. The method can be used to calculate the equilibrium pressure under various special operating conditions, and so to determine whether the capacity of the safety valve meets the safety and emission requirements. The method proposed in this paper can be used not only for nuclear power plants, but also for thermal power plants. (authors)

  20. Maximum relevance, minimum redundancy band selection based on neighborhood rough set for hyperspectral data classification

    International Nuclear Information System (INIS)

    Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen

    2016-01-01

    Band selection is considered an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (a neighborhood dependency measure based algorithm, a genetic algorithm and an uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forest (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvement in band selection and classification accuracy. (paper)
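
    A minimal sketch of forward greedy selection with an MRMR 'difference' criterion (relevance minus mean redundancy) is given below, using ordinary mutual information from scikit-learn; the neighborhood mutual information measure used in the paper is not reproduced.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

      def mrmr_difference(X, y, n_select):
          n_bands = X.shape[1]
          relevance = mutual_info_classif(X, y, random_state=0)
          selected, remaining = [], list(range(n_bands))
          while len(selected) < n_select and remaining:
              best, best_score = None, -np.inf
              for j in remaining:
                  if selected:
                      red = np.mean([mutual_info_regression(
                          X[:, [j]], X[:, s], random_state=0)[0] for s in selected])
                  else:
                      red = 0.0
                  score = relevance[j] - red          # MRMR difference
                  if score > best_score:
                      best, best_score = j, score
              selected.append(best)
              remaining.remove(best)
          return selected

      rng = np.random.default_rng(0)
      X = rng.normal(size=(150, 30))
      y = (X[:, 5] + X[:, 12] > 0).astype(int)
      print(mrmr_difference(X, y, n_select=5))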