WorldWideScience

Sample records for methods performed similarly

  1. Study for the design method of multi-agent diagnostic system to improve diagnostic performance for similar abnormality

    International Nuclear Information System (INIS)

    Minowa, Hirotsugu; Gofuku, Akio

    2014-01-01

    Accidents at industrial plants cause large human, economic, and social losses. Recently, diagnostic methods using machine-learning techniques such as support vector machines have been expected to detect the occurrence of an abnormality in a plant early and correctly. It has been reported that these diagnostic machines achieve high accuracy in diagnosing the operating state of an industrial plant when a single abnormality occurs. However, each diagnostic machine in a multi-agent diagnostic system may misdiagnose similar abnormalities as the same abnormality as the number of abnormalities to diagnose increases. As a result, a single diagnostic machine may show higher diagnostic performance than a multi-agent diagnostic system, because decision-making that accounts for misdiagnosis is difficult. We therefore study a design method for multi-agent diagnostic systems that diagnoses similar abnormalities correctly. The method aims at the automatic generation of a diagnostic system in which the generation process and the placement of diagnostic machines are optimized to correctly diagnose similar abnormalities, which are identified from the similarity of process signals by a statistical method. This paper explains our design method and reports the results of evaluating the method on process data of the fast-breeder reactor Monju.

  2. Predicting the performance of fingerprint similarity searching.

    Science.gov (United States)

    Vogt, Martin; Bajorath, Jürgen

    2011-01-01

    Fingerprints are bit string representations of molecular structure that typically encode structural fragments, topological features, or pharmacophore patterns. Various fingerprint designs are utilized in virtual screening and their search performance essentially depends on three parameters: the nature of the fingerprint, the active compounds serving as reference molecules, and the composition of the screening database. It is of considerable interest and practical relevance to predict the performance of fingerprint similarity searching. A quantitative assessment of the potential that a fingerprint search might successfully retrieve active compounds, if available in the screening database, would substantially help to select the type of fingerprint most suitable for a given search problem. The method presented herein utilizes concepts from information theory to relate the fingerprint feature distributions of reference compounds to screening libraries. If these feature distributions do not sufficiently differ, active database compounds that are similar to reference molecules cannot be retrieved because they disappear in the "background." By quantifying the difference in feature distribution using the Kullback-Leibler divergence and relating the divergence to compound recovery rates obtained for different benchmark classes, fingerprint search performance can be quantitatively predicted.
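
    For illustration, here is a minimal sketch of the core computation the abstract describes: treating each fingerprint bit as a Bernoulli variable and summing per-bit Kullback-Leibler divergences between the reference and database feature distributions. The per-bit decomposition and the toy frequencies are assumptions for the example, not the paper's exact protocol.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    probability distributions, with smoothing to avoid log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Illustrative per-bit feature frequencies for a fingerprint of length 8:
# the fraction of molecules in which each bit is set.
reference_freqs = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4])
database_freqs  = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])

# Treat each bit as a Bernoulli variable and sum the per-bit divergences.
divergence = sum(
    kl_divergence([p, 1 - p], [q, 1 - q])
    for p, q in zip(reference_freqs, database_freqs)
)
print(f"total divergence: {divergence:.3f}")  # larger -> easier retrieval
```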

  3. Scalar Similarity for Relaxed Eddy Accumulation Methods

    Science.gov (United States)

    Ruppert, Johannes; Thomas, Christoph; Foken, Thomas

    2006-07-01

    The relaxed eddy accumulation (REA) method allows the measurement of trace gas fluxes when no fast sensors are available for eddy covariance measurements. The flux parameterisation used in REA is based on the assumption of scalar similarity, i.e., similarity of the turbulent exchange of two scalar quantities. In this study, changes in scalar similarity between carbon dioxide, sonic temperature and water vapour were assessed using scalar correlation coefficients and spectral analysis. The influence on REA measurements was assessed by simulation. The evaluation is based on observations over grassland, an irrigated cotton plantation and spruce forest. Scalar similarity between carbon dioxide, sonic temperature and water vapour showed a distinct diurnal pattern and changed within the day. Poor scalar similarity was found to be linked to dissimilarities in the energy contained in the low frequency part of the turbulent spectra…

  4. Analysis of the similarity factors of the villages in the areas of the nuclear power plants from the premature death-rate performed by fuzzy logic method

    International Nuclear Information System (INIS)

    Letkovicova, M.; Rehak, R.; Korec, J.; Mihaly, B.; Prikazsky, V.

    1998-01-01

    Our paper examines the surrounding areas of NPPs through the proportion of premature death cases, which is one of the complex indicators of the health situation of a population. Attention is focused in particular on NPP Bohunice (SE-EBO), which has been in operation for the last 30 years, and NPP Mochovce (SE-EMO), which was still under construction when the data were collected. WHO considers every death of an individual before 65 years of age a premature death, except deaths of children younger than 1 year. Because of the diversity of the population, this indicator is standardized to the population of the Slovak Republic (SR) as well as to the European population. The objective of the work is to show that even long-term production of energy in an NPP does not cause health problems for the population living in the surrounding areas that could be detected through an analysis of premature death cases. Using the fuzzy logic method to search for similar objects and to evaluate the influence of the NPP on its surrounding area seems more natural than the classical clustering method, which separates objects into groups: with classical clustering, two objects in the same cluster are more similar than two objects in different clusters, whereas with the fuzzy logic method similarity is defined more naturally. Within the observed regions of the NPPs, the percentage of directly standardized premature death cases is almost identical with the average for the SR. The most closely observed region of SE-EMO, the zone up to 5 kilometers, even shows the lowest percentage. We also did not record any areas with unfavourable values from the perspective of wind streams, nor from the local water stream recipients of SE-EBO, Manivier and Dudvah. The region of SE-EMO is also within the SR average, and unfavourable coherent areas of premature death cases are non-existent. The Galanta city region comes out of the comparison relatively worse…

  5. Efficient data retrieval method for similar plasma waveforms in EAST

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ying, E-mail: liuying-ipp@szu.edu.cn [SZU-CASIPP Joint Laboratory for Applied Plasma, Shenzhen University, Shenzhen 518060 (China); Huang, Jianjun; Zhou, Huasheng; Wang, Fan [SZU-CASIPP Joint Laboratory for Applied Plasma, Shenzhen University, Shenzhen 518060 (China); Wang, Feng [Institute of Plasma Physics Chinese Academy of Sciences, Hefei 230031 (China)

    2016-11-15

    Highlights: • The proposed method is carried out by means of a bounding envelope and the angle distance. • It allows retrieving whole similar waveforms of any time length. • In addition, the proposed method can also retrieve subsequences. - Abstract: Fusion research relies heavily on data analysis due to its massive database. In the present work, we propose an efficient method for searching and retrieving similar plasma waveforms in the Experimental Advanced Superconducting Tokamak (EAST). Based on Piecewise Linear Aggregate Approximation (PLAA) for extracting feature values, the searching process is accomplished in two steps. The first is coarse searching to narrow down the search space, carried out by means of a bounding envelope. The second is fine searching to retrieve similar waveforms, implemented with the angle distance. The proposed method was tested on the EAST database and turns out to have good performance in retrieving similar waveforms.
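
    As a rough sketch of the two-step search described above, the following code uses piecewise segment means as PLAA-style features, a bounding-envelope test for coarse filtering, and the angle between feature vectors for fine ranking. The feature definition, tolerance, and segment count are assumptions; the paper's exact PLAA formulation may differ.

```python
import numpy as np

def plaa_features(x, n_segments=16):
    """Piecewise aggregate features: the mean of each equal-length segment."""
    segs = np.array_split(np.asarray(x, dtype=float), n_segments)
    return np.array([s.mean() for s in segs])

def inside_envelope(query_feat, cand_feat, tol):
    """Coarse filter: candidate survives if every feature lies inside the
    query's bounding envelope [feature - tol, feature + tol]."""
    return bool(np.all(np.abs(query_feat - cand_feat) <= tol))

def angle_distance(a, b):
    """Fine ranking: angle between feature vectors (0 = identical shape)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def retrieve(query, waveforms, tol=0.5, top_k=5):
    qf = plaa_features(query)
    feats = [(i, plaa_features(w)) for i, w in enumerate(waveforms)]
    survivors = [(i, f) for i, f in feats if inside_envelope(qf, f, tol)]
    ranked = sorted(survivors, key=lambda item: angle_distance(qf, item[1]))
    return [i for i, _ in ranked[:top_k]]

# Toy usage: retrieve waveforms shaped like a damped oscillation.
t = np.linspace(0.0, 1.0, 256)
query = np.exp(-3 * t) * np.sin(20 * t)
rng = np.random.default_rng(0)
waveforms = [query + 0.05 * rng.normal(size=t.size) for _ in range(10)]
print(retrieve(query, waveforms))
```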

  6. Centrifugal fans: Similarity, scaling laws, and fan performance

    Science.gov (United States)

    Sardar, Asad Mohammad

    Centrifugal fans are rotodynamic machines used for moving air continuously against moderate pressures through ventilation and air conditioning systems. There are five major topics presented in this thesis: (1) analysis of the fan scaling laws and consequences of dynamic similarity on modelling; (2) detailed flow visualization studies (in water) covering the flow path starting at the fan blade exit to the evaporator core of an actual HVAC fan scroll-diffuser module; (3) mean velocity and turbulence intensity measurements (flow field studies) at the inlet and outlet of a large scale blower; (4) fan installation effects on overall fan performance and evaluation of fan testing methods; (5) two point coherence and spectral measurements conducted on an actual HVAC fan module for flow structure identification of possible aeroacoustic noise sources. A major objective of the study was to identify flow structures within the HVAC module that are responsible for noise, and in particular "rumble noise", generation. Possible mechanisms for the generation of flow induced noise in the automotive HVAC fan module are also investigated. It is demonstrated that different modes of HVAC operation represent very different internal flow characteristics, which has implications for both fan HVAC airflow performance and noise characteristics. It is demonstrated from principles of complete dynamic similarity that the fan scaling laws require Reynolds number matching as a necessary condition for developing scale model fans or fan test facilities. The physical basis for the derived fan scaling laws was established both from pure dimensional analysis and from the fundamental equations of fluid motion. Fan performance was measured in air in a three times scale model (large scale blower) of an actual forward curved automotive HVAC blower. Different fan testing methods (based on AMCA fan test codes) were compared on the basis of static pressure measurements. Also, the flow through an actual HVAC…

  7. Assessing semantic similarity of texts - Methods and algorithms

    Science.gov (United States)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

    Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide a structured, representative model of the documents in a corpus by extracting and selecting the features that characterize their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic levels. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, is presented.
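
    A compact illustration of the LSA pipeline the abstract outlines: build a TF-IDF term-document model, reduce its rank with truncated SVD, and compare documents by cosine similarity in the latent space. With a corpus this tiny the latent structure is illustrative only.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Latent semantic analysis reduces the document vector space.",
    "LSA reduces dimensionality of the vector space of documents.",
    "Stock prices fell sharply on Monday morning.",
]

# Vector-based model of the corpus: TF-IDF term-document matrix.
tfidf = TfidfVectorizer().fit_transform(docs)

# Rank reduction with truncated SVD: the core of LSA.
reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Documents are compared by cosine similarity in the latent space;
# the two related documents should score higher than the unrelated one.
print(np.round(cosine_similarity(reduced), 2))
```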

  8. AREAL FEATURE MATCHING BASED ON SIMILARITY USING CRITIC METHOD

    Directory of Open Access Journals (Sweden)

    J. Kim

    2015-10-01

    In this paper, we propose an areal feature matching method that can be applied to many-to-many matching, which involves matching a simple entity with an aggregate of several polygons, or two aggregates of several polygons, with less user intervention. To this end, an affine transformation is applied to the two datasets by using polygon pairs for which the building name is the same. The two datasets are then overlaid, and intersecting polygon pairs are selected as candidate matching pairs. If many polygons intersect, we calculate the inclusion function between such polygons; when its value is more than 0.4, the polygons are aggregated into a single polygon by using a convex hull. Finally, the shape similarity between the candidate pairs is calculated as the linear sum of the position similarity, shape ratio similarity, and overlap similarity, with weights computed by the CRITIC method. Candidate pairs for which the shape similarity is more than 0.7 are determined to be matching pairs. We applied the method to two geospatial datasets: the digital topographic map and the KAIS map in South Korea. The visual evaluation showed that polygons were well detected by the proposed method, and the statistical evaluation indicates that the method is accurate on our test dataset, with a high F-measure of 0.91.

  9. Areal Feature Matching Based on Similarity Using Critic Method

    Science.gov (United States)

    Kim, J.; Yu, K.

    2015-10-01

    In this paper, we propose an areal feature matching method that can be applied to many-to-many matching, which involves matching a simple entity with an aggregate of several polygons, or two aggregates of several polygons, with less user intervention. To this end, an affine transformation is applied to the two datasets by using polygon pairs for which the building name is the same. The two datasets are then overlaid, and intersecting polygon pairs are selected as candidate matching pairs. If many polygons intersect, we calculate the inclusion function between such polygons; when its value is more than 0.4, the polygons are aggregated into a single polygon by using a convex hull. Finally, the shape similarity between the candidate pairs is calculated as the linear sum of the position similarity, shape ratio similarity, and overlap similarity, with weights computed by the CRITIC method. Candidate pairs for which the shape similarity is more than 0.7 are determined to be matching pairs. We applied the method to two geospatial datasets: the digital topographic map and the KAIS map in South Korea. The visual evaluation showed that polygons were well detected by the proposed method, and the statistical evaluation indicates that the method is accurate on our test dataset, with a high F-measure of 0.91.

  10. Information loss method to measure node similarity in networks

    Science.gov (United States)

    Li, Yongli; Luo, Peng; Wu, Chong

    2014-09-01

    Similarity measurement for network nodes has received increasing attention in the field of statistical physics. In this paper, we propose an entropy-based information loss method to measure node similarity. The model is built on the idea that treating two more similar nodes as the same causes less information loss. The proposed method has relatively low algorithmic complexity, making it less time-consuming and more efficient for large-scale real-world networks. To demonstrate its availability and accuracy, the new approach is compared with selected existing approaches on two artificial examples and on synthetic networks. Furthermore, the proposed method is successfully applied to predict network evolution and to predict unknown nodes' attributes in two application examples.

  11. A similarity measure method combining location feature for mammogram retrieval.

    Science.gov (United States)

    Wang, Zhiqiong; Xin, Junchang; Huang, Yukun; Li, Chen; Xu, Ling; Li, Yang; Zhang, Hao; Gu, Huizi; Qian, Wei

    2018-05-28

    Breast cancer, the most common malignancy among women, has a high mortality rate in clinical practice. Early detection, diagnosis and treatment can greatly reduce breast cancer mortality. Mammogram retrieval can help doctors find early breast lesions effectively, and determining a reasonable feature set for the image similarity measure effectively improves retrieval accuracy. This paper proposes a similarity measure method combining a location feature for mammogram retrieval. First, the images are pre-processed, the regions of interest are detected and the lesions are segmented in order to get the center point and radius of each lesion. Then, the Coherent Point Drift method is used for image registration against a pre-defined standard image. The center point and radius of the lesions after registration are obtained and the standard location feature of the image is constructed. This standard location feature gives the location similarity between the query image and each dataset image in the database. Next, the content feature of the image is extracted, including the Histogram of Oriented Gradients, the Edge Direction Histogram, the Local Binary Pattern and the Gray Level Histogram, and the content similarity of an image pair is calculated using the Earth Mover's Distance. Finally, the location similarity and content similarity are fused to form the image fusion similarity, and the specified number of most similar images is returned according to it. In the experiment, 440 mammograms from Chinese women in Northeast China are used as the database. When fusing 40% lesion location feature similarity and 60% content feature similarity, the results have obvious advantages: precision is 0.83, recall is 0.76, the comprehensive indicator is 0.79, satisfaction is 96.0%, the mean is 4.2 and the variance is 17.7. The results show that the precision and recall of this…
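
    The fusion step reported above is a simple weighted sum. Here is a sketch with the paper's reported 40%/60% weighting, assuming a linear combination as the abstract suggests; image names and similarity values are illustrative.

```python
def fused_similarity(location_sim, content_sim, w_loc=0.4, w_con=0.6):
    """Image fusion similarity: linear combination of lesion location
    similarity and content similarity (40%/60% as reported above)."""
    return w_loc * location_sim + w_con * content_sim

# Rank database images against a query from precomputed similarities.
scores = {
    "img_a": fused_similarity(0.90, 0.70),
    "img_b": fused_similarity(0.40, 0.95),
    "img_c": fused_similarity(0.80, 0.80),
}
print(sorted(scores, key=scores.get, reverse=True))  # most similar first
```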

  12. A Signal Processing Method to Explore Similarity in Protein Flexibility

    Directory of Open Access Journals (Sweden)

    Simina Vasilache

    2010-01-01

    Understanding mechanisms of protein flexibility is of great importance to structural biology. The ability to detect similarities between proteins and their patterns is vital in discovering new information about unknown protein functions. A Distance Constraint Model (DCM) provides a means to generate a variety of flexibility measures based on a given protein structure. Although information about mechanical properties of flexibility is critical for understanding protein function, the question of whether certain characteristics are shared across homologous proteins is difficult to assess. For a proper assessment, a quantified measure of similarity is necessary. This paper begins to explore image processing techniques to quantify similarities in signals and images that characterize protein flexibility. The dataset considered here consists of three different families of proteins, with three proteins in each family. The similarities and differences found within flexibility measures across homologous proteins do not align with sequence-based evolutionary methods.

  13. δ-Similar Elimination to Enhance Search Performance of Multiobjective Evolutionary Algorithms

    Science.gov (United States)

    Aguirre, Hernán; Sato, Masahiko; Tanaka, Kiyoshi

    In this paper, we propose δ-similar elimination to improve the search performance of multiobjective evolutionary algorithms in combinatorial optimization problems. This method eliminates similar individuals in objective space to fairly distribute selection among the different regions of the instantaneous Pareto front. We investigate four eliminating methods analyzing their effects using NSGA-II. In addition, we compare the search performance of NSGA-II enhanced by our method and NSGA-II enhanced by controlled elitism.

  14. A method for rapid similarity analysis of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Liu Na

    2006-11-01

    Background: Owing to the rapid expansion of RNA structure databases in recent years, efficient methods for structure comparison are in demand for function prediction and evolutionary analysis. Usually, the similarity of RNA secondary structures is evaluated based on tree models and dynamic programming algorithms. We present here a new method for the similarity analysis of RNA secondary structures. Results: Three sets of real data have been used as input for the example applications. Set I includes structures from 5S rRNAs. Set II includes secondary structures from RNase P and RNase MRP. Set III includes structures from 16S rRNAs. Reasonable phylogenetic trees are derived for these three sets of data by using our method. Moreover, our program runs faster than some existing ones. Conclusion: The famous Lempel-Ziv algorithm can efficiently extract the information on repeated patterns encoded in RNA secondary structures, which makes our method an alternative for analyzing the similarity of RNA secondary structures. This method will also be useful to researchers who are interested in evolutionary analysis.
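
    A hedged sketch of the Lempel-Ziv idea named in the conclusion: count phrases in an incremental LZ parse of dot-bracket structure strings and derive a normalized distance from the complexities of the concatenations. The exact distance variant used by the authors is not stated in the abstract; the one below follows the style of Otu and Sayood (2003).

```python
def lz_complexity(s):
    """Number of phrases in an LZ78-style incremental parse of s."""
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def lz_distance(s, q):
    """Normalized LZ-based distance between two sequences: how many new
    phrases the concatenation introduces, relative to the larger
    single-sequence complexity."""
    c_s, c_q = lz_complexity(s), lz_complexity(q)
    c_sq, c_qs = lz_complexity(s + q), lz_complexity(q + s)
    return max(c_sq - c_s, c_qs - c_q) / max(c_s, c_q)

# Dot-bracket strings encoding RNA secondary structures.
hairpin = "((((((((....))))))))"
bulge   = "((((((.((....)).))))))"
other   = ".(..).((..)).(...)..(.)"

# Smaller values indicate more shared repeated patterns.
print(lz_distance(hairpin, bulge))
print(lz_distance(hairpin, other))
```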

  15. Performative Schizoid Method

    DEFF Research Database (Denmark)

    Svabo, Connie

    2016-01-01

    A performative schizoid method is developed as a method contribution to performance as research. The method is inspired by contemporary research in the human and social sciences urging experimentation and researcher engagement with creative and artistic practice. In the article, the method is presented and an example is provided of a first exploratory engagement with it. The method is used in a specific project, Becoming Iris, making inquiry into arts-based knowledge creation during a three month visiting scholarship at a small, independent visual art academy. Using the performative schizoid method in Becoming Iris results in four audio-visual and performance-based productions, centered on an emergent theme of the scholartist as a bird in borrowed feathers. Interestingly, the moral lesson of the fable about the vain jackdaw, who dresses in borrowed peacock feathers and becomes a castout…

  16. Finding Similarities in Ancient Ceramics by EDXRF and Multivariate Methods

    International Nuclear Information System (INIS)

    Civici, N.; Stamati, F.

    1999-01-01

    We have studied 39 samples of fragments from ceramic roof tiles with different stamps (Diamalas and Heraion), dated between 330 and 170 BC and found at the archaeological site of Dimale, some 30 km from the Adriatic coast. The data from these samples were compared with those obtained from 7 samples of similar objects and period with the stamp "Heraion", found at the archaeological site of Apollonia. The samples were analyzed by energy-dispersive X-ray fluorescence (EDXRF), normalizing the intensities of the X-ray lines of the elements to the intensity of the Compton peak. The results were treated with diverse multivariate methods. The application of hierarchical cluster analysis and factor analysis permitted the identification of two main clusters: the first is composed of the "Heraion" samples discovered in Apollonia, while the second comprises all the samples discovered in Dimale, independent of their stamp. (authors)

  17. Random walk-based similarity measure method for patterns in complex object

    Directory of Open Access Journals (Sweden)

    Liu Shihu

    2017-04-01

    This paper discusses the similarity of patterns in complex objects. A complex object is composed both of attribute information of patterns and relational information between patterns. Bearing in mind the specificity of complex objects, a random walk-based similarity measurement method for patterns is constructed. In this method, the reachability of any two patterns with respect to the relational information is fully studied, so that the similarity of patterns with respect to the relational information can be calculated. On this basis, an integrated similarity measurement method is proposed, and Algorithms 1 and 2 show the calculation procedure. One can see that this method makes full use of both the attribute information and the relational information. Finally, a synthetic example validates the proposed similarity measurement method.

  18. Multicriteria decision-making method based on a cosine similarity ...

    African Journals Online (AJOL)

    The cosine similarity measure is often used in information retrieval, citation analysis, and automatic classification. However, it rarely deals with trapezoidal fuzzy information and multicriteria decision-making problems. For this purpose, a cosine similarity measure between trapezoidal fuzzy numbers is proposed based on ...
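
    One common way to build such a measure, shown here as an assumption rather than the paper's exact definition, is to treat a trapezoidal fuzzy number (a1, a2, a3, a4) as a 4-dimensional vector and take the ordinary cosine of the angle between two such vectors:

```python
import math

def cosine_similarity_trapezoidal(a, b):
    """Cosine similarity between two trapezoidal fuzzy numbers
    a = (a1, a2, a3, a4) and b = (b1, b2, b3, b4), treating each as a
    4-dimensional vector (a common construction; the paper's exact
    definition may differ)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Two fuzzy ratings of alternatives on a criterion; values near 1
# indicate nearly proportional trapezoids.
print(cosine_similarity_trapezoidal((0.1, 0.2, 0.3, 0.4),
                                    (0.2, 0.3, 0.4, 0.5)))
```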

  19. Similarity, Not Complexity, Determines Visual Working Memory Performance

    Science.gov (United States)

    Jackson, Margaret C.; Linden, David E. J.; Roberts, Mark V.; Kriegeskorte, Nikolaus; Haenschel, Corinna

    2015-01-01

    A number of studies have shown that visual working memory (WM) is poorer for complex versus simple items, traditionally accounted for by higher information load placing greater demands on encoding and storage capacity limits. Other research suggests that it may not be complexity that determines WM performance per se, but rather increased…

  20. Neutrosophic Cubic MCGDM Method Based on Similarity Measure

    Directory of Open Access Journals (Sweden)

    Surapati Pramanik

    2017-06-01

    The notion of neutrosophic cubic set originates from the hybridization of the concepts of neutrosophic set and interval-valued neutrosophic set. We define a similarity measure for neutrosophic cubic sets and prove some of its basic properties.

  1. Mixed Methods and Action Research: similar or different?

    OpenAIRE

    Wiśniewska, Danuta

    2011-01-01

    This article attempts to analyse and compare ELT studies grounded solely in mixed methods and ELT action research studies based on a mixed methods approach in order to identify to what degree action research studies combining different methods in a single study comply with the principles of rigorous mixed methods study.

  2. Similarity measurement method of high-dimensional data based on normalized net lattice subspace

    Institute of Scientific and Technical Information of China (English)

    Li Wenfa; Wang Gongming; Li Ke; Huang Su

    2017-01-01

    The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data differences in sparse and noisy dimensions occupy a large proportion of the similarity, so that any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which makes it suitable for similarity analysis after dimensionality reduction.
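
    A minimal sketch of the idea as described: normalize each dimension onto a grid of intervals and let only components in the same or an adjacent interval contribute to the similarity. The contribution formula below (one minus the normalized component difference, averaged) is an assumption; the paper's definition may differ.

```python
import numpy as np

def lattice_similarity(x, y, mins, maxs, n_intervals=10):
    """Map each dimension onto n_intervals equal bins of its data range;
    only components whose bins are equal or adjacent contribute."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    span = np.where(maxs > mins, maxs - mins, 1.0)
    bx = np.floor((x - mins) / span * (n_intervals - 1)).astype(int)
    by = np.floor((y - mins) / span * (n_intervals - 1)).astype(int)
    close = np.abs(bx - by) <= 1          # same or adjacent interval
    if not close.any():
        return 0.0
    # Contribution: 1 minus the normalized component difference.
    diffs = np.abs(x - y)[close] / span[close]
    return float(np.mean(1.0 - diffs))    # result stays within [0, 1]

rng = np.random.default_rng(0)
data = rng.random((100, 50))              # 100 points in 50 dimensions
mins, maxs = data.min(axis=0), data.max(axis=0)
print(lattice_similarity(data[0], data[1], mins, maxs))
```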

  3. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze

    2017-04-24

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, up to now, no similarity learning method has been proposed to optimize the top precision measure. To fill this gap, in this paper, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function combining the losses of the relevant images ranked behind the top-ranked irrelevant image with the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. The experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.
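
    The objective described above (hinge losses for relevant images ranked behind the top-ranked irrelevant image, plus a squared Frobenius-norm regularizer) can be sketched with a bilinear similarity s(q, x) = qᵀWx. The authors solve a quadratic program; the subgradient descent below is a simplification for illustration, and all data are synthetic.

```python
import numpy as np

def train_similarity(query, relevant, irrelevant,
                     lam=0.1, lr=0.01, epochs=200):
    """Learn W so that relevant images are not ranked behind the
    top-ranked irrelevant image, via subgradient descent on hinge
    losses plus a squared Frobenius-norm regularizer."""
    d = query.shape[0]
    W = np.eye(d)
    for _ in range(epochs):
        s_irr = np.array([query @ W @ x for x in irrelevant])
        top_irr = irrelevant[np.argmax(s_irr)]   # top-ranked irrelevant
        grad = 2 * lam * W                        # regularizer gradient
        for x in relevant:
            margin = 1.0 + query @ W @ top_irr - query @ W @ x
            if margin > 0:                        # relevant ranked too low
                grad += np.outer(query, top_irr) - np.outer(query, x)
        W -= lr * grad
    return W

rng = np.random.default_rng(1)
q = rng.normal(size=8)
rel = rng.normal(loc=q, scale=0.3, size=(5, 8))   # images like the query
irr = rng.normal(size=(20, 8))                    # unrelated images
W = train_similarity(q, rel, irr)
print([round(float(q @ W @ x), 2) for x in rel])  # relevant scores
```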

  4. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze; Shi, Lihui; Wang, Haoxiang; Meng, Jiandong; Wang, Jim Jing-Yan; Sun, Qingquan; Gu, Yi

    2017-01-01

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, up to now, no similarity learning method has been proposed to optimize the top precision measure. To fill this gap, in this paper, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function combining the losses of the relevant images ranked behind the top-ranked irrelevant image with the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. The experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.

  5. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    Science.gov (United States)

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  6. An inter-comparison of similarity-based methods for organisation and classification of groundwater hydrographs

    Science.gov (United States)

    Haaf, Ezra; Barthel, Roland

    2018-04-01

    Classification and similarity-based methods, which have recently received major attention in the field of surface water hydrology, namely through the PUB (prediction in ungauged basins) initiative, have not yet been applied to groundwater systems. However, it can be hypothesised that the principle of "similar systems responding similarly to similar forcing" applies in subsurface hydrology as well. One fundamental prerequisite for testing this hypothesis, and eventually applying the principle to make "predictions for ungauged groundwater systems", is an efficient method to quantify the similarity of groundwater system responses, i.e. groundwater hydrographs. In this study, a large, spatially extensive, geologically and geomorphologically diverse dataset from Southern Germany and Western Austria was used to test and compare a set of 32 grouping methods, which have previously only been used individually in local-scale studies. The resulting groupings are compared to a heuristic visual classification, which serves as a baseline. A performance ranking of these classification methods is carried out, and differences in the homogeneity of grouping results are shown, whereby selected groups are related to hydrogeological indices and geological descriptors. This exploratory empirical study shows that the choice of grouping method has a large impact on the object distribution within groups, as well as on the homogeneity of patterns captured in groups. The study provides a comprehensive overview of a large number of grouping methods, which can guide researchers when attempting similarity-based groundwater hydrograph classification.

  7. EVALUATION OF SEMANTIC SIMILARITY FOR SENTENCES IN NATURAL LANGUAGE BY MATHEMATICAL STATISTICS METHODS

    Directory of Open Access Journals (Sweden)

    A. E. Pismak

    2016-03-01

    Subject of Research. The paper focuses on the structural organization of Wiktionary articles in the aspect of its usage as the basis for a semantic network. Wiktionary community references, article templates and article markup features are analyzed. The problem of numerical estimation of the semantic similarity of structural elements in Wiktionary articles is considered. An analysis of existing software for semantic similarity estimation of such elements is carried out; the algorithms behind them are studied, and their advantages and disadvantages are shown. Methods. Mathematical statistics methods were used to analyze the markup features of Wiktionary articles. A method of computing semantic similarity based on statistics of the compared structural elements is proposed. Main Results. We have concluded that there is no possibility of using Wiktionary articles directly as the source for a semantic network. We propose to find hidden similarity between article elements, and for that purpose we have developed an algorithm for calculating confidence coefficients indicating that a pair of sentences is semantically close. The study of the quantitative and qualitative characteristics of the developed algorithm has shown a major performance advantage over the other existing solutions, at the cost of an insignificantly higher error rate. Practical Relevance. The resulting algorithm may be useful in developing tools for automatic parsing of Wiktionary articles. The developed method could be used for computing the semantic similarity of short text fragments in natural language in cases where performance requirements are stricter than accuracy requirements.

  8. 29 CFR 1620.18 - Jobs performed under similar working conditions.

    Science.gov (United States)

    2010-07-01

    29 CFR 1620.18 (2010), The Equal Pay Act: § 1620.18 Jobs performed under similar working conditions. (a) In general. In order for the equal pay standard to apply, the jobs are required to be performed under similar working…

  9. Comparative analysis of chemical similarity methods for modular natural products with a hypothetical structure enumeration algorithm.

    Science.gov (United States)

    Skinnider, Michael A; Dejong, Chris A; Franczak, Brian C; McNicholas, Paul D; Magarvey, Nathan A

    2017-08-16

    Natural products represent a prominent source of pharmaceutically and industrially important agents. Calculating the chemical similarity of two molecules is a central task in cheminformatics, with applications at multiple stages of the drug discovery pipeline. Quantifying the similarity of natural products is a particularly important problem, as the biological activities of these molecules have been extensively optimized by natural selection. The large and structurally complex scaffolds of natural products distinguish their physical and chemical properties from those of synthetic compounds. However, no analysis of the performance of existing methods for molecular similarity calculation specific to natural products has been reported to date. Here, we present LEMONS, an algorithm for the enumeration of hypothetical modular natural product structures. We leverage this algorithm to conduct a comparative analysis of molecular similarity methods within the unique chemical space occupied by modular natural products using controlled synthetic data, and comprehensively investigate the impact of diverse biosynthetic parameters on similarity search. We additionally investigate a recently described algorithm for natural product retrobiosynthesis and alignment, and find that when rule-based retrobiosynthesis can be applied, this approach outperforms conventional two-dimensional fingerprints, suggesting it may represent a valuable approach for the targeted exploration of natural product chemical space and microbial genome mining. Our open-source algorithm is an extensible method of enumerating hypothetical natural product structures with diverse potential applications in bioinformatics.

  10. Improvement of training set structure in fusion data cleaning using Time-Domain Global Similarity method

    International Nuclear Information System (INIS)

    Liu, J.; Lan, T.; Qin, H.

    2017-01-01

    Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem, since the proportion of incorrect data is much smaller than the proportion of correct data for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When machine learning algorithms are used to classify diagnostic data based on a class-imbalanced training set, most classifiers are biased towards the majority class and show very poor classification rates on the minority class. By transforming the direct classification problem about original data sequences into a classification problem about the physical similarity between data sequences, the class-balancing effect of the Time-Domain Global Similarity (TDGS) method on training set structure is investigated in this paper. Meanwhile, the impact of the improved training set structure on the data cleaning performance of the TDGS method is demonstrated with an application example in the EAST POlarimetry-INTerferometry (POINT) system.

  11. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

    This paper proposes an improved face recognition algorithm to identify mismatched face pairs in cases of incorrect decisions. The primary feature of this method is to deploy the similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional classical vector distance measurement, our algorithm also considers the plot of the summation of the similarity index versus the face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. Comparative evaluations show that the efficiency of the proposed algorithm is superior to that of the conventional algorithm, by an average accuracy of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that the similarity score is more discriminative for face recognition than the feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithm is suitable for real probe-to-gallery identification applications in face recognition systems. Moreover, the proposed method can also be applied to other recognition systems, where it additionally improves recognition scores.

  12. Gender Similarities in Math Performance from Middle School through High School

    Science.gov (United States)

    Scafidi, Tony; Bui, Khanh

    2010-01-01

    Using data from 10 states, Hyde, Lindberg, Linn, Ellis, and Williams (2008) found gender similarities in performance on standardized math tests. The present study attempted to replicate this finding with national data and to extend it by examining whether gender similarities in math performance are moderated by race, socioeconomic status, or math…

  13. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
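
    For concreteness, here is a hedged sketch of AAKR prediction together with the min-max selection rule named above (keep every training vector that carries the minimum or maximum of some signal). The Gaussian kernel, bandwidth, and data are illustrative assumptions.

```python
import numpy as np

def aakr_predict(query, memory, bandwidth=1.0):
    """Auto-associative kernel regression: a corrected version of the
    query vector as the kernel-weighted average of the training
    ('memory') vectors, with a Gaussian kernel on Euclidean distance."""
    dists = np.linalg.norm(memory - query, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * bandwidth ** 2))
    weights /= weights.sum()
    return weights @ memory

def min_max_select(memory):
    """Min-max vector selection: keep every training vector that holds
    the minimum or maximum of at least one signal."""
    keep = set()
    for j in range(memory.shape[1]):
        keep.add(int(np.argmin(memory[:, j])))
        keep.add(int(np.argmax(memory[:, j])))
    return memory[sorted(keep)]

rng = np.random.default_rng(0)
memory = rng.normal(size=(500, 4))          # historical plant signals
prototypes = min_max_select(memory)         # reduced prototype set
query = memory[0] + 0.1 * rng.normal(size=4)
print(aakr_predict(query, prototypes))      # corrected signal estimate
```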

  14. High-performance permanent magnet brushless motors with balanced concentrated windings and similar slot and pole numbers

    International Nuclear Information System (INIS)

    Stumberger, Bojan; Stumberger, Gorazd; Hadziselimovic, Miralem; Hamler, Anton; Trlep, Mladen; Gorican, Viktor; Jesenik, Marko

    2006-01-01

    The paper presents a comparison between the performance of exterior-rotor permanent magnet brushless motors with distributed windings and that of exterior-rotor permanent magnet brushless motors with concentrated windings. Finite element analysis is employed to determine the performance of each motor. It is shown that motors with concentrated windings and similar slot and pole numbers exhibit similar or better performance than motors with distributed windings in both brushless AC (BLAC) and brushless DC (BLDC) operation modes.

  15. Does Avatar Appearance Matter? How Team Visual Similarity and Member–Avatar Similarity Influence Virtual Team Performance

    NARCIS (Netherlands)

    van der Land, S.F.; Schouten, A.P.; Feldberg, J.F.M.; Huysman, M.H.; van den Hooff, B.J.

    2015-01-01

    This multimethod study investigated how avatar appearance influences virtual team performance. This study is the first to integrate the framework of the social identity model of deindividuation effects (SIDE) and self-identification theory, using "morphing" techniques. Results were obtained from a 2…

  16. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    Science.gov (United States)

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
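
    A small example of the kernel requirement stated above: a linear kernel applied to all pairs of subjects yields a Gram matrix, which must be positive semidefinite to be a valid kernel. The genotype coding (0/1/2 minor-allele counts) is illustrative.

```python
import numpy as np

def linear_kernel_matrix(G):
    """Genomic similarity via a linear kernel: K[i, j] is the inner
    product of the (centered) feature vectors of subjects i and j."""
    Gc = G - G.mean(axis=0)
    return Gc @ Gc.T

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(6, 100)).astype(float)  # 6 subjects
K = linear_kernel_matrix(genotypes)

# A valid kernel must yield a positive semidefinite matrix.
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-8)   # True: K is PSD up to rounding error
```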

  17. K-Line Patterns’ Predictive Power Analysis Using the Methods of Similarity Match and Clustering

    Directory of Open Access Journals (Sweden)

    Lv Tao

    2017-01-01

    Stock price prediction based on K-line patterns is the essence of candlestick technical analysis. However, there are disputes in academia on whether K-line patterns have predictive power. To help resolve the debate, this paper uses the data mining methods of pattern recognition, pattern clustering, and pattern knowledge mining to research the predictive power of K-line patterns. A similarity match model and a nearest neighbor-clustering algorithm are proposed for solving the problems of similarity match and clustering of K-line series, respectively. The experiment tests the predictive power of the Three Inside Up pattern and the Three Inside Down pattern on the K-line series data of Shanghai 180 index component stocks over the latest 10 years. Experimental results show that (1) the predictive power of a pattern varies a great deal for different shapes, and (2) each of the existing K-line patterns requires further classification based on shape features to improve prediction performance.

  18. Methods of Evaluating Performances for Marketing Strategies

    OpenAIRE

    Ioan Cucu

    2005-01-01

    There are specific methods for assessing and improving the effectiveness of a marketing strategy. A marketer should state in the marketing plan what a marketing strategy is supposed to accomplish. These statements should set forth performance standards, which usually are stated in terms of profits, sales, or costs. Actual performance must be measured in similar terms so that comparisons are possible. This paper describes sales analysis and cost analysis, two general ways of evaluating the act...

  19. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and on laser scanning data.
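
    The distance ingredient is straightforward to sketch with a KD-tree: the cloud-to-model distance defined above is the average nearest-neighbor distance from cloud points to points sampled on the model. Sampling density and data are illustrative, and the paper's weighting scheme for DistMC is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def dist_cloud_to_model(cloud, model_samples):
    """DistCM as described in the abstract: the average of the distances
    from each point in the cloud to the (point-sampled) model surface."""
    tree = cKDTree(model_samples)
    nearest, _ = tree.query(cloud)   # nearest-neighbor distances
    return float(nearest.mean())

rng = np.random.default_rng(0)
model_samples = rng.random((5000, 3))    # points sampled on the model
cloud = model_samples[:800] + 0.01 * rng.normal(size=(800, 3))
print(dist_cloud_to_model(cloud, model_samples))  # small -> similar
```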

  20. Technological Similarity, Post-acquisition R&D Reorganization, and Innovation Performance in Horizontal Acquisition

    DEFF Research Database (Denmark)

    Colombo, Massimo G.; Rabbiosi, Larissa

    2014-01-01

    This paper aims to disentangle the mechanisms through which technological similarity between acquiring and acquired firms influences innovation in horizontal acquisitions. We develop a theoretical model that links technological similarity to: (i) two key aspects of post-acquisition reorganization of acquired R&D operations, the rationalization of the R&D operations and the replacement of the R&D top manager, and (ii) two intermediate effects that are closely associated with the post-acquisition innovation performance of the combined firm, improvements in R&D productivity and disruptions in R&D personnel. We rely on PLS techniques to test our theoretical model using detailed information on 31 horizontal acquisitions in high- and medium-tech industries. Our results indicate that in horizontal acquisitions, technological similarity negatively affects post-acquisition innovation performance…

  1. Dynamic Time Warping Distance Method for Similarity Test of Multipoint Ground Motion Field

    Directory of Open Access Journals (Sweden)

    Yingmin Li

    2010-01-01

    The reasonableness of artificial multi-point ground motions and the identification of abnormal records in seismic array observations are two important issues in the application and analysis of multi-point ground motion fields. Based on the dynamic time warping (DTW) distance method, this paper discusses the application of similarity measurement to the similarity analysis of simulated multi-point ground motions and actual seismic array records. Analysis results show that the DTW distance method not only quantitatively reflects the similarity of a simulated ground motion field, but also offers advantages in clustering analysis and singularity recognition of actual multi-point ground motion fields.
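
    The DTW distance itself is standard; a minimal implementation for 1-D records follows, illustrating why it stays small under phase shifts that would inflate a pointwise (Euclidean) distance. The toy records are assumptions for the example.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

t = np.linspace(0, 2 * np.pi, 100)
record_a = np.sin(t)                     # ground motion record
record_b = np.sin(t - 0.4)               # similar record, phase-shifted
record_c = np.random.default_rng(0).normal(size=100)
print(dtw_distance(record_a, record_b))  # small despite the time shift
print(dtw_distance(record_a, record_c))  # much larger
```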

  2. MEASURING THE PERFORMANCE OF SIMILARITY PROPAGATION IN A SEMANTIC SEARCH ENGINE

    Directory of Open Access Journals (Sweden)

    S. K. Jayanthi

    2013-10-01

    In the current scenario, web page result personalization plays a vital role. Nearly 80% of users expect the best results on the first page itself, without having the persistence to browse further in URL mode. This research work focuses on two main themes: semantic web search online and domain-based search offline. The first part is to find an effective method that allows grouping similar results together using a BookShelf data structure and organizing the various clusters. The second is focused on offline search in the academic domain. This paper focuses on finding similar documents and on how a vector space can be used to solve this; hence, more weight is given to the principles and working methodology of similarity propagation. The cosine similarity measure is used for finding the relevance among documents.

  3. Priming of disability and elderly stereotype in motor performance: similar or specific effects?

    Science.gov (United States)

    Ginsberg, Frederik; Rohmer, Odile; Louvet, Eva

    2012-04-01

    In three experimental studies, the effects of priming participants with the disability stereotype were investigated with respect to their subsequent motor performance. Also explored were effects of activating two similar stereotypes, persons with a disability and elderly people. In Study 1, participants were primed with the disability stereotype versus with a neutral prime, and then asked to perform on a motor coordination task. In Studies 2 and 3, a third condition was introduced: priming participants with the elderly stereotype. Results indicated that priming participants with the disability stereotype altered their motor performance: they showed decreased manual dexterity and performed slower than the non-primed participants. Priming with the elderly stereotype decreased only performance speed. These findings underline that prime-to-behavior effects may depend on activation of specific stereotype content.

  4. [Baking method of Platycladi Cacumen Carbonisatum based on similarity of UPLC fingerprints].

    Science.gov (United States)

    Shan, Mingqiu; Chen, Chao; Yao, Xiaodong; Ding, Anwei

    2010-09-01

    To establish a baking method for Platycladi Cacumen Carbonisatum and provide a new idea for research on carbonized herbs, samples were prepared in an oven for different times at different temperatures. The fingerprints of the samples were then determined by UPLC, and the similarities of the samples' fingerprints were compared against the standard fingerprint. The similarities of three samples, baked at 230 degrees C for 20 min, at 230 degrees C for 30 min and at 240 degrees C for 20 min, were above 0.96. According to the similarities of the fingerprints, and in view of the appearance, Platycladi Cacumen Carbonisatum should be baked at 230 degrees C for 20 min.

  5. Comparing the performance of biomedical clustering methods

    DEFF Research Database (Denmark)

    Wiwie, Christian; Baumbach, Jan; Röttger, Richard

    2015-01-01

    Identifying groups of similar objects is a popular first step in biomedical data analysis, but it is error-prone and impossible to perform manually. Many computational methods have been developed to tackle this problem. Here we assessed 13 well-known methods using 24 data sets ranging from gene expression to protein domains. Performance was judged on the basis of 13 common cluster validity indices. We developed a clustering analysis platform, ClustEval (http://clusteval.mpi-inf.mpg.de), to promote streamlined evaluation, comparison and reproducibility of clustering results in the future. This allowed us to objectively evaluate the performance of all tools on all data sets with up to 1,000 different parameter sets each, resulting in a total of more than 4 million calculated cluster validity indices. We observed that there was no universal best performer, but on the basis of this wide…

  6. Usefulness of computerized method for lung nodule detection on digital chest radiographs using similar subtraction images from different patients

    International Nuclear Information System (INIS)

    Aoki, Takatoshi; Oda, Nobuhiro; Yamashita, Yoshiko; Yamamoto, Keiji; Korogi, Yukunori

    2012-01-01

    Purpose: The purpose of this study is to evaluate the usefulness of a novel computerized method to automatically select a similar chest radiograph for image subtraction in patients who have no previous chest radiographs, and to assist radiologists' interpretation by presenting the "similar subtraction image" from a different patient. Materials and methods: Institutional review board approval was obtained, and the requirement for informed patient consent was waived. A large database of approximately 15,000 normal chest radiographs was used for searching similar images of different patients. One hundred candidate images were selected according to two clinical parameters and the similarity of the lung field with the target image. We used the correlation value of the chest region in the 100 images to search for the most similar image. The similar subtraction images were obtained by subtracting the selected similar image from the target image. Thirty cases with lung nodules and 30 cases without lung nodules were used for an observer performance test. Four attending radiologists and four radiology residents participated in this observer performance test. Results: The AUC for all radiologists increased significantly from 0.925 to 0.974 with the CAD (P = .004). When the computer output images were available, the average AUC improved more for the residents (0.960 vs. 0.890) than for the attending radiologists (0.987 vs. 0.960). Conclusion: The novel computerized method for lung nodule detection using similar subtraction images from different patients would be useful for detecting lung nodules on digital chest radiographs, especially for less experienced readers.
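
    The candidate-selection step described above reduces to picking the database image whose chest region correlates best with the target. A toy sketch with random arrays (registration and lung-field masking omitted):

```python
import numpy as np

def most_similar_image(target, candidates):
    """Pick, from candidate radiographs, the one whose pixels correlate
    best with the target image (images as equally sized 2-D arrays)."""
    t = target.ravel()
    scores = [np.corrcoef(t, c.ravel())[0, 1] for c in candidates]
    best = int(np.argmax(scores))
    return best, scores[best]

rng = np.random.default_rng(0)
target = rng.random((64, 64))
candidates = [rng.random((64, 64)) for _ in range(99)]
candidates.append(target + 0.05 * rng.normal(size=(64, 64)))
idx, score = most_similar_image(target, candidates)
print(idx, round(score, 3))   # the near-duplicate at index 99 wins
# The subtraction image would then be target - candidates[idx].
```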

  7. Resisting distraction and response inhibition trigger similar enhancements of future performance.

    Science.gov (United States)

    Bissett, Patrick G; Grant, Lauren D; Weissman, Daniel H

    2017-10-01

    Resisting distraction and response inhibition are crucial aspects of cognitive control. Interestingly, each of these abilities transiently improves just after it is utilized. Competing views differ, however, as to whether utilizing either of these abilities (e.g., resisting distraction) enhances future performance involving the other ability (e.g., response inhibition). To distinguish between these views, we combined a Stroop-like task that requires resisting distraction with a restraint variant of the stop-signal task that requires response inhibition. We observed similar sequential-trial effects (i.e., performance enhancements) following trials in which participants (a) resisted distraction (i.e., incongruent go trials) and (b) inhibited a response (i.e., congruent stop trials). First, the congruency effect in go trials, which indexes overall distractibility, was smaller after both incongruent go trials and congruent stop trials than it was after congruent go trials. Second, stop failures were less frequent after both incongruent go trials and congruent stop trials than after congruent go trials. A control experiment ruled out the possibility that perceptual conflict or surprise engendered by occasional stop signals triggers sequential-trial effects independent of stopping. Thus, our findings support a novel, integrated view in which resisting distraction and response inhibition trigger similar sequential enhancements of future performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

Computer-generated holography (CGH) is a promising 3D display technology, but it is challenged by a heavy computational load and vast memory requirements. To address these problems, a depth-compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. A depth-compensating factor is defined and used to calculate the holograms of points at different depths, avoiding layer-based methods. The proposed method is suitable for arbitrarily sampled objects and offers lower memory usage and higher computational efficiency than other CGH methods. Its effectiveness is validated by numerical and optical experiments.
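
A sketch of how a single zone-plate pattern can serve every object point: lateral shifts exploit the zone plate's symmetry, and a depth factor z_ref/z rescales its paraxial phase for other depths. Wavelength, pitch, and the point list are assumed values, and the paper's GPU and LUT details are not reproduced.

```python
import numpy as np

wavelength = 532e-9   # assumed green laser (m)
pitch = 8e-6          # assumed hologram pixel pitch (m)
N = 512

ix = (np.arange(N) - N // 2) * pitch
r2_lut = ix[None, :] ** 2 + ix[:, None] ** 2        # squared lateral-distance LUT

z_ref = 0.10                                        # reference depth (m)
phase_ref = np.pi * r2_lut / (wavelength * z_ref)   # Fresnel zone plate at z_ref

def point_field(dx_px, dy_px, z):
    """Reuse the reference zone plate for any point: shift it laterally
    (zone-plate symmetry) and rescale the phase by the depth-compensating
    factor z_ref / z (zone-plate similarity across depths)."""
    shifted = np.roll(np.roll(phase_ref, dy_px, axis=0), dx_px, axis=1)
    return np.exp(1j * shifted * (z_ref / z))       # np.roll wraps at edges

points = [(0, 0, 0.10), (60, -40, 0.12)]            # (x_px, y_px, depth in m)
hologram = sum(point_field(*p) for p in points)
phase_cgh = np.angle(hologram)                      # phase-only CGH
```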

  9. A Method for Generating Approximate Similarity Solutions of Nonlinear Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    Mazhar Iqbal

    2014-01-01

Full Text Available Standard application of the similarity method to find solutions of PDEs mostly results in a reduction to ODEs that are not easily integrable in terms of elementary or tabulated functions. Such situations usually demand solving the reduced ODEs numerically. However, there are no systematic procedures available to utilize these numerical solutions of the reduced ODE to obtain the solution of the original PDE. A practical and tractable approach is proposed to deal with such situations and is applied to obtain approximate similarity solutions to different cases of an initial-boundary value problem of unsteady gas flow through a semi-infinite porous medium.
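
To make the workflow concrete, here is a generic example on a simpler problem than the paper's gas-flow case: the heat equation u_t = u_xx reduces under eta = x/(2*sqrt(t)) to f'' + 2*eta*f' = 0, which is solved numerically by shooting, and the PDE solution is recovered as u(x,t) = f(eta). All numbers are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Heat equation u_t = u_xx with u(0,t) = 1 and u(x->inf,t) = 0 reduces,
# under eta = x / (2*sqrt(t)), to f'' + 2*eta*f' = 0, f(0) = 1, f(inf) = 0.
def rhs(eta, y):
    f, fp = y
    return [fp, -2.0 * eta * fp]

def f_at_infinity(slope, eta_max=6.0):
    """Integrate with initial slope f'(0) = slope and report f(eta_max)."""
    sol = solve_ivp(rhs, [0.0, eta_max], [1.0, slope], dense_output=True,
                    rtol=1e-9, atol=1e-12)
    return sol.sol(eta_max)[0], sol

# Shooting: bisect on the initial slope until f decays at large eta
# (the exact answer is -2/sqrt(pi), recovered here purely numerically).
lo, hi = -2.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    val, _ = f_at_infinity(mid)
    lo, hi = ((lo, mid) if val > 0 else (mid, hi))
_, sol = f_at_infinity(0.5 * (lo + hi))

# Map the numerical ODE solution back to the PDE: u(x, t) = f(eta).
x, t = 0.5, 0.1
eta = min(x / (2.0 * np.sqrt(t)), 6.0)
print(sol.sol(eta)[0])   # compare with erfc(x / (2*sqrt(t)))
```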

  10. Improved power performance assessment methods

    Energy Technology Data Exchange (ETDEWEB)

    Frandsen, S; Antoniou, I; Dahlberg, J A [and others

    1999-03-01

    The uncertainty of presently-used methods for retrospective assessment of the productive capacity of wind farms is unacceptably large. The possibilities of improving the accuracy have been investigated and are reported. A method is presented that includes an extended power curve and site calibration. In addition, blockage effects with respect to reference wind speed measurements are analysed. It is found that significant accuracy improvements are possible by the introduction of more input variables such as turbulence and wind shear, in addition to mean wind speed and air density. Also, the testing of several or all machines in the wind farm - instead of only one or two - may provide a better estimate of the average performance. (au)
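
A rough sketch of a method-of-bins power-curve assessment with an air-density correction; turbulence intensity and wind shear, which the study found to improve accuracy, could be added as extra binning dimensions. The reference density, bin width, and synthetic data are illustrative, not the report's procedure.

```python
import numpy as np

rho0 = 1.225                                  # reference air density (kg/m^3)

def normalize_wind(v, rho):
    """Density-corrected wind speed (cube-root correction)."""
    return v * (rho / rho0) ** (1.0 / 3.0)

def binned_power_curve(v, rho, power, width=0.5):
    """Method of bins on the corrected wind speed."""
    vn = normalize_wind(np.asarray(v, float), np.asarray(rho, float))
    edges = np.arange(0.0, vn.max() + width, width)
    idx = np.digitize(vn, edges)
    return {float(edges[i - 1] + width / 2): float(power[idx == i].mean())
            for i in np.unique(idx)}

rng = np.random.default_rng(9)
v = rng.uniform(3, 15, 2000)                  # measured wind speeds (m/s)
rho = rng.normal(1.20, 0.02, 2000)            # measured air densities
power = np.clip(0.4 * v ** 3, 0, 1000) + rng.normal(0, 20, 2000)  # kW
curve = binned_power_curve(v, rho, power)
```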

  11. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH in particular scores very highly. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
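
A minimal sketch of a Monte-Carlo Z-score: the true alignment score is compared against the score distribution over shuffled sequences. The toy identity-count scorer stands in for a real Smith-Waterman implementation, and the shuffle count is illustrative.

```python
import random
import statistics

def zscore(score_fn, query, subject, n_shuffles=100, seed=0):
    """Monte-Carlo Z-score: (true score - shuffled mean) / shuffled stdev."""
    rng = random.Random(seed)
    true_score = score_fn(query, subject)
    shuffled_scores = []
    for _ in range(n_shuffles):
        s = list(subject)
        rng.shuffle(s)                      # destroy order, keep composition
        shuffled_scores.append(score_fn(query, "".join(s)))
    mu = statistics.mean(shuffled_scores)
    sd = statistics.stdev(shuffled_scores)
    return (true_score - mu) / sd

# crude stand-in for Smith-Waterman: count matching positions
def toy_score(a, b):
    return sum(x == y for x, y in zip(a, b))

print(zscore(toy_score, "ACDEFGHIK", "ACDEFGHIR"))
```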

  12. Local non-similarity method through the Crocco's transformation in boundary layer problem

    International Nuclear Information System (INIS)

    Jardim, R.G.M.

    1981-04-01

The coordinate transformation developed by L. Crocco to obtain solutions for compressible fluid flows over isothermal flat plates is employed in an original way in the present work, with the purpose of adding its inherent advantages to the Non-Similarity Method devised by E.M. Sparrow for the solution of incompressible non-similar boundary layers. The Crocco transformation is applied to the conservation equations for forced-convection, laminar, constant-property, two-dimensional flows over solids. Two non-similar problems arising from the freestream velocity distribution, the cylinder in crossflow and Howarth's retarded flow, are solved with a view to illustrating the new procedure. In those solutions the effect of frictional heat is also considered. The results for the hydrodynamic and thermal problems are compared with available published information, and good agreement was observed. (Author) [pt

  13. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    Energy Technology Data Exchange (ETDEWEB)

    Ovacik, Meric A. [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu [Chemical and Biochemical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States); Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854 (United States)

    2013-09-15

Pathway-based information has become an important source of information both for establishing evolutionary relationships and for understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships, and extrapolation of species differences used in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.
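
A toy illustration of weighting aligned reactions by enzyme sequence similarity; the weighting scheme, reaction identifiers, and identity values are assumptions for the sketch, not the authors' exact scoring function.

```python
def pathway_similarity(reactions_a, reactions_b, enzyme_identity):
    """Hybrid score: Jaccard-style reaction alignment where each shared
    reaction is weighted by the sequence identity (in [0, 1]) of the
    enzymes catalysing it in the two species."""
    shared = reactions_a & reactions_b
    union = reactions_a | reactions_b
    if not union:
        return 0.0
    weighted = sum(enzyme_identity.get(r, 0.0) for r in shared)
    return weighted / len(union)

# hypothetical glycolysis reaction sets for two species
human = {"R01786", "R02740", "R04779"}
mouse = {"R01786", "R02740", "R01070"}
identity = {"R01786": 0.93, "R02740": 0.88}   # assumed BLAST identities
print(pathway_similarity(human, mouse, identity))
```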

  14. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    International Nuclear Information System (INIS)

    Ovacik, Meric A.; Androulakis, Ioannis P.

    2013-01-01

Pathway-based information has become an important source of information both for establishing evolutionary relationships and for understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships, and extrapolation of species differences used in a number of different applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.

  15. Class attendance, peer similarity, and academic performance in a large field study

    DEFF Research Database (Denmark)

    Kassarnig, Valentin; Bjerre-Nielsen, Andreas; Mones, Enys

    2017-01-01

Identifying the factors that determine academic performance is an essential part of educational research. Existing research indicates that class attendance is a useful predictor of subsequent course achievements. The majority of the literature is, however, based on surveys and self-reports, methods which have well-known systematic biases that lead to limitations on conclusions and generalizability as well as being costly to implement. Here we propose a novel method for measuring class attendance that overcomes these limitations by using location and bluetooth data collected from smartphone sensors. Based on measured attendance data of nearly 1,000 undergraduate students, we demonstrate that early and consistent class attendance strongly correlates with academic performance. In addition, our novel dataset allows us to determine that attendance among social peers was substantially correlated (>0.5), suggesting either an important peer effect or homophily with respect to attendance.

  16. Logarithmic Similarity Measure between Interval-Valued Fuzzy Sets and Its Fault Diagnosis Method

    Directory of Open Access Journals (Sweden)

    Zhikang Lu

    2018-02-01

Full Text Available Fault diagnosis is an important task for the normal operation and maintenance of equipment. In many real situations, the diagnosis data cannot provide deterministic values and are usually imprecise or uncertain. Thus, interval-valued fuzzy sets (IVFSs) are very suitable for expressing imprecise or uncertain fault information in real problems. However, the existing literature scarcely deals with fault diagnosis problems, such as gasoline engines and steam turbines, using IVFSs, and the similarity measure is one of the important tools in fault diagnosis. Therefore, this paper proposes, for the first time, a new similarity measure of IVFSs based on a logarithmic function, together with a fault diagnosis method built on it. By the logarithmic similarity measure between the fault knowledge and some diagnosis-testing samples with interval-valued fuzzy information, and its relation indices, we can determine the fault type and the ranking order of faults corresponding to the relation indices. The misfire fault diagnosis of a gasoline engine and the vibrational fault diagnosis of a turbine are then presented to demonstrate the simplicity and effectiveness of the proposed diagnosis method. The fault diagnosis results show that the proposed method not only gives the main fault types of the gasoline engine and steam turbine but also provides useful information for multi-fault analyses and for predicting future fault trends. Hence, the logarithmic similarity measure and its fault diagnosis method are the main contributions of this study, and they provide a useful new way to perform fault diagnosis with interval-valued fuzzy information.
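
The paper's exact formula is not reproduced here; the sketch below only illustrates the general shape of a logarithm-based similarity over interval-valued membership bounds, returning 1 for identical sets and decaying as the bounds diverge. The fault pattern and sample values are invented.

```python
import numpy as np

def log_similarity(A, B):
    """Illustrative logarithmic similarity between two interval-valued
    fuzzy sets A and B, given as arrays of [lower, upper] membership
    bounds; the exact formula in the paper may differ."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    diff = np.abs(A - B)                              # bound differences in [0, 1]
    return float(np.mean(1.0 - np.log2(1.0 + diff)))  # 1 when identical

fault_knowledge = [[0.2, 0.4], [0.6, 0.8], [0.1, 0.3]]    # assumed fault pattern
test_sample = [[0.25, 0.45], [0.55, 0.75], [0.1, 0.3]]    # diagnosis sample
print(log_similarity(fault_knowledge, test_sample))
```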

  17. Similar estimates of temperature impacts on global wheat yield by three independent methods

    DEFF Research Database (Denmark)

    Liu, Bing; Asseng, Senthold; Müller, Christoph

    2016-01-01

The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  18. Class attendance, peer similarity, and academic performance in a large field study

    Science.gov (United States)

    Bjerre-Nielsen, Andreas; Mones, Enys; Lehmann, Sune; Lassen, David Dreyer

    2017-01-01

    Identifying the factors that determine academic performance is an essential part of educational research. Existing research indicates that class attendance is a useful predictor of subsequent course achievements. The majority of the literature is, however, based on surveys and self-reports, methods which have well-known systematic biases that lead to limitations on conclusions and generalizability as well as being costly to implement. Here we propose a novel method for measuring class attendance that overcomes these limitations by using location and bluetooth data collected from smartphone sensors. Based on measured attendance data of nearly 1,000 undergraduate students, we demonstrate that early and consistent class attendance strongly correlates with academic performance. In addition, our novel dataset allows us to determine that attendance among social peers was substantially correlated (>0.5), suggesting either an important peer effect or homophily with respect to attendance. PMID:29117190

  19. Comparison of computational performance of GA and PSO optimization techniques when designing similar systems - Typical PWR core case

    Energy Technology Data Exchange (ETDEWEB)

    Souza Lima, Carlos A. [Instituto de Engenharia Nuclear - Divisao de Reatores/PPGIEN, Rua Helio de Almeida 75, Cidade Universitaria - Ilha do Fundao, P.O. Box: 68550 - Zip Code: 21941-972, Rio de Janeiro (Brazil); Instituto Politecnico, Universidade do Estado do Rio de Janeiro, Pos-Graduacao em Modelagem Computacional, Rua Alberto Rangel - s/n, Vila Nova, Nova Friburgo, Zip Code: 28630-050, Nova Friburgo (Brazil); Lapa, Celso Marcelo F.; Pereira, Claudio Marcio do N.A. [Instituto de Engenharia Nuclear - Divisao de Reatores/PPGIEN, Rua Helio de Almeida 75, Cidade Universitaria - Ilha do Fundao, P.O. Box: 68550 - Zip Code: 21941-972, Rio de Janeiro (Brazil); Instituto Nacional de Ciencia e Tecnologia de Reatores Nucleares Inovadores (INCT) (Brazil); Cunha, Joao J. da [Eletronuclear Eletrobras Termonuclear - Gerencia de Analise de Seguranca Nuclear, Rua da Candelaria, 65, 7 andar. Centro, Zip Code: 20091-906, Rio de Janeiro (Brazil); Alvim, Antonio Carlos M. [Universidade Federal do Rio de Janeiro, COPPE/Nuclear, Cidade Universitaria - Ilha do Fundao s/n, P.O.Box 68509 - Zip Code: 21945-970, Rio de Janeiro (Brazil); Instituto Nacional de Ciencia e Tecnologia de Reatores Nucleares Inovadores (INCT) (Brazil)

    2011-06-15

Research highlights: > Performance of PSO and GA techniques applied to similar system design. > This work uses the ANGRA 1 (two-loop PWR) core as a prototype. > Results indicate that the PSO technique is more adequate than GA to solve this kind of problem. - Abstract: This paper compares the performance of two optimization techniques, particle swarm optimization (PSO) and genetic algorithm (GA), applied to the design of a typical reduced-scale two-loop Pressurized Water Reactor (PWR) core at full power in single-phase forced-circulation flow. This comparison aims at analyzing the performance in reaching the global optimum, considering that both heuristics are population-based search methods, that is, methods whose population (candidate solution set) evolves from one generation to the next using a combination of deterministic and probabilistic rules. The simulated PWR, similar to the ANGRA 1 power plant, was used as a case example to compare the performance of PSO and GA. Results from simulations indicated that PSO is more adequate to solve this kind of problem.
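
A compact, self-contained comparison of the two heuristics on a toy objective; the sphere function stands in for the reactor-core design model, and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, pop, iters = 5, 30, 200
sphere = lambda x: np.sum(x ** 2, axis=-1)   # stand-in for the core design model

def pso():
    x = rng.uniform(-5, 5, (pop, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), sphere(x)
    for _ in range(iters):
        g = pbest[np.argmin(pval)]           # global best particle
        v = (0.7 * v
             + 1.5 * rng.random((pop, dim)) * (pbest - x)
             + 1.5 * rng.random((pop, dim)) * (g - x))
        x = x + v
        val = sphere(x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
    return float(pval.min())

def ga():
    x = rng.uniform(-5, 5, (pop, dim))
    for _ in range(iters):
        order = np.argsort(sphere(x))
        parents = x[order[: pop // 2]]                      # truncation selection
        p1 = parents[rng.integers(0, len(parents), pop)]
        p2 = parents[rng.integers(0, len(parents), pop)]
        x = (p1 + p2) / 2 + rng.normal(0, 0.1, (pop, dim))  # crossover + mutation
    return float(sphere(x).min())

print("PSO best:", pso(), "| GA best:", ga())
```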

  20. Comparison of computational performance of GA and PSO optimization techniques when designing similar systems - Typical PWR core case

    International Nuclear Information System (INIS)

    Souza Lima, Carlos A.; Lapa, Celso Marcelo F.; Pereira, Claudio Marcio do N.A.; Cunha, Joao J. da; Alvim, Antonio Carlos M.

    2011-01-01

Research highlights: → Performance of PSO and GA techniques applied to similar system design. → This work uses the ANGRA 1 (two-loop PWR) core as a prototype. → Results indicate that the PSO technique is more adequate than GA to solve this kind of problem. - Abstract: This paper compares the performance of two optimization techniques, particle swarm optimization (PSO) and genetic algorithm (GA), applied to the design of a typical reduced-scale two-loop Pressurized Water Reactor (PWR) core at full power in single-phase forced-circulation flow. This comparison aims at analyzing the performance in reaching the global optimum, considering that both heuristics are population-based search methods, that is, methods whose population (candidate solution set) evolves from one generation to the next using a combination of deterministic and probabilistic rules. The simulated PWR, similar to the ANGRA 1 power plant, was used as a case example to compare the performance of PSO and GA. Results from simulations indicated that PSO is more adequate to solve this kind of problem.

  1. A Two-Stage Composition Method for Danger-Aware Services Based on Context Similarity

    Science.gov (United States)

    Wang, Junbo; Cheng, Zixue; Jing, Lei; Ota, Kaoru; Kansen, Mizuo

Context-aware systems detect a user's physical and social contexts based on sensor networks and provide services that adapt to the user accordingly. Representing, detecting, and managing contexts are important issues in context-aware systems. Composition of contexts is a useful method for these tasks, since it can detect a context by automatically composing small pieces of information to discover a service. Danger-aware services are a kind of context-aware service that needs a description of the relations between a user and his/her surrounding objects and between users. However, when applied to danger-aware services, existing composition methods show two shortcomings: (1) they provide no explicit method for representing the composition of multiple users' contexts, and (2) they lack a flexible reasoning mechanism based on the similarity of contexts, so they can only provide services that exactly follow predefined context-reasoning rules. Therefore, in this paper we propose a two-stage composition method based on context similarity to solve the above problems. The first stage composes the useful information representing the context of a single user. The second stage composes multiple users' contexts to provide services by considering the relations between users. Finally, the danger degree of the detected context is computed using the context similarity between the detected context and the predefined context, as sketched below. Context is dynamically represented based on two-stage composition rules and a Situation-theory-based Ontology, which combines the advantages of Ontology and Situation theory. We implemented the system in an indoor ubiquitous environment and evaluated it through two experiments with human subjects. The experimental results show that the method is effective and that the accuracy of danger detection is acceptable for a danger-aware system.
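
A tiny sketch of the final step, computing a danger degree from the similarity between a detected context and a predefined danger context; the attribute names, weights, and contexts are invented for illustration.

```python
def context_similarity(detected, predefined, weights):
    """Weighted fraction of matching context attributes, in [0, 1]."""
    matched = sum(w for attr, w in weights.items()
                  if detected.get(attr) == predefined.get(attr))
    return matched / sum(weights.values())

danger_contexts = {
    "child_near_stove": {"user": "child", "object": "stove", "distance": "near"},
}
weights = {"user": 0.3, "object": 0.4, "distance": 0.3}   # assumed weights
detected = {"user": "child", "object": "stove", "distance": "far"}
for name, ctx in danger_contexts.items():
    degree = context_similarity(detected, ctx, weights)
    print(name, "danger degree:", degree)   # 0.7: partial match, lower danger
```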

  2. Estimation of Cross-Lingual News Similarities Using Text-Mining Methods

    Directory of Open Access Journals (Sweden)

    Zhouhao Wang

    2018-01-01

Full Text Available In this research, two estimation algorithms for extracting cross-lingual news pairs from financial news articles, based on machine learning, are proposed. Every second, innumerable text data, including all kinds of news, reports, messages, reviews, comments, and tweets, are generated on the Internet, written not only in English but also in other languages such as Chinese, Japanese, and French. By taking advantage of the multi-lingual text resources provided by Thomson Reuters News, we developed two estimation algorithms for extracting cross-lingual news pairs from these resources. In the first method, we propose a novel structure that uses word information and machine learning effectively for this task. In the second, we developed a bidirectional Long Short-Term Memory (LSTM) based method to calculate cross-lingual semantic text similarity for long and short texts, respectively. Thus, when an important news article is published, users can read similar news articles written in their native language using our method.
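
A minimal sketch of the bidirectional-LSTM similarity idea in PyTorch: each language gets an encoder, and similarity is the cosine between pooled hidden states. The layer sizes, pooling choice, and random token ids are assumptions; training on labelled news pairs is omitted.

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    """Encode a token-id sequence into a fixed-size vector with a BiLSTM."""
    def __init__(self, vocab=10000, emb=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)

    def forward(self, ids):
        out, _ = self.lstm(self.embed(ids))
        return out.mean(dim=1)               # mean-pool the hidden states

# one encoder per language; training would push true news pairs together
enc_en, enc_ja = BiLSTMEncoder(), BiLSTMEncoder()
en = torch.randint(0, 10000, (1, 20))        # stand-in English token ids
ja = torch.randint(0, 10000, (1, 25))        # stand-in Japanese token ids
sim = nn.functional.cosine_similarity(enc_en(en), enc_ja(ja))
print(sim.item())
```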

  3. Similar Estimates of Temperature Impacts on Global Wheat Yield by Three Independent Methods

    Science.gov (United States)

Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; et al.

    2016-01-01

The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  4. Similar estimates of temperature impacts on global wheat yield by three independent methods

    Science.gov (United States)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Rosenzweig, Cynthia; Aggarwal, Pramod K.; Alderman, Phillip D.; Anothai, Jakarat; Basso, Bruno; Biernath, Christian; Cammarano, Davide; Challinor, Andy; Deryng, Delphine; Sanctis, Giacomo De; Doltra, Jordi; Fereres, Elias; Folberth, Christian; Garcia-Vila, Margarita; Gayler, Sebastian; Hoogenboom, Gerrit; Hunt, Leslie A.; Izaurralde, Roberto C.; Jabloun, Mohamed; Jones, Curtis D.; Kersebaum, Kurt C.; Kimball, Bruce A.; Koehler, Ann-Kristin; Kumar, Soora Naresh; Nendel, Claas; O'Leary, Garry J.; Olesen, Jørgen E.; Ottman, Michael J.; Palosuo, Taru; Prasad, P. V. Vara; Priesack, Eckart; Pugh, Thomas A. M.; Reynolds, Matthew; Rezaei, Ehsan E.; Rötter, Reimund P.; Schmid, Erwin; Semenov, Mikhail A.; Shcherbak, Iurii; Stehfest, Elke; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Thorburn, Peter; Waha, Katharina; Wall, Gerard W.; Wang, Enli; White, Jeffrey W.; Wolf, Joost; Zhao, Zhigan; Zhu, Yan

    2016-12-01

The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impact on wheat yields at global and national scales. With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions. By forming a multi-method ensemble, it was possible to quantify 'method uncertainty' in addition to model uncertainty. This significantly improves confidence in estimates of climate impacts on global food security.

  5. Pulse shape analysis based on similarity and neural network with digital-analog fusion method

    International Nuclear Information System (INIS)

    Mardiyanto, M.P.; Uritani, A.; Sakai, H.; Kawarabayashi, J.; Iguchi, T.

    2000-01-01

Through the measurement of 22Na γ-rays, it has been demonstrated that the correction process works well when the similarity values are fused with the pulse heights measured by the analog system; at least four improvements in the energy spectrum characteristics were recognized, i.e., an increase in the peak-to-valley ratio, in the photopeak area, and in the photopeak sharpness without discarding any events, and the appearance of the 1,275 keV γ-ray photopeak. The use of a slow digitizer was the main limitation of this method; however, it can be solved easily using a faster digitizer. The fusion method was also applied to the separation of mixed beta-gamma spectra. Mixed beta-gamma spectra from a 137Cs-90Sr source could be separated well. We compared the energy spectrum of 137Cs obtained from an independent measurement with the result of the separation: the FWHM values agreed quite well, although there was a slight difference between the two spectra in the peak-to-valley ratio. This separation method is simple and useful, so it can be applied to many other similar applications. (S.Y.)

  6. Similar inflammatory responses following sprint interval training performed in hypoxia and normoxia

    Directory of Open Access Journals (Sweden)

    Alan James Richardson

    2016-08-01

Full Text Available Sprint interval training (SIT) is an efficient intervention capable of improving aerobic capacity and exercise performance. This experiment aimed to determine differences in training adaptations and the inflammatory responses following 2 weeks of SIT (30 s maximal work, 4 min recovery; 4-7 repetitions) performed in normoxia or hypoxia. Forty-two untrained participants (mean ± SD: age 21 ± 1 yrs, body mass 72.1 ± 11.4 kg, height 173 ± 10 cm) were equally and randomly assigned to one of three groups: control (CONT; no training, n = 14), normoxic (NORM; SIT in FiO2: 0.21, n = 14) and normobaric hypoxic (HYP; SIT in FiO2: 0.15, n = 14). Participants completed a V̇O2peak test, a time to exhaustion (TTE) trial (power = 80% V̇O2peak) and had haematological [haemoglobin (Hb), haematocrit (Hct)] and inflammatory markers [interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α)] measured in a resting state, pre and post SIT. V̇O2peak (mL.kg-1.min-1) improved in HYP (+11.9%) and NORM (+9.8%), but not CON (+0.9%). Similarly, TTE improved in HYP (+32.2%) and NORM (+33.0%), but not CON (+3.4%), whilst the power at the anaerobic threshold (AT; W.kg-1) also improved in HYP (+13.3%) and NORM (+8.0%), but not CON (-0.3%). AT (mL.kg-1.min-1) improved in HYP (+9.5%), but not NORM (+5%) or CON (-0.3%). No between-group change occurred in 30 s sprint performance or Hb and Hct. IL-6 increased in HYP (+17.4%) and NORM (+20.1%), but not CON (+1.2%). TNF-α increased in HYP (+10.8%), NORM (+12.9%) and CON (+3.4%). SIT in HYP and NORM increased V̇O2peak, power at AT and TTE performance in untrained individuals; improvements in AT occurred only when SIT was performed in HYP. Increases in IL-6 and TNF-α reflect a training-induced inflammatory response to SIT; hypoxic conditions do not exacerbate this.

  7. Dimensional analysis and self-similarity methods for engineers and scientists

    CERN Document Server

    Zohuri, Bahman

    2015-01-01

This ground-breaking reference provides an overview of key concepts in dimensional analysis, and then pushes well beyond traditional applications in fluid mechanics to demonstrate how powerful this tool can be in solving complex problems across many diverse fields. Of particular interest is the book's coverage of dimensional analysis and self-similarity methods in nuclear and energy engineering. Numerous practical examples of dimensional problems are presented throughout, allowing readers to link the book's theoretical explanations and step-by-step mathematical solutions to practical implementations.

  8. Do Tillage Methods Affect Germination and Species Similarity of Soil Weed Seeds Bank?

    Directory of Open Access Journals (Sweden)

    Shahgholi Hassan

    2015-12-01

Full Text Available Cultural practices such as the tillage used for crop production influence the composition of the weed seed bank in the soil. In order to investigate the effects of different tillage methods on seed bank properties, species diversity, and similarity, two experiments, one in the laboratory and one in the greenhouse, were carried out in a randomized complete block design with four replications in 2011. Treatments included: once tillage per year (T1), twice tillage per year (T2), more than twice tillage (T3), and no tillage (T4). Laboratory results showed that the T3 and T4 treatments had the highest and the lowest observed seed numbers, respectively. Among the weed seeds observed in the laboratory, Echinochloa crus-galli and Amaranthus retroflexus had the highest seed numbers in the T3 treatment, while Chenopodium album, Polygonum aviculare, and Cuscuta campestris had the highest seed numbers in the T2 treatment. In the greenhouse study, Chenopodium album, Amaranthus retroflexus, and Hordeum morinum were the dominant species in the T2 treatment. The highest diversity was observed in the T2 treatment, and Chenopodium album and Echinochloa crus-galli were dominant species in the T2 and T3 treatments. The maximum species similarity index was obtained between the T1 and T3 treatments. This study therefore concludes that increasing tillage frequency can affect the similarity index of weed seeds and subsequently alter the weed community composition.

  9. A Novel Drug-Mouse Phenotypic Similarity Method Detects Molecular Determinants of Drug Effects.

    Directory of Open Access Journals (Sweden)

    Jeanette Prinz

    2016-09-01

Full Text Available The molecular mechanisms that translate drug treatment into beneficial and unwanted effects are largely unknown. We present here a novel approach to detect gene-drug and gene-side effect associations, based on the phenotypic similarity of drugs and single-gene perturbations in mice, that accounts for the polypharmacological properties of drugs. We scored the phenotypic similarity of human side effect profiles of 1,667 small molecules and biologicals against profiles of phenotypic traits of 5,384 mouse genes. Benchmarking with known relationships revealed a strong enrichment of physical and indirect drug-target connections, causative drug target-side effect links, and gene-drug links involved in pharmacogenetic associations among phenotypically similar gene-drug pairs. Validation by in vitro assays and the experimental verification of a previously unknown connection between oxandrolone and prokineticin receptor 2 reinforce the ability of this method to provide new molecular insights underlying drug treatment. Thus, this approach may aid in the proposal of novel and personalized treatments.
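
A toy sketch of the profile-matching idea: compare a drug's side-effect terms with each gene's knockout-phenotype terms and rank candidate genes. The Jaccard score, term names, and profiles are illustrative; the paper's actual scoring handles ontology structure and polypharmacology.

```python
def phenotype_similarity(drug_terms, gene_terms):
    """Jaccard overlap between two phenotype term sets."""
    drug_terms, gene_terms = set(drug_terms), set(gene_terms)
    union = drug_terms | gene_terms
    return len(drug_terms & gene_terms) / len(union) if union else 0.0

drug_profile = {"tachycardia", "tremor", "weight loss"}          # assumed
gene_profiles = {
    "ADRB1": {"tachycardia", "tremor", "hypertension"},          # assumed
    "LEP": {"obesity", "hyperphagia"},
}
ranked = sorted(gene_profiles,
                key=lambda g: phenotype_similarity(drug_profile, gene_profiles[g]),
                reverse=True)
print(ranked)   # candidate genes underlying the drug's effects
```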

  10. Similarity and symmetry methods applications in elasticity and mechanics of materials

    CERN Document Server

    Mladenov, Ivaïlo

    2014-01-01

    The principle aim of the book is to present a self-contained, modern account of similarity and symmetry methods, which are important mathematical tools for both physicists, engineers and applied mathematicians. The idea is to provide a balanced presentation of the mathematical techniques and applications of symmetry methods in mathematics, physics and engineering. That is why it includes recent developments and many examples in finding systematically conservation laws, local and nonlocal symmetries for ordinary and partial differential equations. The role of continuous symmetries in classical and quantum field theories is exposed at a technical level accessible even for non specialists. The importance of symmetries in continuum mechanics and mechanics of materials is highlighted through recent developments, such as the construction of constitutive models for various materials combining Lie symmetries with experimental data. As a whole this book is a unique collection of contributions from experts in the field...

  11. Load reduction test method of similarity theory and BP neural networks of large cranes

    Science.gov (United States)

    Yang, Ruigang; Duan, Zhibin; Lu, Yi; Wang, Lei; Xu, Gening

    2016-01-01

Static load tests are an important means of inspecting and verifying a crane's lift capacity. Due to space restrictions, however, there are difficulties and potential dangers when testing large bridge cranes. To solve the loading problems of large-tonnage cranes during testing, an equivalency test is proposed based on similarity theory and BP neural networks. The maximum stress and displacement of a large bridge crane are tested under small loads and combined with a neural network trained on the stress and displacement data of a crane of similar structure, collected from a physics simulation loaded progressively up to the static-load test load within the material's working range. The maximum stress and displacement of a crane under a static-load test load can then be predicted through the relationship between stress, displacement, and load. By measuring the stress and displacement under small-tonnage weights, the response to large loads, such as the maximum test load of 1.25 times the rated capacity, can be predicted. An experimental study shows that the load reduction test method can reflect the lift capacity of large bridge cranes. The load-shedding predictive analysis of the Sanxia 1,200 t bridge crane test data indicates that when the load is 1.25 times the rated lifting capacity, the error between the predicted and actual displacement is zero. The method solves the problem that lifting capacities are difficult to obtain, and that testing accidents can easily occur, when loads of 1.25 times the rated capacity are applied to large-tonnage cranes.

  12. Method of synthesis of abstract images with high self-similarity

    Science.gov (United States)

Matveev, Nikolay V.; Shcheglov, Sergey A.; Romanova, Galina E.; Koneva, Tatiana A.

    2017-06-01

Abstract images with high self-similarity could be used for drug-free stress therapy. This is based on the fact that a complex visual environment has a high affective appraisal. To create such an image, we can use a setup based on three low-power laser sources of different colors (red, green, blue); the image is the pattern resulting from reflection and refraction by an object of complicated form placed in the laser ray paths. Images obtained experimentally in this way showed a good therapeutic effect. However, finding and choosing an object that gives the needed image structure is very difficult and requires many trials. The goal of this work is to develop a method and procedure for finding the object form which, if placed in the ray paths, can provide the necessary structure of the image. In effect, the task means obtaining a prescribed irradiance distribution on a given surface. Traditionally such problems are solved using non-imaging optics methods; in the given case this task is very complicated because of the complicated structure of the illuminance distribution and its high non-linearity. An alternative way is to use the projected image of a mask with a given structure. We consider both ways and discuss how they can help to speed up the synthesis procedure of highly self-similar abstract images for drug-free therapy setups.

  13. Cold modalities with different thermodynamic properties have similar effects on muscular performance and activation.

    Science.gov (United States)

    Vieira, A; Oliveira, A B; Costa, J R; Herrera, E; Salvini, T F

    2013-10-01

Although tissue cooling is widely used in the treatment of musculoskeletal injuries, there is still controversy about its effects on muscular performance. The combination of cooling and exercise justifies the study of this topic. The aim was to compare the effects of an ice pack and cold-water immersion on the muscular performance parameters of the plantar flexors and the muscular activation of the triceps surae. 41 healthy men (mean age: 22.1 years, SD: 2.9) were randomly assigned to cooling with either an ice pack (n=20) or cold-water immersion (n=21). Independent variables were cold modality (ice pack or cold-water immersion) and pre- and post-cooling measurement time. Dependent variables were muscular performance (measured during isometric and concentric contractions of the plantar flexors) and electromyography parameters of the triceps surae (median frequency and root mean square amplitude). Dependent-samples t-tests were used to compare pre- and post-cooling data, and independent-samples t-tests were used to compare the difference (pre- vs. post-cooling) between groups. The ice pack increased isometric peak torque (mean: 9.00 Nm, P=0.01), and both cold modalities reduced muscular activation in the triceps surae (P<0.05). Both cold-water immersion and the ice pack reduced peak torque and total work during dynamic isokinetic contraction at both velocities (mean: -11.00 Nm, P<0.05), indicating that both modalities decrease concentric muscular performance. These results indicate that these cooling methods should be chosen with caution, considering the type of task required during training or rehabilitation. New studies investigating other muscle groups and joints are necessary.

  14. Application of clustering methods: Regularized Markov clustering (R-MCL) for analyzing dengue virus similarity

    Science.gov (United States)

    Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.

    2017-07-01

Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering of 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm, and then to analyze the result. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters. The number of centroids shows the density level of interaction. Protein interactions connected in a tissue form a protein complex that serves as a unit of a specific biological process. The analysis shows that R-MCL clustering groups the dengue virus family based on the similar roles of their constituent proteins, regardless of serotype.
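
A minimal sketch of Regularized MCL on a toy similarity graph: where plain MCL expands by squaring the stochastic matrix, R-MCL multiplies by the original column-stochastic matrix before inflation. The graph and parameters are invented for illustration.

```python
import numpy as np

def rmcl(adjacency, inflation=2.0, iters=50):
    """Regularized Markov Clustering sketch: plain MCL expands with M @ M;
    R-MCL instead regularizes with the original column-stochastic matrix
    (M @ M_G), then applies inflation and re-normalization."""
    A = np.asarray(adjacency, float) + np.eye(len(adjacency))  # add self-loops
    M_G = A / A.sum(axis=0)               # column-stochastic "ground" matrix
    M = M_G.copy()
    for _ in range(iters):
        M = M @ M_G                       # regularization (flow) step
        M = M ** inflation                # inflation strengthens strong flows
        M = M / M.sum(axis=0)             # re-normalize columns
    clusters = {}
    for node in range(M.shape[1]):
        attractor = int(np.argmax(M[:, node]))   # row receiving the most flow
        clusters.setdefault(attractor, []).append(node)
    return list(clusters.values())

# toy similarity graph standing in for protein-sequence similarities
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]])
print(rmcl(A))    # expected: [[0, 1, 2], [3, 4]]
```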

  15. Similarity Digest Search: A Survey and Comparative Analysis of Strategies to Perform Known File Filtering Using Approximate Matching

    Directory of Open Access Journals (Sweden)

    Vitor Hugo Galhardo Moia

    2017-01-01

Full Text Available Digital forensics is a branch of Computer Science aimed at investigating and analyzing electronic devices in the search for crime evidence. There are several ways to perform this search. Known File Filtering (KFF) is one of them, where a list of objects of interest is used to reduce/separate the data for analysis. Holding a database of hashes of such objects, the examiner performs lookups for matches against the target device. However, due to the limitations of hash functions (their inability to detect similar objects), new methods, called approximate matching, have been designed. This sort of function has interesting characteristics for KFF investigations but suffers mainly from high costs when dealing with huge data sets, as the search is usually done by brute force. To mitigate this problem, strategies have been developed to perform lookups more efficiently. In this paper, we present the state of the art of similarity digest search strategies, along with a detailed comparison involving several aspects, such as time complexity, memory requirements, and search precision. Our results show that none of the approaches addresses all of these aspects at once. Finally, we discuss future directions and present requirements for a new strategy aiming to overcome the current limitations.
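
A toy sketch of the brute-force KFF baseline that the survey measures the strategies against: every known digest is compared to the target. The byte-histogram "digest" below is a crude stand-in for a real similarity digest such as ssdeep or sdhash, and the threshold and file names are invented.

```python
import numpy as np

def digest(data: bytes) -> np.ndarray:
    """Normalized byte histogram, a crude stand-in for a similarity digest."""
    hist = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return hist / max(len(data), 1)

def similarity(d1: np.ndarray, d2: np.ndarray) -> float:
    """Histogram overlap score in [0, 1]; 1 means identical distributions."""
    return 1.0 - 0.5 * float(np.abs(d1 - d2).sum())

def kff_lookup(target: bytes, known_digests, threshold=0.9):
    """Brute force: compare the target against every known digest."""
    t = digest(target)
    return [name for name, d in known_digests.items()
            if similarity(t, d) >= threshold]

known = {"contraband.jpg": digest(b"\x00\x01" * 500)}
print(kff_lookup(b"\x00\x01" * 499 + b"\xff\xff", known))
```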

  16. Similar judgment method of brain neural pathway using DT-MRI

    International Nuclear Information System (INIS)

    Watashiba, Yasuhiro; Sakamoto, Naohisa; Sakai, Koji; Koyamada, Koji; Kanazawa, Masanori; Doi, Akio

    2008-01-01

Nowadays, visualization of brain neural pathways extracted by tractography is considered a useful tool for detecting involved areas and analyzing the causes of disease, by comparing differences between the nerve fiber configurations of normal subjects and patients, and for supporting surgical planning and predicting post-operative progress. So far, observation of the brain neural pathway has relied on the user subjectively judging its 3D shape in the displayed image. However, with this kind of subjective observation it is difficult to verify the validity of the diagnostic result, and sufficient reliability cannot be obtained. We therefore think that a system comparing shapes on the basis of a quantitative evaluation is necessary. To resolve this problem, we propose a system that enables the shapes of brain neural pathways extracted by tractography to be compared quantitatively. The proposed system calculates the similarity between two neural pathways and displays the areas of difference according to that similarity. (author)

  17. Presumed cultural similarity paradox : Expatriate adjustment and performance across the border or over the globe

    NARCIS (Netherlands)

    Vromans, P.; van Engen, M.L.; Mol, S.

    2013-01-01

Purpose To introduce the presumed cultural similarity paradox as a possible explanation for the findings that adjusting to a culturally similar country is just as difficult as adjusting to a culturally dissimilar country. We provide a conceptual framework, enabling further understanding and research.

  18. Presumed cultural similarity paradox: expatriate adjustment and performance across the border or over the globe

    NARCIS (Netherlands)

    Vromans, P.; van Engen, M.; Mol, S.

    2013-01-01

Purpose - To introduce the presumed cultural similarity paradox as a possible explanation for the findings that adjusting to a culturally similar country is just as difficult as adjusting to a culturally dissimilar country. We provide a conceptual framework, enabling further understanding and research.

  19. The In-Medium Similarity Renormalization Group: A novel ab initio method for nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Hergert, H., E-mail: hergert@nscl.msu.edu [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Department of Physics, The Ohio State University, Columbus, OH 43210 (United States); Bogner, S.K., E-mail: bogner@nscl.msu.edu [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Morris, T.D., E-mail: morrist@nscl.msu.edu [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Schwenk, A., E-mail: schwenk@physik.tu-darmstadt.de [Institut für Kernphysik, Technische Universität Darmstadt, 64289 Darmstadt (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291 Darmstadt (Germany); Tsukiyama, K., E-mail: tsuki.kr@gmail.com [Center for Nuclear Study, Graduate School of Science, University of Tokyo, Hongo, Tokyo, 113-0033 (Japan)

    2016-03-21

    We present a comprehensive review of the In-Medium Similarity Renormalization Group (IM-SRG), a novel ab initio method for nuclei. The IM-SRG employs a continuous unitary transformation of the many-body Hamiltonian to decouple the ground state from all excitations, thereby solving the many-body problem. Starting from a pedagogical introduction of the underlying concepts, the IM-SRG flow equations are developed for systems with and without explicit spherical symmetry. We study different IM-SRG generators that achieve the desired decoupling, and how they affect the details of the IM-SRG flow. Based on calculations of closed-shell nuclei, we assess possible truncations for closing the system of flow equations in practical applications, as well as choices of the reference state. We discuss the issue of center-of-mass factorization and demonstrate that the IM-SRG ground-state wave function exhibits an approximate decoupling of intrinsic and center-of-mass degrees of freedom, similar to Coupled Cluster (CC) wave functions. To put the IM-SRG in context with other many-body methods, in particular many-body perturbation theory and non-perturbative approaches like CC, a detailed perturbative analysis of the IM-SRG flow equations is carried out. We conclude with a discussion of ongoing developments, including IM-SRG calculations with three-nucleon forces, the multi-reference IM-SRG for open-shell nuclei, first non-perturbative derivations of shell-model interactions, and the consistent evolution of operators in the IM-SRG. We dedicate this review to the memory of Gerry Brown, one of the pioneers of many-body calculations of nuclei.
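
A toy free-space SRG flow on a small matrix illustrates the core idea of driving a Hamiltonian toward decoupled (diagonal) form via a continuous unitary transformation. This is a sketch of the flow equation dH/ds = [eta, H] with the Wegner generator eta = [Hd, Hod], not the in-medium, normal-ordered machinery of the review; the matrix is invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy SRG flow: off-diagonal elements decay as the flow parameter s grows,
# decoupling the lowest state while the spectrum is preserved.
H0 = np.array([[1.0, 0.5, 0.2],
               [0.5, 2.0, 0.4],
               [0.2, 0.4, 3.0]])

def flow(s, y):
    H = y.reshape(3, 3)
    Hd = np.diag(np.diag(H))            # diagonal part
    Hod = H - Hd                        # off-diagonal part
    eta = Hd @ Hod - Hod @ Hd           # Wegner generator [Hd, Hod]
    return (eta @ H - H @ eta).ravel()  # flow equation [eta, H]

sol = solve_ivp(flow, [0.0, 10.0], H0.ravel(), rtol=1e-9, atol=1e-11)
H_final = sol.y[:, -1].reshape(3, 3)
print(np.round(H_final, 4))             # nearly diagonal
print(np.linalg.eigvalsh(H0))           # same spectrum as the diagonal entries
```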

  20. The In-Medium Similarity Renormalization Group: A novel ab initio method for nuclei

    International Nuclear Information System (INIS)

    Hergert, H.; Bogner, S.K.; Morris, T.D.; Schwenk, A.; Tsukiyama, K.

    2016-01-01

    We present a comprehensive review of the In-Medium Similarity Renormalization Group (IM-SRG), a novel ab initio method for nuclei. The IM-SRG employs a continuous unitary transformation of the many-body Hamiltonian to decouple the ground state from all excitations, thereby solving the many-body problem. Starting from a pedagogical introduction of the underlying concepts, the IM-SRG flow equations are developed for systems with and without explicit spherical symmetry. We study different IM-SRG generators that achieve the desired decoupling, and how they affect the details of the IM-SRG flow. Based on calculations of closed-shell nuclei, we assess possible truncations for closing the system of flow equations in practical applications, as well as choices of the reference state. We discuss the issue of center-of-mass factorization and demonstrate that the IM-SRG ground-state wave function exhibits an approximate decoupling of intrinsic and center-of-mass degrees of freedom, similar to Coupled Cluster (CC) wave functions. To put the IM-SRG in context with other many-body methods, in particular many-body perturbation theory and non-perturbative approaches like CC, a detailed perturbative analysis of the IM-SRG flow equations is carried out. We conclude with a discussion of ongoing developments, including IM-SRG calculations with three-nucleon forces, the multi-reference IM-SRG for open-shell nuclei, first non-perturbative derivations of shell-model interactions, and the consistent evolution of operators in the IM-SRG. We dedicate this review to the memory of Gerry Brown, one of the pioneers of many-body calculations of nuclei.

  1. Advanced Models and Algorithms for Self-Similar IP Network Traffic Simulation and Performance Analysis

    Science.gov (United States)

    Radev, Dimitar; Lokshina, Izabella

    2010-11-01

The paper examines the self-similar (or fractal) properties of real communication network traffic data over a wide range of time scales. These self-similar properties are very different from the properties of traditional models based on Poisson and Markov-modulated Poisson processes. Advanced fractal models of sequential generators and fixed-length sequence generators, together with efficient algorithms used to simulate the self-similar behavior of IP network traffic data, are developed and applied. Numerical examples are provided, and simulation results are obtained and analyzed.
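
One standard way to generate such traffic, sketched below, is to superpose ON/OFF sources with heavy-tailed (Pareto) sojourn times, which yields an asymptotically self-similar aggregate; whether this matches the paper's specific generators is an assumption, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def pareto_on_off_traffic(n_sources=50, n_slots=10000, alpha=1.5):
    """Superpose ON/OFF sources with Pareto sojourn times; the aggregate is
    asymptotically self-similar with Hurst parameter H = (3 - alpha) / 2,
    here H = 0.75."""
    load = np.zeros(n_slots)
    for _ in range(n_sources):
        t, on = 0, bool(rng.integers(0, 2))
        while t < n_slots:
            dur = int(np.ceil(rng.pareto(alpha) + 1.0))   # heavy-tailed period
            if on:
                load[t:t + dur] += 1.0                    # 1 unit/slot while ON
            t += dur
            on = not on
    return load

traffic = pareto_on_off_traffic()
# Aggregating over growing block sizes barely smooths the series, the
# visual signature of self-similarity (burstiness at every time scale).
for m in (1, 10, 100):
    blocks = traffic[: len(traffic) // m * m].reshape(-1, m).mean(axis=1)
    print(m, round(float(blocks.std()), 3))
```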

  2. Lyme Disease Diagnosed by Alternative Methods: A Phenotype Similar to That of Chronic Fatigue Syndrome.

    Science.gov (United States)

    Patrick, David M; Miller, Ruth R; Gardy, Jennifer L; Parker, Shoshana M; Morshed, Muhammad G; Steiner, Theodore S; Singer, Joel; Shojania, Kam; Tang, Patrick

    2015-10-01

A subset of patients reporting a diagnosis of Lyme disease can be described as having alternatively diagnosed chronic Lyme syndrome (ADCLS), in which diagnosis is based on laboratory results from a nonreference Lyme specialty laboratory using in-house criteria. Patients with ADCLS report symptoms similar to those reported by patients with chronic fatigue syndrome (CFS). We performed a case-control study comparing patients with ADCLS and CFS to each other and to both healthy controls and controls with systemic lupus erythematosus (SLE). Subjects completed a history, physical exam, screening laboratory tests, 7 functional scales, reference serology for Lyme disease using Centers for Disease Control and Prevention criteria, reference serology for other tick-associated pathogens, and cytokine expression studies. The study enrolled 13 patients with ADCLS (12 of whom were diagnosed by 1 alternative US laboratory), 25 patients with CFS, 25 matched healthy controls, and 11 SLE controls. Baseline clinical data and functional scales indicate significant disability among ADCLS and CFS patients and many important differences between these groups and controls, but no significant differences between each other. No ADCLS patient was confirmed as having positive Lyme serology by reference laboratory testing, and there was no difference in distribution of positive serology for other tick-transmitted pathogens or cytokine expression across the groups. In British Columbia, a setting with low Lyme disease incidence, ADCLS patients have a similar phenotype to that of CFS patients. Disagreement between alternative and reference laboratory Lyme testing results in this setting is most likely explained by false-positive results from the alternative laboratory.

  3. Evaluation of the performance of diagnosis-related groups and similar casemix systems: methodological issues.

    Science.gov (United States)

    Palmer, G; Reid, B

    2001-05-01

    With the increasing recognition and application of casemix for managing and financing healthcare resources, the evaluation of alternative versions of systems such as diagnosis-related groups (DRGs) has been afforded high priority by governments and researchers in many countries. Outside the United States, an important issue has been the perceived need to produce local versions, and to establish whether or not these perform more effectively than the US-based classifications. A discussion of casemix evaluation criteria highlights the large number of measures that may be used, the rationale and assumptions underlying each measure, and the problems in interpreting the results. A review of recent evaluation studies from a number of countries indicates that considerable emphasis has been placed on the predictive validity criterion, as measured by the R2 statistic. However, the interpretation of the findings has been affected greatly by the methods used, especially the treatment and definition of outlier cases. Furthermore, the extent to which other evaluation criteria have been addressed has varied widely. In the absence of minimum evaluation standards, it is not possible to draw clear-cut conclusions about the superiority of one version of a casemix system over another, the need for a local adaptation, or the further development of an existing version. Without the evidence provided by properly designed studies, policy-makers and managers may place undue reliance on subjective judgments and the views of the most influential, but not necessarily best informed, healthcare interest groups.
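
A synthetic illustration of why the treatment of outlier cases drives the reported R2 (predictive validity) in casemix evaluations: trimming a handful of extreme lengths of stay changes R2 substantially. The data, trimming rule, and group structure are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_groups = 5000, 20
group = rng.integers(0, n_groups, n)            # hypothetical DRG assignment
los = rng.gamma(2.0, 2.0 + group)               # length of stay by group
los[rng.choice(n, 25, replace=False)] *= 15     # a few extreme outlier cases

def r2(y, g):
    """Variance in y explained by group means (predictive validity)."""
    pred = np.zeros_like(y)
    for k in range(n_groups):
        mask = g == k
        if mask.any():
            pred[mask] = y[mask].mean()
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

keep = los < np.percentile(los, 99)             # one common outlier rule
print("R2, all cases:       ", round(r2(los, group), 3))
print("R2, outliers removed:", round(r2(los[keep], group[keep]), 3))
```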

  4. Method to perform radioimmunological analyses

    International Nuclear Information System (INIS)

    Friedel, R.

    1976-01-01

The invention concerns a method for the radioimmunological detection of antigens. According to the invention, antibodies are adsorbed onto water-insoluble high-polymer compounds on the inner surfaces of a capillary device; a labelled antigen is then added and, following incubation, suction of the test mixture, and washing of the coated surfaces, the radioactivity of the latter is measured. (VJ) [de

  5. A novel method to remove GPR background noise based on the similarity of non-neighboring regions

    Science.gov (United States)

    Montiel-Zafra, V.; Canadas-Quesada, F. J.; Vera-Candeas, P.; Ruiz-Reyes, N.; Rey, J.; Martinez, J.

    2017-09-01

Ground penetrating radar (GPR) is a non-destructive technique that has been widely used in many areas of research, such as landmine detection or subsurface anomalies, where targets embedded within a background medium must be located. One of the major challenges in GPR data research remains improving the image quality of stone materials through the detection of true anisotropies, since most errors are caused by incorrect interpretation by users. This is complicated by the interference of horizontal background noise, e.g., from the air-ground interface, which reduces the high-resolution quality of radargrams; weak or deep anisotropies are thus often masked by this type of noise. In order to remove the background noise in GPR data, this work proposes a novel background removal method that assumes the horizontal noise appears as repetitive two-dimensional regions along the movement of the GPR antenna. Specifically, the proposed method, based on the non-local similarity of regions over distance, computes similarities between different regions at the same depth to identify the most repetitive regions, using a criterion that avoids nearby regions. Evaluations are performed using a set of synthetic and real GPR data. Experimental results show that the proposed method obtains promising results compared to classic background removal techniques and the most recently published background removal methods.
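
A simplified sketch of the non-neighboring-similarity idea: each along-track patch subtracts the average of its most similar patches at the same depth, skipping close neighbors so genuine local reflections survive. Patch size, the exclusion window, and the synthetic radargram are illustrative.

```python
import numpy as np

def remove_background(radargram, patch=16, exclude=4, k=8):
    """For each along-track patch, subtract the mean of its k most similar
    patches at the same depth, excluding the `exclude` nearest neighbors so
    that local reflections (true targets) are not subtracted away."""
    out = radargram.copy()
    n = radargram.shape[1] // patch
    cols = [radargram[:, i * patch:(i + 1) * patch] for i in range(n)]
    for i in range(n):
        dists = [(float(np.abs(cols[i] - cols[j]).mean()), j)
                 for j in range(n) if abs(i - j) > exclude]
        nearest = sorted(dists)[:k]
        background = np.mean([cols[j] for _, j in nearest], axis=0)
        out[:, i * patch:(i + 1) * patch] = cols[i] - background
    return out

# synthetic radargram: horizontal banding (background) plus one local blob
rg = np.tile(np.sin(np.linspace(0, 6, 128))[:, None], (1, 256))
rg[60:70, 120:136] += 2.0        # "target" reflection
clean = remove_background(rg)    # banding removed, blob preserved
```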

  6. A new method for brain tumor detection using the Bhattacharyya similarity coefficient, color conversions and neural network

    Directory of Open Access Journals (Sweden)

    Bahman Mansori

    2015-10-01

Full Text Available Background: Magnetic resonance imaging (MRI) is widely applied for the examination and diagnosis of brain tumors owing to its high resolution in depicting soft tissues and its freedom from harmful radiation. The goal of the image processing is the automatic segmentation of brain edema and tumors in different dimensions of the magnetic resonance images. Methods: The proposed method is unsupervised: it discovers the tumor region, if there is any, by analyzing the similarity between the two hemispheres, computing a goal function over the image based on the Bhattacharyya coefficient, which is used in the next stage to detect the tumor region or some part of it. In this stage, to reduce the color variation, the gray brain image is segmented and then converted back to gray. A self-organizing map (SOM) neural network is used to color the segmented brain image, and finally the tumor is detected by matching the detected region with the colored image. This method for analyzing MRI images to discover brain tumors was developed at Bu Ali Sina University, Hamedan, Iran, in 2014. Results: The results for 30 randomly selected images from the data bank of the MRI center in Hamedan were compared with manual segmentation by experts. Our proposed method achieved an accuracy of more than 94% on the Jaccard similarity index (JSI), 97% on the Dice similarity score (DSS), and 98% and 99% on the two measures of specificity and sensitivity. Conclusion: The experimental results were satisfactory, so the method can be used for the automatic separation of tumor from normal brain tissue and therefore in practical applications. The results also showed that the SOM neural network is useful for classifying brain magnetic resonance images and demonstrated good performance.
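
A small sketch of the hemisphere-comparison idea: the Bhattacharyya coefficient between left- and right-hemisphere intensity histograms is close to 1 for a symmetric (healthy) slice and drops when a lesion breaks the symmetry. The bin count, mirroring, and synthetic data are illustrative, not the paper's pipeline.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms:
    1.0 for identical distributions, smaller as they diverge."""
    return float(np.sum(np.sqrt(p * q)))

def hemisphere_asymmetry(slice2d, bins=64):
    """Compare intensity histograms of the left and right hemispheres;
    a low coefficient flags a possibly tumor-bearing slice."""
    mid = slice2d.shape[1] // 2
    left = slice2d[:, :mid]
    right = slice2d[:, mid:][:, ::-1]        # mirror the right hemisphere
    hl, _ = np.histogram(left, bins=bins, range=(0, 1))
    hr, _ = np.histogram(right, bins=bins, range=(0, 1))
    return bhattacharyya(hl / hl.sum(), hr / hr.sum())

rng = np.random.default_rng(3)
healthy = rng.random((128, 128))
lesioned = healthy.copy()
lesioned[40:60, 90:110] = 1.0                # bright synthetic "tumor"
print(hemisphere_asymmetry(healthy), hemisphere_asymmetry(lesioned))
```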

  7. Detecting and classifying method based on similarity matching of Android malware behavior with profile.

    Science.gov (United States)

    Jang, Jae-Wook; Yun, Jaesung; Mohaisen, Aziz; Woo, Jiyoung; Kim, Huy Kang

    2016-01-01

    Mass-market mobile security threats have increased recently due to the growth of mobile technologies and the popularity of mobile devices. Accordingly, techniques have been introduced for identifying, classifying, and defending against mobile threats utilizing static, dynamic, on-device, and off-device techniques. Static techniques are easy to evade, while dynamic techniques are expensive; on-device techniques are constrained by device resources, while off-device techniques need to be always online. To address some of those shortcomings, we introduce Andro-profiler, a hybrid behavior-based analysis and classification system for mobile malware. Andro-profiler's main goals are efficiency, scalability, and accuracy. To that end, Andro-profiler classifies malware by exploiting behavior profiles extracted from integrated system logs, including system calls. Andro-profiler executes a malicious application on an emulator in order to generate the integrated system logs, and creates human-readable behavior profiles by analyzing them. By comparing the behavior profile of a malicious application with a representative behavior profile for each malware family using a weighted similarity matching technique, Andro-profiler detects the application and classifies it into a malware family. The experimental results demonstrate that Andro-profiler is scalable, performs well in detecting and classifying malware with accuracy greater than 98%, outperforms the existing state-of-the-art work, and is capable of identifying 0-day mobile malware samples.
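    The weighted similarity matching step can be pictured as a weighted overlap between a sample's behavior profile and a family's representative profile. The sketch below uses a weighted Jaccard score over feature counts; the feature names and weights are hypothetical, not Andro-profiler's actual scheme:

```python
def weighted_similarity(profile, family_profile, weights):
    """Weighted Jaccard similarity between two behavior profiles, each a
    mapping from behavior features (e.g. system-call categories) to counts;
    `weights` encodes each feature's importance (values hypothetical)."""
    feats = set(profile) | set(family_profile)
    num = sum(weights.get(f, 1.0) * min(profile.get(f, 0), family_profile.get(f, 0))
              for f in feats)
    den = sum(weights.get(f, 1.0) * max(profile.get(f, 0), family_profile.get(f, 0))
              for f in feats)
    return num / den if den else 0.0

# Hypothetical usage: assign the sample to the family with the highest score.
sample = {"net_send": 12, "sms_send": 3, "file_write": 5}
families = {"famA": {"net_send": 10, "sms_send": 4}, "famB": {"file_write": 9}}
w = {"sms_send": 2.0}  # weight privacy-sensitive behavior more heavily
best = max(families, key=lambda f: weighted_similarity(sample, families[f], w))
```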

  8. Improving performance of content-based image retrieval schemes in searching for similar breast mass regions: an assessment

    International Nuclear Information System (INIS)

    Wang Xiaohui; Park, Sang Cheol; Zheng Bin

    2009-01-01

    This study aims to assess three methods commonly used in content-based image retrieval (CBIR) schemes and investigate approaches to improve scheme performance. A reference database involving 3000 regions of interest (ROIs) was established. Among them, 400 ROIs were randomly selected to form a testing dataset. Three methods, namely mutual information, Pearson's correlation and a multi-feature-based k-nearest neighbor (KNN) algorithm, were applied to search for the 15 'most similar' reference ROIs to each testing ROI. The clinical relevance and visual similarity of the search results were evaluated using the areas under receiver operating characteristic (ROC) curves (A_Z) and the average mean square difference (MSD) of the mass boundary spiculation level ratings between testing and selected ROIs, respectively. The results showed that the A_Z values were 0.893 ± 0.009, 0.606 ± 0.021 and 0.699 ± 0.026 for the use of KNN, mutual information and Pearson's correlation, respectively. The A_Z values increased to 0.724 ± 0.017 and 0.787 ± 0.016 for mutual information and Pearson's correlation when using ROIs with the size adaptively adjusted based on actual mass size. The corresponding MSD values were 2.107 ± 0.718, 2.301 ± 0.733 and 2.298 ± 0.743. The study demonstrates that, due to the diversity of medical images, CBIR schemes using multiple image features and mass size-based ROIs can achieve significantly improved performance.
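    A generic sketch of the k-NN retrieval step (k = 15 as in the study): given one feature vector per ROI, rank the reference ROIs by Euclidean distance to the query. The scheme's actual features and any distance weighting are not specified here:

```python
import numpy as np

def retrieve_similar_rois(query_vec, ref_matrix, k=15):
    """Return the indices and distances of the k most similar reference ROIs
    in a multi-feature space (generic k-NN retrieval, assumptions only)."""
    d = np.linalg.norm(ref_matrix - query_vec, axis=1)  # distance to every reference ROI
    order = np.argsort(d)[:k]                           # k smallest distances
    return order, d[order]
```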

  9. Prospective and retrospective ECG-gating for CT coronary angiography perform similarly accurate at low heart rates

    Energy Technology Data Exchange (ETDEWEB)

    Stolzmann, Paul, E-mail: paul.stolzmann@usz.ch [Institute of Diagnostic Radiology, University Hospital Zurich, Raemistrasse 100, 8091 Zurich (Switzerland); Goetti, Robert; Baumueller, Stephan [Institute of Diagnostic Radiology, University Hospital Zurich, Raemistrasse 100, 8091 Zurich (Switzerland); Plass, Andre; Falk, Volkmar [Clinic for Cardiovascular Surgery, University Hospital Zurich (Switzerland); Scheffel, Hans; Feuchtner, Gudrun; Marincek, Borut [Institute of Diagnostic Radiology, University Hospital Zurich, Raemistrasse 100, 8091 Zurich (Switzerland); Alkadhi, Hatem [Institute of Diagnostic Radiology, University Hospital Zurich, Raemistrasse 100, 8091 Zurich (Switzerland); Cardiac MR PET CT Program, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Leschka, Sebastian [Institute of Diagnostic Radiology, University Hospital Zurich, Raemistrasse 100, 8091 Zurich (Switzerland)

    2011-07-15

    Objective: To compare, in patients with suspicion of coronary artery disease (CAD) and low heart rates, image quality, diagnostic performance, and radiation dose values of prospectively and retrospectively electrocardiography (ECG)-gated dual-source computed tomography coronary angiography (CTCA) for the diagnosis of significant coronary stenoses. Materials and methods: Two hundred consecutive patients with heart rates ≤70 bpm were retrospectively enrolled; 100 patients underwent prospectively ECG-gated CTCA (group 1) and 100 patients underwent retrospectively-gated CTCA (group 2). Coronary artery segments were assessed for image quality and significant luminal diameter narrowing. Sensitivity, specificity, positive predictive values (PPV), negative predictive values (NPV), and accuracy of both CTCA groups were determined using conventional catheter angiography (CCA) as the reference standard. Radiation dose values were calculated. Results: Both groups were comparable regarding gender, body weight, cardiovascular risk profile, severity of CAD, mean heart rate, heart rate variability, and Agatston score (all p > 0.05). There was no significant difference in the rate of non-assessable coronary segments between group 1 (1.6%, 24/1404) and group 2 (1.4%, 19/1385; p = 0.77); non-diagnostic image quality was significantly (p < 0.001) more often attributed to stair step artifacts in group 1. Segment-based sensitivity, specificity, PPV, NPV, and accuracy were 98%, 98%, 88%, 100%, and 100% in group 1, and 96%, 99%, 90%, 100%, and 98% in group 2, respectively. Parameters of diagnostic performance were similar (all p > 0.05). The mean effective radiation dose of prospectively ECG-gated CTCA (2.2 ± 0.4 mSv) was significantly (p < 0.0001) smaller than that of retrospectively ECG-gated CTCA (8.1 ± 0.6 mSv). Conclusion: Prospectively ECG-gated CTCA yields similar image quality and performs as accurately as retrospectively ECG-gated CTCA in patients with heart rates ≤70 bpm, while delivering a significantly lower radiation dose.

  10. Pilates and Proprioceptive Neuromuscular Facilitation Methods Induce Similar Strength Gains but Different Neuromuscular Adaptations in Elderly Women.

    Science.gov (United States)

    Teixeira de Carvalho, Fabiana; de Andrade Mesquita, Laiana Sepúlveda; Pereira, Rafael; Neto, Osmar Pinto; Amaro Zangaro, Renato

    2017-01-01

    Background/Study Context: The aging process is associated with a decline in muscle mass, strength, and conditioning. Two training methods that may be useful to improve muscle function are Pilates and proprioceptive neuromuscular facilitation (PNF). Thus, the present study aimed to compare the influence of training programs using the Pilates and PNF methods in elderly women. Sixty healthy elderly women were randomly divided into three groups: Pilates group, PNF group, and control group. The Pilates and PNF groups underwent 1-month training programs with the Pilates and PNF methods, respectively; the control group received no intervention during the 1 month. The maximal isometric force levels from knee extension and flexion, as well as the electromyography (EMG) signals from the quadriceps and biceps femoris, were recorded before and after the 1-month intervention period. A two-way analysis of variance revealed that the Pilates and PNF methods induced similar strength gains in knee flexors and extensors, but Pilates exhibited greater low-gamma drive (i.e., oscillations at 30-60 Hz) in the EMG power spectrum after the training period. These results support the use of both Pilates and PNF methods to enhance lower limb muscle strength in older groups, which is very important for gait, postural stability, and the performance of activities of daily life.

  11. Distance and Density Similarity Based Enhanced k-NN Classifier for Improving Fault Diagnosis Performance of Bearings

    Directory of Open Access Journals (Sweden)

    Sharif Uddin

    2016-01-01

    Full Text Available An enhanced k-nearest neighbor (k-NN) classification algorithm is presented, which uses a density-based similarity measure in addition to a distance-based similarity measure to improve the diagnostic performance in bearing fault diagnosis. Due to its use of a distance-based similarity measure alone, the classification accuracy of the traditional k-NN deteriorates in the presence of overlapping samples and outliers, and it is highly susceptible to the neighborhood size, k. This study addresses these limitations by proposing the use of both distance- and density-based measures of similarity between training and test samples. The proposed k-NN classifier is used to enhance the diagnostic performance of a bearing fault diagnosis scheme, which classifies different fault conditions based upon hybrid feature vectors extracted from acoustic emission (AE) signals. Experimental results demonstrate that the proposed scheme, which uses the enhanced k-NN classifier, yields better diagnostic performance and is more robust to variations in the neighborhood size, k.
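    A minimal sketch of a k-NN vote weighted by both distance and local density, in the spirit of the paper; the exact formulation (including the inverse mean-distance density estimate used here) is an assumption:

```python
import numpy as np

def enhanced_knn_predict(X_train, y_train, x_test, k=5):
    """Sketch of a k-NN whose votes are weighted by distance similarity and
    a local-density measure (illustrative, not the paper's exact formula)."""
    d = np.linalg.norm(X_train - x_test, axis=1)
    neighbors = np.argsort(d)[:k]
    scores = {}
    for i in neighbors:
        # density of neighbor i: inverse mean distance to its own k nearest
        # training samples (index 0 is itself, so it is skipped)
        di = np.sort(np.linalg.norm(X_train - X_train[i], axis=1))[1:k + 1]
        density = 1.0 / (di.mean() + 1e-12)
        dist_sim = 1.0 / (d[i] + 1e-12)
        scores[y_train[i]] = scores.get(y_train[i], 0.0) + dist_sim * density
    return max(scores, key=scores.get)  # class with the highest weighted vote
```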

  12. IntNetLncSim: an integrative network analysis method to infer human lncRNA functional similarity.

    Science.gov (United States)

    Cheng, Liang; Shi, Hongbo; Wang, Zhenzhen; Hu, Yang; Yang, Haixiu; Zhou, Chen; Sun, Jie; Zhou, Meng

    2016-07-26

    Increasing evidence indicates that long non-coding RNAs (lncRNAs) are involved in various biological processes and complex diseases through their interactions with mRNAs and miRNAs. Exploiting the interactions between lncRNAs and mRNAs/miRNAs to infer lncRNA functional similarity (LFS) is an effective way to explore the function of lncRNAs and predict novel lncRNA-disease associations. In this article, we propose an integrative framework, IntNetLncSim, to infer LFS by modeling the information flow in an integrated network that comprises both lncRNA-related transcriptional and post-transcriptional information. The performance of IntNetLncSim was evaluated by investigating the relationship of LFS with the similarity of lncRNA-related mRNA sets (LmRSets) and miRNA sets (LmiRSets). As a result, LFS computed by IntNetLncSim was significantly positively correlated with the LmRSet (Pearson correlation γ2=0.8424) and LmiRSet (Pearson correlation γ2=0.2601). In particular, the performance of IntNetLncSim is superior to several previous methods. Applying the LFS to identify novel lncRNA-disease relationships, we achieved an area under the ROC curve of 0.7300 on experimentally verified lncRNA-disease associations under leave-one-out cross-validation. Furthermore, highly ranked lncRNA-disease associations confirmed by literature mining demonstrated the excellent performance of IntNetLncSim. Finally, a web-accessible system is provided for querying LFS and potential lncRNA-disease relationships: http://www.bio-bigdata.com/IntNetLncSim.

  13. Composites Similarity Analysis Method Based on Knowledge Set in Composites Quality Control

    OpenAIRE

    Li Haifeng

    2016-01-01

    Composites similarity analysis is an important part of composites review: it not only supports re-checking in the review of submitted composites, but also helps composites applicants keep abreast of relevant research progress and avoid duplication. This paper mainly studies the composites similarity model used in composites review. Drawing on practical experience of composites management and based on the author's knowledge set theory, the paper analyzes in depth the knowledge set representation of composites knowledge, impr…

  14. Comparison between statistical and optimization methods in assessing unmixing of spectrally similar materials

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-11-01

    Full Text Available This paper reports on the results from ordinary least squares and ridge regression as statistical methods, which are compared to numerical optimization methods such as the stochastic method for global optimization, simulated annealing, particle swarm…

  15. Evolutionary conservatism and convergence both lead to striking similarity in ecology, morphology and performance across continents in frogs.

    Science.gov (United States)

    Moen, Daniel S; Irschick, Duncan J; Wiens, John J

    2013-12-22

    Many clades contain ecologically and phenotypically similar species across continents, yet the processes generating this similarity are largely unstudied, leaving fundamental questions unanswered. Is similarity in morphology and performance across assemblages caused by evolutionary convergence or by biogeographic dispersal of evolutionarily conserved ecotypes? Does convergence to new ecological conditions erase evidence of past adaptation? Here, we analyse ecology, morphology and performance in frog assemblages from three continents (Asia, Australia and South America), assessing the importance of dispersal and convergent evolution in explaining similarity across regions. We find three striking results. First, species using the same microhabitat type are highly similar in morphology and performance across both clades and continents. Second, some species on different continents owe their similarity to dispersal and evolutionary conservatism (rather than evolutionary convergence), even over vast temporal and spatial scales. Third, in one case, an ecologically specialized ancestor radiated into diverse ecotypes that have converged with those on other continents, largely erasing traces of past adaptation to their ancestral ecology. Overall, our study highlights the roles of both evolutionary conservatism and convergence in explaining similarity in species traits over large spatial and temporal scales and demonstrates a statistical framework for addressing these questions in other systems.

  16. Class attendance, peer similarity, and academic performance in a large field study

    DEFF Research Database (Denmark)

    Kassarnig, Valentin; Bjerre-Nielsen, Andreas; Mones, Enys

    2017-01-01

    Existing measures of class attendance rely on self-reports or manual roll calls, which have well-known systematic biases that lead to limitations on conclusions and generalizability, as well as being costly to implement. Here we propose a novel method for measuring class attendance that overcomes these limitations by using location and bluetooth data collected from smartphone sensors. … Attendance was strongly correlated among social peers (≈0.5), suggesting either an important peer effect or homophily with respect to attendance.

  17. Similar performance of Brasfield and Wisconsin scoring systems in young children with cystic fibrosis

    Energy Technology Data Exchange (ETDEWEB)

    Cleveland, Robert H.; Stamoulis, Catherine [Boston Children' s Hospital, Harvard Medical School, Department of Radiology, Boston, MA (United States); Sawicki, Gregory S. [Boston Children' s Hospital, Harvard Medical School, Division of Respiratory Diseases, Department of Medicine, Boston, MA (United States)

    2015-10-15

    To assess the severity of lung disease in cystic fibrosis (CF), scoring systems based on chest radiographs (CXRs), CT and MRI have been used extensively, although primarily in research settings rather than for clinical purposes. It has recently been shown that systems based on CXRs (primarily the Brasfield and Wisconsin systems) are as sensitive and valid as those based on CT. The reproducibility of both systems and their correlation with pulmonary function tests (PFTs) were recently investigated and found to be statistically identical. However, the relative performance of these systems has not been specifically assessed in children younger than 5 years old with mild lung disease, a critical age range in which PFTs are rarely performed. The aim was to investigate and compare the performance of the Brasfield and Wisconsin systems in children 0-5 years old with predominantly mild lung disease. Fifty-five patients 0-5 years old with 105 CXRs were included in the study. Given that the goal was to compare system performance in mild disease, only the first two CXRs from each patient were included (all but five patients had two images); when only one image was available in the target age range, that image alone was included. Agreement between the Brasfield and Wisconsin systems was assessed using a 2×2 contingency table assuming a binary classification of CF lung disease by CXR scoring (mild vs. non-mild). In the absence of PFTs or another external gold standard, the Wisconsin system was used as an arbitrary gold standard against which the Brasfield system was compared. Correlation between the two systems was assessed via a concordance correlation coefficient (CCC) for repeated measures. Scores were rated as mild or non-mild based on published numerical cutoffs for each system. The systems agreed on 89/105 (85%) and disagreed on 16/105 (15%) of the CXRs. Agreement between the two systems was statistically significant (P < 0.001). Relative sensitivity and specificity of the…

  18. Aggregations of brittle stars can perform similar ecological roles as mussel reefs

    KAUST Repository

    Geraldi, NR

    2016-11-29

    Biogenic habitats, such as coral reefs, facilitate diverse communities. In aquatic systems, aggregations of mobile benthic species may play a similar ecological role to that of typically sessile biogenic habitats; however, this has rarely been considered. We quantified the abundance of sessile horse mussels Modiolus modiolus and aggregating brittle stars Ophiothrix fragilis and tested for correlations between the density of mussels (live and dead) and brittle stars each with (1) abundance, biomass, diversity, and assemblage structure of associated benthic macrofauna; and (2) percent organic matter of the sediment. We found that the abundance of live M. modiolus was positively associated with the abundance and biomass of macrofauna. The positive association between M. modiolus and macrofaunal abundance was further amplified with an increase in brittle stars and a decrease in dead mussel shells. Macrofaunal biomass was lower with a higher percentage of dead mussel shells, and macrofaunal diversity increased with greater abundances of live M. modiolus and brittle stars. Sediment organic matter was positively related to brittle star density, but not to the abundance of live or dead mussels. The positive relationship between brittle stars and sediment organic matter suggests that brittle stars could enhance rates of benthic-pelagic coupling. Given the importance of understanding the functional role of threatened habitats, it is essential that the underlying community patterns be understood through robust observational studies to then derive testable hypotheses to determine drivers. These findings provide novel insight into the ecological role of aggregations of mobile species, which is essential to prioritize conservation and restoration strategies.

  19. iPads and LCDs show similar performance in the detection of pulmonary nodules

    Science.gov (United States)

    McEntee, Mark F.; Lowe, Joanna; Butler, Marie Louise; Pietrzyk, Mariusz; Evanoff, Michael G.; Ryan, John; Brennan, Patrick C.; Rainford, Louise A.

    2012-02-01

    In February 2011 the University of Chicago Medical School distributed iPads to its trainee doctors for use when reviewing clinical information and images on the ward or in clinics. The use of tablet computing devices is becoming widespread in medicine, with Apple™ heralding them as "revolutionary" in medicine. The question arises: just because it is technically achievable to use iPads for the clinical evaluation of images, should we do so? The current work assesses the diagnostic efficacy of iPads compared with LCD secondary display monitors for identifying lung nodules on chest x-rays. Eight examining radiologists of the American Board of Radiology were involved in the assessment, reading chest images on both the iPad and an off-the-shelf LCD monitor. Thirty chest images were shown to each observer, of which 15 had one or more lung nodules. Radiologists were asked to locate the nodules and score how confident they were in their decision on a scale of 1-5. ROC and JAFROC analyses were performed and the modalities were compared using DBM MRMC. The results demonstrate no significant differences in performance between the iPad and the LCD for the ROC AUC (p<0.075) or JAFROC FOM (p<0.059) for random readers and random cases. Sample size estimation showed that this result is significant at a power of 0.8 and an effect size of 0.05 for ROC and 0.07 for JAFROC. This work demonstrates that for the task of identifying pulmonary nodules, the use of the iPad does not significantly change performance compared to an off-the-shelf LCD.

  20. Influence of heat treatment on fatigue performances for self-piercing riveting similar and dissimilar titanium, aluminium and copper alloys

    OpenAIRE

    Zhang, Xianlian; He, Xiaocong; Xing, Baoying; Zhao, Lun; Lu, Yi; Gu, Fengshou; Ball, Andrew

    2016-01-01

    The fatigue performances of self-piercing riveting (SPR) joints connecting similar and dissimilar sheets of TA1 titanium alloy (TA1), Al5052 aluminium alloy (Al5052) and H62 copper alloy (H62) were studied in this paper. Specimens of similar TA1 sheets treated with stress-relief annealing were prepared to investigate the influence of the annealing on the mechanical properties of SPR joints. Fatigue tests were conducted to characterize the fatigue lives and failure modes of the joints. …

  1. Similar diagnostic performance for neurocysticercosis of three glycoprotein preparations from Taenia solium metacestodes.

    Science.gov (United States)

    Villota, Guido E; Gomez, Diana I; Volcy, Michel; Franco, Andrés F; Cardona, Edgar A; Isaza, Rodrigo; Sanzón, Fernando; Teale, Judy M; Restrepo, Blanca I

    2003-03-01

    The detection of antibodies to Taenia solium metacestodes is very important in the differential diagnosis of neurocysticercosis (NCC). In this study, an electroimmunotransfer blot (EITB) assay that uses an elaborate protocol with metacestode glycoproteins as antigens was compared with two other Western blots that use glycoproteins obtained using simpler methods, including an eluate from a lectin column, or the vesicular fluid (VF) of the parasite. The concordance between the three assays was 91% in patients with active NCC and 100% in patients with suspected NCC and previous documentation of negative serology. The specificities for the Western blots and the EITB assay were 98% and 100%, respectively (98% concordance). These data suggest that the simplest of these immunoassays, the one that uses the VF of T. solium metacestodes in a Western blot format, can be reliably used for the serologic diagnosis of NCC in developing countries where access to the EITB assay is difficult.

  2. Thickness measuring apparatus and method for tire ply and similar materials

    International Nuclear Information System (INIS)

    Cho, B.Y.; Utt, O.L.

    1978-01-01

    The invention relates to a method and apparatus for measuring the amount of material on opposite sides of the reinforcing cord of a tire ply, and more particularly to a backscatter X-ray radiation gauging apparatus and method for obtaining the aforesaid measurements. (author)

  3. The effect of target and non-target similarity on neural classification performance: A boost from confidence

    Directory of Open Access Journals (Sweden)

    Amar R Marathe

    2015-08-01

    Full Text Available Brain-computer interaction (BCI) technologies have proven effective in utilizing single-trial classification algorithms to detect target images in rapid serial visual presentation tasks. While many factors contribute to the accuracy of these algorithms, a critical aspect that is often overlooked concerns the feature similarity between target and non-target images. In most real-world environments there are likely to be many shared features between targets and non-targets, resulting in similar neural activity between the two classes. It is unknown how current neural-based target classification algorithms perform when qualitatively similar target and non-target images are presented. This study addresses this question by comparing behavioral and neural classification performance across two conditions: first, when targets were the only infrequent stimulus presented amongst frequent background distracters; and second, when targets were presented together with infrequent non-targets containing visual features similar to the targets. The resulting findings show that behavior is slower and less accurate when targets are presented together with similar non-targets; moreover, single-trial classification yielded high levels of misclassification when infrequent non-targets were included. Furthermore, we present an approach to mitigate this misclassification: we use confidence measures to assess the quality of single-trial classification, and demonstrate that a system in which low-confidence trials are reclassified through a secondary process can achieve improved performance.
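    The confidence-based reclassification can be sketched as a simple gate on the classifier's posterior probabilities; the threshold value and the secondary process are illustrative assumptions:

```python
import numpy as np

def classify_with_confidence(probs, threshold=0.75, secondary=None):
    """Accept the primary single-trial classification only when its
    confidence (the maximum posterior) clears `threshold`; otherwise route
    the trial to a secondary process (e.g. a second classifier)."""
    label = int(np.argmax(probs))
    if float(np.max(probs)) < threshold and secondary is not None:
        return secondary(probs)  # reclassify the low-confidence trial
    return label
```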

  4. Intelligent method of plant dynamics behavior estimation by effectively applying similar cases

    International Nuclear Information System (INIS)

    Gofuku, Akio; Numoto, Atsushi; Yoshikawa, Hidekazu

    1994-01-01

    In order to accomplish efficient dynamic simulation of engineering systems, it is important to construct suitable mathematical models. In constructing the mathematical models, it is necessary to estimate the system's behavior in order to identify the phenomena that need to be modeled. Case-based reasoning is considered a powerful tool for estimating the outline of a system's behavior, because we often estimate it from similar cases stored as our experience or in the literature. In this study, a technique based on similar cases is investigated for estimating the outline of the time responses of several important variables of pressurized water reactor (PWR) plants during a small-break loss-of-coolant accident (SBLOCA). The registered cases in the case base are gathered from various reports and the authors' numerical simulations related to SBLOCAs in PWR plants. The functions used in case retrieval are formed from the characteristic features of a SBLOCA in PWR plants, while the rules used in case refinement are obtained from qualitative and quantitative consideration of the plant behaviors of the cases in the case base. The applicability of the technique is discussed through two simple trials estimating plant behavior. (author)

  5. A novel method for unsteady flow field segmentation based on stochastic similarity of direction

    Science.gov (United States)

    Omata, Noriyasu; Shirayama, Susumu

    2018-04-01

    Recent developments in fluid dynamics research have opened up the possibility for the detailed quantitative understanding of unsteady flow fields. However, the visualization techniques currently in use generally provide only qualitative insights. A method for dividing the flow field into physically relevant regions of interest can help researchers quantify unsteady fluid behaviors. Most methods at present compare the trajectories of virtual Lagrangian particles. The time-invariant features of an unsteady flow are also frequently of interest, but the Lagrangian specification only reveals time-variant features. To address these challenges, we propose a novel method for the time-invariant spatial segmentation of an unsteady flow field. This segmentation method does not require Lagrangian particle tracking but instead quantitatively compares the stochastic models of the direction of the flow at each observed point. The proposed method is validated with several clustering tests for 3D flows past a sphere. Results show that the proposed method reveals the time-invariant, physically relevant structures of an unsteady flow.
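    One way to realize "stochastic similarity of direction" (a plausible reading, not necessarily the authors' exact model) is to histogram the flow direction at each observation point over time and compare points with a symmetrized divergence, then cluster on the resulting distance matrix:

```python
import numpy as np

def direction_histogram(u, v, bins=16):
    """Empirical distribution of the in-plane flow direction at one
    observation point; u, v are velocity-component time series."""
    theta = np.arctan2(v, u)  # flow direction at each time step
    h, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi))
    return h / h.sum()

def direction_distance(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler divergence between two direction
    distributions; one plausible stochastic-similarity measure."""
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

    Segmenting the field then amounts to clustering the observation points using these pairwise distances (e.g., with any standard hierarchical or spectral clustering routine).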

  6. Measuring age differences among globular clusters having similar metallicities - A new method and first results

    International Nuclear Information System (INIS)

    Vandenberg, D.A.; Bolte, M.; Stetson, P.B.

    1990-01-01

    A color-difference technique for estimating the relative ages of globular clusters with similar chemical compositions on the basis of their color-magnitude (CM) diagrams is described and demonstrated. The theoretical basis and implementation of the procedure are explained, and results for groups of globular clusters with [m/H] ≈ -2, -1.6, and -1.3, and for two special cases (Palomar 12 and NGC 5139), are presented in extensive tables and graphs and discussed in detail. It is found that the more metal-deficient globular clusters are nearly coeval (differences less than 0.5 Gyr), whereas the most metal-rich globular clusters exhibit significant age differences (about 2 Gyr). This result is shown to contradict Galactic evolution models postulating halo collapse in less than a few times 100 Myr. 77 refs

  7. Reliability Study Regarding the Use of Histogram Similarity Methods for Damage Detection

    Directory of Open Access Journals (Sweden)

    Nicoleta Gillich

    2013-01-01

    Full Text Available The paper analyses the reliability of three dissimilarity estimators for comparing histograms, as support for a frequency-based damage detection method able to identify structural changes in beam-like structures. First, a brief presentation of our damage detection method is made, with a focus on damage localization. The method consists of comparing a histogram derived from measurement results with a large series of calculated histograms, namely the damage location indexes for all locations along the beam. We tested dissimilarity estimators such as the Minkowski-form distances, the Kullback-Leibler divergence and the histogram intersection, and found the Minkowski distance to be the method providing the best results. It was tested for numerous locations, using real measurement results as well as results artificially degraded by noise, proving its reliability.
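    The three estimators named in the abstract are standard and easy to state for normalized histograms p and q; a compact sketch:

```python
import numpy as np

def minkowski_distance(p, q, r=2):
    """Minkowski-form distance; r=1 gives Manhattan, r=2 Euclidean."""
    return np.sum(np.abs(p - q) ** r) ** (1.0 / r)

def kullback_leibler(p, q, eps=1e-12):
    """KL divergence of q from p (asymmetric; eps avoids log of zero)."""
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q))

def histogram_intersection(p, q):
    """Similarity in [0, 1] for normalized histograms;
    1 - intersection serves as the corresponding dissimilarity."""
    return np.sum(np.minimum(p, q))
```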

  8. COMPANY PERFORMANCE MEASUREMENT AND REPORTING METHODS

    Directory of Open Access Journals (Sweden)

    Nicu Ioana Elena

    2012-12-01

    Full Text Available One of the priorities of economic research has been, and remains, the re-evaluation of the notion of performance and, especially, the search for indicators that reflect as accurately as possible the subtleties of the economic entity. The main purpose of this paper is to highlight the main methods for measuring and reporting company performance. Performance is a concept that raises many questions concerning the most accurate or best method of reporting performance at the company level. The research methodology consisted of studying the Romanian and foreign specialized literature in the field, including journals specialized in company performance measurement. While financial performance indicators are considered to offer an accurate image of the situation of the company, the modern approach through non-financial indicators offers a new perspective on performance measurement, one based on simplicity. In conclusion, after the theoretical study, I have noticed that the methods of performance measurement, reporting and interpretation are varied, that opinions regarding the best performance measurement methods are contradictory, and that companies still prefer financial indicators, which play a more important role in consolidating company performance measurement than non-financial indicators do.

  9. Direct Patlak Reconstruction From Dynamic PET Data Using the Kernel Method With MRI Information Based on Structural Similarity.

    Science.gov (United States)

    Gong, Kuang; Cheng-Liao, Jinxiu; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2018-04-01

    Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neuroscience. It is highly sensitive, but suffers from relatively poor spatial resolution compared with anatomical imaging modalities such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, PET image quality can be improved by incorporating MR information into image reconstruction. Previously, kernel learning has been successfully embedded into static and dynamic PET image reconstruction using either PET temporal or MRI information. Here, we combine both PET temporal and MRI information adaptively to improve the quality of direct Patlak reconstruction. We examined different approaches to combining the PET and MRI information in kernel learning to address the issue of potential mismatches between MRI and PET signals. Computer simulations and hybrid real-patient data acquired on a simultaneous PET/MR scanner were used to evaluate the proposed methods. Results show that the method that combines PET temporal information and MRI spatial information adaptively based on the structural similarity (SSIM) index has the best performance in terms of noise reduction and resolution improvement.
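    The structural similarity index used for the adaptive combination has a standard closed form; a patch-level sketch (constants chosen for unit-range images; how the resulting weight enters the kernel combination, e.g. K = w·K_MRI + (1 − w)·K_PET, is an assumption, not the paper's stated scheme):

```python
import numpy as np

def ssim_index(x, y, c1=1e-4, c2=9e-4):
    """Standard SSIM between two image patches x and y (unit intensity range);
    values near 1 indicate strong structural agreement."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```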

  10. Similar estimates of temperature impacts on global wheat yield by three independent methods

    NARCIS (Netherlands)

    Liu, Bing; Asseng, Senthold; Müller, Christoph; Ewert, Frank; Elliott, Joshua; Lobell, David B.; Martre, Pierre; Ruane, Alex C.; Wallach, Daniel; Jones, James W.; Supit, Iwan; Wolf, Joost

    2016-01-01

    The potential impact of global temperature change on global crop yield has recently been assessed with different methods. Here we show that grid-based and point-based simulations and statistical regressions (from historic records), without deliberate adaptation or CO2 fertilization effects, produce similar estimates of temperature impacts on global wheat yield.

  11. Statistical learning methods: Basics, control and performance

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de

    2006-04-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.

  12. Statistical learning methods: Basics, control and performance

    International Nuclear Information System (INIS)

    Zimmermann, J.

    2006-01-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms

  13. SemFunSim: a new method for measuring disease similarity by integrating semantic and gene functional association.

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    Full Text Available Measuring similarity between diseases plays an important role in disease-related molecular function research. Functional associations between disease-related genes and semantic associations between diseases are often used to identify pairs of similar diseases from different perspectives. Currently, it is still a challenge to exploit both of them to calculate disease similarity; therefore, a new method (SemFunSim) that integrates semantic and functional association is proposed to address the issue. SemFunSim is designed as follows. First, FunSim (functional similarity) is proposed to calculate disease similarity using disease-related gene sets in a weighted network of human gene function. Next, SemSim (semantic similarity) is devised to calculate disease similarity using the relationship between two diseases from Disease Ontology. Finally, FunSim and SemSim are integrated to measure disease similarity. The high average AUC (area under the receiver operating characteristic curve) of 96.37% shows that SemFunSim achieves a high true positive rate and a low false positive rate. 79 of the top 100 pairs of similar diseases identified by SemFunSim are annotated in the Comparative Toxicogenomics Database (CTD) as being targeted by the same therapeutic compounds, while the other methods we compared could identify 35 or fewer such pairs among their top 100. Moreover, when using our method on diseases without annotated compounds in CTD, we could confirm many of our predicted candidate compounds from the literature. This indicates that SemFunSim is also an effective method for drug repositioning.

  14. Similar predictions of etravirine sensitivity regardless of genotypic testing method used: comparison of available scoring systems.

    Science.gov (United States)

    Vingerhoets, Johan; Nijs, Steven; Tambuyzer, Lotke; Hoogstoel, Annemie; Anderson, David; Picchio, Gaston

    2012-01-01

    The aims of this study were to compare various genotypic scoring systems commonly used to predict virological outcome to etravirine, and examine their concordance with etravirine phenotypic susceptibility. Six etravirine genotypic scoring systems were assessed: Tibotec 2010 (based on 20 mutations; TBT 20), Monogram, Stanford HIVdb, ANRS, Rega (based on 37, 30, 27 and 49 mutations, respectively) and virco(®)TYPE HIV-1 (predicted fold change based on genotype). Samples from treatment-experienced patients who participated in the DUET trials and with both genotypic and phenotypic data (n=403) were assessed using each scoring system. Results were retrospectively correlated with virological response in DUET. κ coefficients were calculated to estimate the degree of correlation between the different scoring systems. Correlation between the five scoring systems and the TBT 20 system was approximately 90%. Virological response by etravirine susceptibility was comparable regardless of which scoring system was utilized, with 70-74% of DUET patients determined as susceptible to etravirine by the different scoring systems achieving plasma viral load <50 HIV-1 RNA copies/ml. In samples classed as phenotypically susceptible to etravirine (fold change in 50% effective concentration ≤3), correlations with genotypic score were consistently high across scoring systems (≥70%). In general, the etravirine genotypic scoring systems produced similar results, and genotype-phenotype concordance was high. As such, phenotypic interpretations, and in their absence all genotypic scoring systems investigated, may be used to reliably predict the activity of etravirine.

  15. Different methods to define utility functions yield similar results but engage different neural processes

    Directory of Open Access Journals (Sweden)

    Marcus Heldmann

    2009-10-01

    Full Text Available Although the concept of utility is fundamental to many economic theories, a generally accepted method for determining a subject's utility function is not yet available. We investigated two methods used in economics to describe utility functions, using response-locked event-related potentials to assess their neural underpinnings. To define the certainty equivalent (CE), we used a lottery game with a winning probability of p = 0.5; to identify the subjects' utility functions directly, a standard bisection task was applied. Although the lottery task's payoffs were only hypothetical, a pronounced negativity was observed resembling the error-related negativity (ERN) previously described in action-monitoring research, but only for choices far from the indifference point between money and lottery. By contrast, the bisection task failed to evoke an ERN irrespective of the responses' correctness. Based on these findings, we reason that only decisions made in the lottery task achieved a level of subjective relevance that activates cognitive-emotional monitoring. In terms of economics, our findings support the view that the bisection method is unaffected by probability valuation or other parameters related to risk and, in combination with the lottery task, can therefore be used to differentiate between payoff and probability valuation.

  16. Gradient High Performance Liquid Chromatography Method ...

    African Journals Online (AJOL)

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid …

  17. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity.

    Directory of Open Access Journals (Sweden)

    Bruno L Giordano

    Full Text Available Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between the musical expression of emotions and the expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial production experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist: features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of the behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize the expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.

  18. Rational Selection, Criticality Assessment, and Tiering of Quality Attributes and Test Methods for Analytical Similarity Evaluation of Biosimilars.

    Science.gov (United States)

    Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette

    2018-05-10

    Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.

  19. Generalized method for sorting Shack-Hartmann spot patterns using local similarity

    International Nuclear Information System (INIS)

    Smith, Daniel G.; Greivenkamp, John E.

    2008-01-01

    The sensitivity and dynamic range of a Shack-Hartmann wavefront sensor is enhanced when the spots produced by the lenslet array are allowed to shift more than one lenslet radius from their on-axis positions. However, this presents the problem of accurately and robustly associating the spots with their respective subapertures. This paper describes a method for sorting spots that takes advantage of the local spot position distortions to unwrap the spot pattern. The described algorithm is both simple and robust and also applicable to any lenslet array geometry that can be described as a two-dimensional lattice, including hexagonal arrays, which are shown here to be more efficient than square arrays

  20. An adaptive image sparse reconstruction method combined with nonlocal similarity and cosparsity for mixed Gaussian-Poisson noise removal

    Science.gov (United States)

    Chen, Yong-fei; Gao, Hong-xia; Wu, Zi-ling; Kang, Hui

    2018-01-01

    Compressed sensing (CS) has achieved great success in removing single types of noise, but it cannot restore images contaminated with mixed noise efficiently. This paper introduces nonlocal similarity and cosparsity, inspired by compressed sensing, to overcome the difficulties of mixed noise removal: nonlocal similarity explores the signal sparsity from similar patches, while cosparsity assumes that the signal is sparse after a possibly redundant transform. Meanwhile, an adaptive scheme is designed to keep the balance between mixed noise removal and detail preservation based on local variance. Finally, IRLSM and RACoSaMP are adopted to solve the objective function. Experimental results demonstrate that the proposed method is superior to conventional CS methods, such as K-SVD, and to the state-of-the-art nonlocally centralized sparse representation (NCSR) method, in terms of both visual results and quantitative measures.

  1. Validated High Performance Liquid Chromatography Method for ...

    African Journals Online (AJOL)

    Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: A Shimadzu HPLC with LC Solution software was used with a Waters Spherisorb C18 (5 μm, 150 mm × 4.5 mm) column. The mobile phase …

  2. Neutron-Gamma Pulse Shape Discrimination With Ne-213 Liquid Scintillator By Using Digital Signal Processing Combined With Similarity Method

    International Nuclear Information System (INIS)

    Mardiyanto

    2008-01-01

    Neutron-Gamma Pulse Shape Discrimination with a NE-213 Liquid Scintillator by Using Digital Signal Processing Combined with the Similarity Method. Measurement of mixed neutron-gamma radiation is difficult because a nuclear detector is usually sensitive to both radiations. A new approach to neutron-gamma pulse shape discrimination for a NE-213 liquid scintillator is presented, using digital signal processing combined with an off-line similarity method. The output pulse shapes are digitized with a high-speed digital oscilloscope. The n-γ discrimination is done by calculating an index for each pulse shape, determined by the similarity method, and then fusing it with the corresponding pulse height. Preliminary results demonstrate good separation of neutron and gamma-ray signals from a NE-213 scintillator with a simple digital system, better than that achieved with a conventional rise-time method. The figure of merit (FoM) is used to quantify the quality of the discrimination: the FoM values for digital signal processing combined with the off-line similarity method are 1.9, 1.7, 1.1, 1.1, and 0.8, whereas those for the conventional rise-time method are 0.9, 0.9, 0.9, 0.7, and 0.4, at equivalent electron energies of 800, 278, 139, 69, and 30 keV, respectively. (author)
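    The abstract does not define the figure of merit; in pulse shape discrimination it is commonly taken as the separation of the neutron and gamma peaks of the discrimination parameter, normalized by their widths (a standard definition, assumed here rather than quoted from the paper):

```latex
\mathrm{FoM} = \frac{\lvert \mu_{n} - \mu_{\gamma} \rvert}{\mathrm{FWHM}_{n} + \mathrm{FWHM}_{\gamma}}
```

    where μ_n and μ_γ are the centroids of the neutron and gamma distributions and the FWHM terms are their full widths at half maximum; larger FoM values indicate cleaner separation.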

  3. Distinguishing the Chinese materia medica Tiepishihu from similar Dendrobium species of the same genus using histological and microscopic method.

    Science.gov (United States)

    Yu, Kun-Zi; Yan, Hua; Tai, Hai-Chuan; Zhang, Nan-Ping; Cheng, Xian-Long; Guo, Zeng-Xi; Ma, Shuang-Cheng; Wei, Feng

    2017-07-01

    The Chinese Materia Medica Tiepishihu, used as a tonic for over one thousand years, is a well-known precious medicine in China. According to the Chinese Pharmacopoeia, its source is the species Dendrobium officinale Kimura et Migo, which must be distinguished from other species of the Dendrobium genus. However, species from the same genus are similar to Tiepishihu and have caused confusion in the market. To find a quick and simple way to distinguish Tiepishihu from other similar species, histological and microscopic methods were combined to investigate transverse sections of the stems of Tiepishihu and similar species. Phloroglucinol test solution with hydrochloric acid was used to reveal lignified tissue by staining the transverse sections of Tiepishihu and similar species. The results revealed unique identification characteristics that distinguish Tiepishihu from similar species that are difficult to separate by other methods: in Tiepishihu, the cells of the vascular bundle sheath are stained red, while the parenchyma cells are not. Moreover, the other species can be distinguished from each other by their microscopic and histological characteristics. These characteristics proved stable and can easily be observed by ordinary light microscopy. The method is rapid, accurate, stable, and inexpensive. © 2017 Wiley Periodicals, Inc.

  4. Similarities and differences between Turkic languages and comparative methods in teaching them

    Directory of Open Access Journals (Sweden)

    Gyulzura Zhumakunova

    2018-03-01

    Full Text Available Turkic languages are cognate languages that stem from a single ancestor, the Proto-Turkic language. In the course of the long and convoluted history of the Turkic-speaking peoples, all of their languages have undergone various degrees of change at every level of their structure: in phonemes, morphemes, vocabulary and syntax. At the same time, all of these languages have preserved some common features that link them to their ancestor tongue. In this article, the author, drawing on her long experience of teaching Kyrgyz at Ankara University, explains her vision of teaching Turkic languages in general and describes the methods and approaches she deems helpful for successful teaching at the university level. In teaching a Turkic language, the main objective is not to focus on language study as such, but to take a breathtaking journey along the hidden paths of one's mother tongue and come to understand it more profoundly, in its historical retrospective. Turkic languages have always attracted linguists and have been the focus of their studies. Getting acquainted with various linguistic theories and the works of prominent Turkologists, as teaching practice shows, is also very helpful for shaping an enduring interest in learning both the target language and the first language more profoundly. We also have to take into account such extralinguistic factors as the objective of language study, the friendliness of the environment and the demand for the language in a given environment. The article examines these issues through the alphabets and their various uses among Turkic peoples. When setting up a language study program, it is also important to take into account how familiar students are with the target language. Overall, this new type of work required a fundamental study of the methodological foundations of language teaching: new types of academic and learners' dictionaries have to be prepared, as well as new textbooks to help students of a specific Turkic language.

  5. Employment of kernel methods on wind turbine power performance assessment

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.

    2015-01-01

    A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, the kernel methods. The evaluation is based on the trending of a feature extracted from the kernel matrix, called the similarity index, which is introduced by the authors for the first time. The operation of the turbine, and consequently the computation of the similarity indexes, is classified into five power bins, offering better resolution and thus more consistent root cause analysis. The accurate…
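    One plausible reading of the similarity index (the abstract does not give the exact feature extraction) is the mean Gaussian-kernel similarity between a new operating point and reference healthy operation within the same power bin; a minimal sketch under that assumption:

```python
import numpy as np

def similarity_index(X_ref, x_new, sigma=1.0):
    """Mean Gaussian-kernel similarity between a new operating point `x_new`
    and reference (healthy) operating points `X_ref` from the same power bin.
    Trending this value over time would flag power production discrepancies.
    The kernel width `sigma` is an illustrative assumption."""
    d2 = np.sum((X_ref - x_new) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
```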

  6. A comparative analysis of Painleve, Lax pair, and similarity transformation methods in obtaining the integrability conditions of nonlinear Schroedinger equations

    International Nuclear Information System (INIS)

    Al Khawaja, U.

    2010-01-01

    We derive the integrability conditions of nonautonomous nonlinear Schroedinger equations using the Lax pair and similarity transformation methods. We present a comparative analysis of these integrability conditions with those of the Painleve method. We show that while the Painleve integrability conditions restrict the dispersion, nonlinearity, and dissipation/gain coefficients to be space-independent and the external potential to be only a quadratic function of position, the Lax pair and similarity transformation methods allow for space-dependent coefficients and an external potential that is not restricted to the quadratic form. The integrability conditions of the Painleve method are retrieved as a special case of our general integrability conditions. We also derive the integrability conditions of nonautonomous nonlinear Schroedinger equations for two and three spatial dimensions.
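    For orientation, a generic nonautonomous nonlinear Schroedinger equation of the kind discussed can be written as (the notation is illustrative, not copied from the paper):

```latex
i\,\psi_t + f(x,t)\,\psi_{xx} + g(x,t)\,\lvert\psi\rvert^{2}\psi + v(x,t)\,\psi + i\,\gamma(x,t)\,\psi = 0
```

    Here f is the dispersion coefficient, g the nonlinearity, γ the dissipation/gain, and v the external potential; per the abstract, the Painleve conditions force f, g and γ to be space-independent and v to be quadratic in x, whereas the Lax pair and similarity transformation methods relax these restrictions.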

  7. GEPSI: A Gene Expression Profile Similarity-Based Identification Method of Bioactive Components in Traditional Chinese Medicine Formula.

    Science.gov (United States)

    Zhang, Baixia; He, Shuaibing; Lv, Chenyang; Zhang, Yanling; Wang, Yun

    2018-01-01

    The identification of bioactive components in traditional Chinese medicine (TCM) is an important part of research on the TCM material foundation. Recently, molecular docking technology has been extensively used for the identification of TCM bioactive components. However, the target proteins used in molecular docking may not be the actual TCM targets, and for this reason bioactive components are likely to be omitted or misidentified. To address this problem, this study proposes the GEPSI method, which identifies the target proteins of TCM based on the similarity of gene expression profiles. The similarity of the gene expression profiles affected by TCM and by small-molecule drugs was calculated; the pharmacological action of a TCM may be similar to that of small-molecule drugs with a high similarity score. Indeed, the target proteins of such small-molecule drugs can be considered TCM targets. Thus, we identified the bioactive components of a TCM by molecular docking against these targets and verified the reliability of the method by a literature investigation. Using target proteins that the TCM actually affects, the identification of bioactive components is more accurate. This study provides a fast and effective method for the identification of TCM bioactive components.
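    The core similarity computation can be sketched as a rank correlation between two differential-expression profiles over a common gene set; Spearman correlation is one common choice, not necessarily GEPSI's exact score:

```python
import numpy as np
from scipy.stats import spearmanr

def expression_similarity(profile_tcm, profile_drug):
    """Rank-based similarity between two differential-expression profiles
    (fold-change vectors over the same ordered gene set). A high positive
    value suggests similar pharmacological action."""
    rho, _ = spearmanr(profile_tcm, profile_drug)
    return rho

# Hypothetical usage with random stand-in profiles over 500 shared genes.
rng = np.random.default_rng(0)
tcm_profile = rng.normal(size=500)
drug_profile = tcm_profile + rng.normal(scale=0.5, size=500)
print(expression_similarity(tcm_profile, drug_profile))
```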

  8. Personality, Assessment Methods and Academic Performance

    Science.gov (United States)

    Furnham, Adrian; Nuygards, Sarah; Chamorro-Premuzic, Tomas

    2013-01-01

    This study examines the relationship between personality and two different academic performance (AP) assessment methods, namely exams and coursework. It aimed to examine whether the relationship between traits and AP was consistent across self-reported versus documented exam results, two different assessment techniques and across different…

  9. Validated high performance liquid chromatographic (HPLC) method ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-02-22

    Feb 22, 2010 ... specific and accurate high performance liquid chromatographic method for determination of ZER in micro-volumes ... traditional medicine as a cure for swelling, sores, loss of appetite and ... Receptor Activator for Nuclear Factor κB Ligand ... The effect of ... be suitable for preclinical pharmacokinetic studies.

  10. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-12-01

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance.

  11. Human performance assessment: methods and measures

    International Nuclear Information System (INIS)

    Andresen, Gisle; Droeivoldsmo, Asgeir

    2000-10-01

    The Human Error Analysis Project (HEAP) was initiated in 1994. The aim of the project was to acquire insights on how and why cognitive errors occur when operators are engaged in problem solving in advanced integrated control rooms. Since human error had not been studied in the HAlden Man-Machine LABoratory (HAMMLAB) before, it was also necessary to carry out research in methodology. In retrospect, it is clear that much of the methodological work is relevant to human-machine research in general, and not only to research on human error. The purpose of this report is, therefore, to give practitioners and researchers an overview of the methodological parts of HEAP. The scope of the report is limited to methods used throughout the data acquisition process, i.e., data-collection methods, data-refinement methods, and measurement methods. The data-collection methods include various types of verbal protocols, simulator logs, questionnaires, and interviews. Data-refinement methods involve different applications of the Eyecon system, a flexible data-refinement tool, and small computer programs used for rearranging, reformatting, and aggregating raw data. Measurement methods involve assessment of diagnostic behaviour, erroneous actions, complexity, task/system performance, situation awareness, and workload. The report concludes that the data-collection methods are generally both reliable and efficient. The data-refinement methods, however, should be easier to use in order to facilitate explorative analyses. Although the series of experiments provided an opportunity for measurement validation, there are still uncertainties connected to several measures, due to their reliability still being unknown. (Author). 58 refs., 7 tabs

  12. Gender similarities and differences.

    Science.gov (United States)

    Hyde, Janet Shibley

    2014-01-01

    Whether men and women are fundamentally different or similar has been debated for more than a century. This review summarizes major theories designed to explain gender differences: evolutionary theories, cognitive social learning theory, sociocultural theory, and expectancy-value theory. The gender similarities hypothesis raises the possibility of theorizing gender similarities. Statistical methods for the analysis of gender differences and similarities are reviewed, including effect sizes, meta-analysis, taxometric analysis, and equivalence testing. Then, relying mainly on evidence from meta-analyses, gender differences are reviewed in cognitive performance (e.g., math performance), personality and social behaviors (e.g., temperament, emotions, aggression, and leadership), and psychological well-being. The evidence on gender differences in variance is summarized. The final sections explore applications of intersectionality and directions for future research.

  13. Hybrid Capture 2 and cobas human papillomavirus assays perform similarly on SurePath samples from women with abnormalities

    DEFF Research Database (Denmark)

    Fornari, D; Rebolj, M; Bjerregaard, B

    2016-01-01

    OBJECTIVE: In two laboratories (Departments of Pathology, Copenhagen University Hospitals of Herlev and Hvidovre), we compared cobas and Hybrid Capture 2 (HC2) human papillomavirus (HPV) assays using SurePath® samples from women with atypical squamous cells of undetermined significance (ASCUS......) at ≥30 years and women after treatment of cervical intraepithelial neoplasia (CIN). METHODS: Samples from 566 women with ASCUS and 411 women after treatment were routinely tested with HC2 and, thereafter, with cobas. Histological outcomes were retrieved from the Danish Pathology Data Base. We calculated...... the overall agreement between the assays, and compared their sensitivity and specificity for ≥CIN2. RESULTS: In women with ASCUS, HC2 and cobas testing results were similar in the two laboratories. The overall agreement was 91% (95% CI, 88-93). After CIN treatment, the overall agreement was 87% (95% CI, 82...

  14. An exergy method for compressor performance analysis

    Energy Technology Data Exchange (ETDEWEB)

    McGovern, J A; Harte, S [Trinity Coll., Dublin (Ireland)]

    1995-07-01

    An exergy method for compressor performance analysis is presented. The purpose of this is to identify and quantify defects in the use of a compressor's shaft power. This information can be used as the basis for compressor design improvements. The defects are attributed to friction, irreversible heat transfer, fluid throttling, and irreversible fluid mixing. They are described, on a common basis, as exergy destruction rates and their locations are identified. The method can be used with any type of positive displacement compressor. It is most readily applied where a detailed computer simulation program is available for the compressor. An analysis of an open reciprocating refrigeration compressor that used R12 refrigerant is given as an example. The results that are presented consist of graphs of the instantaneous rates of exergy destruction according to the mechanisms involved, a pie chart of the breakdown of the average shaft power wastage by mechanism, and a pie chart with a breakdown by location. (author)

  15. A touch-probe path generation method through similarity analysis between the feature vectors in new and old models

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Hye Sung; Lee, Jin Won; Yang, Jeong Sam [Dept. of Industrial Engineering, Ajou University, Suwon (Korea, Republic of)

    2016-10-15

    The On-machine measurement (OMM), which measures a work piece during or after the machining process in the machining center, has the advantage of measuring the work piece directly within the work space without moving it. However, the path generation procedure used to determine the measuring sequence and variables for the complex features of a target work piece has the limitation of requiring time-consuming tasks to generate the measuring points and mostly relies on the proficiency of the on-site engineer. In this study, we propose a touch-probe path generation method using similarity analysis between the feature vectors of three-dimensional (3-D) shapes for the OMM. For the similarity analysis between a new 3-D model and existing 3-D models, we extracted the feature vectors from models that can describe the characteristics of a geometric shape model; then, we applied those feature vectors to a geometric histogram that displays a probability distribution obtained by the similarity analysis algorithm. In addition, we developed a computer-aided inspection planning system that corrects non-applied measuring points that are caused by minute geometry differences between the two models and generates the final touch-probe path.

  16. SVMTriP: a method to predict antigenic epitopes using support vector machine to integrate tri-peptide similarity and propensity.

    Directory of Open Access Journals (Sweden)

    Bo Yao

    Full Text Available Identifying protein surface regions preferentially recognizable by antibodies (antigenic epitopes) is at the heart of new immuno-diagnostic reagent discovery and vaccine design, and computational methods for antigenic epitope prediction provide crucial means to serve this purpose. Many linear B-cell epitope prediction methods were developed, such as BepiPred, ABCPred, AAP, BCPred, BayesB, BEOracle/BROracle, and BEST, towards this goal. However, effective immunological research demands more robust performance of the prediction method than what the current algorithms could provide. In this work, a new method to predict linear antigenic epitopes is developed; a Support Vector Machine is utilized to combine tri-peptide similarity and propensity scores (SVMTriP). Applied to non-redundant B-cell linear epitopes extracted from IEDB, SVMTriP achieves a sensitivity of 80.1% and a precision of 55.2% with a five-fold cross-validation. The AUC value is 0.702. The combination of similarity and propensity of tri-peptide subsequences can improve the prediction performance for linear B-cell epitopes. Moreover, SVMTriP is capable of recognizing viral peptides from a human protein sequence background. A web server based on our method is constructed for public use. The server and all datasets used in the current study are available at http://sysbio.unl.edu/SVMTriP.

  17. The New Performance Calculation Method of Fouled Axial Flow Compressor

    Directory of Open Access Journals (Sweden)

    Huadong Yang

    2014-01-01

    Full Text Available Fouling is the most important performance degradation factor, so it is necessary to accurately predict its effect on engine performance. Previous research has found it very difficult to accurately model a fouled axial flow compressor. This paper develops a new performance calculation method for a fouled multistage axial flow compressor based on experimental results and operating data. The whole compressor is decomposed into two sections: the first section comprises the first 50% of stages, which reflect the fouling level, and the second section comprises the last 50% of stages, which are viewed as clean stages because of fewer deposits. In this model, the performance of the first section is obtained by combining the scaling law method and a linear progression model with the traditional stage stacking method, while ambient conditions and engine configuration are taken into account. The performance of the second section is calculated by the averaged infinitesimal stage method, which is based on Reynolds' law of similarity. Finally, the model is successfully applied to predict an 8-stage axial flow compressor and the 16-stage LM2500-30 compressor. The change of thermodynamic parameters such as pressure ratio and efficiency with operating time and stage number is analyzed in detail.

  18. Operator performance evaluation using multi criteria decision making methods

    Science.gov (United States)

    Rani, Ruzanita Mat; Ismail, Wan Rosmanira; Razali, Siti Fatihah

    2014-06-01

    Operator performance evaluation is a very important operation in labor-intensive manufacturing industry because the company's productivity depends on the performance of its operators. The aims of operator performance evaluation are to give feedback to operators on their performance, to increase company's productivity and to identify strengths and weaknesses of each operator. In this paper, six multi criteria decision making methods; Analytical Hierarchy Process (AHP), fuzzy AHP (FAHP), ELECTRE, PROMETHEE II, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) are used to evaluate the operators' performance and to rank the operators. The performance evaluation is based on six main criteria; competency, experience and skill, teamwork and time punctuality, personal characteristics, capability and outcome. The study was conducted at one of the SME food manufacturing companies in Selangor. From the study, it is found that AHP and FAHP yielded the "outcome" criteria as the most important criteria. The results of operator performance evaluation showed that the same operator is ranked the first using all six methods.
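
    As an illustration of the TOPSIS step named above, here is a compact textbook implementation ranking three hypothetical operators on three benefit criteria; the weights and scores are invented for the example.

        import numpy as np

        def topsis(matrix, weights, benefit):
            # matrix: operators x criteria; weights sum to 1; benefit[j] is True
            # when larger values of criterion j are better.
            M = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
            V = M * weights                                  # weighted normalized matrix
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - anti, axis=1)
            return d_neg / (d_pos + d_neg)                   # closeness: rank descending

        scores = topsis(np.array([[7.0, 8.0, 6.0], [9.0, 6.0, 7.0], [8.0, 7.0, 9.0]]),
                        weights=np.array([0.5, 0.3, 0.2]),
                        benefit=np.array([True, True, True]))
        print(np.argsort(-scores) + 1)                       # operator ranking, best first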

  19. Olympic weightlifting and plyometric training with children provides similar or greater performance improvements than traditional resistance training.

    Science.gov (United States)

    Chaouachi, Anis; Hammami, Raouf; Kaabi, Sofiene; Chamari, Karim; Drinkwater, Eric J; Behm, David G

    2014-06-01

    A number of organizations recommend that advanced resistance training (RT) techniques can be implemented with children. The objective of this study was to evaluate the effectiveness of Olympic-style weightlifting (OWL), plyometrics, and traditional RT programs with children. Sixty-three children (10-12 years) were randomly allocated to a 12-week control, OWL, plyometric, or traditional RT program. Pre- and post-training tests included body mass index (BMI), sum of skinfolds, countermovement jump (CMJ), horizontal jump, balance, 5- and 20-m sprint times, and isokinetic force and power at 60 and 300° · s(-1). Magnitude-based inferences were used to analyze the likelihood of an effect having a standardized (Cohen's) effect size exceeding 0.20. All interventions were generally superior to the control group. Olympic weightlifting was >80% likely to provide substantially better improvements than plyometric training for CMJ, horizontal jump, and 5- and 20-m sprint times, whereas it was >75% likely to substantially exceed traditional RT for balance and isokinetic power at 300° · s(-1). Plyometric training was >78% likely to elicit substantially better training adaptations than traditional RT for balance, isokinetic force at 60 and 300° · s(-1), isokinetic power at 300° · s(-1), and 5- and 20-m sprints. Traditional RT only exceeded plyometric training for BMI and isokinetic power at 60° · s(-1). Hence, OWL and plyometrics can provide similar or greater performance adaptations for children. It is recommended that any of the 3 training modalities can be implemented under professional supervision with proper training progressions to enhance training adaptations in children.

  20. Meta-Analysis of Quantification Methods Shows that Archaea and Bacteria Have Similar Abundances in the Subseafloor

    Science.gov (United States)

    May, Megan K.; Kevorkian, Richard T.; Steen, Andrew D.

    2013-01-01

    There is no universally accepted method to quantify bacteria and archaea in seawater and marine sediments, and different methods have produced conflicting results with the same samples. To identify best practices, we compiled data from 65 studies, plus our own measurements, in which bacteria and archaea were quantified with fluorescent in situ hybridization (FISH), catalyzed reporter deposition FISH (CARD-FISH), polyribonucleotide FISH, or quantitative PCR (qPCR). To estimate efficiency, we defined “yield” to be the sum of bacteria and archaea counted by these techniques divided by the total number of cells. In seawater, the yield was high (median, 71%) and was similar for FISH, CARD-FISH, and polyribonucleotide FISH. In sediments, only measurements by CARD-FISH in which archaeal cells were permeabilized with proteinase K showed high yields (median, 84%). Therefore, the majority of cells in both environments appear to be alive, since they contain intact ribosomes. In sediments, the sum of bacterial and archaeal 16S rRNA gene qPCR counts was not closely related to cell counts, even after accounting for variations in copy numbers per genome. However, qPCR measurements were precise relative to other qPCR measurements made on the same samples. qPCR is therefore a reliable relative quantification method. Inconsistent results for the relative abundance of bacteria versus archaea in deep subsurface sediments were resolved by the removal of CARD-FISH measurements in which lysozyme was used to permeabilize archaeal cells and qPCR measurements which used ARCH516 as an archaeal primer or TaqMan probe. Data from best-practice methods showed that archaea and bacteria decreased as the depth in seawater and marine sediments increased, although archaea decreased more slowly. PMID:24096423

  1. Performance prediction of electrohydrodynamic thrusters by the perturbation method

    International Nuclear Information System (INIS)

    Shibata, H.; Watanabe, Y.; Suzuki, K.

    2016-01-01

    In this paper, we present a novel method for analyzing electrohydrodynamic (EHD) thrusters. The method is based on a perturbation technique applied to a set of drift-diffusion equations, similar to the one introduced in our previous study on estimating breakdown voltage. The thrust-to-current ratio is generalized to represent the performance of EHD thrusters. We have compared the thrust-to-current ratio obtained theoretically with that obtained from the proposed method under atmospheric air conditions, and we have obtained good quantitative agreement. Also, we have conducted a numerical simulation in more complex thruster geometries, such as the dual-stage thruster developed by Masuyama and Barrett [Proc. R. Soc. A 469, 20120623 (2013)]. We quantitatively clarify the fact that if the magnitude of a third electrode voltage is low, the effective gap distance shortens, whereas if the magnitude of the third electrode voltage is sufficiently high, the effective gap distance lengthens.

  2. Determining when a fracture occurred: Does the method matter? Analysis of the similarity of three different methods for estimating time since fracture of juvenile long bones.

    Science.gov (United States)

    Drury, Anne; Cunningham, Craig

    2018-01-01

    Radiographic fracture date estimation is a critical component of skeletal trauma analysis in the living. Several timetables have been proposed for how the appearance of radiographic features can be interpreted to provide a likely time frame for fracture occurrence. This study compares three such timetables for pediatric fractures, by Islam et al. (2000), Malone et al. (2011), and Prosser et al. (2012), in order to determine whether the fracture date ranges produced by using these methods are in agreement with one another. Fracture date ranges were estimated for 112 long bone fractures in 96 children aged 1-17 years, using the three different timetables. The extent of similarity of the intervals was tested by statistically comparing the overlap between the ranges. Results showed that none of the methods were in perfect agreement with one another. Differences seen included the size of the estimated date range for when a fracture occurred, and the specific dates given for both the upper and lower ends of the fracture date range. There was greater similarity between the ranges produced by Malone et al. (2011) and both the other two studies than there was between Islam et al. (2000) and Prosser et al. (2012). The greatest similarity existed between Malone et al. (2011) and Islam et al. (2000). The extent of differences between methods can vary widely, depending on the fracture analysed. Using one timetable gives an average earliest possible fracture date of less than 2 days before another, but the range was extreme, with one method estimating minimum time since fracture as 25 days before another method for a given fracture. In most cases, one method gave maximum time since fracture as a week less than the other two methods, but range was extreme and some estimates were nearly two months different. The variability in fracture date estimates given by these timetables indicates that caution should be exercised when estimating the timing of a juvenile fracture if relying

  3. Similarity measure and topology evolution of foreign exchange markets using dynamic time warping method: Evidence from minimal spanning tree

    Science.gov (United States)

    Wang, Gang-Jin; Xie, Chi; Han, Feng; Sun, Bo

    2012-08-01

    In this study, we employ a dynamic time warping method to study the topology of similarity networks among 35 major currencies in international foreign exchange (FX) markets, measured by the minimal spanning tree (MST) approach, which is expected to overcome the synchronous restriction of the Pearson correlation coefficient. In the empirical process, firstly, we subdivide the analysis period from June 2005 to May 2011 into three sub-periods: before, during, and after the US sub-prime crisis. Secondly, we choose NZD (New Zealand dollar) as the numeraire and then, analyze the topology evolution of FX markets in terms of the structure changes of MSTs during the above periods. We also present the hierarchical tree associated with the MST to study the currency clusters in each sub-period. Our results confirm that USD and EUR are the predominant world currencies. But USD gradually loses the most central position while EUR acts as a stable center in the MST passing through the crisis. Furthermore, an interesting finding is that, after the crisis, SGD (Singapore dollar) becomes a new center currency for the network.
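
    A rough sketch of the pipeline described above: pairwise dynamic-time-warping distances between currency return series, then a minimal spanning tree over the distance matrix (SciPy's MST routine accepts an upper-triangular matrix of edge weights). The data names are placeholders.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        def dtw(a, b):
            # Classic O(len(a) * len(b)) dynamic time warping distance.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def currency_mst(returns):
            # returns: {currency code: 1-D array of log-returns vs. the numeraire}.
            names = list(returns)
            k = len(names)
            dist = np.zeros((k, k))
            for i in range(k):
                for j in range(i + 1, k):          # upper triangle is enough
                    dist[i, j] = dtw(returns[names[i]], returns[names[j]])
            return names, minimum_spanning_tree(dist)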

  4. Telerobotic system performance measurement - Motivation and methods

    Science.gov (United States)

    Kondraske, George V.; Khoury, George J.

    1992-01-01

    A systems performance-based strategy for modeling and conducting experiments relevant to the design and performance characterization of telerobotic systems is described. A developmental testbed consisting of a distributed telerobotics network and initial efforts to implement the strategy described is presented. Consideration is given to general systems performance theory (GSPT), originally developed to tackle human performance problems, as a basis for: measurement of overall telerobotic system (TRS) performance; task decomposition; development of a generic TRS model; and the characterization of performance of subsystems comprising the generic model. GSPT employs a resource construct to model performance and resource economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems including both human and artificial components. Application is presented within the framework of a distributed telerobotics network as a testbed. Insight into the design of test protocols which elicit application-independent data is described.

  5. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm, while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the requirement of byte/flop to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state of the art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware dependent.

  6. The effect of target and non-target similarity on neural classification performance: A boost from confidence

    OpenAIRE

    Amar R. Marathe; Anthony J. Ries; Vernon J. Lawhern; Brent J. Lance; Jonathan Touryan; Kaleb McDowell; Hubert Cecotti

    2015-01-01

    Brain computer interaction (BCI) technologies have proven effective in utilizing single-trial classification algorithms to detect target images in rapid serial visualization presentation tasks. While many factors contribute to the accuracy of these algorithms, a critical aspect that is often overlooked concerns the feature similarity between target and non-target images. In most real-world environments there are likely to be many shared features between targets and non-targets resulting in si...

  7. Performance Evaluation Methods for Assistive Robotic Technology

    Science.gov (United States)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  8. Validated High Performance Liquid Chromatography Method for ...

    African Journals Online (AJOL)

    Purpose: To develop a simple, rapid and sensitive high performance liquid ... response, tailing factor and resolution of six replicate injections was < 3 %. ... Cefadroxil monohydrate, Human plasma, Pharmacokinetics, Bioequivalence ... Drug-free plasma was obtained from the local ... Influence of probenecid on the renal ...

  9. Spectral method and its high performance implementation

    KAUST Repository

    Wu, Zedong

    2014-01-01

    We have presented a new method that is dispersion free and unconditionally stable, so the computational cost and memory requirement are greatly reduced. Based on this feature, we have implemented the algorithm in GPU-based CUDA for anisotropic reverse time migration. There is almost no communication between CPU and GPU. For prestack wavefield extrapolation, it can combine all the shots together for migration. However, this requires solving a higher-dimensional problem and more memory, which cannot fit into one GPU card. In this situation, we implement it based on a domain decomposition method and MPI for distributed memory systems.

  10. Personality, Study Methods and Academic Performance

    Science.gov (United States)

    Entwistle, N. J.; Wilson, J. D.

    1970-01-01

    A questionnaire measuring four student personality types--stable introvert, unstable introvert, stable extrovert and unstable extrovert--along with the Eysenck Personality Inventory (Form A) was given to 72 graduate students at Aberdeen University, and the results showed recognizable interaction between study methods, motivation and personality…

  11. Renewing the Respect for Similarity

    Directory of Open Access Journals (Sweden)

    Shimon eEdelman

    2012-07-01

    Full Text Available In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction, critical components of many cognitive functions as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience.

  12. Development of a high performance liquid chromatography method ...

    African Journals Online (AJOL)

    Development of a high performance liquid chromatography method for simultaneous ... Purpose: To develop and validate a new low-cost high performance liquid chromatography (HPLC) method for ... Several papers have reported the use of ...

  13. THE MANAGEMENT METHODS IN PERFORMANCE SPORTS

    Directory of Open Access Journals (Sweden)

    Silvia GRĂDINARU

    2015-12-01

    Full Text Available Sport is a widespread phenomenon, capable of mobilizing human energies and financial and material resources on a scale that is difficult to match in other areas of social life. The management of sports organizations is influenced and constrained by the requirements arising from documents issued by international organizations with authority in the field. Organizational development is considered essentially a strategy for increasing organizational effectiveness by introducing changes that address both human resources and the organization itself. Across society as a whole, this development is accelerated by an evolving sports industry with distinctive features, conditioned by macroeconomics and technology. The complexity of the activities of performance sports organizations, the main laboratories of national and international sporting performance, requires a more thorough investigation to reveal the complex mechanisms of their management and, at the same time, to identify optimization solutions for their economic, financial, and human resources.

  14. Development of a systematic method to assess similarity between nanomaterials for human hazard evaluation purposes - lessons learnt.

    NARCIS (Netherlands)

    Vdz Park, Margriet; Catalán, Julia; Ferraz, Natalia; Cabellos, Joan; Vanhauten, Ralph; Vázquez-Campos, Socorro; Janer, Gemma

    2018-01-01

    Within the EU FP-7 GUIDEnano project, a methodology was developed to systematically quantify the similarity between a nanomaterial (NM) that has been tested in toxicity studies and the NM for which risk needs to be evaluated, for the purpose of extrapolating toxicity data between the two materials.

  15. Mathematical evaluation of similarity factor using various weighing approaches on aceclofenac marketed formulations by model-independent method.

    Science.gov (United States)

    Soni, T G; Desai, J U; Nagda, C D; Gandhi, T R; Chotai, N P

    2008-01-01

    The US Food and Drug Administration's (FDA's) guidance for industry on dissolution testing of immediate-release solid oral dosage forms describes that drug dissolution may be the rate-limiting step for drug absorption in the case of low solubility/high permeability drugs (BCS class II drugs). The US FDA guidance describes the model-independent mathematical approach proposed by Moore and Flanner for calculating a similarity factor (f2) of dissolution across a suitable time interval. In the present study, the similarity factor was calculated on dissolution data of two marketed aceclofenac tablets (a BCS class II drug) using various weighing approaches proposed by Gohel et al. The proposed approaches were compared with the conventional approach (W = 1). On the basis of consideration of variability, preference is given in the order approach 3 > approach 2 > approach 1, as approach 3 considers batch-to-batch as well as within-sample variability and shows the best similarity profile. Approach 2 considers batch-to-batch variability with higher specificity than approach 1.
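
    The Moore and Flanner similarity factor itself is simple to compute; this sketch implements it with an optional weight vector so that the conventional case (W = 1) and weighted variants in the spirit of Gohel et al. can be compared. The example dissolution values are invented.

        import numpy as np

        def f2(ref, test, w=None):
            # f2 = 50 * log10(100 / sqrt(1 + mean weighted squared difference));
            # w = None reproduces the conventional unweighted approach (W = 1).
            ref, test = np.asarray(ref, float), np.asarray(test, float)
            w = np.ones_like(ref) if w is None else np.asarray(w, float)
            msd = np.sum(w * (ref - test) ** 2) / len(ref)
            return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

        # Percent dissolved at common time points; f2 >= 50 is usually read as similar.
        print(f2([35, 55, 75, 90], [32, 58, 71, 88]))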

  16. Phoneme Similarity and Confusability

    Science.gov (United States)

    Bailey, T.M.; Hahn, U.

    2005-01-01

    Similarity between component speech sounds influences language processing in numerous ways. Explanation and detailed prediction of linguistic performance consequently requires an understanding of these basic similarities. The research reported in this paper contrasts two broad classes of approach to the issue of phoneme similarity-theoretically…

  17. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    Science.gov (United States)

    Harari, Gil

    2014-01-01

    Statistical significance, also known as the p-value, and the CI (confidence interval) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article is intended to describe the methodologies, compare the methods, assess their suitability for the different needs of study results analysis, and explain situations in which each method should be used.
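
    To make the complementarity concrete, the small sketch below computes both measures for the difference in means of two hypothetical samples: the pooled-variance t-test yields the p-value, and the matching 95% confidence interval conveys the plausible effect range.

        import numpy as np
        from scipy import stats

        a = np.array([5.1, 4.8, 5.6, 5.0, 5.3, 4.9])   # hypothetical group A
        b = np.array([4.6, 4.4, 5.0, 4.5, 4.8, 4.3])   # hypothetical group B

        t, p = stats.ttest_ind(a, b)                   # pooled-variance t-test
        dof = len(a) + len(b) - 2
        sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / dof
        se = np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
        diff = a.mean() - b.mean()
        ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, dof) * se
        print(f"p = {p:.4f}, 95% CI for the difference = [{ci[0]:.2f}, {ci[1]:.2f}]")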

  18. A Similarity-Ranking Method on Semantic Computing for Providing Information-Services in Station-Concierge System

    Directory of Open Access Journals (Sweden)

    Motoki Yokoyama

    2017-07-01

    Full Text Available The prevalence of smartphones and wireless broadband networks has created a new railway information environment. With the spread of such devices and information technology, various types of information can be obtained from databases connected to the Internet. One scenario for obtaining such a wide variety of information resources arises while users are in transit. This paper proposes an information provision system, named the Station Concierge System, that matches the situation and intention of passengers. The purpose of this system is to estimate the needs of passengers, much like station staff or a hotel concierge, and to provide information resources that dynamically satisfy users' expectations. The most important module of the system is built on a new information ranking method for passenger intention prediction and service recommendation. This method has three main features: (1) projecting a user into a semantic vector space using her current context, (2) predicting the intention of the user by selecting a semantic vector subspace, and (3) ranking the services in descending order of relevance to the user's intention. By comparing the predicted results of our method with those of two straightforward computation methods, the experimental studies show the effectiveness and efficiency of the proposed method. Using this system, users can obtain transit information and a service map that dynamically matches their context.
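
    The three-step ranking can be sketched as follows, assuming context and service vectors already live in a common semantic space; the orthonormal subspace basis and the cosine scoring are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        def rank_services(context_vec, subspace_basis, service_vecs):
            # (1) the user's current context is a vector in the semantic space;
            # (2) intention is approximated by projecting onto a selected subspace;
            # (3) services are ranked by descending relevance to that projection.
            B = np.asarray(subspace_basis)            # rows: orthonormal basis vectors
            intention = B.T @ (B @ context_vec)       # projection onto the subspace
            def score(v):
                return float(v @ intention) / (np.linalg.norm(v) * np.linalg.norm(intention))
            scores = {name: score(v) for name, v in service_vecs.items()}
            return sorted(scores, key=scores.get, reverse=True)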

  19. Identification of strains with phenotypes similar to those of Staphylococcus aureus isolated from table chicken eggs using MALDI-TOF MS and genotyping methods

    Directory of Open Access Journals (Sweden)

    Marek Agnieszka

    2015-06-01

    Full Text Available The aim of the study was to identify the species affinity of 10 Staphylococcus strains isolated from table chicken eggs. Preliminary analysis performed by the API ID32 Staph test identified these strains as S. aureus, but they exhibited a negative reaction in the tube coagulase test. Thus, the analysed strains were initially characterised as Staphylococcus aureus-like (SAL). Further characterisation was performed by genotypic methods, using restriction fragment length polymorphism (RFLP) of the coagulase gene (coa) and sequencing of the rpoB gene. An attempt was also made to identify the isolated Staphylococcus strains by MALDI-TOF mass spectrometry. The results indicated that none of the strains tested belonged to the species S. aureus. The rpoB sequences of five isolates showed the highest sequence similarity to S. haemolyticus, three isolates to S. chromogenes, and one isolate to S. epidermidis. One strain (SAL4) remained unidentified in this analysis. The results obtained using mass spectrometry were comparable to those based on gene sequence analysis. Strain SAL4, which could not be identified by sequencing, was identified by MALDI-TOF as Staphylococcus chromogenes.

  1. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    spaces, in addition to their similarity in the vector space. Prioritized Weighted Feature Distance (PWFD) works similarly to WFD, but provides the ability to give priorities to desirable features. The accuracy of the proposed functions is compared with that of other similarity functions on several data sets.... Our results show that the proposed functions work better than other methods proposed in the literature....

  2. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    Science.gov (United States)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most video resources online are encoded in the H.264/AVC format. More fluent video transmission could be obtained if these resources were encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve video transmission and storage online, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vectors (MVs) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, the intraprediction in HEVC for regions that are interpredicted in H.264/AVC can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks in H.264/AVC is lower than a threshold. This method selects only one coding unit depth and one PU mode to reduce the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation in HEVC coding. The simulation results show that our proposed algorithm achieves significant coding time reduction with a small rate-distortion loss, compared to existing transcoding algorithms and normal HEVC coding.

  3. Towards Personalized Medicine: Leveraging Patient Similarity and Drug Similarity Analytics

    Science.gov (United States)

    Zhang, Ping; Wang, Fei; Hu, Jianying; Sorrentino, Robert

    2014-01-01

    The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytics to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied on a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance. PMID:25717413
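
    One standard way to realize the propagation step described above is the normalized iterative update F <- alpha * S_norm @ F + (1 - alpha) * Y over the joint graph; the block structure of S (patient similarity, drug similarity, and prior associations in the off-diagonal blocks) is assumed from the abstract, and the update rule itself is a generic choice rather than the paper's exact procedure.

        import numpy as np

        def propagate(S, Y, alpha=0.8, n_iter=100):
            # S: combined (patients + drugs) similarity/association matrix.
            # Y: initial labels, e.g. observed drug effectiveness per patient.
            d = S.sum(axis=1)
            D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
            S_norm = D_inv_sqrt @ S @ D_inv_sqrt      # symmetric normalization
            F = Y.astype(float)
            for _ in range(n_iter):
                F = alpha * (S_norm @ F) + (1 - alpha) * Y
            return F                                  # propagated effectiveness scores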

  4. A method for optimizing the performance of buildings

    DEFF Research Database (Denmark)

    Pedersen, Frank

    2007-01-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to the energy consumption, economical aspects......, and the indoor environment. The method is intended for supporting design decisions for buildings, by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building...... needed for solving the optimization problem. Furthermore, the algorithm uses so-called domain constraint functions in order to ensure that the input to the simulation software is feasible. Using this technique avoids performing time-consuming simulations for unrealistic design decisions. The algorithm......

  5. On the use of the fictitious wave steepness and related surf-similarity parameters in methods that describe the hydraulic and structural response to waves

    NARCIS (Netherlands)

    Heineke, D.; Verhagen, H.J.

    2007-01-01

    To assess the hydraulic performance of coastal structures - viz. wave run-up, overtopping and reflection - and to evaluate the stability of the armour layers, use is made of the dimensionless surf similarity parameter, as introduced by Battjes (1974). The front side slope of the structure and the
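
    For reference, the surf similarity (Iribarren) parameter uses the structure slope and a fictitious wave steepness built from the local wave height and the deep-water wavelength L0 = g*T^2/(2*pi); a minimal sketch:

        import math

        def surf_similarity(slope_angle_deg, wave_height_m, wave_period_s, g=9.81):
            # xi0 = tan(alpha) / sqrt(H / L0), with L0 the deep-water wavelength.
            L0 = g * wave_period_s ** 2 / (2.0 * math.pi)
            return math.tan(math.radians(slope_angle_deg)) / math.sqrt(wave_height_m / L0)

        # A 1V:3H armour slope (about 18.4 degrees), H = 2 m, T = 8 s.
        print(surf_similarity(18.4, 2.0, 8.0))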

  6. Semantic relatedness and similarity of biomedical terms: examining the effects of recency, size, and section of biomedical publications on the performance of word2vec.

    Science.gov (United States)

    Zhu, Yongjun; Yan, Erjia; Wang, Fei

    2017-07-03

    Understanding semantic relatedness and similarity between biomedical terms has a great impact on a variety of applications such as biomedical information retrieval, information extraction, and recommender systems. The objective of this study is to examine word2vec's ability in deriving semantic relatedness and similarity between biomedical terms from large publication data. Specifically, we focus on the effects of recency, size, and section of biomedical publication data on the performance of word2vec. We download abstracts of 18,777,129 articles from PubMed and 766,326 full-text articles from PubMed Central (PMC). The datasets are preprocessed and grouped into subsets by recency, size, and section. Word2vec models are trained on these subsets. Cosine similarities between biomedical terms obtained from the word2vec models are compared against reference standards. Performance of models trained on different subsets is compared to examine recency, size, and section effects. Models trained on recent datasets did not boost the performance. Models trained on larger datasets identified more pairs of biomedical terms than models trained on smaller datasets in the relatedness task (from 368 at the 10% level to 494 at the 100% level) and the similarity task (from 374 at the 10% level to 491 at the 100% level). The model trained on abstracts produced results that have higher correlations with the reference standards than the one trained on article bodies (i.e., 0.65 vs. 0.62 in the similarity task and 0.66 vs. 0.59 in the relatedness task). However, the latter identified more pairs of biomedical terms than the former (i.e., 344 vs. 498 in the similarity task and 339 vs. 503 in the relatedness task). Increasing the size of the dataset does not always enhance the performance. Increasing the size of datasets can result in the identification of more relations of biomedical terms even though it does not guarantee better precision. As summaries of research articles, compared with article
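
    A minimal gensim (v4+) sketch of the train-and-compare loop the study performs, using a toy corpus as a stand-in for the tokenized PubMed/PMC subsets; the term pairs and corpus are placeholders.

        from gensim.models import Word2Vec

        # Toy stand-in for millions of tokenized abstracts.
        sentences = [["aspirin", "reduces", "pain"],
                     ["ibuprofen", "reduces", "pain"],
                     ["aspirin", "prevents", "clotting"]] * 100

        model = Word2Vec(sentences, vector_size=50, window=5, min_count=1,
                         workers=2, seed=1)

        # Cosine similarity between two biomedical terms, to be compared
        # against reference standards as in the study.
        print(model.wv.similarity("aspirin", "ibuprofen"))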

  7. Performance evaluation methods and instrumentation for mine ventilation fans

    Institute of Scientific and Technical Information of China (English)

    LI Man; WANG Xue-rong

    2009-01-01

    Ventilation fans are one of the most important pieces of equipment in coal mines. Their performance plays an important role in the safety of staff and production. Given the actual requirements of coal mine production, we instituted a research project on the measurement methods of key performance parameters such as wind pressure, amount of ventilation and power. At the end a virtual instrument for mine ventilation fan performance evaluation was developed using a USB interface. The practical performance and analytical results of our experiments show that it is feasible, reliable and effective to use the proposed instrumentation for mine ventilation performance evaluation.

  8. Influence of discretization method on the digital control system performance

    Directory of Open Access Journals (Sweden)

    Futás József

    2003-12-01

    Full Text Available The design of a control system can be divided into two steps. First, the process or plant has to be converted into mathematical model form, so that its behavior can be analyzed. Then an appropriate controller has to be designed in order to get the desired response of the controlled system. In the continuous time domain the system is represented by differential equations. Replacing a continuous system with a discrete-time form is always an approximation of the continuous system. Different discretization methods give different digital controller performance. The methods presented in the paper are the Step Invariant or Zero Order Hold (ZOH) method, the Matched Pole-Zero method, the Backward Difference method, and the Bilinear transformation. The above-mentioned discretization methods are used in developing a PI position controller for a DC motor. The motor model was converted by the ZOH method. The performances of the different methods are compared and the results are presented.
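
    Three of the four methods named above are available directly in SciPy's cont2discrete (matched pole-zero is not, and would require mapping each pole and zero by z = exp(s*T) manually). A small sketch on a toy motor-like transfer function:

        import numpy as np
        from scipy import signal

        num, den = [1.0], [0.05, 1.0, 0.0]   # toy DC-motor-like transfer function
        Ts = 0.01                            # sampling period, seconds

        for method in ("zoh", "backward_diff", "bilinear"):
            numd, dend, _ = signal.cont2discrete((num, den), Ts, method=method)
            print(method, np.round(numd.ravel(), 6), np.round(dend, 6))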

  9. Performance study of Active Queue Management methods: Adaptive GRED, REDD, and GRED-Linear analytical model

    Directory of Open Access Journals (Sweden)

    Hussein Abdel-jaber

    2015-10-01

    Full Text Available Congestion control is one of the hot research topics that helps maintain the performance of computer networks. This paper compares three Active Queue Management (AQM) methods, namely, Adaptive Gentle Random Early Detection (Adaptive GRED), Random Early Dynamic Detection (REDD), and a GRED Linear analytical model, with respect to different performance measures. Adaptive GRED and REDD are implemented based on simulation, whereas GRED Linear is implemented as a discrete-time analytical model. Several performance measures are used to evaluate the effectiveness of the compared methods, mainly mean queue length, throughput, average queueing delay, overflow packet loss probability, and packet dropping probability. The ultimate aim is to identify the method that offers the highest satisfactory performance in non-congestion or congestion scenarios. The first comparison results, which are based on different packet arrival probability values, show that GRED Linear provides better mean queue length, average queueing delay, and packet overflow probability than the Adaptive GRED and REDD methods in the presence of congestion. Further, and using the same evaluation measures, Adaptive GRED offers a more satisfactory performance than REDD when heavy congestion is present. When the finite queue capacity varies, the GRED Linear model provides the highest satisfactory performance with reference to mean queue length and average queueing delay, and all the compared methods provide similar throughput performance. However, when the finite capacity value is large, the compared methods have similar results in regard to the probabilities of both packet overflowing and packet dropping.
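
    For context, the gentle RED family that Adaptive GRED extends uses a piecewise-linear drop probability; a minimal sketch of that curve (generic parameter names, not the paper's exact model):

        def gred_drop_probability(avg_queue, min_th, max_th, max_p):
            # No drops below min_th; linear ramp up to max_p at max_th; the
            # "gentle" region then ramps from max_p to 1 between max_th and 2*max_th.
            if avg_queue < min_th:
                return 0.0
            if avg_queue < max_th:
                return max_p * (avg_queue - min_th) / (max_th - min_th)
            if avg_queue < 2 * max_th:
                return max_p + (1.0 - max_p) * (avg_queue - max_th) / max_th
            return 1.0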

  10. Performance prediction method for a multi-stage Knudsen pump

    Science.gov (United States)

    Kugimoto, K.; Hirota, Y.; Kizaki, Y.; Yamaguchi, H.; Niimi, T.

    2017-12-01

    In this study, the novel method to predict the performance of a multi-stage Knudsen pump is proposed. The performance prediction method is carried out in two steps numerically with the assistance of a simple experimental result. In the first step, the performance of a single-stage Knudsen pump was measured experimentally under various pressure conditions, and the relationship of the mass flow rate was obtained with respect to the average pressure between the inlet and outlet of the pump and the pressure difference between them. In the second step, the performance of a multi-stage pump was analyzed by a one-dimensional model derived from the mass conservation law. The performances predicted by the 1D-model of 1-stage, 2-stage, 3-stage, and 4-stage pumps were validated by the experimental results for the corresponding number of stages. It was concluded that the proposed prediction method works properly.

  11. Improved Performance of Unsupervised Method by Renovated K-Means

    OpenAIRE

    Ashok, P.; Nawaz, G. M Kadhar; Elayaraja, E.; Vadivel, V.

    2013-01-01

    Clustering is a separation of data into groups of similar objects. Every group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. In this paper, the K-Means algorithm is implemented with three distance functions in order to identify the optimal distance function for clustering. The proposed K-Means algorithm is compared with K-Means, Static Weighted K-Means (SWK-Means) and Dynamic Weighted K-Means (DWK-Means) algorithms by using Davis...
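
    A compact K-Means with a pluggable distance function, in the spirit of the comparison described; note the mean-centroid update is strictly optimal only for Euclidean distance, which is part of why the choice of distance function matters.

        import numpy as np

        def kmeans(X, k, dist, n_iter=100, seed=0):
            # Lloyd-style K-Means where dist(x, c) can be any distance function.
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=k, replace=False)]
            labels = np.zeros(len(X), dtype=int)
            for _ in range(n_iter):
                d = np.array([[dist(x, c) for c in centers] for x in X])
                labels = d.argmin(axis=1)
                new_centers = np.array([X[labels == j].mean(axis=0)
                                        if np.any(labels == j) else centers[j]
                                        for j in range(k)])
                if np.allclose(new_centers, centers):
                    break
                centers = new_centers
            return labels, centers

        euclidean = lambda x, c: np.linalg.norm(x - c)
        manhattan = lambda x, c: np.abs(x - c).sum()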

  12. Effects of Napping During Shift Work on Sleepiness and Performance in Emergency Medical Services Personnel and Similar Shift Workers: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Martin-Gill, Christian; Barger, Laura K; Moore, Charity G; Higgins, J Stephen; Teasley, Ellen M; Weiss, Patricia M; Condle, Joseph P; Flickinger, Katharyn L; Coppler, Patrick J; Sequeira, Denisse J; Divecha, Ayushi A; Matthews, Margaret E; Lang, Eddy S; Patterson, P Daniel

    2018-02-15

    Scheduled napping during work shifts may be an effective way to mitigate fatigue-related risk. This study aimed to critically review and synthesize existing literature on the impact of scheduled naps on fatigue-related outcomes for EMS personnel and similar shift worker groups. A systematic literature review was performed of the impact of a scheduled nap during shift work on EMS personnel or similar shift workers. The primary (critical) outcome of interest was EMS personnel safety. Secondary (important) outcomes were patient safety; personnel performance; acute states of fatigue, alertness, and sleepiness; indicators of sleep duration and/or quality; employee retention/turnover; indicators of long-term health; and cost to the system. Meta-analyses were performed to evaluate the impact of napping on a measure of personnel performance (the psychomotor vigilance test [PVT]) and measures of acute fatigue. Of 4,660 unique records identified, 13 experimental studies were determined relevant and summarized. The effect of napping on reaction time measured at the end of shift was small and non-significant (SMD 0.12, 95% CI -0.13 to 0.36; p = 0.34). Napping during work did not change reaction time from the beginning to the end of the shift (SMD -0.01, 95% CI -0.25 to 0.24; p = 0.96). Naps had a moderate, significant effect on sleepiness measured at the end of shift (SMD 0.40, 95% CI 0.09 to 0.72; p = 0.01). The difference in sleepiness from the start to the end of shift was moderate and statistically significant (SMD 0.41, 95% CI 0.09 to 0.72; p = 0.01). Reviewed literature indicated that scheduled naps at work improved performance and decreased fatigue in shift workers. Further research is required to identify the optimal timing and duration of scheduled naps to maximize the beneficial outcomes.

  13. DDT performance of energetic cobalt coordination compounds. [Dozen of compounds similar to 2-(5-cyanotetrazolato)pentaaminecobalt perchlorate, trinitrotriamine cobalt, dinitrobis(ethylenediamine) cobalt perchlorate

    Energy Technology Data Exchange (ETDEWEB)

    Lieberman, M.L.; Fleming, W.

    1986-01-01

    The compound 2-(5-cyanotetrazolato)pentaamminecobalt(III) perchlorate (CP) has been utilized in low-voltage detonators because it reliably undergoes deflagration-to-detonation transition (DDT). In the present investigation, we have compared the performance of over a dozen similar compounds. These compounds all have cobalt as the coordinating metal, most are ammine complexes, and all except one incorporate the perchlorate anion as an oxidizer. Chemical factors such as fuel-to-oxidizer ratio, trigger group, and organic content have been varied. 18 refs., 7 figs., 2 tabs.

  14. Improved end-stage high-intensity performance but similar glycemic responses after waxy barley starch ingestion compared to dextrose in type 1 diabetes.

    Science.gov (United States)

    Gray, Benjamin J; Page, Rhydian; Turner, Daniel; West, Daniel J; Campbell, Matthew D; Kilduff, Liam P; Stephens, Jeffrey W; Bain, Stephen C; Bracken, Richard M

    2016-11-01

    Pre-exercise carbohydrate (CHO) ingestion is an effective strategy for reducing the occurrence of hypoglycemia during or after exercise in individuals with type 1 diabetes (T1DM). The metabolic effects of ingestion of different CHOs for glycemic or performance gains have been under-researched. This study compared metabolic responses and fuel use during sub-maximal and high-intensity performance running following pre-exercise ingestion of waxy barley starch (WBS) or dextrose (DEX) in T1DM. Seven participants attended the laboratory on two separate occasions following preliminary testing. On each visit participants consumed either 0.6 g/kg body mass of DEX or WBS 2 hours before a 26-minute discontinuous incremental treadmill protocol (4-minute running: 1.5-min rest) finishing at 80±4% V̇O2peak followed by a 10-min performance run on a non-motorized treadmill. Capillary blood samples were taken at rest, during and following exercise and analyzed for glucose (BG) and acid-base variables. Data (mean ± SEM) were analyzed using repeated measures ANOVA (P < 0.05). In the final quartile of the performance run, a greater distance was completed under WBS (WBS 323±21 vs. DEX 301±20 m, P=0.02). Consumption of WBS demonstrated similar hyperglycemic responses to dextrose ingestion but a greater rate of CHO use at rest. Interestingly, T1DM individuals displayed an improved performance at the latter stages of a high-intensity run test.

  15. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.
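
    A rough sketch of the retrieval pipeline this abstract describes: histogram features from a segmented region, cosine-similarity ranking, and the NDCG index used for evaluation. The HU window, bin count and data below are assumptions for illustration, not the paper's settings.

    ```python
    import numpy as np

    def histogram_features(region_voxels, bins=32, lo=-200, hi=400):
        """Normalized intensity histogram of a segmented region.
        The HU window and bin count are illustrative, not the paper's."""
        h, _ = np.histogram(region_voxels, bins=bins, range=(lo, hi))
        return h / max(h.sum(), 1)

    def rank_by_similarity(query_vec, db_vecs):
        """Order database entries by cosine similarity to the query features."""
        sims = db_vecs @ query_vec / (
            np.linalg.norm(db_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
        return np.argsort(-sims)

    def ndcg(relevances):
        """Normalized Discounted Cumulative Gain of a ranked relevance list."""
        rel = np.asarray(relevances, float)
        discounts = np.log2(np.arange(2, len(rel) + 2))
        dcg = ((2**rel - 1) / discounts).sum()
        idcg = ((2**np.sort(rel)[::-1] - 1) / discounts).sum()
        return dcg / max(idcg, 1e-12)

    # Illustrative use: 41 fake feature vectors, one query.
    rng = np.random.default_rng(0)
    db, query = rng.random((41, 32)), rng.random(32)
    order = rank_by_similarity(query, db)
    ```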

  16. Buffer-Free High Performance Liquid Chromatography Method for ...

    African Journals Online (AJOL)

    Purpose: To develop and validate a simple, economical and reproducible high performance liquid chromatographic (HPLC) method for the determination of theophylline in pharmaceutical dosage forms. Method: Caffeine was used as the internal standard and reversed phase C-18 column was used to elute the drug and ...

  17. Do similarities or differences between CEO leadership and organizational culture have a more positive effect on firm performance? A test of competing predictions.

    Science.gov (United States)

    Hartnell, Chad A; Kinicki, Angelo J; Lambert, Lisa Schurer; Fugate, Mel; Doyle Corner, Patricia

    2016-06-01

    This study examines the nature of the interaction between CEO leadership and organizational culture using 2 common metathemes (task and relationship) in leadership and culture research. Two perspectives, similarity and dissimilarity, offer competing predictions about the fit, or interaction, between leadership and culture and its predicted effect on firm performance. Predictions for the similarity perspective draw upon attribution theory and social identity theory of leadership, whereas predictions for the dissimilarity perspective are developed based upon insights from leadership contingency theories and the notion of substitutability. Hierarchical regression results from 114 CEOs and 324 top management team (TMT) members failed to support the similarity hypotheses but revealed broad support for the dissimilarity predictions. Findings suggest that culture can serve as a substitute for leadership when leadership behaviors are redundant with cultural values (i.e., they both share a task- or relationship-oriented focus). Findings also support leadership contingency theories indicating that CEO leadership is effective when it provides psychological and motivational resources lacking in the organization's culture. We discuss theoretical and practical implications and delineate directions for future research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. IONS: Identification of Orthologs by Neighborhood and Similarity-an Automated Method to Identify Orthologs in Chromosomal Regions of Common Evolutionary Ancestry and its Application to Hemiascomycetous Yeasts.

    Science.gov (United States)

    Seret, Marie-Line; Baret, Philippe V

    2011-01-01

    Comparative sequence analysis is widely used to infer gene function and study genome evolution and requires proper ortholog identification across different genomes. We have developed a program for the Identification of Orthologs in one-to-one relationship by Neighborhood and Similarity (IONS) between closely related species. The algorithm combines two levels of evidence to determine co-ancestrality at the genome scale: sequence similarity and shared neighborhood. The method was initially designed to provide anchor points for syntenic blocks within the Génolevures project concerning nine hemiascomycetous yeasts (about 50,000 genes) and is applicable to different input databases. Comparison based on use of a Rand index shows that the results are highly consistent with the pillars of the Yeast Gene Order Browser, a manually curated database. Compared with SYNERGY, another algorithm reporting homology relationships, our method's main advantages are its automation and the absence of dataset-dependent parameters, facilitating consistent integration of newly released genomes.

  19. ASD Is Not DLI: Individuals With Autism and Individuals With Syntactic DLI Show Similar Performance Level in Syntactic Tasks, but Different Error Patterns.

    Science.gov (United States)

    Sukenik, Nufar; Friedmann, Naama

    2018-01-01

    Do individuals with autism have a developmental syntactic impairment, DLI (formerly known as SLI)? In this study we directly compared the performance of 18 individuals with Autism Spectrum Disorder (ASD) aged 9;0-18;0 years with that of 93 individuals with Syntactic-Developmental Language Impairment (SyDLI) aged 8;8-14;6 (and with 166 typically-developing children aged 5;2-18;1). We tested them using three syntactic tests assessing the comprehension and production of syntactic structures that are known to be sensitive to syntactic impairment: elicitation of subject and object relative clauses, reading and paraphrasing of object relatives, and repetition of complex syntactic structures including Wh questions, relative clauses, topicalized sentences, sentences with verb movement, sentences with A-movement, and embedded sentences. The results were consistent across the three tasks: the overall rate of correct performance on the syntactic tasks is similar for the children with ASD and those with SyDLI. However, once we look closer, they are very different. The types of errors of the ASD group differ from those of the SyDLI group-the children with ASD provide various types of pragmatically infelicitous responses that are not evinced in the SyDLI or in the age equivalent typically-developing groups. The two groups (ASD and SyDLI) also differ in the pattern of performance-the children with SyDLI show a syntactically-principled pattern of impairment, with selective difficulty in specific sentence types (such as sentences derived by movement of the object across the subject), and normal performance on other structures (such as simple sentences). In contrast, the ASD participants showed generalized low performance on the various sentence structures. Syntactic performance was far from consistent within the ASD group. Whereas all ASD participants had errors that can originate in pragmatic/discourse difficulties, seven of them had completely normal syntax in the structures we

  20. An advanced probabilistic structural analysis method for implicit performance functions

    Science.gov (United States)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
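
    For orientation, the mean-based, second-moment baseline the abstract contrasts with can be sketched as a first-order estimate around the mean, with the implicit response treated as a black box evaluated by finite differences. This illustrates only the baseline idea, not the authors' AMV procedure; the example function stands in for an implicit (e.g., finite element) solver.

    ```python
    import numpy as np

    def fosm(g, mu, sigma, h=1e-4):
        """First-order second-moment estimate of the mean and std of g(X)
        for independent inputs X_i ~ (mu_i, sigma_i). g may be an implicit
        response function that can only be evaluated pointwise."""
        mu = np.asarray(mu, float)
        sigma = np.asarray(sigma, float)
        g0 = g(mu)
        # Finite-difference gradient, one solver call per input variable.
        grad = np.array([(g(mu + h * e) - g0) / h for e in np.eye(len(mu))])
        var = ((grad * sigma) ** 2).sum()
        return g0, np.sqrt(var)

    # Illustrative stand-in for an implicit structural response.
    g = lambda x: x[0] ** 2 / (1.0 + x[1])
    print(fosm(g, mu=[2.0, 0.5], sigma=[0.1, 0.05]))
    ```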

  1. Similarity Theory Based Radial Turbine Performance and Loss Mechanism Comparison between R245fa and Air for Heavy-Duty Diesel Engine Organic Rankine Cycles

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2017-01-01

    Full Text Available Organic Rankine Cycles using radial turbines as expanders are considered one of the most efficient technologies to convert heavy-duty diesel engine waste heat into useful work. Turbine similarity design based on existing air turbine profiles is time-saving. Since organic fluids and air have totally different thermodynamic properties, the influence on turbine performance and loss mechanisms needs to be analyzed. This paper numerically simulated a radial turbine under similar conditions between R245fa and air, and compared the differences in turbine performance and loss mechanisms. The larger specific heat ratio of air leads to the air turbine operating at higher pressure ratios. As the R245fa gas constant is only about one-fifth of the air gas constant, reduced rotating speeds of the R245fa turbine are only 0.4-fold of those of the air turbine, and reduced mass flow rates are about twice those of the air turbine. When using R245fa as working fluid, the nozzle shock wave losses decrease but rotor suction surface separation vortex losses increase, which eventually leads to isentropic efficiencies of the R245fa turbine being 3–4% lower than those of the air turbine in the commonly used velocity ratio range from 0.5 to 0.9.
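
    The 0.4-fold reduced rotating speed quoted above follows from similarity scaling, since blade speed scales with the acoustic-velocity-like term sqrt(gamma*R*T0). The gas properties below are typical textbook values assumed for illustration, not the paper's inputs.

    ```python
    from math import sqrt

    # Typical gas properties (assumed illustrative values, not the paper's inputs).
    R_air, gamma_air = 287.0, 1.40        # J/(kg K), specific heat ratio
    R_r245fa, gamma_r245fa = 62.0, 1.06   # R245fa gas constant ~ 1/5 of air's

    T0 = 400.0  # common inlet total temperature, K (cancels in the ratio)

    # Under similarity, rotating speed scales with sqrt(gamma * R * T0),
    # so the ratio of speeds between the two working fluids is:
    speed_ratio = sqrt(gamma_r245fa * R_r245fa * T0) / sqrt(gamma_air * R_air * T0)
    print(f"R245fa/air speed ratio ~ {speed_ratio:.2f}")  # ~0.40, matching the abstract
    ```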

  2. Performance analysis, quality function deployment and structured methods

    Science.gov (United States)

    Maier, M. W.

    Quality function deployment (QFD), an approach to synthesizing several elements of system modeling and design into a single unit, is presented. Behavioral, physical, and performance modeling are usually considered separate aspects of system design without explicit linkages. Structured methodologies have developed linkages between behavioral and physical models before, but have not considered the integration of performance models. QFD integrates performance models with traditional structured models. In this method, performance requirements such as cost, weight, and detection range are partitioned into matrices. Partitioning is done by developing a performance model, preferably quantitative, for each requirement. The parameters of the model become the engineering objectives in a QFD analysis and the models are embedded in a spreadsheet version of the traditional QFD matrices. The performance model and its parameters are used to derive part of the functional model by recognizing that a given performance model implies some structure to the functionality of the system.

  3. ASD Is Not DLI: Individuals With Autism and Individuals With Syntactic DLI Show Similar Performance Level in Syntactic Tasks, but Different Error Patterns

    Directory of Open Access Journals (Sweden)

    Nufar Sukenik

    2018-04-01

    Full Text Available Do individuals with autism have a developmental syntactic impairment, DLI (formerly known as SLI)? In this study we directly compared the performance of 18 individuals with Autism Spectrum Disorder (ASD) aged 9;0–18;0 years with that of 93 individuals with Syntactic-Developmental Language Impairment (SyDLI) aged 8;8–14;6 (and with 166 typically-developing children aged 5;2–18;1). We tested them using three syntactic tests assessing the comprehension and production of syntactic structures that are known to be sensitive to syntactic impairment: elicitation of subject and object relative clauses, reading and paraphrasing of object relatives, and repetition of complex syntactic structures including Wh questions, relative clauses, topicalized sentences, sentences with verb movement, sentences with A-movement, and embedded sentences. The results were consistent across the three tasks: the overall rate of correct performance on the syntactic tasks is similar for the children with ASD and those with SyDLI. However, once we look closer, they are very different. The types of errors of the ASD group differ from those of the SyDLI group—the children with ASD provide various types of pragmatically infelicitous responses that are not evinced in the SyDLI or in the age equivalent typically-developing groups. The two groups (ASD and SyDLI) also differ in the pattern of performance—the children with SyDLI show a syntactically-principled pattern of impairment, with selective difficulty in specific sentence types (such as sentences derived by movement of the object across the subject), and normal performance on other structures (such as simple sentences). In contrast, the ASD participants showed generalized low performance on the various sentence structures. Syntactic performance was far from consistent within the ASD group. Whereas all ASD participants had errors that can originate in pragmatic/discourse difficulties, seven of them had completely normal syntax

  4. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of each ensemble's member. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.

  5. Similarity solution and Runge Kutta method to a thermal boundary layer model at the entrance region of a circular tube: The Lévêque Approximation

    Directory of Open Access Journals (Sweden)

    Ali Belhocine

    2018-01-01

    Full Text Available In the thermal entrance region, a thermal boundary layer develops and eventually reaches the circular tube center. The fully developed region is the zone in which the flow is both hydrodynamically and thermally developed. The heat flux is higher near the inlet because the heat transfer coefficient is highest at the tube inlet, where the thickness of the thermal boundary layer is zero, and decreases gradually to the fully developed value. In this paper, the assumptions implicit in Lévêque's approximation are re-examined, and the analytical solution of the problem with additional boundary conditions, for the temperature field and the boundary layer thickness through the long tube, is presented. By defining a similarity variable, the governing equations are reduced to a dimensionless equation with an analytic solution in the entrance region. This report gives justification for the similarity variable via scaling analysis, details the process of converting to a similarity form, and presents a similarity solution. The analytical solutions are then checked against numerical solutions obtained with Fortran code implementing the fourth-order Runge-Kutta (RK4) method. Finally, other important thermal results obtained from this analysis, such as the approximate Nusselt number in the thermal entrance region, are discussed in detail.
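
    In the Lévêque reduction, the normalized dimensionless temperature satisfies d(theta)/d(eta) = exp(-eta^3), so the RK4 check described above amounts to a quadrature. A minimal sketch (Python rather than the paper's Fortran); the normalization constant approaches Gamma(4/3) ~ 0.8930.

    ```python
    import math

    def rk4(f, y0, x0, x1, n=1000):
        """Classical fourth-order Runge-Kutta integrator for dy/dx = f(x, y)."""
        h = (x1 - x0) / n
        x, y = x0, y0
        for _ in range(n):
            k1 = f(x, y)
            k2 = f(x + h / 2, y + h * k1 / 2)
            k3 = f(x + h / 2, y + h * k2 / 2)
            k4 = f(x + h, y + h * k3)
            y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
            x += h
        return y

    # Leveque similarity profile: d(theta)/d(eta) = exp(-eta^3), theta(inf) -> 1.
    f = lambda eta, theta: math.exp(-eta**3)
    norm = rk4(f, 0.0, 0.0, 10.0)            # ~ Gamma(4/3) = 0.8930
    theta = lambda eta: rk4(f, 0.0, 0.0, eta) / norm
    print(theta(1.0))                         # dimensionless temperature at eta = 1
    ```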

  6. The performances of R GPU implementations of the GMRES method

    Directory of Open Access Journals (Sweden)

    Bogdan Oancea

    2018-03-01

    Full Text Available Although the performance of commodity computers has improved drastically with the introduction of multicore processors and GPU computing, the standard R distribution is still based on a single-threaded model of computation, using only a small fraction of the computational power now available in most desktops and laptops. Modern statistical software packages rely on high performance implementations of the linear algebra routines that are at the core of several important leading-edge statistical methods. In this paper we present a GPU implementation of the GMRES iterative method for solving linear systems. We compare the performance of this implementation with a pure single-threaded CPU version. We also investigate the performance of our implementation using the different GPU packages now available for R, such as gmatrix, gputools or gpuR, which are based on the CUDA or OpenCL frameworks.
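
    As a CPU reference point for the comparison described, GMRES is available off the shelf; the sketch below uses SciPy (Python) purely to illustrate the method, since the record's own code is in R with GPU back-ends. The test system is an assumption.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres

    # A sparse, nonsymmetric test system (1-D convection-diffusion-like stencil).
    n = 1000
    A = diags([-1.2, 2.0, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Restarted GMRES; tolerances can be passed explicitly if desired.
    x, info = gmres(A, b, restart=50, maxiter=1000)
    print(info, np.linalg.norm(A @ x - b))  # info == 0 signals convergence
    ```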

  7. Evaluation Method for Low-Temperature Performance of Lithium Battery

    Science.gov (United States)

    Wang, H. W.; Ma, Q.; Fu, Y. L.; Tao, Z. Q.; Xiao, H. Q.; Bai, H.; Bai, H.

    2018-05-01

    In this paper, an evaluation method for the low-temperature performance of lithium batteries is established. Low-temperature performance levels were set up to determine the best operating temperature ranges of lithium batteries using different cathode materials. The results are shared with consumers to support proper use of lithium batteries, so that they achieve a longer service life and early rejection is avoided.

  8. Resource Isolation Method for Program’S Performance on CMP

    Science.gov (United States)

    Guan, Ti; Liu, Chunxiu; Xu, Zheng; Li, Huicong; Ma, Qiang

    2017-10-01

    Data centers and cloud computing are increasingly popular, bringing benefits to both customers and providers. However, in a data center or cluster there is commonly more than one program running on a server, and programs may interfere with each other. The interference may be small, but it can also cause a serious drop in performance. In order to avoid the performance interference problem, isolating resources for different programs is a better choice. In this paper we propose a low-cost resource isolation method to improve program performance. The method uses Cgroups to set dedicated CPU and memory resources for a program, aiming to guarantee the program's performance. Three engines realize this method: the Program Monitor Engine tracks the program's CPU and memory usage and transfers the information to the Resource Assignment Engine; the Resource Assignment Engine calculates the amount of CPU and memory resources that should be applied for the program; the Cgroups Control Engine partitions resources with the Linux tool Cgroups and places the program in a control group for execution. The experimental results show that, using the proposed resource isolation method, program performance can be improved.
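
    A sketch of the Cgroups Control Engine step, assuming the cgroup v2 filesystem interface (cpu.max, memory.max and cgroup.procs are standard cgroup v2 control files); the paths, limits and PID are illustrative, and the operation requires root privileges.

    ```python
    import os

    CGROUP_ROOT = "/sys/fs/cgroup"  # typical cgroup v2 mount point

    def isolate(name, pid, cpu_quota_us=50000, cpu_period_us=100000, mem_bytes=2**30):
        """Create a control group, cap its CPU and memory, and move `pid` into it.
        Sketch of the 'Cgroups Control Engine' step described in the abstract."""
        path = os.path.join(CGROUP_ROOT, name)
        os.makedirs(path, exist_ok=True)
        # cpu.max takes "<quota> <period>" in microseconds (cgroup v2).
        with open(os.path.join(path, "cpu.max"), "w") as f:
            f.write(f"{cpu_quota_us} {cpu_period_us}")
        with open(os.path.join(path, "memory.max"), "w") as f:
            f.write(str(mem_bytes))
        # Writing a PID to cgroup.procs migrates the process into the group.
        with open(os.path.join(path, "cgroup.procs"), "w") as f:
            f.write(str(pid))

    # isolate("demo", pid=12345)  # illustrative call
    ```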

  9. A Method To Modify/Correct The Performance Of Amplifiers

    Directory of Open Access Journals (Sweden)

    Rohith Krishnan R

    2015-01-01

    Full Text Available Abstract The actual response of an amplifier may vary with the replacement of aged or damaged components, and this method compensates for that problem. Here we use the op-amp fixator as the design tool. The tool helps us to isolate the selected circuit component from the rest of the circuit, adjust its operating point to correct the performance deviations, and modify the circuit without changing other parts of the circuit. A method to modify/correct the performance of amplifiers by properly redesigning the circuit is presented in this paper.

  10. Roles and methods of performance evaluation of hospital academic leadership.

    Science.gov (United States)

    Zhou, Ying; Yuan, Huikang; Li, Yang; Zhao, Xia; Yi, Lihua

    2016-01-01

    The rapidly advancing implementation of public hospital reform urgently requires the identification and classification of a pool of exceptional medical specialists, corresponding with incentives to attract and retain them, providing a nucleus of distinguished expertise to ensure public hospital preeminence. This paper examines the significance of academic leadership, from a strategic management perspective, including various tools, methods and mechanisms used in the theory and practice of performance evaluation, and employed in the selection, training and appointment of academic leaders. Objective methods of assessing leadership performance are also provided for reference.

  11. Investigation of Thermal Performance for Atria: a Method Overview

    Directory of Open Access Journals (Sweden)

    Moosavi Leila

    2016-01-01

    Full Text Available The importance of low-energy design in large buildings has encouraged researchers to implement different methods for predicting a building's thermal performance. Atria, as energy-efficient features, have been implemented to improve the indoor thermal environment in large modern buildings. Though widely implemented, thorough study of atrium performance is restricted by its large size, complex thermodynamic behavior and the inaccuracies and limitations of available prediction tools. This study reviews the research tools most commonly implemented in previous research on atria thermal performance, to explore the advantages and limitations of different methods for future studies. The methods reviewed are analytical, experimental, computer modelling and a combination of any or all of these methods. The findings showed that CFD (computational fluid dynamics) models are the most popular tools in recent studies due to their higher accuracy, capabilities and user-friendly modification. Although the experimental methods were reliable for predicting atria thermal and ventilation performance, they have mostly been used to provide data for validation of CFD models. Furthermore, coupling CFD with other experimental models could increase the reliability and accuracy of the models and provide a more comprehensive analysis.

  12. Box-counting dimension revisited: presenting an efficient method of minimising quantisation error and an assessment of the self-similarity of structural root systems

    Directory of Open Access Journals (Sweden)

    Martin eBouda

    2016-02-01

    Full Text Available Fractal dimension (FD), estimated by box-counting, is a metric used to characterise plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantisation error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterise the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitised in 3D and subjected to box-counts. A pattern search algorithm was used to minimise QE by optimising grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE due to both grid position and orientation was a significant source of error in FD estimates, but pattern search provided an efficient means of minimising it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitisations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did
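
    The two core ingredients, box counts across scales and minimisation of QE over grid placements, can be sketched as follows. Random offsets stand in here for the paper's pattern search, and the test data are illustrative.

    ```python
    import numpy as np

    def box_count(points, eps, offset):
        """Number of eps-sized boxes occupied by `points` for a given grid offset."""
        idx = np.floor((points + offset) / eps).astype(int)
        return len({tuple(i) for i in idx})

    def fd_boxcount(points, scales, n_offsets=16, seed=0):
        """Box-counting FD estimate, minimizing counts over random grid offsets
        (a crude stand-in for the paper's pattern-search optimisation)."""
        rng = np.random.default_rng(seed)
        counts = []
        for eps in scales:
            offs = rng.uniform(0, eps, size=(n_offsets, points.shape[1]))
            counts.append(min(box_count(points, eps, o) for o in offs))
        # FD is the slope of log N(eps) against log(1/eps).
        slope, _ = np.polyfit(np.log(1 / np.asarray(scales)), np.log(counts), 1)
        return slope

    # Illustrative test on a straight segment: the estimate should be close to 1.
    pts = np.column_stack([np.linspace(0, 1, 2000), np.linspace(0, 1, 2000)])
    print(fd_boxcount(pts, scales=[0.2, 0.1, 0.05, 0.025, 0.0125]))
    ```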

  13. Development of a NIR-based blend uniformity method for a drug product containing multiple structurally similar actives by using the quality by design principles.

    Science.gov (United States)

    Lin, Yiqing; Li, Weiyong; Xu, Jin; Boulas, Pierre

    2015-07-05

    The aim of this study is to develop an at-line near infrared (NIR) method for the rapid and simultaneous determination of four structurally similar active pharmaceutical ingredients (APIs) in powder blends intended for the manufacturing of tablets. Two of the four APIs in the formula are present in relatively small amounts, one at 0.95% and the other at 0.57%. Such small amounts in addition to the similarity in structures add significant complexity to the blend uniformity analysis. The NIR method is developed using spectra from six laboratory-created calibration samples augmented by a small set of spectra from a large-scale blending sample. Applying the quality by design (QbD) principles, the calibration design included concentration variations of the four APIs and a main excipient, microcrystalline cellulose. A bench-top FT-NIR instrument was used to acquire the spectra. The obtained NIR spectra were analyzed by applying principal component analysis (PCA) before calibration model development. Score patterns from the PCA were analyzed to reveal relationship between latent variables and concentration variations of the APIs. In calibration model development, both PLS-1 and PLS-2 models were created and evaluated for their effectiveness in predicting API concentrations in the blending samples. The final NIR method shows satisfactory specificity and accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
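
    The PCA-then-PLS workflow described is straightforward to reproduce in outline. The sketch below uses synthetic spectra in place of the NIR data; the matrix sizes, noise level and component counts are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA

    # Synthetic stand-in: 60 blends x 500 wavelengths, 4 API concentrations.
    rng = np.random.default_rng(1)
    Y = rng.uniform(0.3, 1.2, size=(60, 4))       # API concentrations (%)
    loadings = rng.normal(size=(4, 500))          # pseudo pure-component spectra
    X = Y @ loadings + 0.01 * rng.normal(size=(60, 500))

    # Exploratory PCA before model development, as described in the abstract.
    scores = PCA(n_components=3).fit_transform(X)
    print(scores.shape)

    # PLS-2 predicts all four APIs simultaneously; PLS-1 would fit one column at a time.
    pls = PLSRegression(n_components=5).fit(X, Y)
    print(pls.score(X, Y))  # coefficient of determination on the training set
    ```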

  14. Using hybrid method to evaluate the green performance in uncertainty.

    Science.gov (United States)

    Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping

    2011-04-01

    Green performance measurement is vital for enterprises in making continuous improvements to maintain sustainable competitive advantages. Evaluation of green performance, however, is a challenging task due to the complexity of dependences among aspects and criteria, together with the linguistic vagueness of some qualitative information and quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the dependence aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA) methods, wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the dependence aspects and criteria into an intelligible structural model used in IPA. For the empirical case study, four dependence aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed.

  15. An alignment-free method to find similarity among protein sequences via the general form of Chou's pseudo amino acid composition.

    Science.gov (United States)

    Gupta, M K; Niyogi, R; Misra, M

    2013-01-01

    In this paper, we propose a method to create the 60-dimensional feature vector for protein sequences via the general form of pseudo amino acid composition. The construction of the feature vector is based on the contents of amino acids, total distance of each amino acid from the first amino acid in the protein sequence and the distribution of 20 amino acids. The obtained cosine distance metric (also called the similarity matrix) is used to construct the phylogenetic tree by the neighbour joining method. In order to show the applicability of our approach, we tested it on three proteins: 1) ND5 protein sequences from nine species, 2) ND6 protein sequences from eight species, and 3) 50 coronavirus spike proteins. The results are in agreement with known history and the output from the multiple sequence alignment program ClustalW, which is widely used. We have also compared our phylogenetic results with six other recently proposed alignment-free methods. These comparisons show that our proposed method gives a more consistent biological relationship than the others. In addition, the time complexity is linear and space required is less as compared with other alignment-free methods that use graphical representation. It should be noted that the multiple sequence alignment method has exponential time complexity.
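
    A hedged reconstruction of the feature-vector idea: three numbers per amino acid (composition, total distance from the first residue, and a distribution term) give a 60-dimensional vector, compared by cosine similarity. The exact definition of the third feature group here is an assumption, not the paper's formula.

    ```python
    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

    def feature_vector(seq):
        """60-D vector: per amino acid, its frequency, the (scaled) sum of its
        positions from the first residue, and a simple spread term. The third
        feature is a sketch assumption, not the paper's exact definition."""
        n = len(seq)
        v = []
        for a in AA:
            p = [i for i, s in enumerate(seq) if s == a]
            count = len(p)
            total_dist = sum(p)
            spread = np.var(p) if p else 0.0
            v += [count / n, total_dist / (n * n), spread / (n * n)]
        return np.array(v)

    def cosine_similarity(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    s1, s2 = "MKTAYIAKQR", "MKTAHIAKQL"  # toy sequences
    print(cosine_similarity(feature_vector(s1), feature_vector(s2)))
    ```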

  16. Performance Analysis of Unsupervised Clustering Methods for Brain Tumor Segmentation

    Directory of Open Access Journals (Sweden)

    Tushar H Jaware

    2013-10-01

    Full Text Available Medical image processing is one of the most challenging and emerging fields of neuroscience. The ultimate goal of medical image analysis in brain MRI is to extract important clinical features that would improve methods of diagnosis & treatment of disease. This paper focuses on methods to detect & extract brain tumors from brain MR images. MATLAB is used to design a software tool for locating brain tumors based on unsupervised clustering methods. The K-Means clustering algorithm is implemented & tested on a database of 30 images. A performance evaluation of the unsupervised clustering methods is presented.

  17. MULTICRITERIA METHODS IN PERFORMING COMPANIES’ RESULTS USING ELECTRONIC RECRUITING, CORPORATE COMMUNICATION AND FINANCIAL RATIOS

    Directory of Open Access Journals (Sweden)

    Ivana Bilić

    2011-02-01

    Full Text Available Human resources represent one of the most important resources of a company, responsible for creating its competitive advantage. In search of the most valuable resources, companies use different methods. Lately, one of the growing methods is electronic recruiting, not only as a recruitment tool, but also as a means of external communication. Additionally, in the process of corporate communication, companies nowadays use electronic corporate communication as the easiest, cheapest and simplest form of business communication. The aim of this paper is to investigate the relationship between three groups of criteria: the main characteristics of the performed electronic recruiting, corporate communication, and selected financial performances. The selected companies were ranked separately by each group of criteria using the multicriteria decision-making method PROMETHEE II. The main idea is to research whether companies that are the highest performers by a certain group of criteria obtain similar results regarding the other groups of criteria or performance results.
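
    A compact sketch of the PROMETHEE II ranking used in this study, with the 'usual' (strict) preference function and illustrative data; the study's actual criteria, weights and preference functions differ.

    ```python
    import numpy as np

    def promethee_ii(X, weights, maximize):
        """PROMETHEE II net outranking flows with the 'usual' preference
        function P(d) = 1 if d > 0 else 0. Rows of X are alternatives."""
        X = np.where(maximize, X, -X)  # turn all criteria into 'larger is better'
        n = len(X)
        phi = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                pref_ij = (weights * (X[i] > X[j])).sum()
                pref_ji = (weights * (X[j] > X[i])).sum()
                phi[i] += (pref_ij - pref_ji) / (n - 1)
        return phi  # higher net flow = better rank

    # Illustrative: 4 companies x 3 criteria (e-recruiting, communication, financials).
    X = np.array([[7, 5, 0.12], [6, 8, 0.10], [9, 4, 0.07], [5, 6, 0.15]], float)
    phi = promethee_ii(X, weights=np.array([0.4, 0.3, 0.3]),
                       maximize=np.array([True, True, True]))
    print(np.argsort(-phi))  # ranking, best first
    ```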

  18. Performance of trim coils made by a novel method

    International Nuclear Information System (INIS)

    Wanderer, P.; Anerella, M.; Cottingham, J.; Ganetis, G.; Garber, M.; Ghosh, A.; Goodzeit, C.; Greene, A.; Gupta, R.; Herrera, J.; Kahn, S.; Kelly, E.; Meade, A.; Morgan, G.; Muratore, J.; Prodell, A.; Rehak, M.; Rohrer, E.P.; Sampson, W.; Shutt, R.; Skaritka, J.; Thompson, P.; Willen, E.

    1991-01-01

    A precision, automated method of manufacturing trim coils based on printed circuit technology has been developed. Excellent quench performance and increased radiation resistance have been achieved in recently-tested models of sextupole trim coils developed for operation inside 40 mm-aperture SSC Main Collider dipoles. 6 refs., 2 figs

  19. Sensitive high performance liquid chromatographic method for the ...

    African Journals Online (AJOL)

    A new simple, sensitive, cost-effective and reproducible high performance liquid chromatographic (HPLC) method for the determination of proguanil (PG) and its metabolites, cycloguanil (CG) and 4-chlorophenylbiguanide (4-CPB) in urine and plasma is described. The extraction procedure is a simple three-step process ...

  20. Visual art teachers and performance assessment methods in ...

    African Journals Online (AJOL)

    This paper examines the competencies of visual arts teachers in using performance assessment methods, and to ascertain the extent to which the knowledge, skills and experiences of teachers affect their competence in using assessment strategies in their classroom. The study employs a qualitative research design; ...

  1. Methods of evaluating performance in controlling marketing,activities

    OpenAIRE

    Codruţa Dura

    2002-01-01

    There are specific methods for assessing and improving the effectiveness of a marketing strategy. A marketer should state in the marketing plan what a marketing strategy is supposed to accomplish. These statements should set forth performance standards, which usually are stated in terms of profits, sales, or costs.

  2. Improvement on the Performance of Canal Network and Method of ...

    African Journals Online (AJOL)

    This paper presents the required improvement on the performance of canal network and method of on-farm water application systems at Tunga-Kawo irrigation scheme, Wushishi, Niger state. The problems of poor delivery of water to the farmland were identified to include erosion of canal embarkment, lack of water ...

  3. Data-driven performance evaluation method for CMS RPC trigger ...

    Indian Academy of Sciences (India)

    2012-10-06

    Oct 6, 2012 ... hardware-implemented algorithm, which performs the task of combining and merging information from muon ... Figure 1 shows the comparison of efficiencies obtained with the two methods containing ... [3] The CMS Collaboration, The trigger and data acquisition project, Volume 1, The Level-1 Trigger ...

  4. Neural circuits of eye movements during performance of the visual exploration task, which is similar to the responsive search score task, in schizophrenia patients and normal subjects

    International Nuclear Information System (INIS)

    Nemoto, Yasundo; Matsuda, Tetsuya; Matsuura, Masato

    2004-01-01

    Abnormal exploratory eye movements have been studied as a biological marker for schizophrenia. Using functional MRI (fMRI), we investigated brain activations of 12 healthy and 8 schizophrenic subjects during performance of a visual exploration task that is similar to the responsive search score task, to clarify the neural basis of the abnormal exploratory eye movements. Performance data, such as the number of eye movements, the reaction time, and the percentage of correct answers, showed no significant differences between the two groups. Only the normal subjects showed activations at the bilateral thalamus and the left anterior medial frontal cortex during the visual exploration tasks. In contrast, only the schizophrenic subjects showed activations at the right anterior cingulate gyrus during the same tasks. The activation at different locations in the two groups, the left anterior medial frontal cortex in normal subjects and the right anterior cingulate gyrus in schizophrenia subjects, was explained by the features of the visual tasks. Hypoactivation at the bilateral thalamus supports a dysfunctional filtering theory of schizophrenia. (author)

  5. Performance Poetry as a Method to Understand Disability

    Directory of Open Access Journals (Sweden)

    Lee-Ann Fenge

    2016-03-01

    Full Text Available The Seen but Seldom Heard project was a performative social science (PSS project which used performance poetry to illuminate the experiences of young people with physical impairments. Two performance poets, a group of young people with physical impairments, and academics from social science and media/communication backgrounds worked together to explore various aspects of the lived experience of disability exploring issues associated with identity, stereotypes, stigma and representation. In this article, we will present an overview of the project and consider how PSS offers a method to engage seldom heard voices, and illustrate this through two poems which shed light on the lived experience of disability. The article will consider the impact of these poems as PSS, and how this method allows the audience to develop a deeper understanding of the "lived" experience of disability and to reflect upon their own understandings of disability and discrimination. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1602118

  6. Total System Performance Assessment-License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2002-09-13

    "Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach" provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the "Yucca Mountain Review Plan" (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

  7. Total System Performance Assessment - License Application Methods and Approach

    International Nuclear Information System (INIS)

    McNeish, J.

    2003-01-01

    "Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach" provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the "Yucca Mountain Review Plan" (YMRP), "Final Report" (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document

  8. Evaluation method for the drying performance of enzyme containing formulations

    DEFF Research Database (Denmark)

    Sloth, Jakob; Bach, P.; Jensen, Anker Degn

    2008-01-01

    A method is presented for fast and cheap evaluation of the performance of enzyme containing formulations in terms of preserving the highest enzyme activity during spray drying. The method is based on modeling the kinetics of the thermal inactivation reaction which occurs during the drying process. Relevant kinetic parameters are determined from differential scanning calorimeter (DSC) experiments, and the model is used to simulate the severity of the inactivation reaction for temperatures and moisture levels relevant for spray drying. After conducting experiments and subsequent simulations for a number of different formulations, it may be deduced which formulation performs best. This is illustrated by a formulation design study where 4 different enzyme containing formulations are evaluated. The method is validated by comparison to pilot scale spray dryer experiments.

  9. Performance evaluation of sea surface simulation methods for target detection

    Science.gov (United States)

    Xia, Renjie; Wu, Xin; Yang, Chen; Han, Yiping; Zhang, Jianqi

    2017-11-01

    With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve the detection performance. Many features can be learned from training images by machines automatically. However, field images of sea surface target are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key point to achieve high fidelity. In this paper, two spectra-based height field generation methods are evaluated. Comparison between the linear superposition and linear filter method is made quantitatively with a statistical model. 3D ocean scene simulating results show the different features between the methods, which can give reference for synthesizing sea surface target images with different ocean conditions.

  10. Modeling the Performance of Fast Mulipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and meet these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems in particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on the future exascale machines.

  11. System and Method for Monitoring Piezoelectric Material Performance

    Science.gov (United States)

    Moses, Robert W. (Inventor); Fox, Christopher L. (Inventor); Fox, Melanie L. (Inventor); Chattin, Richard L. (Inventor); Shams, Qamar A. (Inventor); Fox, Robert L. (Inventor)

    2007-01-01

    A system and method are provided for monitoring performance capacity of a piezoelectric material that may form part of an actuator or sensor device. A switch is used to selectively electrically couple an inductor to the piezoelectric material to form an inductor-capacitor circuit. Resonance is induced in the inductor-capacitor circuit when the switch is operated to create the circuit. The resonance of the inductor-capacitor circuit is monitored with the frequency of the resonance being indicative of performance capacity of the device's piezoelectric material.
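
    The monitoring principle rests on the ideal LC resonance f0 = 1/(2*pi*sqrt(L*C)): as the piezoelectric material degrades, its capacitance changes and the resonant frequency shifts. A one-line check with assumed component values:

    ```python
    from math import pi, sqrt

    def resonant_freq(L, C):
        """Resonant frequency of an ideal LC circuit: f0 = 1 / (2*pi*sqrt(L*C))."""
        return 1.0 / (2.0 * pi * sqrt(L * C))

    L = 10e-3            # 10 mH inductor (illustrative)
    C_healthy = 100e-9   # piezo capacitance when healthy (illustrative)
    C_degraded = 80e-9   # reduced capacitance after degradation (illustrative)
    print(resonant_freq(L, C_healthy), resonant_freq(L, C_degraded))  # frequency shifts up
    ```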

  12. Cycling Time Trial Performance 4 Hours After Glycogen-Lowering Exercise Is Similarly Enhanced by Recovery Nondairy Chocolate Beverages Versus Chocolate Milk.

    Science.gov (United States)

    Upshaw, Adam U; Wong, Tiffany S; Bandegan, Arash; Lemon, Peter W

    2016-02-01

    Postexercise chocolate milk ingestion has been shown to enhance both glycogen resynthesis and subsequent exercise performance. To assess whether nondairy chocolate beverage ingestion post-glycogen-lowering exercise can enhance 20-km cycling time trial performance 4 hr later, eight healthy trained male cyclists (21.8 ± 2.3y, VO2max = 61.2 ± 1.4 ml·kg-1·min-1; M ± SD) completed a series of intense cycling intervals designed to lower muscle glycogen (Jentjens & Jeukendrup, 2003) followed by 4 hr of recovery and a subsequent 20-km cycling time trial. During the first 2 hr of recovery, participants ingested chocolate dairy milk (DAIRYCHOC), chocolate soy beverage (SOYCHOC), chocolate hemp beverage (HEMPCHOC), low-fat dairy milk (MILK), or a low-energy artificially sweetened, flavored beverage (PLACEBO) at 30-min intervals in a double-blind, counterbalanced repeated-measures design. All drinks, except the PLACEBO (247 kJ) were isoenergetic (2,107 kJ), and all chocolate-flavored drinks provided 1-g CHO·kg body mass-1·h-1. Fluid intake across treatments was equalized (2,262 ± 148 ml) by ingesting appropriate quantities of water based on drink intake. The CHO:PRO ratio was 4:1, 1.5:1, 4:1, and 6:1 for DAIRYCHOC, MILK, SOYCHOC, and HEMPCHOC, respectively. One-way analysis of variance with repeated measures showed time trial performance (DAIRYCHOC = 34.58 ± 2.5 min, SOYCHOC = 34.83 ± 2.2 min, HEMPCHOC = 34.88 ± 1.1 min, MILK = 34.47 ± 1.7 min) was enhanced similarly vs PLACEBO (37.85 ± 2.1 min) for all treatments (p = .019). These data suggest that postexercise macronutrient and total energy intake are more important for same-day 20-km cycling time trial performance after glycogen-lowering exercise than protein type or protein-to-carbohydrate ratio.

  13. Evaluating firms' R&D performance using best worst method.

    Science.gov (United States)

    Salimi, Negin; Rezaei, Jafar

    2018-02-01

    Since research and development (R&D) is the most critical determinant of the productivity, growth and competitive advantage of firms, measuring R&D performance has become the core of attention of R&D managers, and an extensive body of literature has examined and identified different R&D measurements and determinants of R&D performance. However, measuring R&D performance while assigning the same level of importance to different R&D measures, which is the common approach in existing studies, can oversimplify the R&D measuring process, which may result in misinterpretation of the performance and consequently flawed R&D strategies. The aim of this study is to measure R&D performance taking into account the different levels of importance of R&D measures, using a multi-criteria decision-making method called Best Worst Method (BWM) to identify the weights (importance) of R&D measures and measure the R&D performance of 50 high-tech SMEs in the Netherlands using the data gathered in a survey among SMEs and from R&D experts. The results show how assigning different weights to different R&D measures (in contrast to a simple mean) results in a different ranking of the firms and allows R&D managers to formulate more effective strategies to improve their firm's R&D performance by applying knowledge regarding the importance of different R&D measures. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
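
    BWM derives weights from best-to-others and others-to-worst comparison vectors. The sketch below implements the linear BWM variant as a small linear program; the comparison values are illustrative, and the study may use the original nonlinear formulation.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def bwm_weights(a_b, a_w, best, worst):
        """Linear Best-Worst Method: minimize xi subject to
        |w_best - a_b[j]*w_j| <= xi, |w_j - a_w[j]*w_worst| <= xi, sum(w) = 1."""
        n = len(a_b)
        c = np.append(np.zeros(n), 1.0)  # decision vector [w_0..w_{n-1}, xi]
        A_ub, b_ub = [], []

        def abs_le_xi(coeffs):
            # Encode |coeffs . w| <= xi as two linear inequalities.
            A_ub.append(np.append(coeffs, -1.0)); b_ub.append(0.0)
            A_ub.append(np.append(-coeffs, -1.0)); b_ub.append(0.0)

        for j in range(n):
            r = np.zeros(n); r[best] += 1.0; r[j] -= a_b[j]
            abs_le_xi(r)
            r = np.zeros(n); r[j] += 1.0; r[worst] -= a_w[j]
            abs_le_xi(r)

        A_eq = [np.append(np.ones(n), 0.0)]  # weights sum to 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1))
        return res.x[:n], res.x[-1]  # weights and consistency indicator xi

    # Illustrative: 4 R&D measures; measure 1 is best, measure 3 is worst.
    w, xi = bwm_weights(a_b=[2, 1, 4, 8], a_w=[4, 8, 2, 1], best=1, worst=3)
    print(w.round(3), round(xi, 3))
    ```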

  14. Modeling the kinetics of hydrates formation using phase field method under similar conditions of petroleum pipelines; Modelagem da cinetica de formacao de hidratos utilizando o Modelo do Campo de Fase em condicoes similares a dutos de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mabelle Biancardi; Castro, Jose Adilson de; Silva, Alexandre Jose da [Universidade Federal Fluminense (UFF), Volta Redonda, RJ (Brazil). Programa de Pos-Graduacao em Engenharia Metalurgica], e-mails: mabelle@metal.eeimvr.uff.br; adilson@metal.eeimvr.uff.br; ajs@metal.eeimvr.uff.br

    2008-10-15

    Natural hydrates are ice-like crystalline compounds formed during oil extraction, transportation and processing. This paper deals with the kinetics of hydrate formation by using the phase field approach coupled with the energy transport equation. The kinetic parameters of hydrate formation were obtained by adjusting the proposed model to experimental results under conditions similar to those of oil extraction. The effects of thermal and nucleation conditions were investigated, while the rate of formation and the morphology were obtained by numerical computation. Model results for growth kinetics and morphology presented good agreement with the experimental ones. Simulation results indicated that super-cooling and pressure were decisive parameters for hydrate growth, morphology and interface thickness. (author)

  15. An efficient method for evaluating RRAM crossbar array performance

    Science.gov (United States)

    Song, Lin; Zhang, Jinyu; Chen, An; Wu, Huaqiang; Qian, He; Yu, Zhiping

    2016-06-01

    An efficient method is proposed in this paper to mitigate the computational burden of resistive random access memory (RRAM) array simulation. In the worst-case scenario, the cost of simulating a 4 Mb RRAM array with line resistance is greatly reduced using this method. For 1S1R-RRAM array structures, static and statistical parameters in both reading and writing processes are simulated. Error analysis is performed to prove the reliability of the algorithm when the line resistance is extremely small compared with the junction resistance. Results show that high precision is maintained even when the simulated problem size is reduced by one thousand times, which indicates significant improvements in both computational efficiency and memory requirements.

  16. Benchmarking whole-building energy performance with multi-criteria technique for order preference by similarity to ideal solution using a selective objective-weighting approach

    International Nuclear Information System (INIS)

    Wang, Endong

    2015-01-01

    Highlights: • A TOPSIS-based multi-criteria whole-building energy benchmarking is developed. • A selective objective-weighting procedure is used for a cost-accuracy tradeoff. • Results from a real case validated the benefits of the presented approach. - Abstract: This paper develops a robust multi-criteria Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) based building energy efficiency benchmarking approach. The approach is explicitly selective, addressing the multicollinearity trap arising from subjectivity in selecting energy variables, by considering the cost-accuracy trade-off. It objectively weights the relative importance of the individual pertinent efficiency-measuring criteria using either multiple linear regression or principal component analysis, contingent on metadata quality. Through this approach, building energy performance is comprehensively evaluated and optimized. Simultaneously, the significant challenges associated with conventional single-criterion benchmarking models can be avoided. Together with a clustering algorithm on a three-year panel dataset, the benchmarking case of 324 single-family dwellings demonstrated an improved robustness of the presented multi-criteria benchmarking approach over the conventional single-criterion ones
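
    The TOPSIS core of this benchmarking approach reduces to a few matrix operations: normalize, weight, measure distances to the ideal and anti-ideal buildings, and rank by relative closeness. The criteria, weights and data below are assumptions for illustration, not the study's inputs.

    ```python
    import numpy as np

    def topsis(X, weights, benefit):
        """TOPSIS closeness scores. Rows of X are buildings, columns are criteria;
        `benefit` marks criteria where larger is better."""
        R = X / np.linalg.norm(X, axis=0)  # vector-normalize each criterion
        V = R * weights
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        return d_neg / (d_pos + d_neg)  # 1 = best, 0 = worst

    # Illustrative: 5 dwellings x 3 criteria (kWh/m2, cost, comfort hours).
    X = np.array([[120, 900, 60], [95, 1100, 75], [140, 800, 50],
                  [100, 1000, 80], [110, 950, 70]], float)
    scores = topsis(X, weights=np.array([0.5, 0.2, 0.3]),
                    benefit=np.array([False, False, True]))
    print(np.argsort(-scores))  # benchmark ranking, best first
    ```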

  17. Optimization of cooling tower performance analysis using Taguchi method

    Directory of Open Access Journals (Sweden)

    Ramkumar Ramakrishnan

    2013-01-01

    Full Text Available This study discusses the application of the Taguchi method in assessing maximum cooling tower effectiveness for a counter-flow cooling tower using expanded wire mesh packing. The experiments were planned based on Taguchi's L27 orthogonal array. The trials were performed under different inlet conditions of water flow rate, air flow rate and water temperature. Signal-to-noise ratio (S/N) analysis, analysis of variance (ANOVA) and regression were carried out in order to determine the effects of the process parameters on cooling tower effectiveness and to identify optimal factor settings. Finally, confirmation tests verified the reliability of the Taguchi method for optimization of counter-flow cooling tower performance with sufficient accuracy.
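
    The S/N step of the Taguchi analysis is a one-line formula per trial. For a larger-the-better response such as tower effectiveness, S/N = -10*log10(mean(1/y^2)); the replicate data below are illustrative, not the study's measurements.

    ```python
    import numpy as np

    def sn_larger_is_better(y):
        """Taguchi signal-to-noise ratio for a larger-the-better response:
        S/N = -10 * log10(mean(1 / y^2)) over the replicates of one trial."""
        y = np.asarray(y, float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

    # Illustrative replicates of effectiveness (%) for three L27 trials.
    trials = [[62.0, 63.5, 61.8], [70.2, 69.4, 71.0], [55.1, 56.3, 54.8]]
    for i, y in enumerate(trials, 1):
        print(f"trial {i}: S/N = {sn_larger_is_better(y):.2f} dB")
    ```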

  18. The application of advanced rotor (performance) methods for design calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bussel, G.J.W. van [Delft Univ. of Technology, Inst. for Wind Energy, Delft (Netherlands)

    1997-08-01

    The calculation of loads and performance of wind turbine rotors has been a topic for research over the last century. The principles for the calculation of loads on rotor blades with a given specific geometry, as well as the development of optimal shaped rotor blades have been published in the decades that significant aircraft development took place. Nowadays advanced computer codes are used for specific problems regarding modern aircraft, and application to wind turbine rotors has also been performed occasionally. The engineers designing rotor blades for wind turbines still use methods based upon global principles developed in the beginning of the century. The question what to expect in terms of the type of methods to be applied in a design environment for the near future is addressed here. (EG) 14 refs.

  19. Optimization of cooling tower performance analysis using Taguchi method

    OpenAIRE

    Ramkumar Ramakrishnan; Ragupathy Arumugam

    2013-01-01

    This study discusses the application of the Taguchi method in assessing maximum cooling tower effectiveness for a counter-flow cooling tower using expanded wire mesh packing. The experiments were planned based on Taguchi's L27 orthogonal array. The trials were performed under different inlet conditions of water flow rate, air flow rate and water temperature. Signal-to-noise ratio (S/N) analysis, analysis of variance (ANOVA) and regression were carried out in order to determine the effects of process...

  20. Method discussion of the performance evaluation on nuclear plant cable

    International Nuclear Information System (INIS)

    Lu Yongfang; Zhong Weixia; Sun Jiansheng; Liu Jingping

    2014-01-01

    A stock cable of the same type as the nuclear plant cable in service was treated by thermal aging. After that, the mechanical properties, the flame-retardant properties and the oxidation resistance were measured, and relationships between them due to the thermal aging were established. These analyses make it possible to evaluate the performance of in-service cables in nuclear plants and to calculate their remaining life. Furthermore, the feasibility of this method is discussed. (authors)

  1. Performance Evaluation of Methods for Estimating Achievable Throughput on Cellular Connections

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Møller; Højholt, Nikolaj Bove; Madsen, Tatiana Kozlova

    2015-01-01

    The continuous increase in always connected devices and the advance in capabilities of networks and services offered via the network is evident. The target group of most of these devices and services is users, so how users perceive the network performance is of great importance. Estimating achiev...... AT, due to its much reduced resource consumption. Based on real-life measurements of the two methods we conclude that TOPP is a good candidate to estimate AT, based on similarity in results with BTC....

  2. Performance of Commercial Laying Hen Submitted to Different Debeaking Methods

    Directory of Open Access Journals (Sweden)

    CH Oka

    Full Text Available ABSTRACT Among the several factors involved in rearing laying hens, debeaking interferes with flock performance and affects animal welfare. Thus, the objective of this study was to evaluate three different debeaking procedures and to verify the best technique to be used. For this, bird performance, the incidence of cannibalism and, in the rearing phase, egg quality were evaluated. Dekalb White birds were distributed in a completely randomized design with three treatments: T1 (infrared radiation debeaking), T2 (hot blade debeaking) and T3 (V debeaking). The data were submitted to analysis of variance and compared by Tukey's test (95%), using the statistical software R. The frequencies of mortality and cannibalism were submitted to the chi-square test (software R). Mortality was lower with infrared debeaking in the breeding phase. In the rearing phase, mortality was similar between the debeaking techniques and cannibalism was absent. Final mean weight (g), mean weight gain (g), average daily weight gain in the rearing phase and egg quality variables were higher for V debeaking than for the other techniques. It is concluded that V debeaking provides better bird performance and greater shell resistance and thickness compared with infrared radiation and hot blade debeaking, in addition to subjecting the birds to less stress.

  3. A method for optimizing the performance of buildings

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Frank

    2006-07-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to energy consumption, economic aspects, and the indoor environment. The method is intended for supporting design decisions for buildings, by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building, such as its shape, the amount and type of windows used, and the amount of insulation used in the building envelope. The parties who influence design decisions for buildings, such as building owners, building users, architects, consulting engineers, contractors, etc., often have different and to some extent conflicting requirements for buildings. For instance, the building owner may be more concerned about the cost of constructing the building, rather than the quality of the indoor climate, which is more likely to be a concern of the building user. In order to support the different types of requirements made by decision-makers for buildings, an optimization problem is formulated, intended to represent a wide range of design decision problems for buildings. The problem formulation involves so-called performance measures, which can be calculated with simulation software for buildings. For instance, the annual amount of energy required by the building, the cost of constructing the building, and the annual number of hours where overheating occurs can be used as performance measures. The optimization problem enables the decision-makers to specify many different requirements on the decision variables, as well as on the performance of the building. Performance measures can, for instance, be required to assume their minimum or maximum value, or they can be subjected to upper or lower limits.
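
    The structure of such a problem can be illustrated with a toy example: minimize one performance measure subject to limits on the others. In the sketch below the three performance measures are hypothetical analytic stand-ins for the building simulation software the thesis couples to the optimizer.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical stand-ins for simulation-based performance measures,
        # with x = [window_area_m2, insulation_thickness_m]
        def annual_energy(x):      # kWh/year: falls with insulation, rises with glazing
            return 12000 - 9000 * x[1] + 35 * x[0]

        def construction_cost(x):  # EUR: the measure we minimize
            return 50000 + 400 * x[0] + 30000 * x[1]

        def overheating_hours(x):  # h/year: grows with window area
            return 20 + 4.0 * x[0] - 10 * x[1]

        res = minimize(
            construction_cost,
            x0=np.array([15.0, 0.2]),
            bounds=[(5.0, 40.0), (0.05, 0.5)],
            constraints=[
                {"type": "ineq", "fun": lambda x: 9000 - annual_energy(x)},    # energy <= 9000
                {"type": "ineq", "fun": lambda x: 100 - overheating_hours(x)}, # overheat <= 100 h
            ],
        )
        print(res.x, res.fun)  # cheapest design satisfying both performance limits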

  4. Total error components - isolation of laboratory variation from method performance

    International Nuclear Information System (INIS)

    Bottrell, D.; Bleyler, R.; Fisk, J.; Hiatt, M.

    1992-01-01

    The consideration of total error across sampling and analytical components of environmental measurements is relatively recent. The U.S. Environmental Protection Agency (EPA), through the Contract Laboratory Program (CLP), provides complete analyses and documented reports on approximately 70,000 samples per year. The quality assurance (QA) functions of the CLP procedures provide an ideal database, the CLP Automated Results Data Base (CARD), for evaluating program performance relative to quality control (QC) criteria and for evaluating the analysis of blind samples. Repetitive analyses of blind samples within each participating laboratory provide a mechanism to separate laboratory and method performance. Isolation of error sources is necessary to identify effective options, to establish performance expectations, and to improve procedures. In addition, optimized method performance is necessary to identify significant effects that result from the selection among alternative procedures in the data collection process (e.g., sampling device, storage container, mode of sample transit, etc.). This information is necessary to evaluate data quality, to understand overall quality, and to provide appropriate, cost-effective information required to support a specific decision

  5. Advanced non-destructive methods for an efficient service performance

    International Nuclear Information System (INIS)

    Rauschenbach, H.; Clossen-von Lanken Schulz, M.; Oberlin, R.

    2015-01-01

    Due to the power generation industry's desire to decrease outage time and extend inspection intervals for highly stressed turbine parts, advanced and reliable non-destructive methods were developed by the Siemens non-destructive laboratory. Effective outage performance requires the optimized planning of all outage activities as well as modern non-destructive examination methods, in order to examine the highly stressed components (turbine rotor, casings, valves, generator rotor) reliably and within short periods of access. This paper describes the experience of Siemens Energy with an ultrasonic phased array inspection technique for the inspection of radial-entry pinned turbine blade roots. The developed inspection technique allows the ultrasonic inspection of steam turbine blades without blade removal. Furthermore, advanced non-destructive examination methods for joint bolts are described, which offer a significant reduction of outage duration in comparison with conventional inspection techniques. (authors)

  6. Multipolar Ewald methods, 1: theory, accuracy, and performance.

    Science.gov (United States)

    Giese, Timothy J; Panteva, Maria T; Chen, Haoyuan; York, Darrin M

    2015-02-10

    The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment.

  7. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun

    2014-04-21

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.
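
    The kernel plus method itself is not reproduced here, but the underlying idea of a multivariate kernel power curve estimate, conditioning power on environmental covariates beyond wind speed, can be sketched with a plain Nadaraya-Watson estimator; the data and bandwidths below are synthetic.

        import numpy as np

        def kernel_power_curve(X, y, x0, h):
            # Nadaraya-Watson estimate of expected power at covariate point x0.
            # X: (n x d) environmental covariates (wind speed, temperature, ...)
            # y: (n,) measured power; h: (d,) bandwidth per covariate
            u = (X - x0) / h
            w = np.exp(-0.5 * (u ** 2).sum(axis=1))   # product Gaussian kernel
            return (w * y).sum() / w.sum()

        rng = np.random.default_rng(0)
        wind = rng.uniform(3, 15, 500)
        temp = rng.uniform(-5, 25, 500)
        power = 3000 / (1 + np.exp(-(wind - 9))) + 5 * (15 - temp) + rng.normal(0, 50, 500)
        X = np.column_stack([wind, temp])

        # Compare expected power at 10 m/s under two temperatures, wind held fixed
        print(kernel_power_curve(X, power, np.array([10.0, 0.0]),  h=np.array([1.0, 3.0])))
        print(kernel_power_curve(X, power, np.array([10.0, 20.0]), h=np.array([1.0, 3.0])))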

  8. Performance assessment plans and methods for the Salt Repository Project

    International Nuclear Information System (INIS)

    1984-08-01

    This document presents the preliminary plans and anticipated methods of the Salt Repository Project (SRP) for assessing the postclosure and radiological aspects of preclosure performance of a nuclear waste repository in salt. This plan is intended to be revised on an annual basis. The emphasis in this preliminary effort is on the method of conceptually dividing the system into three subsystems (the very near field, the near field, and the far field) and applying models to analyze the behavior of each subsystem and its individual components. The next revision will contain more detailed plans being developed as part of Site Characterization Plan (SCP) activities. After a brief system description, this plan presents the performance targets which have been established for nuclear waste repositories by regulatory agencies (Chapter 3). The SRP approach to modeling, including sensitivity and uncertainty techniques is then presented (Chapter 4). This is followed by a discussion of scenario analysis (Chapter 5), a presentation of preliminary data needs as anticipated by the SRP (Chapter 6), and a presentation of the SRP approach to postclosure assessment of the very near field, the near field, and the far field (Chapters 7, 8, and 9, respectively). Preclosure radiological assessment is discussed in Chapter 10. Chapter 11 presents the SRP approach to code verification and validation. Finally, the Appendix lists all computer codes anticipated for use in performance assessments. The list of codes will be updated as plans are revised

  9. A new method to evaluate human-robot system performance

    Science.gov (United States)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  10. Total System Performance Assessment - License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2003-12-08

    "Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach" provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach are responsive to the criteria set forth in the Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the "Yucca Mountain Review Plan" (YMRP), "Final Report" (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management in the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  11. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain, computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
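
    The control variate idea generalizes beyond the paper's elliptical model problem. A minimal, self-contained sketch on a toy integrand (an assumption for illustration, not the paper's PQI): the crude Monte Carlo estimate of E[f(U)] is corrected with a correlated control g(U) whose mean is known exactly.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10000
        u = rng.uniform(0.0, 1.0, n)

        f = np.exp(u)          # quantity of interest; true mean is e - 1
        g = u                  # control variate with known mean 0.5

        beta = np.cov(f, g)[0, 1] / np.var(g, ddof=1)   # estimated optimal coefficient
        cv_estimate = f.mean() - beta * (g.mean() - 0.5)

        print("crude MC:        ", f.mean())
        print("control variate: ", cv_estimate)  # noticeably closer to e - 1 ≈ 1.71828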

  12. Performance Assessment Method for a Forged Fingerprint Detection Algorithm

    Science.gov (United States)

    Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang

    The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.

  13. Experimental Methods for UAV Aerodynamic and Propulsion Performance Assessment

    Directory of Open Access Journals (Sweden)

    Stefan ANTON

    2015-06-01

    Full Text Available This paper presents an experimental method for assessing the performance and the propulsion power of a UAV at several flight points based on telemetry. The points at which the estimations are made are chosen based on several criteria, and the following parameters are measured: airspeed, time-to-climb, altitude and horizontal distance. With the estimated propulsion power and the known shaft motor power, the propeller efficiency is determined at several speed values. The shaft motor power was measured in the lab using the propeller as a brake. Many flights with the same UAV configuration were performed before extracting flight data, in order to reduce instrumental and statistical errors. This paper highlights both the methodology of processing the data and the validation of the theoretical results.
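
    As a rough illustration of the estimate described, the sketch below computes propulsive power over a telemetry segment from the climb rate and drag power, then propeller efficiency against the bench-measured shaft power; all values are hypothetical.

        G = 9.81  # m/s^2

        def climb_power(mass_kg, dh_m, dt_s, drag_n, airspeed_ms):
            # Propulsive power = rate of change of potential energy + drag power
            return mass_kg * G * dh_m / dt_s + drag_n * airspeed_ms

        # Hypothetical telemetry segment and bench data
        p_prop = climb_power(mass_kg=4.5, dh_m=60.0, dt_s=25.0, drag_n=3.2, airspeed_ms=16.0)
        p_shaft = 180.0   # W, measured in the lab with the propeller used as a brake
        print("propulsive power %.1f W, propeller efficiency %.2f" % (p_prop, p_prop / p_shaft))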

  14. Exploration of submarine wake and powering performance using CFD method

    International Nuclear Information System (INIS)

    Huizhi, Y.; Hongcui, S.; Nan, Z.; Renyou, Y.; Liangmei, Y.

    2005-01-01

    In response to the need for better designs in less time, Computational Fluid Dynamics (CFD) methods have become an integral part of ship design, especially in the earlier design phases. In this paper the FLUENT software was used to predict the wake character and powering performance of a submarine at model scale. By an effective combination of block topology, grid, turbulence model and validation, a simulation scheme was developed and applied to the prediction of multiple designs and optimizations of the earlier submarine design iterations. The incompressible RANS equations were solved with different turbulence models. To handle the block interface between the propeller and the submarine stern, sliding grids in multiple blocks were employed, and unstructured grids were used in the block around the propeller. Submarines with and without a stator and/or propeller were studied. The flow features, forces and powering performance at various conditions were calculated. The results were compared with experimental data, and good agreement was obtained. (author)

  15. Methods of performing downhole operations using orbital vibrator energy sources

    Science.gov (United States)

    Cole, Jack H.; Weinberg, David M.; Wilson, Dennis R.

    2004-02-17

    Methods of performing down hole operations in a wellbore. A vibrational source is positioned within a tubular member such that an annulus is formed between the vibrational source and an interior surface of the tubular member. A fluid medium, such as high bulk modulus drilling mud, is disposed within the annulus. The vibrational source forms a fluid coupling with the tubular member through the fluid medium to transfer vibrational energy to the tubular member. The vibrational energy may be used, for example, to free a stuck tubular, consolidate a cement slurry and/or detect voids within a cement slurry prior to the curing thereof.

  16. High-performance liquid chromatographic method for guanylhydrazone compounds.

    Science.gov (United States)

    Cerami, C; Zhang, X; Ulrich, P; Bianchi, M; Tracey, K J; Berger, B J

    1996-01-12

    A high-performance liquid chromatographic method has been developed for a series of aromatic guanylhydrazones that have demonstrated therapeutic potential as anti-inflammatory agents. The compounds were separated using octadecyl or diisopropyloctyl reversed-phase columns, with an acetonitrile gradient in water containing heptane sulfonate, tetramethylammonium chloride, and phosphoric acid. The method was used to reliably quantify levels of analyte as low as 785 ng/ml, and the detector response was linear to at least 50 micrograms/ml using a 100 microliters injection volume. The assay system was used to determine the basic pharmacokinetics of a lead compound, CNI-1493, from serum concentrations following a single intravenous injection in rats.

  17. Markov chain-based mass estimation method for loose part monitoring system and its performance

    Directory of Open Access Journals (Sweden)

    Sung-Hwan Shin

    2017-10-01

    Full Text Available A loose part monitoring system is used to identify unexpected loose parts in a nuclear reactor vessel or steam generator. Mass estimation of loose parts, one function of a loose part monitoring system, still requires a new method because of the high estimation error of conventional methods such as Hertz's impact theory and the frequency ratio method. The purpose of this study is to propose a mass estimation method using a Markov decision process and to compare its performance with that of an artificial neural network model proposed in a previous study. First, the extraction of feature vectors using the discrete cosine transform is explained. Second, Markov chains are designed with codebooks obtained from the feature vectors. A 1/8-scaled mockup of the reactor vessel for OPR1000 was employed, and all signals used were obtained by impacting its surface with several solid spherical masses. Next, the mass estimation performance of the proposed Markov model was compared with that of the artificial neural network model. Finally, the proposed Markov model was found to have a matching error below 20% in mass estimation, a performance similar to that of the artificial neural network model and considerably improved in comparison with the conventional methods.
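
    The pipeline described (DCT feature extraction, codebook quantization, Markov chain likelihood scoring) can be sketched as follows. Everything here, including the frame length, codebook size and stand-in signals, is a hypothetical illustration of the general technique, not the paper's implementation.

        import numpy as np
        from scipy.fft import dct
        from sklearn.cluster import KMeans

        def dct_symbol_sequence(signal, codebook, frame=64, n_coeff=8):
            # Frame the impact signal, keep the low-order DCT coefficients as
            # features, and quantize each frame to its nearest centroid (symbol).
            frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
            feats = dct(frames, norm="ortho")[:, :n_coeff]
            return codebook.predict(feats)

        def transition_matrix(symbols, k):
            # Maximum-likelihood Markov transition matrix with add-one smoothing
            T = np.ones((k, k))
            for a, b in zip(symbols[:-1], symbols[1:]):
                T[a, b] += 1
            return T / T.sum(axis=1, keepdims=True)

        def log_likelihood(symbols, T):
            return sum(np.log(T[a, b]) for a, b in zip(symbols[:-1], symbols[1:]))

        # Hypothetical training: one Markov model per reference mass class, then an
        # unknown impact is assigned to the class with the highest likelihood.
        rng = np.random.default_rng(2)
        k = 16
        train = {m: rng.normal(0, m, 4096) for m in (1.0, 2.0)}  # stand-in signals
        all_feats = dct(np.concatenate(list(train.values())).reshape(-1, 64),
                        norm="ortho")[:, :8]
        codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_feats)
        models = {m: transition_matrix(dct_symbol_sequence(s, codebook), k)
                  for m, s in train.items()}
        test_sym = dct_symbol_sequence(rng.normal(0, 2.0, 4096), codebook)
        print(max(models, key=lambda m: log_likelihood(test_sym, models[m])))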

  18. Quantifying Neonatal Sucking Performance: Promise of New Methods.

    Science.gov (United States)

    Capilouto, Gilson J; Cunningham, Tommy J; Mullineaux, David R; Tamilia, Eleonora; Papadelis, Christos; Giannone, Peter J

    2017-04-01

    Neonatal feeding has been traditionally understudied so guidelines and evidence-based support for common feeding practices are limited. A major contributing factor to the paucity of evidence-based practice in this area has been the lack of simple-to-use, low-cost tools for monitoring sucking performance. We describe new methods for quantifying neonatal sucking performance that hold significant clinical and research promise. We present early results from an ongoing study investigating neonatal sucking as a marker of risk for adverse neurodevelopmental outcomes. We include quantitative measures of sucking performance to better understand how movement variability evolves during skill acquisition. Results showed the coefficient of variation of suck duration was significantly different between preterm neonates at high risk for developmental concerns (HRPT) and preterm neonates at low risk for developmental concerns (LRPT). For HRPT, results indicated the coefficient of variation of suck smoothness increased from initial feeding to discharge and remained significantly greater than healthy full-term newborns (FT) at discharge. There was no significant difference in our measures between FT and LRPT at discharge. Our findings highlight the need to include neonatal sucking assessment as part of routine clinical care in order to capture the relative risk of adverse neurodevelopmental outcomes at discharge.

  19. A method for identifying hierarchical sub-networks / modules and weighting network links based on their similarity in sub-network / module affiliation

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2016-06-01

    Full Text Available Some networks, including biological networks, consist of hierarchical sub-networks / modules. Based on my previous study, in the present study a method for both identifying hierarchical sub-networks / modules and weighting network links is proposed. It is based on cluster analysis in which between-node similarity in sets of adjacency nodes is used. Two matrices, linkWeightMat and linkClusterIDs, are produced by the algorithm. Two links with both the same weight in linkWeightMat and the same cluster ID in linkClusterIDs belong to the same sub-network / module. Two links with the same weight in linkWeightMat but different cluster IDs in linkClusterIDs belong to two sub-networks / modules at the same hierarchical level. A link with a unique cluster ID in linkClusterIDs, however, does not belong to any sub-network / module. A sub-network / module with greater weight is a more connected sub-network / module. Matlab codes of the algorithm are presented.
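
    The paper supplies Matlab code; as a rough Python illustration of the core idea, the sketch below weights each link by the Jaccard similarity of its endpoints' adjacency sets, a simple stand-in for the between-node similarity used in the cluster analysis. The hierarchical clustering of these weights into modules is not reproduced.

        import numpy as np

        def link_weights_by_adjacency_similarity(A):
            # Weight each link (i, j) by the Jaccard similarity of the adjacency
            # sets of its two endpoint nodes.
            n = A.shape[0]
            W = np.zeros_like(A, dtype=float)
            for i in range(n):
                for j in range(i + 1, n):
                    if A[i, j]:
                        ni, nj = set(np.flatnonzero(A[i])), set(np.flatnonzero(A[j]))
                        W[i, j] = W[j, i] = len(ni & nj) / len(ni | nj)
            return W

        # Two triangles joined by a bridge: within-triangle links get higher
        # weight than the bridge link, hinting at the two modules.
        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]])
        print(link_weights_by_adjacency_similarity(A))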

  20. 'In situ' electronic testing method of a neutron detector performance

    International Nuclear Information System (INIS)

    Gonzalez, J.M.; Levai, F.

    1987-01-01

    The method allows detection of any important change in the electrical characteristics of a neutron sensor channel. It checks the response signal produced by an electronic detector circuit when a pulse generator is connected as the input signal to the high-voltage supply. The electronic circuit compares the detector capacitance value, previously measured, against a reference value adjusted in a window-type comparator circuit, in order to detect any significant degradation of the capacitance of the detector-cable system. The 'in situ' electronic testing method of neutron detector performance has been verified in a laboratory environment to be a potential method for detecting any significant change in the capacitance of a nuclear sensor and its connecting cable, while also checking for detector disconnections, cable disconnections, length changes of the connecting cable, electrical short or open circuits in the sensor channel, and any electrical fault in the detector-connector-cable system. The experiments were carried out by simulating several electrical changes in a nuclear sensor-cable system of a linear DC channel that measures reactor power during nuclear reactor operation, at the Training Reactor Electronic Laboratory. The results and conclusions obtained in the laboratory were verified satisfactorily on the electronic instrumentation of the Budapest Technical University Training Reactor, Hungary

  1. Performance of analytical methods for tomographic gamma scanning

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Mercer, D.J.

    1997-01-01

    The use of gamma-ray computerized tomography for nondestructive assay of radioactive materials has led to the development of specialized analytical methods. Over the past few years, Los Alamos has developed and implemented a computer code, called ARC-TGS, for the analysis of data obtained by tomographic gamma scanning (TGS). ARC-TGS reduces TGS transmission and emission tomographic data, providing the user with images of the sample contents, the activity or mass of selected radionuclides, and an estimate of the uncertainty in the measured quantities. The results provided by ARC-TGS can be corrected for self-attenuation when the isotope of interest emits more than one gamma-ray. In addition, ARC-TGS provides information needed to estimate TGS quantification limits and to estimate the scan time needed to screen for small amounts of radioactivity. In this report, an overview of the analytical methods used by ARC-TGS is presented along with an assessment of the performance of these methods for TGS

  2. Objective Method for Selecting Outdoor Reporting Conditions for Photovoltaic Performance

    International Nuclear Information System (INIS)

    Maish, A.

    1999-01-01

    Outdoor performance of photovoltaic modules and systems depends on prevailing conditions at the time of measurement. Outdoor test conditions must be relevant to device performance and readily attainable. Flat-plate, nonconcentrator PV device performance is reported with respect to fixed conditions referred to as Standard Reporting Conditions (SRC) of 1 kW/m² plane-of-array total irradiance, 25 °C device temperature, and a reference spectral distribution at air mass 1.5 under certain atmospheric conditions. We report a method of analyzing historical meteorological and irradiance data to determine the range of outdoor environmental parameters and solar irradiance components that affect solar collector performance when the SRC 1 kW/m² total irradiance value occurs outdoors. We used data from the 30 year U.S. National Solar Radiation Data Base (NSRDB), restricting irradiance conditions to within ±25 W/m² of 1 kW/m² on a solar tracking flat-plate collector. The distributions of environmental parameter values under these conditions are non-Gaussian and site dependent. Therefore the median, as opposed to the mean, of the observed distributions is chosen to represent appropriate outdoor reporting conditions. We found the average medians for the direct beam component (834 W/m²), ambient temperature (24.4 °C), total column water vapor (1.4 cm), and air mass (1.43) are near commonly used SRC values. Average median wind speed (4.4 m/s) and broadband aerosol optical depth (0.08) were significantly different from commonly used values

  3. EVALUATION OF ASSEMBLY LINE BALANCING METHODS USING AN ANALYTICAL HIERARCHY PROCESS (AHP AND TECHNIQUE FOR ORDER PREFERENCES BY SIMILARITY TO IDEAL SOLUTION (TOPSIS BASED APPROACH

    Directory of Open Access Journals (Sweden)

    Pallavi Sharma

    2013-12-01

    Full Text Available Assembly lines are special flow-line production systems of great importance in the industrial production of high-quantity standardized commodities. In this article, the assembly line balancing problem is formulated as a multi-objective (multi-criteria) problem in which four easily quantifiable objectives (criteria) are defined: line efficiency, balance delay, smoothness index, and line time. The values of these objectives are calculated by five different heuristics. This paper focuses on the prioritization of assembly line balancing (ALB) solution methods (heuristics) and the selection of the best among them. For this purpose, a benchmark assembly line balancing problem is solved by five different heuristics and the values of the objective criteria (performance measures) of the line are determined. Finally, the prioritization of the heuristics is carried out through an AHP-TOPSIS based approach by solving an example.
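
    The AHP weighting step that feeds the TOPSIS ranking can be sketched directly: criteria weights are the normalized principal eigenvector of a pairwise comparison matrix. The comparison values below are hypothetical, not the article's judgments.

        import numpy as np

        def ahp_weights(P):
            # Criteria weights = principal right eigenvector of the pairwise
            # comparison matrix, normalized to sum to 1.
            vals, vecs = np.linalg.eig(P)
            v = np.real(vecs[:, np.argmax(np.real(vals))])
            return v / v.sum()

        # Hypothetical pairwise comparisons of the four line criteria:
        # efficiency, balance delay, smoothness index, line time
        P = np.array([[1.0, 3.0, 5.0, 3.0],
                      [1/3, 1.0, 3.0, 1.0],
                      [1/5, 1/3, 1.0, 1/3],
                      [1/3, 1.0, 3.0, 1.0]])
        w = ahp_weights(P)
        print(np.round(w, 3))  # these weights would then feed the TOPSIS ranking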

  4. Combining multiple FDG-PET radiotherapy target segmentation methods to reduce the effect of variable performance of individual segmentation methods

    Energy Technology Data Exchange (ETDEWEB)

    McGurk, Ross J. [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Bowsher, James; Das, Shiva K. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Lee, John A [Molecular Imaging and Experimental Radiotherapy Unit, Universite Catholique de Louvain, 1200 Brussels (Belgium)

    2013-04-15

    Purpose: Many approaches have been proposed to segment high uptake objects in 18F-fluoro-deoxy-glucose positron emission tomography images but none provides consistent performance across the large variety of imaging situations. This study investigates the use of two methods of combining individual segmentation methods to reduce the impact of inconsistent performance of the individual methods: simple majority voting and probabilistic estimation. Methods: The National Electrical Manufacturers Association image quality phantom containing five glass spheres with diameters 13-37 mm and two irregularly shaped volumes (16 and 32 cc) formed by deforming high-density polyethylene bottles in a hot water bath were filled with 18-fluoro-deoxyglucose and iodine contrast agent. Repeated 5-min positron emission tomography (PET) images were acquired at 4:1 and 8:1 object-to-background contrasts for spherical objects and 4.5:1 and 9:1 for irregular objects. Five individual methods were used to segment each object: 40% thresholding, adaptive thresholding, k-means clustering, seeded region-growing, and a gradient based method. Volumes were combined using a majority vote (MJV) or Simultaneous Truth And Performance Level Estimate (STAPLE) method. Accuracy of segmentations relative to CT ground truth volumes were assessed using the Dice similarity coefficient (DSC) and the symmetric mean absolute surface distances (SMASDs). Results: MJV had median DSC values of 0.886 and 0.875; and SMASD of 0.52 and 0.71 mm for spheres and irregular shapes, respectively. STAPLE provided similar results with median DSC of 0.886 and 0.871; and median SMASD of 0.50 and 0.72 mm for spheres and irregular shapes, respectively. STAPLE had significantly higher DSC and lower SMASD values than MJV for spheres (DSC, p < 0.0001; SMASD, p= 0.0101) but MJV had significantly higher DSC and lower SMASD values compared to STAPLE for irregular shapes (DSC, p < 0.0001; SMASD, p= 0.0027). DSC was not significantly
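
    STAPLE is too involved to reproduce here, but the majority-vote combination and the Dice similarity coefficient used to score it can be sketched in a few lines; the masks below are synthetic stand-ins for the five methods' outputs.

        import numpy as np

        def majority_vote(masks):
            # A voxel is foreground when more than half of the segmentations agree
            stack = np.stack(masks)
            return (stack.sum(axis=0) > len(masks) / 2).astype(np.uint8)

        def dice(a, b):
            # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
            inter = np.logical_and(a, b).sum()
            return 2.0 * inter / (a.sum() + b.sum())

        # Hypothetical binary masks from five segmentation methods plus a truth mask
        rng = np.random.default_rng(3)
        truth = np.zeros((32, 32, 32), dtype=np.uint8)
        truth[10:22, 10:22, 10:22] = 1
        masks = [np.clip(truth + (rng.random(truth.shape) < 0.02), 0, 1)
                 for _ in range(5)]
        mjv = majority_vote(masks)
        print("DSC(MJV, truth) =", round(dice(mjv, truth), 3))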

  5. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method.

    Science.gov (United States)

    Chaharsooghi, S K; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in the research literature. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach in the supply chain, by presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution of the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis.

  6. Performance of various mathematical methods for calculation of radioimmunoassay results

    International Nuclear Information System (INIS)

    Sandel, P.; Vogt, W.

    1977-01-01

    Interpolation and regression methods are available for computer-aided determination of radioimmunological end results. We compared the performance of eight algorithms (weighted and unweighted linear logit-log regression, quadratic logit-log regression, Rodbard's logistic model in weighted and unweighted form, smoothing spline interpolation with a large and a small smoothing factor, and polygonal interpolation) on the basis of three radioimmunoassays with different reference curve characteristics (digoxin, estriol, human chorionic somatomammotropin = HCS). Great store was set by the accuracy of the approximation at the intermediate points on the curve, i.e., those points that lie midway between two standard concentrations. These concentrations were obtained by weighing and inserted as unknown samples. In the case of digoxin and estriol the polygonal interpolation provided the best results, while the weighted logit-log regression proved superior in the case of HCS. (orig.) [de]

  7. A rapid method of evaluating fluoroscopic system performance

    International Nuclear Information System (INIS)

    Sprawls, P.

    1989-01-01

    This paper presents a study to develop a method for the rapid evaluation and documentation of fluoroscopic image quality. All objects contained within a conventional contrast-detail test phantom (Leeds TO-10) are displayed in an array format according to their contrast and size. A copy of the display is used as the data collection form and as a permanent record of system performance. A fluoroscope is evaluated by viewing the test phantom and marking the visible objects on the display. A line drawn through the objects with minimum visibility in each size group forms a contrast-detail curve for the system. This is compared with a standard or reference line included in the display. Deviations in curve position are useful indicators of specific image quality problems, such as excessive noise or blurring. The use of the special object-visibility array display makes it possible to collect data, analyze the results, and create a record of fluoroscopic performance in less than 2 minutes for each viewing mode

  8. Mechanical/structural performance test method of a spacer grid

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho

    2000-06-01

    The spacer grid is one of the main structural components of a fuel assembly: it supports the fuel rods, guides the cooling water, and protects the system from external impact loads such as earthquakes. In order to develop a spacer grid with high mechanical performance, its mechanical and structural properties must be extensively examined during design. This report describes the mechanical/structural test methods, i.e. the characteristic test of a spacer grid spring or dimple, the static buckling test of a partial or full-size spacer grid, and the dynamic impact test of both. The characteristic test of a spacer grid spring or dimple is carried out on a universal tensile testing machine: a specimen is fixed in a test fixture, a compressive load is applied, and the characteristic data are recorded during the loading and unloading events. The static buckling test of a partial or full-size spacer grid is executed on the same universal tensile testing machine, with the specimen fixed between the cross-heads and loaded in compression; the buckling strength is taken as the maximum load on the load-displacement curve. The dynamic impact test of a partial or full-size spacer grid is performed with a pendulum-type impact machine or a free-fall shock test machine: the specimen is fixed in a test fixture and the impact load is applied by an impact hammer. In particular, the pendulum-type impact test machine can also operate at operating temperature because a furnace is attached to it

  9. Uncertainty evaluation methods for waste package performance assessment

    International Nuclear Information System (INIS)

    Wu, Y.T.; Nair, P.K.; Journel, A.G.; Abramson, L.R.

    1991-01-01

    This report identifies and investigates methodologies to deal with uncertainties in assessing high-level nuclear waste package performance. Four uncertainty evaluation methods (probability-distribution approach, bounding approach, expert judgment, and sensitivity analysis) are suggested as the elements of a methodology that, without either diminishing or enhancing the input uncertainties, can evaluate performance uncertainty. Such a methodology can also help identify critical inputs as a guide to reducing uncertainty so as to provide reasonable assurance that the risk objectives are met. This report examines the current qualitative waste containment regulation and shows how, in conjunction with the identified uncertainty evaluation methodology, a framework for a quantitative probability-based rule can be developed that takes account of the uncertainties. Current US Nuclear Regulatory Commission (NRC) regulation requires that the waste packages provide "substantially complete containment" (SCC) during the containment period. The term "SCC" is ambiguous and subject to interpretation. This report, together with an accompanying report that describes the technical considerations that must be addressed to satisfy high-level waste containment requirements, provides a basis for a third report to develop recommendations for regulatory uncertainty reduction in the "containment" requirement of 10 CFR Part 60. 25 refs., 3 figs., 2 tabs

  10. Physics methods for calculating light water reactor increased performances

    International Nuclear Information System (INIS)

    Vandenberg, C.; Charlier, A.

    1988-01-01

    The intensive use of light water reactors (LWRs) has induced modification of their characteristics and performances in order to improve fissile material utilization and to increase their availability and flexibility under operation. From the conceptual point of view, adequate methods must be used to calculate core characteristics, taking into account present design requirements, e.g., use of burnable poison, plutonium recycling, etc. From the operational point of view, nuclear plants that have been producing a large percentage of electricity in some countries must adapt their planning to the need of the electrical network and operate on a load-follow basis. Consequently, plant behavior must be predicted and accurately followed in order to improve the plant's capability within safety limits. The Belgonucleaire code system has been developed and extensively validated. It is an accurate, flexible, easily usable, fast-running tool for solving the problems related to LWR technology development. The methods and validation of the two computer codes LWR-WIMS and MICROLUX, which are the main components of the physics calculation system, are explained

  11. A new ART iterative method and a comparison of performance among various ART methods

    International Nuclear Information System (INIS)

    Tan, Yufeng; Sato, Shunsuke

    1993-01-01

    Many algebraic reconstruction technique (ART) image reconstruction algorithms, for instance the simultaneous iterative reconstruction technique (SIRT), the relaxation method and multiplicative ART (MART), have been proposed and their convergence properties have been studied. SIRT and the underrelaxed relaxation method converge to the least-squares solution, but their convergence speeds are very slow. The Kaczmarz method converges very quickly, but the reconstructed images contain a lot of noise. Comparative studies between these algorithms have been done by Gilbert and others, but are not adequate. In this paper, we (1) propose a new method, a modified Kaczmarz method, and prove its convergence property, and (2) study the performance of seven algorithms, including the one proposed here, by computer simulation for three kinds of typical phantoms. The method proposed here does not give the least-squares solution, but the root mean square errors of its reconstructed images decrease very quickly after a few iterations. The results show that the method proposed here gives a better reconstructed image. (author)
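
    The classic Kaczmarz update at the heart of ART is short enough to sketch; the paper's modified method is not reproduced, but the relaxation parameter below exposes the speed/noise trade-off it targets. The small system stands in for real projection data.

        import numpy as np

        def kaczmarz(A, b, n_sweeps=50, relax=1.0, x0=None):
            # Classic Kaczmarz / ART iteration: successively project the estimate
            # onto the hyperplane of each ray equation a_i . x = b_i.
            m, n = A.shape
            x = np.zeros(n) if x0 is None else x0.copy()
            for _ in range(n_sweeps):
                for i in range(m):
                    a = A[i]
                    x += relax * (b[i] - a @ x) / (a @ a) * a
            return x

        # Hypothetical 3x3 consistent system standing in for projection data
        A = np.array([[2.0, 1.0, 0.0],
                      [1.0, 3.0, 1.0],
                      [0.0, 1.0, 2.0]])
        x_true = np.array([1.0, 2.0, 3.0])
        print(kaczmarz(A, A @ x_true))  # converges to x_true
        # An underrelaxed sweep (relax < 1) trades speed for noise suppression,
        # which is the trade-off a modified Kaczmarz method must negotiate.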

  12. Information processing differences and similarities in adults with dyslexia and adults with Attention Deficit Hyperactivity Disorder during a Continuous Performance Test : A study of cortical potentials

    NARCIS (Netherlands)

    Dhar, Monica; Been, Pieter H.; Minderaa, Ruud B.; Althaus, Monika

    Twenty male adults with ADHD, 16 dyslexic adults, 15 comorbid adults, and 16 normal controls were compared on performance and underlying brain responses, during a cued Continuous Performance Test (O-X CPT), with the aim of discovering features of information processing differentiating between the

  13. Information Processing Differences and Similarities in Adults with Dyslexia and Adults with Attention Deficit Hyperactivity Disorder during a Continuous Performance Test: A Study of Cortical Potentials

    Science.gov (United States)

    Dhar, Monica; Been, Pieter H.; Minderaa, Ruud B.; Althaus, Monika

    2010-01-01

    Twenty male adults with ADHD, 16 dyslexic adults, 15 comorbid adults, and 16 normal controls were compared on performance and underlying brain responses, during a cued Continuous Performance Test (O-X CPT), with the aim of discovering features of information processing differentiating between the groups. The study evaluated both cue- and…

  14. Subjective perceptions, symptom intensity and performance: a comparison of two independent studies, both changing similarly the pollution load in an office

    DEFF Research Database (Denmark)

    Wargocki, Pawel; Lagercrantz, Love Per; Witterseh, Thomas

    2002-01-01

    , intensity of sick building syndrome symptoms and performance of office work. Removing the pollution source improved the perceived air quality, decreased the perceived dryness of air and the severity of headaches, and increased typing performance. These effects were observed separately in each experiment...

  15. Two Project Methods: Preliminary Observations on the Similarities and Differences between William Heard Kilpatrick's Project Method and John Dewey's Problem-Solving Method

    Science.gov (United States)

    Sutinen, Ari

    2013-01-01

    The project method became a famous teaching method when William Heard Kilpatrick published his article "Project Method" in 1918. The key idea in Kilpatrick's project method is to try to explain how pupils learn things when they work in projects toward different common objects. The same idea of pupils learning by work or action in an…

  16. Similarity-based pattern analysis and recognition

    CERN Document Server

    Pelillo, Marcello

    2013-01-01

    This accessible text/reference presents a coherent overview of the emerging field of non-Euclidean similarity learning. The book presents a broad range of perspectives on similarity-based pattern analysis and recognition methods, from purely theoretical challenges to practical, real-world applications. The coverage includes both supervised and unsupervised learning paradigms, as well as generative and discriminative models. Topics and features: explores the origination and causes of non-Euclidean (dis)similarity measures, and how they influence the performance of traditional classification alg

  17. Operation Performance Evaluation of Power Grid Enterprise Using a Hybrid BWM-TOPSIS Method

    Directory of Open Access Journals (Sweden)

    Peipei You

    2017-12-01

    Full Text Available Electricity market reform is in progress in China, and the operational performance of power grid enterprises is vital for their healthy and sustainable development in the current electricity market environment. In this paper, a hybrid multi-criteria decision-making (MCDM) framework for the operational performance evaluation of a power grid enterprise is proposed from the perspective of sustainability. The recently developed best-worst method (BWM) was employed to determine the weights of all criteria, and the technique for order preference by similarity to an ideal solution (TOPSIS) was applied to rank the operational performance of the power grid enterprises. The evaluation index system was built on the concept of sustainability and includes three criteria (economy, society, and environment) and seven sub-criteria. Four power grid enterprises were selected for the empirical analysis, and the results indicate that power grid enterprise A1 has the best operational performance. The proposed hybrid BWM-TOPSIS-based framework for the operational performance evaluation of a power grid enterprise is effective and practical.

  18. Staff Performance Analysis: A Method for Identifying Brigade Staff Tasks

    National Research Council Canada - National Science Library

    Ford, Laura

    1997-01-01

    ... members of conventional mounted brigade staff. Initial analysis of performance requirements in existing documentation revealed that the performance specifications were not sufficiently detailed for brigade battle staffs...

  19. Two methods to simulate intrapulpal pressure: effects upon bonding performance of self-etch adhesives.

    Science.gov (United States)

    Feitosa, V P; Gotti, V B; Grohmann, C V; Abuná, G; Correr-Sobrinho, L; Sinhoreti, M A C; Correr, A B

    2014-09-01

    To evaluate the effects of two methods of simulating physiological pulpal pressure on the dentine bonding performance of two all-in-one adhesives and a two-step self-etch silorane-based adhesive, by means of microtensile bond strength (μTBS) and nanoleakage surveys. The self-etch adhesives [G-Bond Plus (GB), Adper Easy Bond (EB) and silorane adhesive (SIL)] were applied to flat deep dentine surfaces from extracted human molars. The restorations were constructed using the resin composites Filtek Silorane or Filtek Z350 (3M ESPE). After 24 h under the two methods of simulated pulpal pressure or no pulpal pressure (control groups), the bonded teeth were cut into specimens and submitted to μTBS testing and silver uptake examination. Results were analysed with two-way ANOVA and Tukey's test (P < 0.05) for all adhesives. No difference between control and pulpal pressure groups was found for SIL and GB. EB showed a significant drop (P = 0.002) in bond strength under pulpal pressure. Silver impregnation increased after both methods of simulated pulpal pressure for all adhesives, and it was similar between the simulated pulpal pressure methods. The innovative method of simulating pulpal pressure behaved similarly to the classic one and could be used as an alternative. The HEMA-free one-step and the two-step self-etch adhesives had acceptable resistance against pulpal pressure, unlike the HEMA-rich adhesive. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  20. A general method for assessing brain-computer interface performance and its limitations

    Science.gov (United States)

    Hill, N. Jeremy; Häuser, Ann-Katrin; Schalk, Gerwin

    2014-04-01

    Objective. When researchers evaluate brain-computer interface (BCI) systems, we want quantitative answers to questions such as: How good is the system’s performance? How good does it need to be? and: Is it capable of reaching the desired level in future? In response to the current lack of objective, quantitative, study-independent approaches, we introduce methods that help to address such questions. We identified three challenges: (I) the need for efficient measurement techniques that adapt rapidly and reliably to capture a wide range of performance levels; (II) the need to express results in a way that allows comparison between similar but non-identical tasks; (III) the need to measure the extent to which certain components of a BCI system (e.g. the signal processing pipeline) not only support BCI performance, but also potentially restrict the maximum level it can reach. Approach. For challenge (I), we developed an automatic staircase method that adjusted task difficulty adaptively along a single abstract axis. For challenge (II), we used the rate of information gain between two Bernoulli distributions: one reflecting the observed success rate, the other reflecting chance performance estimated by a matched random-walk method. This measure includes Wolpaw’s information transfer rate as a special case, but addresses the latter’s limitations including its restriction to item-selection tasks. To validate our approach and address challenge (III), we compared four healthy subjects’ performance using an EEG-based BCI, a ‘Direct Controller’ (a high-performance hardware input device), and a ‘Pseudo-BCI Controller’ (the same input device, but with control signals processed by the BCI signal processing pipeline). Main results. Our results confirm the repeatability and validity of our measures, and indicate that our BCI signal processing pipeline reduced attainable performance by about 33% (21 bits min⁻¹). Significance. Our approach provides a flexible basis
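
    As a point of reference for the measures discussed, Wolpaw's information transfer rate for an N-choice selection task can be computed directly; the numbers in the example below are illustrative only.

        import math

        def wolpaw_itr(n_choices, accuracy, selections_per_min):
            # Bits per selection for an N-target task with success probability P,
            # assuming errors are spread evenly over the remaining targets.
            n, p = n_choices, accuracy
            bits = math.log2(n)
            if 0 < p < 1:
                bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
            return bits * selections_per_min  # bits per minute

        # e.g. a 4-target task at 90% accuracy and 12 selections per minute
        print(wolpaw_itr(4, 0.90, 12))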

  1. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    Science.gov (United States)

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
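
    The phenomenon is easy to reproduce. The sketch below uses plain cross-validated Lasso from scikit-learn (the paper studies SCAD and the Adaptive Lasso, but the qualitative behavior is similar) on weak, sparse synthetic signals, and prints the selected-variable count across repeated 10-fold splits.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(4)
        n, p = 200, 100
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:5] = 0.25                    # sparse but weak signals, as in SNP data
        y = X @ beta + rng.normal(size=n)

        counts = []
        for _ in range(10):
            perm = rng.permutation(n)      # reshuffle rows so the 10 folds differ
            model = LassoCV(cv=10).fit(X[perm], y[perm])
            counts.append(int(np.count_nonzero(model.coef_)))
        print(counts)  # the selected-variable count varies noticeably run to run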

  2. Unveiling Music Structure Via PLSA Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Meng, Anders; Petersen, Kaare Brandt

    2007-01-01

    Nowadays there is an increasing interest in developing methods for building music recommendation systems. In order to get a satisfactory performance from such a system, one needs to incorporate as much information about song similarity as possible; however, how to do so is not obvious. In this p...... observed similarities can be satisfactorily explained using the latent semantics. Additionally, this approach significantly simplifies the song retrieval phase, leading to a more practical system implementation. The suitability of the PLSA model for representing music structure is studied in a simplified...

  3. A Comparison of the Developmental Experiences of Elite and Sub-Elite Swimmers: Similar Developmental Histories Can Lead to Differences in Performance Level

    Science.gov (United States)

    Johnson, Michael B.; Tenenbaum, Gershon; Edmonds, William A.; Castillo, Yvonne

    2008-01-01

    The current study fills a void in the literature that investigates the factors required for elite athlete development. Previous studies have (a) illustrated psychological and physiological differences between elites and non-elites; "or" (b) described the psychological and physiological developmental experiences of elite performers. The…

  4. Stability over Time of Different Methods of Estimating School Performance

    Science.gov (United States)

    Dumay, Xavier; Coe, Rob; Anumendem, Dickson Nkafu

    2014-01-01

    This paper aims to investigate how stability varies with the approach used in estimating school performance in a large sample of English primary schools. The results show that (a) raw performance is considerably more stable than adjusted performance, which in turn is slightly more stable than growth model estimates; (b) schools' performance…

  5. Performance Analysis of Cyber Security Awareness Delivery Methods

    Science.gov (United States)

    Abawajy, Jemal; Kim, Tai-Hoon

    In order to decrease information security threats caused by human-related vulnerabilities, an increased concentration on information security awareness and training is necessary. There are numerous information security awareness training delivery methods. The purpose of this study was to determine which delivery method is most successful in providing security awareness training. We conducted security awareness training using various delivery methods, such as text based, game based and a short video presentation, with the aim of determining user-preferred delivery methods. Our study suggests that combined delivery methods are better than any individual security awareness delivery method.

  6. Age-Related Changes in Locomotor Performance Reveal a Similar Pattern for Caenorhabditis elegans, Mus domesticus, Canis familiaris, Equus caballus, and Homo sapiens.

    Science.gov (United States)

    Marck, Adrien; Berthelot, Geoffroy; Foulonneau, Vincent; Marc, Andy; Antero-Jacquemin, Juliana; Noirez, Philippe; Bronikowski, Anne M; Morgan, Theodore J; Garland, Theodore; Carter, Patrick A; Hersen, Pascal; Di Meglio, Jean-Marc; Toussaint, Jean-François

    2017-04-01

    Locomotion is one of the major physiological functions for most animals. Previous studies have described aging mechanisms linked to locomotor performance among different species. However, the precise dynamics of these age-related changes, and their interactions with development and senescence, are largely unknown. Here, we use the same conceptual framework to describe locomotor performances in Caenorhabditis elegans, Mus domesticus, Canis familiaris, Equus caballus, and Homo sapiens. We show that locomotion is a consistent biomarker of age-related changes, with an asymmetrical pattern throughout life, regardless of the type of effort or its duration. However, there is variation (i) among species for the same mode of locomotion, (ii) within species for different modes of locomotion, and (iii) among individuals of the same species for the same mode of locomotion. Age-related patterns are modulated by genetic (such as selective breeding) as well as environmental conditions (such as temperature). However, in all cases, the intersection of the rising developmental phase and the declining senescent phase reveals neither a sharp transition nor a plateau, but a smooth transition, emphasizing a crucial moment: the age at peak performance. This transition may define a specific target for future investigations on the dynamics of such biological interactions.

  7. Computational hydrodynamic comparison of a mini vessel and a USP 2 dissolution testing system to predict the dynamic operating conditions for similarity of dissolution performance.

    Science.gov (United States)

    Wang, Bing; Bredael, Gerard; Armenante, Piero M

    2018-03-25

    The hydrodynamic characteristics of a mini vessel and a USP 2 dissolution testing system were obtained and compared to predict the tablet-liquid mass transfer coefficient from velocity distributions near the tablet and establish the dynamic operating conditions under which dissolution in mini vessels could be conducted to generate concentration profiles similar to those in the USP 2. Velocity profiles were obtained experimentally using Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) was used to predict the velocity distribution and strain rate around a model tablet. A CFD-based mass transfer model was also developed. When plotted against strain rate, the predicted tablet-liquid mass transfer coefficient was found to be independent of the system where it was obtained, implying that a tablet would dissolve at the same rate in both systems provided that the concentration gradient between the tablet surface and the bulk is the same, the tablet surface area per unit liquid volume is identical, and the two systems are operated at the appropriate agitation speeds specified in this work. The results of this work will help dissolution scientists operate mini vessels so as to predict the dissolution profiles in the USP 2, especially during the early stages of drug development.

  8. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the …

  9. Similar reliability and equivalent performance of female and male mice in the open field and water-maze place navigation task.

    Science.gov (United States)

    Fritz, Ann-Kristina; Amrein, Irmgard; Wolfer, David P

    2017-09-01

    Although most nervous system diseases affect women and men differentially, most behavioral studies using mouse models do not include subjects of both sexes. Many researchers worry that data of female mice may be unreliable due to the estrous cycle. Here, we retrospectively evaluated sex effects on coefficient of variation (CV) in 5,311 mice which had performed the same place navigation protocol in the water-maze and in 4,554 mice tested in the same open field arena. Confidence intervals for Cohen's d as measure of effect size were computed and tested for equivalence with 0.2 as equivalence margin. Despite the large sample size, only few behavioral parameters showed a significant sex effect on CV. Confidence intervals of effect size indicated that CV was either equivalent or showed a small sex difference at most, accounting for less than 2% of total group to group variation of CV. While female mice were potentially slightly more variable in water-maze acquisition and in the open field, males tended to perform less reliably in the water-maze probe trial. In addition to evaluating variability, we also directly compared mean performance of female and male mice and found them to be equivalent in both water-maze place navigation and open field exploration. Our data confirm and extend other large scale studies in demonstrating that including female mice in experiments does not cause a relevant increase of data variability. Our results make a strong case for including mice of both sexes whenever open field or water-maze are used in preclinical research.

  10. Similar reliability and equivalent performance of female and male mice in the open field and water‐maze place navigation task

    Science.gov (United States)

    Fritz, Ann‐Kristina; Amrein, Irmgard

    2017-01-01

    Although most nervous system diseases affect women and men differentially, most behavioral studies using mouse models do not include subjects of both sexes. Many researchers worry that data of female mice may be unreliable due to the estrous cycle. Here, we retrospectively evaluated sex effects on coefficient of variation (CV) in 5,311 mice which had performed the same place navigation protocol in the water‐maze and in 4,554 mice tested in the same open field arena. Confidence intervals for Cohen's d as measure of effect size were computed and tested for equivalence with 0.2 as equivalence margin. Despite the large sample size, only few behavioral parameters showed a significant sex effect on CV. Confidence intervals of effect size indicated that CV was either equivalent or showed a small sex difference at most, accounting for less than 2% of total group to group variation of CV. While female mice were potentially slightly more variable in water‐maze acquisition and in the open field, males tended to perform less reliably in the water‐maze probe trial. In addition to evaluating variability, we also directly compared mean performance of female and male mice and found them to be equivalent in both water‐maze place navigation and open field exploration. Our data confirm and extend other large scale studies in demonstrating that including female mice in experiments does not cause a relevant increase of data variability. Our results make a strong case for including mice of both sexes whenever open field or water‐maze are used in preclinical research. PMID:28654717

  11. Similar reliability and equivalent performance of female and male mice in the open field and water-maze place navigation task

    OpenAIRE

    Fritz, Ann-Kristina; Amrein, Irmgard; Wolfer, David P.

    2017-01-01

    Although most nervous system diseases affect women and men differentially, most behavioral studies using mouse models do not include subjects of both sexes. Many researchers worry that data of female mice may be unreliable due to the estrous cycle. Here, we retrospectively evaluated sex effects on coefficient of variation (CV) in 5,311 mice which had performed the same place navigation protocol in the water-maze and in 4,554 mice tested in the same open field arena. Confidence intervals for C...

  12. Prediction of polymer flooding performance using an analytical method

    International Nuclear Information System (INIS)

    Tan Czek Hoong; Mariyamni Awang; Foo Kok Wai

    2001-01-01

    The study investigated the applicability of an analytical method developed by El-Khatib to polymer flooding. Results from the simulator UTCHEM and from experiments were compared with the El-Khatib prediction method. In general, by assuming a constant-viscosity polymer injection, the method gave much higher recovery values than the simulation runs and the experiments. A modification of the method gave better correlation, albeit only for oil production. Investigation is continuing on modifying the method so that a better overall fit can be obtained for polymer flooding. (Author)

  13. Analysis of ECT Synchronization Performance Based on Different Interpolation Methods

    Directory of Open Access Journals (Sweden)

    Yang Zhixin

    2014-01-01

    There are two synchronization methods for electronic transformers in the IEC 60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, the data synchronization of electronic transformers can be realized by using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation and so on. In this paper, the influences of piecewise linear interpolation, quadratic interpolation and cubic spline interpolation on the data synchronization of electronic transformers are computed; then the computational complexity, the synchronization precision, the reliability, and the application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
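
    For illustration, a small sketch of the comparison described above, using SciPy's interpolators on an assumed 50 Hz waveform; the signal, sampling grid and error metric are invented here, not taken from the paper.

      import numpy as np
      from scipy.interpolate import CubicSpline, interp1d

      t = np.linspace(0.0, 0.02, 21)        # one 50 Hz cycle, 21 samples
      x = np.sin(2 * np.pi * 50 * t)        # sampled transformer signal
      t_sync = np.linspace(0.0, 0.02, 201)  # resampling grid for synchronization

      linear = interp1d(t, x, kind="linear")(t_sync)
      quadratic = interp1d(t, x, kind="quadratic")(t_sync)
      cubic = CubicSpline(t, x)(t_sync)

      truth = np.sin(2 * np.pi * 50 * t_sync)
      for name, est in [("linear", linear), ("quadratic", quadratic),
                        ("cubic spline", cubic)]:
          print(f"{name:12s} max error: {np.max(np.abs(est - truth)):.2e}")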

  14. Determination of Settling Tanks Performance Using an Eulerian- Lagrangian Method

    OpenAIRE

    A Tamayol; B Firoozabadi; G Ahmadi

    2008-01-01

    Circulation regions always exist in settling tanks. These regions reduce the tank’s performance and decrease its effective volume. The recirculation zones would result in short-circuiting and high flow mixing problems. The inlet position would also affect the size and location of the recirculation region. Using a proper baffle configuration could substantially increase the performance of the settling tanks. A common procedure for the comparison of the performances of diffe...

  15. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
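
    As a hedged illustration of the simplest form such an internode model can take (not the paper's validated model), the classic latency-bandwidth estimate charges each message a fixed latency plus a size-dependent transfer time; all parameter values below are invented.

      alpha = 2.0e-6   # per-message latency, s (assumed)
      beta = 10.0e9    # link bandwidth, bytes/s (assumed)

      def comm_time(n_messages, message_bytes):
          """Estimated communication time for one exchange phase."""
          return n_messages * (alpha + message_bytes / beta)

      # e.g. 64 neighbor exchanges of 1 MiB each at one FMM tree level
      print(comm_time(64, 1 << 20))   # ~6.8e-3 s with these assumed parameters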

  16. A Mixed-Method Approach on Digital Educational Games for K12: Gender, Attitudes and Performance

    Science.gov (United States)

    Law, Effie Lai-Chong; Gamble, Tim; Schwarz, Daniel; Kickmeier-Rust, Michael D.; Holzinger, Andreas

    Research on the influence of gender on attitudes towards and performance in digital educational games (DEGs) has quite a long history. Generally, males tend to play such games more engagingly than females, consequently attitude and performance of males using DEGs should be presumably higher than that of females. This paper reports an investigation of a DEG, which was developed to enhance the acquisition of geographical knowledge, carried out on British, German and Austrian K12 students aged between 11 and 14. Methods include a survey on initial design concepts, user tests on the system and two single-gender focus groups. Gender and cultural differences in gameplay habit, game type preferences and game character perceptions were observed. The results showed that both genders similarly improved their geographical knowledge, although boys tended to have a higher level of positive user experience than the girls. The qualitative data from the focus groups illustrated some interesting gender differences in perceiving various aspects of the game.

  17. Development of an analytical microbial consortia method for enhancing performance monitoring at aerobic wastewater treatment plants.

    Science.gov (United States)

    Razban, Behrooz; Nelson, Kristina Y; McMartin, Dena W; Cullimore, D Roy; Wall, Michelle; Wang, Dunling

    2012-01-01

    An analytical method to produce profiles of bacterial biomass fatty acid methyl esters (FAME) was developed employing rapid agitation followed by static incubation (RASI) using selective media of wastewater microbial communities. The results were compiled to produce a unique library for comparison and performance analysis at a Wastewater Treatment Plant (WWTP). A total of 146 samples from the aerated WWTP, comprising 73 samples each of secondary and tertiary effluent, were analyzed. For comparison purposes, all samples were evaluated via a similarity index (SI), with secondary effluents producing an SI of 0.88 with 2.7% variation and tertiary samples producing an SI of 0.86 with 5.0% variation. The results also highlighted significant differences between the fatty acid profiles of the tertiary and secondary effluents, indicating considerable shifts in the bacterial community profile between these treatment phases. The WWTP performance results using this method were highly replicable and reproducible, indicating that the protocol has potential as a performance-monitoring tool for aerated WWTPs. The results quickly and accurately reflect shifts in dominant bacterial communities that result when process operations and performance change.

  18. The strategic selecting criteria and performance by using the multiple criteria method

    Directory of Open Access Journals (Sweden)

    Lisa Y. Chen

    2008-02-01

    As competitive intensity increases in the current service market, organizational capabilities have been recognized as important for sustaining competitive advantage. Profitable growth has fueled a need for firms to systematically assess and renew the organization. The purpose of this study is to analyze the financial performance of firms in order to create an effective evaluation structure for Taiwan's service industry. This study utilized the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method to evaluate the operating performance of 12 companies. TOPSIS is a multiple criteria decision making method that identifies solutions from a finite set of alternatives based upon simultaneous minimization of distance from an ideal point and maximization of distance from a nadir point. Using this approach, this study measures the financial performance of firms through two aspects and ten indicators. The result indicated that e-life had outstanding performance among the 12 retailers. The findings of this study help managers to better understand their market position, competition, and profitability for future strategic planning and operational management.
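
    A minimal sketch of the TOPSIS computation described above, with invented data (three firms, three benefit criteria, assumed weights); a real application would include cost criteria and the paper's ten indicators.

      import numpy as np

      X = np.array([[3.2, 0.8, 12.0],     # rows: firms, columns: benefit criteria
                    [2.9, 1.1, 10.5],
                    [3.8, 0.6, 14.2]])
      w = np.array([0.5, 0.2, 0.3])       # criterion weights, sum to 1

      V = w * X / np.linalg.norm(X, axis=0)       # weighted, vector-normalized
      ideal, nadir = V.max(axis=0), V.min(axis=0) # ideal and nadir points

      d_plus = np.linalg.norm(V - ideal, axis=1)  # distance from ideal point
      d_minus = np.linalg.norm(V - nadir, axis=1) # distance from nadir point
      closeness = d_minus / (d_plus + d_minus)    # rank by descending value
      print(closeness)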

  19. Method of performing a layer operation in a communications network

    NARCIS (Netherlands)

    Bosch, H.G.P.; Mullender, Sape J.; Narlikar, G.J.

    2006-01-01

    In an embodiment of the present invention, a scheduling operation is performed at a lower layer based on upper layer information. In another embodiment of the present invention, an action is performed at an upper layer based on received lower layer information. Also, a scheduling operation may be

  20. Evaluation method for acoustic trapping performance by tracking motion of trapped microparticle

    Science.gov (United States)

    Lim, Hae Gyun; Ham Kim, Hyung; Yoon, Changhan

    2018-05-01

    We report a method to evaluate the performance of a single-beam acoustic tweezer using a high-frequency ultrasound transducer. The motion of a microparticle trapped by a 45-MHz single-element transducer was captured and analyzed to deduce the magnitude of the trapping force. In the proposed method, the motion of a trapped microparticle is analyzed from a series of microscopy images to compute the trapping force; thus, no additional equipment such as microfluidics is required. The method could be used to estimate the effective trapping force in an acoustic tweezer experiment to assess cell membrane deformability, by attaching a microbead to the surface of a cell and tracking the motion of the trapped bead, similar to a bead-based assay that uses optical tweezers. The results showed that the trapping force increased with increasing acoustic intensity and duty factor, but the force eventually reached a plateau at higher acoustic intensities. This demonstrates that the method could be used as a simple tool to evaluate the performance and optimize the operating conditions of acoustic tweezers.
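
    A sketch of the force estimate that such tracking enables, assuming the bead moves slowly enough that Stokes drag balances the trapping force; the bead radius, viscosity, frame rate and tracked positions below are all invented, not the paper's data.

      import numpy as np

      eta = 1.0e-3   # water viscosity, Pa*s (assumed)
      r = 5.0e-6     # bead radius, m (assumed 10 um bead)
      fps = 100.0    # camera frame rate, frames/s (assumed)

      # Hypothetical tracked bead positions (m), one per microscopy frame.
      x = np.array([0.0, 1.2e-6, 2.2e-6, 3.0e-6, 3.5e-6])
      v = np.diff(x) * fps                 # frame-to-frame speed, m/s
      force = 6 * np.pi * eta * r * v      # Stokes drag, N
      print(force)                         # on the order of tens of piconewtons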

  1. Method to fabricate high performance tubular solid oxide fuel cells

    Science.gov (United States)

    Chen, Fanglin; Yang, Chenghao; Jin, Chao

    2013-06-18

    In accordance with the present disclosure, a method for fabricating a solid oxide fuel cell is described. The method includes forming an asymmetric porous ceramic tube by using a phase inversion process. The method further includes forming an asymmetric porous ceramic layer on a surface of the asymmetric porous ceramic tube by using a phase inversion process. The tube is co-sintered to form a structure having a first porous layer, a second porous layer, and a dense layer positioned therebetween.

  2. Performance Analysis and Application of Three Different Computational Methods for Solar Heating System with Seasonal Water Tank Heat Storage

    Directory of Open Access Journals (Sweden)

    Dongliang Sun

    2013-01-01

    We analyze and compare three different computational methods for a solar heating system with seasonal water tank heat storage (SHS-SWTHS). These methods are the accurate numerical method, the temperature stratification method, and the uniform temperature method. The accurate numerical method can accurately predict the performance of the system, but it takes about 4 to 5 weeks, which is too long to be practical for the performance analysis of this system. The temperature stratification method obtains relatively accurate results in a relatively short computation time, about 2 to 3 hours; it is therefore the most suitable for the performance analysis of this system. The deviation of the computational results of the uniform temperature method is large, and the time consumed is similar to that of the temperature stratification method, so this method is not recommended here. Based on the above analyses, the temperature stratification method is applied to analyze the influence of the embedded depth of the water tank, the thickness of the thermal insulation material, and the collection area on the performance of this system. The results will provide a design basis for the related demonstration projects.

  3. Multicore Performance of Block Algebraic Iterative Reconstruction Methods

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik B.; Hansen, Per Christian

    2014-01-01

    Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely on semiconvergence. Block versions of these methods, based on a partitioning of the linear system, are able to combine the fast semiconvergence of ART with the better multicore properties of SIRT. These block methods separate into two classes: those that, in each iteration, access the blocks in a sequential manner … a fixed relaxation parameter in each method, namely, the one that leads to the fastest semiconvergence. Computational results show that for multicore computers, the sequential approach is preferable.
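
    For reference, a toy sketch of the basic ART (Kaczmarz) sweep that the block methods build on: each step projects the current iterate onto one row's hyperplane, with a relaxation parameter controlling semiconvergence. The system below is a made-up consistent example, not a tomography problem.

      import numpy as np

      def art_sweep(A, b, x, relax=1.0):
          """One ART sweep: project x onto each row's hyperplane in turn."""
          for i in range(A.shape[0]):
              a = A[i]
              x = x + relax * (b[i] - a @ x) / (a @ a) * a
          return x

      A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]])
      b = A @ np.array([2.0, -1.0])        # consistent right-hand side
      x = np.zeros(2)
      for _ in range(50):
          x = art_sweep(A, b, x, relax=0.8)
      print(x)                             # converges toward [2, -1]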

  4. Performance Measures for Public Participation Methods : Final Report

    Science.gov (United States)

    2018-01-01

    Public engagement is an important part of transportation project development, but measuring its effectiveness is typically piecemeal. Performance measurement, described by the Urban Institute as the measurement on a regular basis of the results (o…

  5. Modeling the Performance of Fast Mulipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-01-01

    In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. Our ultimate aim in this thesis …

  6. Instrumental performance of an etude after three methods of practice.

    Science.gov (United States)

    Vanden Ark, S

    1997-12-01

    For 80 fifth-grade students, three practice conditions (mental, mental with physical simulation, and physical with singing) produced significant mean differences in instrumental performance of an etude. No significant differences were found for traditional, physical practice.

  7. EVALUATION OF SERVICE SUPPLY CHAIN PERFORMANCE CRITERIA WITH DANP METHOD

    OpenAIRE

    ÖZVERİ, Onur; GÜÇLÜ, Pembe; AYCİN, Ejder

    2018-01-01

    Although the service industry composes a large part of the world economy, academic studies and applications on supply chains are mainly about the production industry. Because of the different structure of services, the service supply chain, and also its performance criteria and metrics, differ from the product supply chain. The aim of this paper is to evaluate supply chain performance metrics for the restaurant sector. For this purpose, in the first and second parts of the paper, the service supply chain conc…

  8. Statistical modelling and RCS detrending methods provide similar estimates of long-term trend in radial growth of common beech in north-eastern France

    OpenAIRE

    Bontemps , Jean-Daniel; Esper , Jan

    2011-01-01

    Dendrochronological methods have greatly contributed to the documentation of past long-term trends in forest growth. These methods primarily focus on the high-frequency signals of tree ring chronologies. They require the removal of the ageing trend in tree growth, known as 'standardisation' or 'detrending', as a prerequisite to the estimation of such trends. Because the approach is sequential, it may however absorb part of the low-frequency historical signal. In this s…

  9. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however … “latent semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation.

  10. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Parrado-Hernandez, Emilio; Meng, Anders

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however … “latent semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation.

  11. Developing Methods of praxeology to Perform Document-analysis

    DEFF Research Database (Denmark)

    Frederiksen, Jesper

    2016-01-01

    This paper provides a contribution to the methodological development of praxeologic document analysis of neoliberal welfare state policies. Different institutions related to the Danish healthcare area transform international health policies, and these institutions produce a range of strategies … is possible. The different works are unique but at the same time part of a common neoliberal welfare state practice. They have a structural similarity as homologous strategies related to an institutional production field of Health- and Social care service. From the construction of these strategies, it is thus possible to discuss more overall consequences of the neoliberal policies and their impact on nurses and their position as a health profession.

  12. Similar or different?

    DEFF Research Database (Denmark)

    Cornér, Solveig; Pyhältö, Kirsi; Peltonen, Jouni

    2018-01-01

    Previous research has identified researcher community and supervisory support as key determinants of the doctoral journey contributing to students’ persistence and robustness. However, we still know little about cross-cultural variation in the researcher community and supervisory support experienced by PhD students within the same discipline. This study explores the support experiences of 381 PhD students within the humanities and social sciences from three research-intensive universities in Denmark (n=145) and Finland (n=236). A mixed methods design was utilized. The data were collected … The results indicated that the only form of support in which the students expressed more matched support than mismatched support was informational support. Further investigation showed that the Danish students reported a higher level of mismatch in emotional support than their Finnish counterparts, whereas the Finnish students perceived lower levels of instrumental support than the Danish students. The findings imply that seemingly similar contexts hold valid differences in experienced social support and educational strategies at the PhD level.

  13. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun; Ding, Yu; Xie, Le; Genton, Marc G.

    2014-01-01

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade

  14. Performance of spectral fitting methods for vegetation fluorescence quantification

    NARCIS (Netherlands)

    Meroni, M.; Busetto, D.; Colombo, R.; Guanter, L.; Moreno, J.; Verhoef, W.

    2010-01-01

    The Fraunhofer Line Discriminator (FLD) principle has long been considered as the reference method to quantify solar-induced chlorophyll fluorescence (F) from passive remote sensing measurements. Recently, alternative retrieval algorithms based on the spectral fitting of hyperspectral radiance

  15. Performance of density functional theory methods to describe ...

    Indian Academy of Sciences (India)

    Unknown

    Chemical compounds present different types of isomerism. When two isomers differ by … of DFT methods to describe intramolecular hydrogen shifts. Three small … qualitative descriptions of intramolecular hydrogen shifts when large basis …

  16. MIMO Terminal Performance Evaluation with a Novel Wireless Cable Method

    DEFF Research Database (Denmark)

    Fan, Wei; Kyösti, Pekka; Hentilä, Lassi

    2017-01-01

    … chamber, which might be impractical and expensive. In this paper, a novel wireless cable method is proposed and experimentally validated. By recording the average power (i.e. reference signal received power (RSRP) in LTE) per DUT antenna port and selecting optimal complex weights at the channel emulator output ports, a wireless cable connection can be achieved. The proposed method can be executed in a small RF shielded anechoic box, and offers low system cost, high measurement reliability and repeatability.

  17. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects and/or concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are first introduced and compared. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that the correlations between these models are very high, a possible reason being that all these models are designed to simulate the similarity judgement of the human mind.

  18. Supplier Selection based on the Performance by using PROMETHEE Method

    Science.gov (United States)

    Sinaga, T. S.; Siregar, K.

    2017-03-01

    Generally, companies face problems in identifying vendors that can provide excellent service in raw material availability and on-time delivery. The performance of a company's suppliers has to be monitored to ensure their ability to fulfill the company's needs. This research explains how to assess suppliers in order to improve manufacturing performance. The criteria considered in evaluating suppliers are Dickson's criteria. There are four main criteria, which are further split into seven sub-criteria, namely compliance with accuracy, consistency, on-time delivery, right order quantity, flexibility and negotiation, timely order confirmation, and responsiveness. This research uses the PROMETHEE methodology to assess supplier performance and to obtain the best supplier, as shown by the degree of preference in pairwise comparisons between suppliers.
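
    A compact PROMETHEE II sketch with invented supplier scores and weights, using the simple "usual criterion" preference function (preference 1 if one supplier strictly beats another on a criterion, else 0); the paper's seven sub-criteria would replace the three columns.

      import numpy as np

      X = np.array([[0.9, 0.7, 0.8],   # supplier A, benefit criteria
                    [0.8, 0.9, 0.6],   # supplier B
                    [0.7, 0.8, 0.9]])  # supplier C
      w = np.array([0.5, 0.3, 0.2])    # criterion weights (assumed)

      n = X.shape[0]
      pi = np.zeros((n, n))            # aggregated preference of row over column
      for a in range(n):
          for b in range(n):
              pi[a, b] = np.sum(w * (X[a] > X[b]))

      # Net outranking flow: positive flow minus negative flow.
      phi = pi.sum(axis=1) / (n - 1) - pi.sum(axis=0) / (n - 1)
      print(phi)                       # rank suppliers by descending net flow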

  19. A method for performance assessment of medical radioisotope equipment

    International Nuclear Information System (INIS)

    Kerin, T.; Slavtchev, Ath.; Nedeltchev, M.; Kjurktchiev, T.

    1984-01-01

    A variety of tests and procedures exist for the performance assessment of radioisotope diagnostic equipment. The complex performance index introduced to date is based on a heuristic approach. The present work tries to interconnect algorithmically the most important factors, such as the influence of the measurement geometry, the statistical peculiarities at low activities, and the information loss at high count rates. All this is reflected in a criterion which integrates the spatial resolution, the effective detector field of vision, the radionuclide sensitivity, the background count rate and the effective dead-time of the system under investigation. (Auth.)

  20. New test methods for BIPV. Results from IP performance

    International Nuclear Information System (INIS)

    Jol, J.C.; Van Kampen, B.J.M.; De Boer, B.J.; Reil, F.; Geyer, D.

    2009-11-01

    Within the Performance project, new test procedures have been drafted for PV building products and for the performance of the building as a whole when PV is applied. This has resulted in a first draft of new test procedures for PV building products and in proposals for tests of novel BIPV technology such as thin film. The tests proposed are a module breakage test, a fire safety test and a dynamic load test for BIPV products. Furthermore, first proposals are presented for how flexible PV modules could be tested in an appropriate way to ensure the long-term quality and safety of these new products.

  1. Quality Evaluation of Potentilla fruticosa L. by High Performance Liquid Chromatography Fingerprinting Associated with Chemometric Methods.

    Science.gov (United States)

    Liu, Wei; Wang, Dongmei; Liu, Jianjun; Li, Dengwu; Yin, Dongxue

    2016-01-01

    The present study was performed to assess the quality of Potentilla fruticosa L. sampled from distinct regions of China using high performance liquid chromatography (HPLC) fingerprinting coupled with a suite of chemometric methods. For this quantitative analysis, the main active phytochemical compositions and the antioxidant activity in P. fruticosa were also investigated. Considering the high percentages and antioxidant activities of phytochemicals, P. fruticosa samples from Kangding, Sichuan were selected as the most valuable raw materials. Similarity analysis (SA) of HPLC fingerprints, hierarchical cluster analysis (HCA), principal component analysis (PCA), and discriminant analysis (DA) were further employed to provide accurate classification and quality estimates of P. fruticosa. Two principal components (PCs) were collected by PCA. PC1 separated samples from Kangding, Sichuan, capturing 57.64% of the variance, whereas PC2 contributed to further separation, capturing 18.97% of the variance. Two kinds of discriminant functions with a 100% discrimination ratio were constructed. The results strongly supported the conclusion that the eight samples from different regions were clustered into three major groups, corresponding with their morphological classification, for which HPLC analysis confirmed the considerable variation in phytochemical compositions and that P. fruticosa samples from Kangding, Sichuan were of high quality. The results of SA, HCA, PCA, and DA were in agreement and performed well for the quality assessment of P. fruticosa. Consequently, HPLC fingerprinting coupled with chemometric techniques provides a highly flexible and reliable method for the quality evaluation of traditional Chinese medicines.
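
    A minimal sketch of the PCA step in such a chemometric workflow, assuming a hypothetical matrix of HPLC peak areas (rows: samples, columns: fingerprint peaks); real data would come from the measured fingerprints.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)
      peaks = rng.random((8, 12))           # 8 samples x 12 fingerprint peaks

      pca = PCA(n_components=2)
      scores = pca.fit_transform(peaks)     # sample coordinates on PC1/PC2
      print(pca.explained_variance_ratio_)  # variance captured by each PC
      print(scores)                         # similar samples plot close together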

  2. Quality Evaluation of Potentilla fruticosa L. by High Performance Liquid Chromatography Fingerprinting Associated with Chemometric Methods

    Science.gov (United States)

    Liu, Wei; Wang, Dongmei; Liu, Jianjun; Li, Dengwu; Yin, Dongxue

    2016-01-01

    The present study was performed to assess the quality of Potentilla fruticosa L. sampled from distinct regions of China using high performance liquid chromatography (HPLC) fingerprinting coupled with a suite of chemometric methods. For this quantitative analysis, the main active phytochemical compositions and the antioxidant activity in P. fruticosa were also investigated. Considering the high percentages and antioxidant activities of phytochemicals, P. fruticosa samples from Kangding, Sichuan were selected as the most valuable raw materials. Similarity analysis (SA) of HPLC fingerprints, hierarchical cluster analysis (HCA), principal component analysis (PCA), and discriminant analysis (DA) were further employed to provide accurate classification and quality estimates of P. fruticosa. Two principal components (PCs) were collected by PCA. PC1 separated samples from Kangding, Sichuan, capturing 57.64% of the variance, whereas PC2 contributed to further separation, capturing 18.97% of the variance. Two kinds of discriminant functions with a 100% discrimination ratio were constructed. The results strongly supported the conclusion that the eight samples from different regions were clustered into three major groups, corresponding with their morphological classification, for which HPLC analysis confirmed the considerable variation in phytochemical compositions and that P. fruticosa samples from Kangding, Sichuan were of high quality. The results of SA, HCA, PCA, and DA were in agreement and performed well for the quality assessment of P. fruticosa. Consequently, HPLC fingerprinting coupled with chemometric techniques provides a highly flexible and reliable method for the quality evaluation of traditional Chinese medicines. PMID:26890416

  3. Molecular similarity measures.

    Science.gov (United States)

    Maggiora, Gerald M; Shanmugasundaram, Veerabahu

    2011-01-01

    Molecular similarity is a pervasive concept in chemistry. It is essential to many aspects of chemical reasoning and analysis and is perhaps the fundamental assumption underlying medicinal chemistry. Dissimilarity, the complement of similarity, also plays a major role in a growing number of applications of molecular diversity in combinatorial chemistry, high-throughput screening, and related fields. How molecular information is represented, called the representation problem, is important to the type of molecular similarity analysis (MSA) that can be carried out in any given situation. In this work, four types of mathematical structure are used to represent molecular information: sets, graphs, vectors, and functions. Molecular similarity is a pairwise relationship that induces structure into sets of molecules, giving rise to the concept of chemical space. Although all three concepts - molecular similarity, molecular representation, and chemical space - are treated in this chapter, the emphasis is on molecular similarity measures. Similarity measures, also called similarity coefficients or indices, are functions that map pairs of compatible molecular representations that are of the same mathematical form into real numbers usually, but not always, lying on the unit interval. This chapter presents a somewhat pedagogical discussion of many types of molecular similarity measures, their strengths and limitations, and their relationship to one another. An expanded account of the material on chemical spaces presented in the first edition of this book is also provided. It includes a discussion of the topography of activity landscapes and the role that activity cliffs in these landscapes play in structure-activity studies.
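
    A standard example of such a measure is the Tanimoto (Jaccard) coefficient on binary fingerprints, which maps a pair of set-style representations into [0, 1]; the bit vectors below are toys, not real molecular fingerprints.

      import numpy as np

      def tanimoto(a: np.ndarray, b: np.ndarray) -> float:
          """Tanimoto similarity of two binary bit vectors."""
          both = np.sum(a & b)     # bits set in both fingerprints
          either = np.sum(a | b)   # bits set in at least one
          return float(both / either) if either else 1.0

      fp1 = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=int)
      fp2 = np.array([1, 0, 0, 1, 0, 1, 1, 0], dtype=int)
      print(tanimoto(fp1, fp2))    # 3 shared / 5 total set bits = 0.6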

  4. Alternative test method to assess the energy performance of frost-free refrigerating appliances

    International Nuclear Information System (INIS)

    Hermes, Christian J.L.; Melo, Cláudio; Knabben, Fernando T.

    2013-01-01

    This paper outlines an alternative test method to evaluate the energy consumption of frost-free refrigerators and freezers for residential applications. While the standardized methods require the refrigerating appliance to be kept running according to its onboard control system, which usually drives the refrigerator through an on–off cycling pattern, the proposed approach assesses the refrigerator energy performance in the steady-state regime, being therefore much faster and more reliable. In this procedure, the cooling capacity is matched to the cooling loads by PID-controlled electrical heaters installed within the refrigerated compartments, so that the compartment temperatures are kept at the desired standardized levels. Comparisons between the experimental results obtained using the steady-state energy test and the standardized procedures showed that the former follows closely the trends observed for the latter. - Highlights: ► An alternative test method to assess the energy consumption of refrigerators is proposed. ► PID-controlled electrical heaters were installed within the compartments. ► Steady-state and ISO energy tests were performed and compared. ► Both proposed and standardized test procedures showed similar trends.

  5. A recursive method for calculating the total number of spanning trees and its applications in self-similar small-world scale-free network models

    Science.gov (United States)

    Ma, Fei; Su, Jing; Yao, Bing

    2018-05-01

    The problem of determining and calculating the number of spanning trees of any finite graph (model) is a great challenge, and has been studied in various fields, such as discrete applied mathematics, theoretical computer science, physics and chemistry. In this paper, firstly, because lots of real-life systems and artificial networks are built by all kinds of functions and combinations among simpler and smaller elements (components), we discuss some helpful network operations, including link operations and merge operations, to design more realistic and complicated network models. Secondly, we present a method for computing the total number of spanning trees. As an accessible example, we apply this method to spaces of trees and cycles respectively, and our results suggest that it is indeed a better one for such models. In order to reflect wider practical applications and potential theoretical significance, we study the enumerating method in some existing scale-free network models. On the other hand, we set up a class of new models displaying the scale-free feature, that is to say, following P(k) ~ k^(-γ), where γ is the degree exponent. Based on detailed calculation, the degree exponent γ of our deterministic scale-free models satisfies γ > 3. In the rest of our discussion, we not only calculate analytically the solutions of the average path length, which indicates that our models have the small-world property prevailing in many complex systems, but also derive the number of spanning trees by means of the recursive method described in this paper, which confirms that the method is convenient for studying these models.
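
    Counts produced by such recursive methods can be cross-checked with Kirchhoff's matrix-tree theorem, which states that the number of spanning trees equals any cofactor of the graph Laplacian. A small sketch for the 4-cycle C4, which has exactly 4 spanning trees:

      import numpy as np

      A = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)  # adjacency matrix of C4
      L = np.diag(A.sum(axis=1)) - A             # graph Laplacian
      cofactor = np.linalg.det(L[1:, 1:])        # delete row 0 and column 0
      print(round(cofactor))                     # -> 4 spanning trees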

  6. Effects of Pretreatment Methods on Electrodes and SOFC Performance

    Directory of Open Access Journals (Sweden)

    Guo-Bin Jung

    2014-06-01

    Commercially available tapes (anode, electrolyte) and paste (cathode) were chosen to prepare anode-supported cells for solid oxide fuel cell applications. For both anode-supported cells and electrolyte-supported cells, the anode needs pretreatment to reduce NiO/YSZ to Ni/YSZ to increase its conductivity as well as its catalytic characteristics. In this study, the effects of different pretreatments (open-circuit, closed-circuit) on the cathode and anode, as well as on SOFC performance, are investigated. To investigate the influence of closed-circuit pretreatment on the NiO/YSZ anode alone, a Pt cathode is utilized as a reference for comparison with the LSM cathode. Characterization of the electrical resistance, AC impedance, and SOFC performance of the resulting electrodes and/or anode-supported cell was carried out. It is found that the influence of open-circuit pretreatment on the LSM cathode is limited. However, the influence of closed-circuit pretreatment on both the LSM cathode and the NiO/YSZ anode, and on the resulting SOFC performance, is profound. The effect of closed-circuit pretreatment on the NiO/YSZ anode is attributed to changes in its electronic/pore structure as well as its catalytic characteristics. With closed-circuit pretreatment, the SOFC performance improved greatly from the change of the LSM cathode (and Pt reference) compared to the Ni/YSZ anode.

  7. Visual Narrative Research Methods as Performance in Industrial Design Education

    Science.gov (United States)

    Campbell, Laurel H.; McDonagh, Deana

    2009-01-01

    This article discusses teaching empathic research methodology as performance. The authors describe their collaboration in an activity to help undergraduate industrial design students learn empathy for others when designing products for use by diverse or underrepresented people. The authors propose that an industrial design curriculum would benefit…

  8. Performance Analysis of Methods for Estimating Weibull Parameters ...

    African Journals Online (AJOL)

    The performance analysis revealed that the MLM was the most accurate model followed by the EPF and the GM. Furthermore, the comparison between the wind speed standard deviation predicted by the proposed models and the measured data showed that the MLM has a smaller relative error of -3.33% on average ...

  9. Performance of regional flood frequency analysis methods in ...

    African Journals Online (AJOL)

    2015-04-03

    Estimates of design floods are required for the design of hydraulic structures and to quantify the risk of failure of the … performance when compared to design floods estimated from the annual maximum series extracted from the observed data.

  10. Team Performance Pay and Motivation Theory: A Mixed Methods Study

    Science.gov (United States)

    Wells, Pamela; Combs, Julie P.; Bustamante, Rebecca M.

    2013-01-01

    This study was conducted to explore teachers' perceptions of a team performance pay program in a large suburban school district through the lens of motivation theories. Mixed data analysis was used to analyze teacher responses from two archival questionnaires (Year 1, n = 368; Year 2, n = 649). Responses from teachers who participated in the team…

  11. Evaluating firms’ R&D performance using best worst method

    NARCIS (Netherlands)

    Salimi, Negin; Rezaei, J.

    2018-01-01

    Since research and development (R&D) is the most critical determinant of the productivity, growth and competitive advantage of firms, measuring R&D performance has become the core of attention of R&D managers, and an extensive body of literature has examined and identified different

  12. Numerical methods for optimizing the performance of buildings

    OpenAIRE

    Pedersen, Frank

    2008-01-01

    The many different parties that influence design and control decisions for buildings, such as building owners, users, architects, consulting engineers, contractors, etc., may have different and to some extent contradicting requirements for buildings. Furthermore, national building regulations specify requirements for (among others) the energy performance and the quality of the indoor climate.

  13. New methods for mobility performance measurement in wheelchair basketball

    NARCIS (Netherlands)

    van der Slikke, R.M.A.; Berger, MAM; Bregman, DJJ; Veeger, H.E.J.

    2016-01-01

    Increased professionalism in wheelchair sports demands a more precise and quantitative measure of individual wheelchair mobility performance, to allow it to serve as an evaluation measure for wheelchair settings or training optimization. This research describes the application of an inertial sensor based …

  14. New methods for moisture control of high-performance concrete

    DEFF Research Database (Denmark)

    Kovler, Konstantin; Jensen, Ole Mejlhede; Falikman, Vyacheslav

    2005-01-01

    Curing of concrete by both external (conventional) and internal methods is reviewed and analyzed. The focus is on the mitigation of autogenous shrinkage of low water-to-cementitious-materials-ratio concrete by means of internal curing. The concepts of internal curing are based on using pre-soaked lightweight aggregate, super-absorbent polymers or water-soluble chemicals, which reduce water evaporation (so-called "internal sealing"). These concepts were suggested in the 90s, but are still not popular among users, engineers, contractors, concrete suppliers, researchers, and the rest of the professionals who work for them. The differences between conventional methods of external curing and novel methods of internal curing are described. It is concluded that proper curing is a key factor in achieving durable concrete.

  15. Influence function method for fast estimation of BWR core performance

    International Nuclear Information System (INIS)

    Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.

    1993-01-01

    The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)

  16. An Analysis of Looking Back Method in Problem-Based Learning: Case Study on Congruence and Similarity in Junior High School

    Science.gov (United States)

    Kosasih, U.; Wahyudin, W.; Prabawanto, S.

    2017-09-01

    This study aims to understand how learners look back on their problem-solving ideas. The research is based on a qualitative approach with a case study design. Participants in this study were xx students of Junior High School, who were studying the material of congruence and similarity. The supporting instruments in this research are a test and an interview sheet. The data obtained were analyzed by coding and constant comparison. The analysis finds three ways in which the students review their problem-solving ideas: 1) by comparing their answers to the solution steps exemplified by learning resources; 2) by examining the logical relationship between the solution and the problem; and 3) by confirmation against the prior knowledge they have. This happens because most students learn in a mechanistic way. The study concludes that students validate their problem-solving ideas under the influence of teacher explanations, learning resources, and prior knowledge. Therefore, teacher explanations and learning resources contribute to the success or failure of students in solving problems.

  17. Concurrent validity and clinical utility of the HCR-20V3 compared with the HCR-20 in forensic mental health nursing: similar tools but improved method.

    Science.gov (United States)

    Bjørkly, Stål; Eidhammer, Gunnar; Selmer, Lars Erik

    2014-01-01

    The main scope of this small-scale investigation was to compare clinical application of the HCR-20V3 with its predecessor, the HCR-20. To explore concurrent validity, two experienced nurses assessed 20 forensic mental health service patients with both tools. Estimates of internal consistency for the HCR-20 and the HCR-20V3 were calculated by Cronbach's alpha for two levels of measurement: the H-, C-, and R-scales and the total sum scores. We found moderate (C-scale) to good (H- and R-scales and aggregate scores) estimates of internal consistency and significant differences for the two versions of the HCR. This finding indicates that the two versions reflect common underlying dimensions and that there still appear to be differences between V2 and V3 ratings for the same patients. A case from forensic mental health was used to illustrate similarities and differences in assessment results between the two HCR-20 versions. The case illustration depicts clinical use of the HCR-20V3 and application of two structured nursing interventions pertaining to the risk management part of the tool. In our experience, Version 3 is superior to Version 2 concerning: (a) item clarity; (b) the distinction between presence and relevance of risk factors; (c) the integration of risk formulation and risk scenario; and (d) the explicit demand to construct a risk management plan as part of the standard assessment procedure.
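
    For reference, the internal-consistency estimate used above is Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); a sketch with invented item scores, not HCR-20 data.

      import numpy as np

      def cronbach_alpha(items: np.ndarray) -> float:
          """items: (n_subjects, k_items) matrix of item scores."""
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      scores = np.array([[2, 1, 2], [1, 1, 1], [2, 2, 2], [0, 1, 0], [2, 2, 1]])
      print(cronbach_alpha(scores))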

  18. An alternative method for performing pressurized thermal shock analysis

    International Nuclear Information System (INIS)

    Bishop, B.A.; Meyer, T.A.; Carter, R.G.; Gamble, R.M.

    1997-01-01

    This paper describes how Probability of Crack Initiation and acceptable Pressurized Thermal Shock frequency were correlated with a_c and summarizes several example applications, including evaluation of potential plant modifications. Plans for an industry supported pilot-plant application of the alternative Probabilistic Fracture Mechanics method for RG 1.154 are also discussed. 9 refs, 4 figs, 1 tab

  19. Long term performance of different radon remedial methods in Sweden

    CERN Document Server

    Clavensjoe, B

    2002-01-01

    The object of this project was to investigate the long-term effectiveness of different radon remedial methods. The ten-year project started in 1991. From the start, the investigation comprised 105 dwellings (91 single-family houses and 14 flats in multi-family buildings). In all of the dwellings, remedial measures were carried out in the eighties. Radon concentrations were measured locally before and immediately after the remediation. New measurements of the radon concentrations have been made every third year: in 1991, 1994, 1997 and 2000. Twelve different radon remedial methods and method combinations were used. The radon sources were building materials as well as sub-soils. In all of the dwellings the radon concentrations were measured by nuclear track films during 3-month (January-March) measurement periods, and in half of them the air change rates were measured by passive tracer gas methods. The results of the 2000 and the 1991 (within brackets) studies showed that the radon concentration was up to 200 Bq/m³ in 54 (54) sin…

  20. 40 CFR 63.827 - Performance test methods.

    Science.gov (United States)

    2010-07-01

    ... compounds as carbon, ppmv. 12.0 = Molecular weight of carbon. 0.0416 = Conversion factor for molar volume... monitors for determining total organic volatile matter concentration and the volumetric gas flow rate, and... 2D of 40 CFR part 60, appendix A is used to determine gas volumetric flow rate. (iii) Method 3 of 40...

  1. 40 CFR 63.1161 - Performance testing and test methods.

    Science.gov (United States)

    2010-07-01

    ..., “Determination of Hydrogen Halide and Halogen Emissions from Stationary Sources—Isokinetic Method,” to determine... shall be 60 minutes and the minimum sample volume 0.85 dry standard cubic meters (30 dry standard cubic...(mg/dscm) is concentration in milligrams per dry standard cubic meter as calculated by the procedure...

  2. Comparison of the performances and validation of three methods for ...

    African Journals Online (AJOL)

    SARAH

    2014-02-28

    Feb 28, 2014 ... bacteria in Norwegian slaughter pigs. Int J Food Microbiol 1, 301–309. [NCFA] Nordic Committee of Food Analysis (1996). Yersinia enterocolitica. Detection in foods 117, 3rd edn, 1-12. Nowak, B., Mueffling, T.V., Caspari, K. and Hartung, J. 2006. Validation of a method for the detection of virulent Yersinia ...

  3. Performance of sampling methods to estimate log characteristics for wildlife.

    Science.gov (United States)

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2004-01-01

    Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...

  4. Data-driven performance evaluation method for CMS RPC trigger ...

    Indian Academy of Sciences (India)

    level triggers, to handle the large stream of data produced in collision. The information transmitted from the three muon subsystems (DT, CSC and RPC) are collected by the Global Muon Trigger (GMT) Board and merged. A method for evaluating ...

  5. An alternative method for performing pressurized thermal shock analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, B A; Meyer, T A [Westinghouse Energy Systems, Pittsburgh, PA (United States); Carter, R G [Electric Power Research Inst., Charlotte, NC (United States); Gamble, R M [Sartrex Corp., Rockville, MD (United States)

    1997-09-01

    This paper describes how Probability of Crack Initiation and acceptable Pressurized Thermal Shock frequency were correlated with a{sub c} and summarizes several example applications, including evaluation of potential plant modifications. Plans for an industry supported pilot-plant application of the alternative Probabilistic Fracture Mechanics method for RG 1.154 are also discussed. 9 refs, 4 figs, 1 tab.

  6. Method of data mining including determining multidimensional coordinates of each item using a predetermined scalar similarity value for each item pair

    Science.gov (United States)

    Meyers, Charles E.; Davidson, George S.; Johnson, David K.; Hendrickson, Bruce A.; Wylie, Brian N.

    1999-01-01

    A method of data mining represents related items in a multidimensional space. Distance between items in the multidimensional space corresponds to the extent of relationship between the items. The user can select portions of the space to perceive. The user also can interact with and control the communication of the space, focusing attention on aspects of the space of most interest. The multidimensional spatial representation allows more ready comprehension of the structure of the relationships among the items.

  7. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji

    2017-07-01

    Full Text Available Identifying the similarity of graphs is considered a highly recommended research field in the semantic Web, artificial intelligence, shape recognition and information retrieval. One of the fundamental problems of graph databases is finding the graphs most similar to a query graph. Existing approaches to this problem are usually based on the nodes and arcs of the two graphs, regardless of parental semantic links. For instance, a common connection is not identified as being part of the similarity of two graphs in cases such as two graphs without common concepts, similarity measures based on the union of two graphs, measures based on the notion of the maximum common sub-graph (MCS), or the graph edit distance. This leads to an inadequate situation in the context of information retrieval. To overcome this problem, we suggest a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We show that this new measure satisfies the properties of a similarity measure and apply it to examples. The results show that our measure runs faster than existing approaches. In addition, we compared the relevance of the similarity values obtained; this new graph measure is advantageous and offers a contribution to solving the problem mentioned above.
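
    The measure of Wu and Palmer that the proposed graph similarity builds on scores two concepts by the depth of their least common subsumer (LCS) in a taxonomy: sim(a, b) = 2 * depth(LCS) / (depth(a) + depth(b)). A minimal sketch of that underlying concept measure follows; the toy taxonomy and concept names are made up for illustration and are not from the paper.

        # Wu-Palmer concept similarity on a toy taxonomy (hypothetical data).
        parents = {  # child -> parent; "entity" is the root
            "animal": "entity", "plant": "entity",
            "dog": "animal", "cat": "animal", "oak": "plant",
        }

        def path_to_root(node):
            path = [node]
            while node in parents:
                node = parents[node]
                path.append(node)
            return path  # node, ..., root

        def depth(node):
            return len(path_to_root(node))  # the root has depth 1

        def wu_palmer(a, b):
            ancestors_a = set(path_to_root(a))
            # deepest common ancestor: first hit while walking up from b
            lcs = next(n for n in path_to_root(b) if n in ancestors_a)
            return 2.0 * depth(lcs) / (depth(a) + depth(b))

        print(wu_palmer("dog", "cat"))  # 0.667 (LCS is "animal")
        print(wu_palmer("dog", "oak"))  # 0.333 (LCS is "entity")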

  8. Processes of Similarity Judgment

    Science.gov (United States)

    Larkey, Levi B.; Markman, Arthur B.

    2005-01-01

    Similarity underlies fundamental cognitive capabilities such as memory, categorization, decision making, problem solving, and reasoning. Although recent approaches to similarity appreciate the structure of mental representations, they differ in the processes posited to operate over these representations. We present an experiment that…

  9. Judgments of brand similarity

    NARCIS (Netherlands)

    Bijmolt, THA; Wedel, M; Pieters, RGM; DeSarbo, WS

    This paper provides empirical insight into the way consumers make pairwise similarity judgments between brands, and how familiarity with the brands, serial position of the pair in a sequence, and the presentation format affect these judgments. Within the similarity judgment process both the

  10. Means, methods and performances of the AREVA's HTR compact controls

    International Nuclear Information System (INIS)

    Banchet, J.; Guillermier, P.; Tisseur, D.; Vitali, M. P.

    2008-01-01

    In the AREVA HTR development program, the reactor plant is composed of a prismatic core containing graphite cylindrical fuel elements, called compacts, in which TRISO particles are dispersed. Starting from its past compacting process, revamped through the use of state-of-the-art equipment, CERCA, a wholly owned subsidiary of AREVA NP, was able to recover the quality of past compact production. The recovered compacting process is composed of the following manufacturing steps: graphite matrix granulation, mixing of the obtained granulates with particles, compacting, and calcining at low pressure and temperature. To adapt this past process to new manufacturing equipment, non-destructive examination tests were carried out to assess compact quality, the latter being assessed via in-house developed equipment and methods at each step of the design of experiments. As for the manufacturing process, past quality control methods were revamped to measure compact dimensional features (diameter, perpendicularity and cone effect), visual aspect, SiC layer failure fraction (via anodic disintegration and burn-leach testing) and homogeneity via 2D radiography coupled to ceramography. Although meeting quality requirements, the 2D radiography method could not provide a quantified specification for compact homogeneity characterization. This limitation led to the replacement of this past technique by a method based on X-ray tomography. Development was conducted on this new technique to enable the definition of a criterion to quantify compact homogeneity, as well as to provide information about the distances between particles. This study also included a comparison between simulated and real compacts to evaluate the accuracy of the technique as well as the influence of particle packing fraction on compact homogeneity. The developed quality control methods and equipment guided the choices of manufacturing parameter adjustments at the development stage and are now applied for

  11. Assessing performance of flaw characterization methods through uncertainty propagation

    Science.gov (United States)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess the inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computational time and get rid of the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fitted on a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, propagating these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact due to the lack of knowledge of some parameters employed to describe the inspection scenarios, a situation commonly encountered in the industrial NDE context.

  12. A novel method for assessing position-sensitive detector performance

    International Nuclear Information System (INIS)

    Clinthorne, N.H.; Rogers, W.L.; Shao, L.; Hero, A.O. III; Koral, K.F.

    1989-01-01

    A marked point process model of a position-sensitive detector is developed which includes the effects of detector efficiency, spatial response, energy response, and source statistics. The average mutual information between the incident distribution of γ rays and the detector response is derived and used as a performance index for detector optimization. A brief example is presented which uses this figure-of-merit for optimization of light guide dimensions for a modular scintillation camera

  13. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method

    OpenAIRE

    Chaharsooghi, S. K.; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in the research literature. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain...

  14. Measuring recent research performance for Chinese universities using bibliometric methods

    KAUST Repository

    Zhu, Jia

    2014-07-29

    This paper focuses on measuring the academic research performance of Chinese universities by using the Scopus database from 2007 to 2010. We have provided meaningful indicators to measure the research performance of Chinese universities as compared to world-class universities of the US and the European region. Using these indicators, we first measure the quantity and quality of the research outcomes of the universities and then examine the internationalization of research by using international collaborations, international citations and international impact metrics. Using all of this data, we finally present an overall score, called the research performance point, to measure the comprehensive research strength of the universities for the selected subject categories. The comparison identifies the gap between Chinese universities and top-tier universities from selected regions across various subject areas. We find that Chinese universities are doing well in terms of publication volume but receive fewer citations from their published work. We also find that the Chinese universities have a relatively low percentage of publications at high-impact venues, which may be the reason that they are not receiving more citations. Therefore, a careful selection of publication venues may help the Chinese universities to compete with world-class universities and increase their research internationalization. © 2014 Akadémiai Kiadó, Budapest, Hungary.

  15. Method of performing sugar dehydration and catalyst treatment

    Science.gov (United States)

    Hu, Jianli [Kennewick, WA; Holladay, Johnathan E [Kennewick, WA; Zhang, Xinjie [Burlington, MA; Wang, Yong [Richland, WA

    2010-06-01

    The invention includes a method of treating a solid acid catalyst. After exposing the catalyst to a mixture containing a sugar alcohol, the catalyst is washed with an organic solvent and is then exposed to a second reaction mixture. The invention includes a process for production of anhydrosugar alcohol. A solid acid catalyst is provided to convert sugar alcohol in a first sample to an anhydrosugar alcohol. The catalyst is then washed with an organic solvent and is subsequently utilized to expose a second sample. The invention includes a method for selective production of an anhydrosugar. A solid acid catalyst is provided within a reactor and anhydrosugar alcohol is formed by flowing a starting sugar alcohol into the reactor. The acid catalyst is then exposed to an organic solvent which allows a greater amount of additional anhydrosugar to be produced than would occur without exposing the acid catalyst to the organic solvent.

  16. Performing Allen's test in immobile hand: The Esmarch bandage method

    Directory of Open Access Journals (Sweden)

    Nebil Yesiloglu

    2015-12-01

    Full Text Available In this study, an alternative method of assessing hand vascular flow using a modification of Allen's test is presented. This technique may be helpful for patients who have immobile hands due to severe trauma, patients scheduled for free tissue transfer reconstruction, patients under general anesthesia in intensive care units that require serial arterial blood gas analyses, and emergency coronary by-pass candidates who decided to receive radial arterial grafts. [Hand Microsurg 2015; 4(3): 83-85]

  17. Long term performance of different radon remedial methods in Sweden

    International Nuclear Information System (INIS)

    Clavensjoe, Bertil

    2002-06-01

    The object of this project was to investigate the long-term effectiveness of different radon remedial methods. The ten-year project started in 1991. From the start the investigation comprised 105 dwellings (91 single-family houses and 14 flats in multi-family buildings). In all of the dwellings remedial measures were carried out in the eighties. Before and immediately after the reduction the local authorities measured the radon concentrations. New measurements of the radon concentrations have been made every third year: in 1991, 1994, 1997 and in 2000. Twelve different radon remedial methods and method combinations were used. The radon sources were building materials as well as sub-soils. In all of the dwellings the radon concentrations were measured by nuclear track films during 3-month (January-March) measurements, and in half of them the air change rates were measured by passive tracer gas methods. The results of the 2000 and the 1991 (within brackets) studies showed that the radon concentration was up to 200 Bq/m³ in 54 (54) single-family houses and 7 (7) flats, between 210 Bq/m³ and 400 Bq/m³ in 23 (18) single-family houses and 5 (6) flats, and higher than 400 Bq/m³ in 12 (18) single-family houses and 2 (1) flats. The 1991 study also showed that in about 40% of the cases the radon concentration had increased by more than 30% only a few years after remediation had been carried out. In 19 dwellings the radon concentration was at least doubled. In no fewer than 38 dwellings the radon level has been over 400 Bq/m³ on at least one of the four measuring occasions. The change in radon concentrations was not specific to any given method but seemed to be evenly distributed over all of them. The investigation results showed the necessity of repeated measurements where countermeasures have been taken. The causes of the increasing radon levels have been clarified in all except 2-3 cases

  18. Application of Numerical Optimization Methods to Perform Molecular Docking on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. A. Farkov

    2014-01-01

    Full Text Available An analysis of numerical optimization methods for solving the molecular docking problem has been performed. Some additional requirements on optimization methods, following from GPU architecture features, were specified. A promising method for implementation on GPUs was selected. Its implementation is described, and performance and accuracy tests were performed.

  19. Algorithms and Methods for High-Performance Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca

    routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. About the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization...... is proposed, aiming at improving the computational performance in case of small matrices. About the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity....

  20. Personalized recommendation with corrected similarity

    International Nuclear Information System (INIS)

    Zhu, Xuzhen; Tian, Hui; Cai, Shimin

    2014-01-01

    Personalized recommendation has attracted a surge of interdisciplinary research. In particular, similarity-based methods have achieved great success in real recommendation systems. However, similarities are often overestimated or underestimated, notably because of the defective strategy of unidirectional similarity estimation. In this paper, we address this drawback by leveraging the mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, i.e., corrected similarity based inference (CSI). Extensive experiments on four benchmark datasets show a clear improvement of CSI in comparison with mainstream baselines, and a detailed analysis is presented to unveil and understand the origin of the differences between CSI and mainstream indices. (paper)
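
    The record does not give the exact CSI formula, but the idea of mutually correcting forward and backward similarity estimates can be sketched on a toy user-object network. Here the two degree-normalized overlap matrices are combined by a geometric mean; that combination rule is an illustrative assumption, not the paper's definition.

        import numpy as np

        A = np.array([[1, 1, 0, 0],    # users x objects adjacency (toy data)
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]], dtype=float)

        k = A.sum(axis=0)              # object degrees (all nonzero here)
        overlap = A.T @ A              # common users for each object pair

        s_forward = overlap / k[:, None]   # normalized from object i's side
        s_backward = overlap / k[None, :]  # normalized from object j's side

        # mutual correction of the two unidirectional estimates (assumed form)
        s_corrected = np.sqrt(s_forward * s_backward)
        print(np.round(s_corrected, 3))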

  1. Improvement in Performance of ZnO based DSC Prepared by Spraying Method

    Directory of Open Access Journals (Sweden)

    Rangga Winantyo

    2013-09-01

    Full Text Available This paper reports the effect of TiCl4 on the performance of ZnO-based DSC. ZnO was used due to its stability against photo-corrosion and its photochemical properties similar to those of TiO2. Thin films of nanocrystalline ZnO were deposited on transparent conducting oxide glass using the spray method. The ZnO films were treated using TiCl4. The cell’s efficiency was found to be 2.5% with TiCl4 post-treatment and 1.9% without TiCl4 post-treatment.

  2. Hypoxic training methods for improving endurance exercise performance

    Directory of Open Access Journals (Sweden)

    Jacob A. Sinex

    2015-12-01

    Full Text Available Endurance athletic performance is highly related to a number of factors that can be altered through altitude and hypoxic training including increases in erythrocyte volume, maximal aerobic exercise capacity, capillary density, and economy. Physiological adaptations in response to acute and chronic exposure to hypoxic environments are well documented and range from short-term detrimental effects to longer-term adaptations that can improve performance at altitude and in sea-level competitions. Many altitude and hypoxic training protocols have been developed, employing various combinations of living and training at sea-level, low, moderate, and high altitudes and utilizing natural and artificial altitudes, with varying degrees of effectiveness. Several factors have been identified that are associated with individual responses to hypoxic training, and techniques for identifying those athletes most likely to benefit from hypoxic training continue to be investigated. Exposure to sufficiently high altitude (2000–3000 m for more than 12 h/day, while training at lower altitudes, for a minimum of 21 days is recommended. Timing of altitude training related to competition remains under debate, although general recommendations can be considered.

  3. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating...... candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...

  4. Performance of the lot quality assurance sampling method compared to surveillance for identifying inadequately-performing areas in Matlab, Bangladesh.

    Science.gov (United States)

    Bhuiya, Abbas; Hanifi, S M A; Roy, Nikhil; Streatfield, P Kim

    2007-03-01

    This paper compared the performance of the lot quality assurance sampling (LQAS) method in identifying inadequately-performing health work-areas with that of using health and demographic surveillance system (HDSS) data and examined the feasibility of applying the method by field-level programme supervisors. The study was carried out in Matlab, the field site of ICDDR,B, where a HDSS has been in place for over 30 years. The LQAS method was applied in 57 work-areas of community health workers in ICDDR,B-served areas in Matlab during July-September 2002. The performance of the LQAS method in identifying work-areas with adequate and inadequate coverage of various health services was compared with those of the HDSS. The health service-coverage indicators included coverage of DPT, measles, BCG vaccination, and contraceptive use. It was observed that the difference in the proportion of work-areas identified to be inadequately performing using the LQAS method with less than 30 respondents, and the HDSS was not statistically significant. The consistency between the LQAS method and the HDSS in identifying work-areas was greater for adequately-performing areas than inadequately-performing areas. It was also observed that the field managers could be trained to apply the LQAS method in monitoring their performance in reaching the target population.
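
    The record does not state the sample sizes or decision values used in Matlab, so the sketch below uses a classic textbook LQAS design (n = 19, decision value d = 13), which keeps both misclassification risks under roughly 10% when distinguishing 80% from 50% coverage.

        from scipy.stats import binom

        n, d = 19, 13  # assumed lot size and decision value (textbook choice)

        def classify(covered):
            """Classify a work-area from the number of covered respondents."""
            return "adequate" if covered >= d else "inadequate"

        # Misclassification risks implied by this design:
        provider_risk = binom.cdf(d - 1, n, 0.80)      # 80% area judged inadequate
        consumer_risk = 1 - binom.cdf(d - 1, n, 0.50)  # 50% area judged adequate
        print(classify(14), round(provider_risk, 3), round(consumer_risk, 3))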

  5. Practical applications and methods in performing cardiac digital subtraction angiography

    International Nuclear Information System (INIS)

    Markovic, D.M.; Withrow, S.; Moodie, D.S.

    1986-01-01

    One of the purposes of this book is to outline the utility of digital subtraction angiography (DSA) in common clinical practice. No text has dealt with the actual physical setup of the room or the patient prior to and during a digital subtraction angiographic study at rest and with exercise. This chapter outlines the steps commonly used when cardiac DSA is performed on patients in the authors' laboratory. The authors have learned over the last few years the best way to prepare the patient and the equipment; it is hoped that, utilizing this experience, other centers may avoid the mistakes the authors have made in the past and develop new techniques for the future.

  6. A Novel Method for Performance Analysis of Compartmentalized Reservoirs

    Directory of Open Access Journals (Sweden)

    Shahamat Mohammad Sadeq

    2016-05-01

    Full Text Available This paper presents a simple analytical model for performance analysis of compartmentalized reservoirs producing under Constant Terminal Rate (CTR) and Constant Terminal Pressure (CTP). The model is based on the well-known material balance and boundary-dominated flow equations and is written in terms of the capacitance and resistance of a production and a support compartment. These capacitance and resistance terms account for a combination of reservoir parameters which enable the developed model to be used for characterizing such systems. In addition to considering the properties contrast between the two reservoir compartments, the model takes into account the existence of transmissibility barriers with the use of resistance terms. The model is used to analyze the production performance of unconventional reservoirs, where the multistage fracturing of horizontal wells effectively creates a Stimulated Reservoir Volume (SRV) with an enhanced permeability surrounded by a non-stimulated region. It can also be used for analysis of compartmentalized conventional reservoirs. The analytical solutions provide type curves through which the controlling reservoir parameters of a compartmentalized system can be estimated. The contribution of the supporting compartment is modeled based on a boundary-dominated flow assumption. The transient behaviour of the support compartment is captured by application of the “distance of investigation” concept. The model shows that depletion of the production and support compartments exhibits two unit slopes on a log-log plot of pressure versus time for CTR. For CTP, however, the depletions display two exponential declines. The depletion signatures are separated by transition periods, which depend on the contribution of the support compartment (i.e. transient or boundary-dominated flow). The developed equations can be implemented easily in a spreadsheet application, and are corroborated with the use of a numerical simulation. The study

  7. Performance Evaluation for Sustainability of Strong Smart Grid by Using Stochastic AHP and Fuzzy TOPSIS Methods

    Directory of Open Access Journals (Sweden)

    Huiru Zhao

    2016-01-01

    Full Text Available As an efficient way to deal with global climate change and energy shortage problems, a strong, self-healing, compatible, economic and integrative smart grid is under construction in China, supported by large amounts of investment and advanced technologies. To promote the construction, operation and sustainable development of the Strong Smart Grid (SSG), a novel hybrid framework for evaluating the performance of SSG is proposed from the perspective of sustainability. Based on a literature review, experts’ opinions and the technical characteristics of SSG, the evaluation model involves four sustainability criteria defined as economy, society, environment and technology aspects, associated with 12 sub-criteria. Considering the ambiguity and vagueness of the subjective judgments on sub-criteria, the fuzzy TOPSIS method is employed to evaluate the performance of SSG. In addition, different from previous research, this paper adopts the stochastic Analytical Hierarchy Process (AHP) method to upgrade the traditional Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) by addressing the fuzzy and stochastic factors within the weights calculation. Finally, four regional smart grids in China are ranked by employing the proposed framework. The results show that the sub-criteria affiliated with environment obtain much more attention than those of economy from the expert group. Moreover, the sensitivity analysis indicates the ranking list remains stable no matter how sub-criteria weights are changed, which verifies the robustness and effectiveness of the proposed model and evaluation results. This study provides a comprehensive and effective method for performance evaluation of SSG and also innovates the weights calculation for traditional TOPSIS.
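
    For reference, the crisp TOPSIS core that the paper's fuzzy and stochastic-AHP layers extend ranks alternatives by relative closeness to an ideal solution. A minimal sketch follows; the decision matrix, weights, and criterion directions are invented placeholders.

        import numpy as np

        X = np.array([[7.0, 5.0, 8.0],       # rows: alternatives (e.g., grids)
                      [6.0, 8.0, 6.0],       # cols: criteria
                      [8.0, 6.0, 7.0]])
        w = np.array([0.5, 0.3, 0.2])        # criteria weights (e.g., from AHP)
        benefit = np.array([True, True, False])  # False marks a cost criterion

        V = (X / np.linalg.norm(X, axis=0)) * w  # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)
        print(np.argsort(-closeness))        # alternatives, best first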

  8. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures, including Minkowski distances, Mahalanobis distances, Hausdorff distances and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.

  9. A method for analyzing the business case for provider participation in the National Cancer Institute's Community Clinical Oncology Program and similar federally funded, provider-based research networks.

    Science.gov (United States)

    Reiter, Kristin L; Song, Paula H; Minasian, Lori; Good, Marjorie; Weiner, Bryan J; McAlearney, Ann Scheck

    2012-09-01

    The Community Clinical Oncology Program (CCOP) plays an essential role in the efforts of the National Cancer Institute (NCI) to increase enrollment in clinical trials. Currently, there is little practical guidance in the literature to assist provider organizations in analyzing the return on investment (ROI), or business case, for establishing and operating a provider-based research network (PBRN) such as the CCOP. In this article, the authors present a conceptual model of the business case for PBRN participation, a spreadsheet-based tool and advice for evaluating the business case for provider participation in a CCOP organization. A comparative, case-study approach was used to identify key components of the business case for hospitals attempting to support a CCOP research infrastructure. Semistructured interviews were conducted with providers and administrators. Key themes were identified and used to develop the financial analysis tool. Key components of the business case included CCOP start-up costs, direct revenue from the NCI CCOP grant, direct expenses required to maintain the CCOP research infrastructure, and incidental benefits, most notably downstream revenues from CCOP patients. The authors recognized the value of incidental benefits as an important contributor to the business case for CCOP participation; however, currently, this component is not calculated. The current results indicated that providing a method for documenting the business case for CCOP or other PBRN involvement will contribute to the long-term sustainability and expansion of these programs by improving providers' understanding of the financial implications of participation. Copyright © 2011 American Cancer Society.

  10. Multi-Scale Scattering Transform in Music Similarity Measuring

    Science.gov (United States)

    Wang, Ruobai

    The scattering transform is a Mel-frequency-spectrum-based, time-deformation-stable method that can be used to evaluate music similarity. Compared with dynamic time warping, it has better performance in detecting similar audio signals under local time-frequency deformation. Multi-scale scattering combines scattering transforms of different window lengths. This paper argues that the multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measuring. We tested the performance of the multi-scale scattering transform against other popular methods, with data designed to represent different conditions.
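
    Dynamic time warping, the baseline the paper argues against, aligns two sequences by minimizing the cumulative frame-to-frame cost over admissible warping paths. A minimal sketch on 1-D features; a real system would compare scattering or MFCC frames instead.

        import numpy as np

        def dtw_distance(x, y):
            """DTW between two 1-D sequences with absolute-difference cost."""
            n, m = len(x), len(y)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(x[i - 1] - y[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
        b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same shape, shifted in time
        print(dtw_distance(a, b))  # 0.0: the shift is absorbed by the warping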

  11. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Keyes, David E.

    2016-01-01

    model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization

  12. Methods of making high performance compacts and products

    International Nuclear Information System (INIS)

    Fey, M.G.; Iyer, N.C.; Male, A.T.; Lovic, W.R.

    1990-01-01

    This patent describes a method of forming a pressed, dense compact. It comprises: providing a compactable particulate combination of: Class 1 metals selected from the group consisting of Ag, Cu, Al, and mixtures thereof, with material selected from the class consisting of CdO, SnO, SnO₂, C, Co, Ni, Fe, Cr, Cr₃C₂, Cr₇C₃, W, WC, W₂C, WB, Mo, Mo₂C, MoB, Mo₂B, TiC, TiN, TiB₂, Si, SiC, Si₃N₄, and mixtures thereof; uniaxially pressing the particulate combination to provide a compact; placing at least one compact in an open pan; evacuating air from the pan; sealing the open top portion of the pan; stacking the pans next to each other, with plates having a high electrical resistance disposed between each pan so that the pans and plates alternate with each other, where a layer of thermally conductive, granular, pressure transmitting material is disposed between each pan and plate, which granular material acts to provide heat transfer and uniform mechanical loading to the compacts in the pans upon subsequent pressing; placing the stack in a press, passing an electrical current through the pans and high electrical resistance plates to cause a heating effect on the compacts in the pans, and uniaxially pressing the alternating pans and plates; cooling and releasing pressure on the alternating pans and plates; and separating the pans from the plates and the compacts from the pans

  13. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution over different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has bad quality. A key-frame selection algorithm is flexible with respect to changes in the video, but such methods omit the temporal information of the video sequence. To minimize distortion between the original video and the received video, we added a sequential distortion minimization algorithm. Its aim was to create a new video, better than the received video and without significant loss of content relative to the original, corrected sequentially. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8-QAM modulation. The best video transmission was also investigated using the SEDIM (Sequential Distortion Minimization) method and without SEDIM. The experimental results showed that the average PSNR (Peak Signal to Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB, and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
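
    The PSNR figures quoted above follow the standard definition PSNR = 10 * log10(MAX^2 / MSE), computed between original and received frames. A minimal sketch with synthetic frames:

        import numpy as np

        def psnr(original, received, max_val=255.0):
            mse = np.mean((original.astype(float) - received.astype(float)) ** 2)
            if mse == 0:
                return float("inf")  # identical frames
            return 10.0 * np.log10(max_val ** 2 / mse)

        rng = np.random.default_rng(0)
        frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
        noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
        print(round(psnr(frame, noisy), 2))  # higher is better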

  14. The performance of selected semi-empirical and DFT methods in studying C60 fullerene derivatives

    Science.gov (United States)

    Sikorska, Celina; Puzyn, Tomasz

    2015-11-01

    The capability of reproducing the open circuit voltages (V_oc) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open circuit voltages (V_oc), whereas the use of the B3LYP/6-31G(d) approach has been proven to assure highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). In order to express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), and that they significantly overestimate the energy of the highest occupied molecular orbital (E_HOMO). The use of the B3LYP functional, however, is recommended for studying methanofullerenes, which closely resemble the structure of PCBM, and for their modifications.
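
    The PCA step can be sketched with a plain SVD on an autoscaled descriptor matrix. The random matrix below merely stands in for the 19-molecule descriptor table, which is not reproduced in this record.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(19, 6))   # 19 molecules x 6 descriptors (placeholder)

        Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # autoscale descriptors
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

        scores = U * S                       # principal component scores
        explained = S**2 / np.sum(S**2)      # variance explained per component
        print(np.round(explained[:2], 3))    # first two PCs give the 2-D map
        print(np.round(scores[:3, :2], 3))   # coordinates of the first 3 molecules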

  15. The performance of selected semi-empirical and DFT methods in studying C60 fullerene derivatives

    International Nuclear Information System (INIS)

    Sikorska, Celina; Puzyn, Tomasz

    2015-01-01

    The capability of reproducing the open circuit voltages (V_oc) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open circuit voltages (V_oc), whereas the use of the B3LYP/6-31G(d) approach has been proven to assure highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). In order to express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), and that they significantly overestimate the energy of the highest occupied molecular orbital (E_HOMO). The use of the B3LYP functional, however, is recommended for studying methanofullerenes, which closely resemble the structure of PCBM, and for their modifications. (paper)

  16. Image magnification based on similarity analogy

    International Nuclear Information System (INIS)

    Chen Zuoping; Ye Zhenglin; Wang Shuxun; Peng Guohua

    2009-01-01

    Aiming at the high time complexity of the decoding phase in traditional image enlargement methods based on fractal coding, a novel image magnification algorithm is proposed in this paper which has the advantage of iteration-free decoding, by using the similarity analogy between an image and its zoom-out and zoom-in. A new pixel selection technique is also presented to further improve the performance of the proposed method. Furthermore, by combining some existing fractal zooming techniques, an efficient image magnification algorithm is obtained, which provides image quality as good as the state of the art while greatly decreasing the time complexity of the decoding phase.

  17. Revisiting Inter-Genre Similarity

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Gouyon, Fabien

    2013-01-01

    We revisit the idea of ``inter-genre similarity'' (IGS) for machine learning in general, and music genre recognition in particular. We show analytically that the probability of error for IGS is higher than naive Bayes classification with zero-one loss (NB). We show empirically that IGS does not perform well, even for data that satisfies all its assumptions.

  18. Similarity or difference?

    DEFF Research Database (Denmark)

    Villadsen, Anders Ryom

    2013-01-01

    While the organizational structures and strategies of public organizations have attracted substantial research attention among public management scholars, little research has explored how these organizational core dimensions are interconnected and influenced by pressures for similarity....... In this paper I address this topic by exploring the relation between expenditure strategy isomorphism and structure isomorphism in Danish municipalities. Different literatures suggest that organizations exist in concurrent pressures for being similar to and different from other organizations in their field......-shaped relation exists between expenditure strategy isomorphism and structure isomorphism in a longitudinal quantitative study of Danish municipalities....

  19. Comparing Harmonic Similarity Measures

    NARCIS (Netherlands)

    de Haas, W.B.; Robine, M.; Hanna, P.; Veltkamp, R.C.; Wiering, F.

    2010-01-01

    We present an overview of the most recent developments in polyphonic music retrieval and an experiment in which we compare two harmonic similarity measures. In contrast to earlier work, in this paper we specifically focus on the symbolic chord description as the primary musical representation and

  20. Performance Analyses of Counter-Flow Closed Wet Cooling Towers Based on a Simplified Calculation Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wei

    2017-02-01

    Full Text Available As one of the most widely used units in water cooling systems, closed wet cooling towers (CWCTs) have two typical counter-flow constructions, in which the spray water flows from the top to the bottom, and the moist air and cooling water flow in the opposite direction vertically (parallel) or horizontally (cross), respectively. This study aims to present a simplified calculation method for conveniently and accurately analyzing the thermal performance of the two types of counter-flow CWCTs, viz. the parallel counter-flow CWCT (PCFCWCT) and the cross counter-flow CWCT (CCFCWCT). A simplified cooling capacity model that includes just two characteristic parameters is developed. The Levenberg–Marquardt method is employed to determine the model parameters by curve fitting of experimental data. Based on the proposed model, the predicted outlet temperatures of the process water are compared with the measurements of a PCFCWCT and a CCFCWCT, respectively, reported in the literature. The results indicate that the predicted values agree well with the experimental data in previous studies. The maximum absolute errors in predicting the process water outlet temperatures are 0.20 and 0.24 °C for the PCFCWCT and CCFCWCT, respectively. These results indicate that the simplified method is reliable for performance prediction of counter-flow CWCTs. Although the flow patterns of the two towers are different, the variation trends of thermal performance are similar to each other under various operating conditions. The inlet air wet-bulb temperature, inlet cooling water temperature, air flow rate, and cooling water flow rate are crucial for determining the cooling capacity of a counter-flow CWCT, while the cooling tower effectiveness is mainly determined by the flow rates of air and cooling water. Compared with the CCFCWCT, the PCFCWCT is much more applicable in a large-scale cooling water system, and the superiority would be amplified when the scale of water
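
    The record does not reproduce the two-parameter model itself, so the sketch below fits a hypothetical effectiveness-style expression with SciPy's Levenberg-Marquardt backend purely to illustrate the fitting step; both the model form and the data points are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def capacity_model(ratio, c1, c2):
            # assumed form: effectiveness vs. air/water flow-rate ratio
            return 1.0 - np.exp(-c1 * ratio**c2)

        ratio = np.array([0.5, 0.8, 1.0, 1.3, 1.6, 2.0])
        eff = np.array([0.35, 0.46, 0.52, 0.59, 0.64, 0.70])  # synthetic data

        (c1, c2), _ = curve_fit(capacity_model, ratio, eff,
                                p0=[1.0, 1.0], method="lm")
        print(round(c1, 3), round(c2, 3))  # the two characteristic parameters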

  1. Development of similarity theory for control systems

    Science.gov (United States)

    Myshlyaev, L. P.; Evtushenko, V. F.; Ivushkin, K. A.; Makarov, G. V.

    2018-05-01

    The area of effective application of the traditional similarity theory and the necessity of its development for control systems are discussed. The main statements underlying the similarity theory of control systems are given. The conditions for the similarity of control systems and the need for similarity control are formulated. Methods and algorithms for estimating and controlling the similarity of control systems, and the results of research on control systems based on their similarity, are presented. Similarity control of systems includes the current evaluation of the degree of similarity of control systems and the development of similarity-controlling actions, with the corresponding targeted change in the state of any element of the control systems.

  2. What Math Matters? Types of Mathematics Knowledge and Relationships to Methods Course Performance

    Science.gov (United States)

    Kajander, Ann; Holm, Jennifer

    2016-01-01

    This study investigated the effect of a departmental focus on enhanced mathematics knowledge for teaching on overall performance in upper elementary mathematics methods courses. The study examined the effect of performance in a new mathematics-for-teaching course on performance at the methods course level. In addition, the effect of performance…

  3. Performance of the Lot Quality Assurance Sampling Method Compared to Surveillance for Identifying Inadequately-performing Areas in Matlab, Bangladesh

    OpenAIRE

    Bhuiya, Abbas; Hanifi, S.M.A.; Roy, Nikhil; Streatfield, P. Kim

    2007-01-01

    This paper compared the performance of the lot quality assurance sampling (LQAS) method in identifying inadequately-performing health work-areas with that of using health and demographic surveillance system (HDSS) data and examined the feasibility of applying the method by field-level programme supervisors. The study was carried out in Matlab, the field site of ICDDR,B, where a HDSS has been in place for over 30 years. The LQAS method was applied in 57 work-areas of community health workers i...

  4. Investigation on multi-objective performance optimization algorithm application of fan based on response surface method and entropy method

    Science.gov (United States)

    Zhang, Li; Wu, Kexin; Liu, Yang

    2017-12-01

    A multi-objective performance optimization method is proposed, solving the problem of balancing the static characteristics and the aerodynamic noise of a small fan over single structural parameters. In this method, three structural parameters are selected as the optimization variables, and the static pressure efficiency and the aerodynamic noise of the fan are regarded as the multi-objective performance. Furthermore, the response surface method and the entropy method are used to establish the optimization function between the optimization variables and the multi-objective performances. Finally, the optimized model is found when the optimization function reaches its maximum value. Experimental data show that the optimized model not only enhances the static characteristics of the fan but also noticeably reduces the noise. The results of the study will provide some reference for the optimization of multi-objective performance of other types of rotating machinery.
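
    The entropy method referenced above assigns larger weights to objectives whose values vary more across candidate designs. A minimal sketch with an invented two-column decision matrix (efficiency to maximize, noise to minimize) standing in for the paper's data:

        import numpy as np

        X = np.array([[0.62, 38.1],   # rows: candidate fan designs
                      [0.66, 39.5],   # cols: static pressure efficiency, noise (dB)
                      [0.59, 36.8]])

        P = X / X.sum(axis=0)                          # normalize each criterion
        k = 1.0 / np.log(X.shape[0])
        entropy = -k * np.sum(P * np.log(P), axis=0)   # entropy per criterion
        weights = (1 - entropy) / np.sum(1 - entropy)  # more dispersion, more weight
        print(np.round(weights, 3))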

  5. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  6. Assessing the stability of free-energy perturbation calculations by performing variations in the method

    Science.gov (United States)

    Manzoni, Francesco; Ryde, Ulf

    2018-03-01

    We have calculated relative binding affinities for eight tetrafluorophenyl-triazole-thiogalactoside inhibitors of galectin-3 with the alchemical free-energy perturbation approach. We obtain a mean absolute deviation from experimental estimates of only 2-3 kJ/mol and a correlation coefficient (R²) of 0.5-0.8 for seven relative affinities spanning a range of up to 11 kJ/mol. We also studied the effect of using different methods to calculate the charges of the inhibitor and different sizes of the perturbed group (the atoms that are described by soft-core potentials and are allowed to have differing coordinates). However, the various approaches gave rather similar results and it is not possible to point out one approach as consistently and significantly better than the others. Instead, we suggest that such small and reasonable variations in the computational method can be used to check how stable the calculated results are and to obtain a more accurate estimate of the uncertainty than if performing only one calculation with a single computational setup.
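
    The two error statistics quoted (mean absolute deviation and R²) are straightforward to compute from paired calculated and experimental relative affinities; the numbers below are placeholders, not the paper's data.

        import numpy as np

        dG_exp = np.array([0.0, 2.1, -3.4, 5.0, -1.2, 4.3, -0.8])    # kJ/mol
        dG_calc = np.array([0.5, 1.0, -2.1, 6.8, -2.5, 3.1, -1.9])   # kJ/mol

        mad = np.mean(np.abs(dG_calc - dG_exp))      # mean absolute deviation
        r = np.corrcoef(dG_calc, dG_exp)[0, 1]       # Pearson correlation
        print(round(mad, 2), "kJ/mol; R^2 =", round(r**2, 2))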

  7. Can We Use Polya’s Method to Improve Students’ Performance in the Statistics Classes?

    Directory of Open Access Journals (Sweden)

    Indika Wickramasinghe

    2015-01-01

    Full Text Available In this study, Polya’s problem-solving method is introduced in a statistics class in an effort to enhance students’ performance. The method was taught in one of two introductory-level statistics classes taught by the same instructor, and a comparison was made between the performances of the two classes. The results indicate there was a significant improvement in the students’ performance in the class in which Polya’s method was introduced.

  8. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    Science.gov (United States)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The generated performance curves from both methods are compared with an experimental setup in accordance with the AMCA fan performance testing standard.
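
    Either method ultimately yields a set of (flow rate, pressure rise) operating points at fixed rotational speed, from which the performance curve is assembled. A sketch of that post-processing step, with placeholder numbers standing in for CFD output:

        import numpy as np

        flow = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])  # m^3/s, one per CFD run
        dp = np.array([410, 395, 360, 305, 230, 135])    # Pa, static pressure rise

        curve = np.poly1d(np.polyfit(flow, dp, deg=2))   # quadratic fan curve fit
        print(round(curve(0.7), 1))  # interpolated pressure rise at 0.7 m^3/s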

  9. Natural texture retrieval based on perceptual similarity measurement

    Science.gov (United States)

    Gao, Ying; Dong, Junyu; Lou, Jianwen; Qi, Lin; Liu, Jun

    2018-04-01

    A typical texture retrieval system performs feature comparison and might not be able to make human-like judgments of image similarity. Meanwhile, it is commonly known that perceptual texture similarity is difficult to describe with traditional image features. In this paper, we propose a new texture retrieval scheme based on perceptual texture similarity. The key to the proposed scheme is that prediction of perceptual similarity is performed by learning a non-linear mapping from the image feature space to the perceptual texture space using Random Forest. We test the method on a natural texture dataset and apply it to a new wallpaper dataset. Experimental results demonstrate that the proposed texture retrieval scheme with perceptual similarity improves retrieval performance over traditional image features.
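
    The central step, learning a non-linear map from image features to human similarity judgments with a Random Forest, can be sketched with scikit-learn; the features and perceptual scores below are synthetic stand-ins for the paper's data.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)
        features = rng.normal(size=(200, 16))     # image features per texture pair
        perceptual = rng.uniform(0, 1, size=200)  # human-rated similarity scores

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(features, perceptual)
        print(model.predict(features[:3]))        # predicted perceptual similarity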

  10. Improved resins and novel materials and methods for solid phase extraction and high performance liquid chromatography

    Energy Technology Data Exchange (ETDEWEB)

    Freeze, Ronald [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solid-phase extraction (SPE) has grown to be one of the most widely used methods for isolation and preconcentration of a vast range of compounds from aqueous solutions. By modifying polymeric SPE resins with chelating functional groups, the selective uptake of metals was accomplished. The resin, along with adsorbed metals, was vaporized in the ICP and detection of the metals was then possible using either mass or emission spectroscopy. Drug analyses in biological fluids have received heightened attention as drug testing is on the increase both in sports and in the work environment. By using a direct-injection technique, biological fluids can be injected directly into the liquid chromatographic system with no pretreatment. A new surfactant, a sulfonated form of Brij-30 (Brij-S) is shown to prevent the uptake of serum proteins on commercial HPLC columns by forming a thin coating on the silica C18 surface. Excellent separations of eight or more drugs with a wide range of retention times were obtained. The separations had sharper peaks and lower retention times than similar separations performed with the surfactant sodium dodecylsulfate (SDS). Quantitative recovery of a number of drugs with limits of detection near 1 ppm with a 5 μl injection volume were obtained. Finally, a method for solid-phase extraction in a syringe is introduced. The system greatly reduced the volume of solvent required to elute adsorbed analytes from the SPE bed while providing a semi-automated setup. SPE in a syringe consists of a very small bed of resin-loaded membrane packed into a GC or HPLC syringe. After extraction, elution was performed with just a few μl of solvent. This small elution volume allowed injection of the eluent directly from the syringe into the chromatographic system, eliminating the handling problems associated with such small volumes.

  11. A high-performance liquid chromatography method for the serotonin release assay is equivalent to the radioactive method.

    Science.gov (United States)

    Sono-Koree, N K; Crist, R A; Frank, E L; Rodgers, G M; Smock, K J

    2016-02-01

    The serotonin release assay (SRA) is considered the gold standard laboratory test for heparin-induced thrombocytopenia (HIT). The historic SRA method uses platelets loaded with radiolabeled serotonin to evaluate platelet activation by HIT immune complexes. However, a nonradioactive method is desirable. We report the performance characteristics of a high-performance liquid chromatography (HPLC) SRA method, including correlation with a reference laboratory using the radioactive method. Serotonin released from reagent platelets was quantified by HPLC using fluorescent detection. Results were expressed as % release and classified as positive, negative, or indeterminate based on previously published cutoffs. Serum samples from 250 subjects with suspected HIT were tested with the HPLC-SRA and with the radioactive method. Concordant classifications were observed in 230 samples (92%). Sera from 41 healthy individuals tested negative. Between-run imprecision studies showed acceptable standard deviations. The HPLC-SRA method has performance characteristics equivalent to the historic radioactive method but avoids the complexities of working with radioactivity. © 2015 John Wiley & Sons Ltd.

  12. Effect of the precipitation interpolation method on the performance of a snowmelt runoff model

    Science.gov (United States)

    Jacquin, Alexandra

    2014-05-01

    , HBV parameters are calibrated with respect to Nash-Sutcliffe efficiency. The performance of HBV in the study catchment is not satisfactory. Although volumetric errors are modest, efficiency values are lower than 70%. Discharge estimates resulting from the application of TP, MFF and IDW obtain similar model efficiencies and volumetric errors. These error statistics moderately improve if KED or OIM are used instead. Even though the quality of precipitation estimates of distinct interpolation methods is dissimilar, the results of this study show that these differences do not necessarily produce noticeable changes in HBV's model performance statistics. This situation arises because the calibration of the model parameters allows some degree of compensation of deficient areal precipitation estimates, mainly through the adjustment of model simulated evaporation and glacier melt, as revealed by the analysis of water balances. In general, even if there is a good agreement between model estimated and observed discharge, this information is not sufficient to assert that the internal hydrological processes of the catchment are properly simulated by a watershed model. Other calibration criteria should be incorporated if a more reliable representation of these processes is desired. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279. The HBV Light software used in this study was kindly provided by J. Seibert, Department of Geography, University of Zürich.
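
    Of the interpolation methods compared in this study, inverse distance weighting (IDW) is the simplest to sketch: each target point receives a weighted average of gauge values, with weights proportional to an inverse power of distance. Gauge locations and precipitation values below are made up.

        import numpy as np

        def idw(xy_gauges, values, xy_target, power=2.0):
            d = np.linalg.norm(xy_gauges - xy_target, axis=1)
            if np.any(d == 0):
                return values[np.argmin(d)]  # target coincides with a gauge
            w = 1.0 / d**power
            return np.sum(w * values) / np.sum(w)

        gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        precip = np.array([12.0, 20.0, 8.0, 16.0])  # mm/day at each gauge
        print(round(idw(gauges, precip, np.array([3.0, 4.0])), 2))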

  13. More Similar Than Different

    DEFF Research Database (Denmark)

    Pedersen, Mogens Jin

    2018-01-01

    What role do employee characteristics play in the success of different personnel management practices aimed at high performance? Using data from a randomized survey experiment among 5,982 individuals of all ages, this article examines how gender conditions the compliance effects of different incent...

  14. [The effects of 6 months' multimodal training on functional performance, strength, endurance, and body mass index of older individuals. Are the benefits of training similar among women and men?].

    Science.gov (United States)

    Gudlaugsson, Janus; Aspelund, Thor; Gudnason, Vilmundur; Olafsdottir, Anna Sigridur; Jonsson, Palmi V; Arngrimsson, Sigurbjorn Arni; Johannsson, Erlingur

    2013-07-01

    Good functional performance in elderly people greatly improves their chances of independence and well-being. Conversely, poor functional performance can impair their capability of managing the activities of daily life. The main goal of this study was to investigate the effects of a 6-month multimodal training intervention on the physical performance of males and females, possible gender differences, and the outcome 6 and 12 months after its completion. This study examined 71-90 year old healthy seniors (n=117) participating in the AGES Reykjavik Study. It was a randomized and controlled cross-over trial, conducted in three 6-month phases (time-points). After enrollment and baseline assessments, the study group was divided into two. Group 1 received 6 months' training while group 2 served as a control. In the second 6-month phase, group 1 received no formal training while group 2 did. In the third phase, neither group received training. The groups' physical conditions were assessed after each phase. After 6 months' training, a 32% improvement was seen in physical activity among males. Both sexes retained long-term effects of the training on physical performance and dynamic balance for at least 12 months. Multimodal training intervention has positive effects on physical performance in older individuals; the sexes respond similarly to the training and retain the achieved improvement for at least 12 months. The research indicates that moderate and systematic training could become a part of conventional health services for this age group.

  15. Performance of the Different Methods of Study Financing: A Measurement through the Data Envelopment Analysis Method

    OpenAIRE

    Valérie Vierstraete; Eric Yergeau

    2010-01-01

    Financial hardship can significantly undermine post-secondary students’ ability to attain their academic goals: completing their training and obtaining degrees with good grades. This study considers which method of financing studies—loans and bursaries from the Government, student aid granted directly by universities, scholarships or on-campus jobs, off-campus jobs or parental financial contribution—will best help students attain academic success. For these purposes, we use a non-parametric d...

  16. Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures

    Science.gov (United States)

    Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.

    2016-05-01

    Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing, and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring-pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures drawn from a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created with three and four material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90% with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures. The spectral similarity of the inputs to the
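
    A common non-iterative baseline for the linear mixing model described above is non-negative least squares on y = E·a; a minimal sketch with synthetic spectra (not the SigDB signatures), with the sum-to-one constraint approximated by an appended weighted row.

        import numpy as np
        from scipy.optimize import nnls

        # Columns of E are pure endmember spectra (wavelengths x materials);
        # y is the measured mixed signature from a single pixel/point sensor.
        rng = np.random.default_rng(0)
        E = rng.random((200, 4))                 # synthetic library of 4 materials
        true_abund = np.array([0.5, 0.3, 0.2, 0.0])
        y = E @ true_abund

        # NNLS enforces physically meaningful (>= 0) abundances; a sum-to-one
        # constraint is approximated by a heavily weighted row of ones.
        w = 10.0
        E_aug = np.vstack([E, w * np.ones(E.shape[1])])
        y_aug = np.append(y, w)
        abund, residual = nnls(E_aug, y_aug)
        print(abund.round(3))                    # close to the true abundances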

  17. 46 CFR 57.06-3 - Method of performing production testing.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Method of performing production testing. 57.06-3 Section 57.06-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING WELDING AND BRAZING Production Tests § 57.06-3 Method of performing production testing. (a) Except as...

  18. 40 CFR 63.344 - Performance test requirements and test methods.

    Science.gov (United States)

    2010-07-01

    ... electroplating tanks or chromium anodizing tanks. The sampling time and sample volume for each run of Methods 306... Chromium Anodizing Tanks § 63.344 Performance test requirements and test methods. (a) Performance test... Emissions From Decorative and Hard Chromium Electroplating and Anodizing Operations,” appendix A of this...

  19. A method to evaluate process performance by integrating time and resources

    Science.gov (United States)

    Wang, Yu; Wei, Qingjie; Jin, Shuang

    2017-06-01

    The purpose of process mining is to improve the existing processes of the enterprise, so how to measure the performance of a process is particularly important. However, current research on performance evaluation methods is still insufficient: evaluations mainly use time or resources alone, and these basic statistics cannot evaluate process performance very well. In this paper, a method of evaluating the performance of a process based on both the time dimension and the resource dimension is proposed. This method can be used to measure the utilization and redundancy of resources in the process. This paper introduces the design principle and formula of the evaluation algorithm, followed by the design and implementation of the evaluation method. Finally, we use the evaluation method to analyse the event log from a telephone maintenance process and propose an optimization plan.

  20. Financial Performance of Pension Companies Operating in Turkey with Topsis Analysis Method

    OpenAIRE

    Gulsun Isseveroglu; Ozan Sezer

    2015-01-01

    In this study, the financial performance of the companies was analyzed by the TOPSIS method, using the financial tables of sixteen pension and life-pension companies. Firstly, financial ratios, which are important indicators of the financial power of companies, were determined and calculated for each company separately. The calculated ratios were then converted into a single performance score per company by using the TOPSIS method. Companies were sorted according to their calculated performance s...
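
    For readers unfamiliar with TOPSIS, the procedure that converts a matrix of financial ratios into one score per company fits in a few lines; the ratios, weights and benefit/cost labels below are hypothetical.

        import numpy as np

        def topsis(X, weights, benefit):
            # Rank alternatives (rows of X) by closeness to the ideal solution;
            # benefit[j] is True when a larger value of criterion j is better.
            X = np.asarray(X, dtype=float)
            R = X / np.linalg.norm(X, axis=0)          # vector-normalize criteria
            V = R * np.asarray(weights, dtype=float)   # apply criterion weights
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - anti, axis=1)
            return d_neg / (d_pos + d_neg)             # higher score = better

        # Three hypothetical companies scored on two benefit ratios
        # (higher is better) and one risk ratio (lower is better).
        scores = topsis([[0.12, 1.8, 0.40],
                         [0.10, 2.1, 0.35],
                         [0.15, 1.5, 0.55]],
                        weights=[0.4, 0.3, 0.3],
                        benefit=[True, True, False])
        print(scores.argsort()[::-1])   # company indices, best first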

  1. Identification of robust statistical downscaling methods based on a comprehensive suite of performance metrics for South Korea

    Science.gov (United States)

    Eum, H. I.; Cannon, A. J.

    2015-12-01

    Climate models are a key tool for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, in particular for a region characterized by complex terrain such as the Korean peninsula. Therefore, a downscaling procedure is essential to assess regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. By a split sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested on the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distribution, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR, and all methods lead to large improvements in representing all performance metrics. According to the seasonal performance metrics evaluated, when TOPSIS is applied, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is recognized as near-perfect climate data in climate studies. Therefore, the

  2. Interpolation in Time Series : An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    NARCIS (Netherlands)

    Lepot, M.J.; Aubin, Jean Baptiste; Clemens, F.H.L.R.

    2017-01-01

    A thorough review has been performed on interpolation methods to fill gaps in time-series, efficiency criteria, and uncertainty quantifications. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are

  3. Analysis and performance estimation of the Conjugate Gradient method on multiple GPUs

    NARCIS (Netherlands)

    Verschoor, M.; Jalba, A.C.

    2012-01-01

    The Conjugate Gradient (CG) method is a widely-used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large number of Sparse-Matrix Vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of
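
    For reference, the serial CG iteration whose SpMV products and vector reductions are the targets of such parallelization looks as follows; a dense NumPy sketch, where a multi-GPU implementation would replace these operations with device kernels.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
            # Solve A x = b for symmetric positive-definite A. Each iteration
            # is dominated by one matrix-vector product (the SpMV kernel in a
            # sparse setting) plus a few vector reductions.
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p                      # SpMV
                alpha = rs / (p @ Ap)           # vector reduction
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(conjugate_gradient(A, b))         # approx [0.0909, 0.6364]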

  4. Theoretical comparison of performance using transfer functions for reactivity meters based on inverse kinetic method and simple feedback method

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro; Tashiro, Shoichi; Tojo, Masayuki

    2017-01-01

    The performance of two digital reactivity meters, one based on the conventional inverse kinetic method and the other based on simple feedback theory, is compared analytically using their respective transfer functions. The latter was proposed by one of the authors. It has been shown that the performance of the two reactivity meters becomes almost identical when proper system parameters are selected for each reactivity meter. A new correlation between the system parameters of the two reactivity meters is found. With this correlation, filter designers can easily determine the system parameters for the respective reactivity meters to obtain identical performance. (author)

  5. Performance of machine learning methods for ligand-based virtual screening.

    Science.gov (United States)

    Plewczynski, Dariusz; Spieser, Stéphane A H; Koch, Uwe

    2009-05-01

    Computational screening of compound databases has become increasingly popular in pharmaceutical research. This review focuses on the evaluation of ligand-based virtual screening using active compounds as templates in the context of drug discovery. Ligand-based screening techniques are based on comparative molecular similarity analysis of compounds with known and unknown activity. We provide an overview of publications that have evaluated different machine learning methods, such as support vector machines, decision trees, ensemble methods such as boosting, bagging and random forests, clustering methods, neuronal networks, naïve Bayesian, data fusion methods and others.
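
    The baseline that such machine learning methods are typically compared against is plain similarity ranking against an active template; a toy sketch with hypothetical bit-set fingerprints (a real workflow would compute fingerprints with a cheminformatics toolkit).

        def tanimoto(fp_a, fp_b):
            # Tanimoto coefficient between two fingerprints represented as
            # sets of on-bit indices: |A & B| / |A | B|.
            inter = len(fp_a & fp_b)
            return inter / (len(fp_a) + len(fp_b) - inter)

        # Rank a small hypothetical library against one active template.
        template = {1, 4, 7, 9, 15}
        library = {"cpd_A": {1, 4, 7, 9, 20},
                   "cpd_B": {2, 3, 5},
                   "cpd_C": {1, 4, 9, 15, 16}}
        ranked = sorted(library, key=lambda c: tanimoto(template, library[c]),
                        reverse=True)
        print(ranked)   # most similar compounds first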

  6. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    Directory of Open Access Journals (Sweden)

    Mathieu Lepot

    2017-10-01

    Full Text Available A thorough review has been performed on interpolation methods to fill gaps in time-series, efficiency criteria, and uncertainty quantifications. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are many methods and criteria to estimate the efficiencies of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, even when uncertainties are estimated according to standard methods, the prediction uncertainty is not taken into account: a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.
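
    As a concrete instance of the simplest reviewed family, a sketch of linear gap-filling with a crude spread-based uncertainty proxy; the series is synthetic, and the review's point is precisely that such uncertainty estimates are rarely reported.

        import numpy as np

        # Synthetic time series with gaps (NaNs).
        t = np.arange(10.0)
        y = np.array([1.0, 1.2, 1.1, np.nan, np.nan, 1.6, 1.7, np.nan, 2.0, 2.1])

        known = ~np.isnan(y)
        y_filled = y.copy()
        y_filled[~known] = np.interp(t[~known], t[known], y[known])

        # One rough uncertainty proxy (not a true prediction interval): the
        # standard deviation of one-step differences among observed points.
        sigma = np.std(np.diff(y[known]))
        print(y_filled.round(2), "+/-", round(float(sigma), 2))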

  7. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    Science.gov (United States)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques prominently used for solving unconstrained optimization problems due to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches in given conditions. Theoretically, proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. On the other hand, the NRM1 method performs better under the inexact line search than under the exact line search.

  8. Modeling of similar economies

    Directory of Open Access Journals (Sweden)

    Sergey B. Kuznetsov

    2017-06-01

    Full Text Available Objective: to obtain dimensionless criteria – economic indices characterizing the national economy and not depending on its size. Methods: mathematical modeling, theory of dimensions, processing of statistical data. Results: basing on differential equations describing the national economy with account of economic environment resistance, two dimensionless criteria are obtained which allow comparing economies regardless of their sizes. With the theory of dimensions we show that the obtained indices are not accidental. We demonstrate the application of the obtained dimensionless criteria to the analysis of the behavior of certain countries' economies. Scientific novelty: the dimensionless criteria obtained are economic indices which allow comparing economies regardless of their sizes and analyzing the dynamic changes in the economies with time. Practical significance: the obtained results can be used for dynamic and comparative analysis of different countries' economies regardless of their sizes.

  9. Similar speaker recognition using nonlinear analysis

    International Nuclear Information System (INIS)

    Seo, J.P.; Kim, M.S.; Baek, I.C.; Kwon, Y.H.; Lee, K.S.; Chang, S.W.; Yang, S.I.

    2004-01-01

    Speech features in conventional speaker identification systems are usually obtained by linear methods in spectral space. However, these methods have the drawback that speakers with similar voices cannot be distinguished, because the characteristics of their voices are also similar in spectral space. To overcome this difficulty of linear methods, we propose to use the correlation exponent in the nonlinear space as a new feature vector for speaker identification among persons with similar voices. We show that our proposed method markedly reduces the error rate of the speaker identification system for speakers with similar voices.
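
    A sketch of how a correlation exponent can be estimated from a signal via the Grassberger-Procaccia correlation sum; the embedding dimension, delay and radii below are illustrative assumptions, not parameters from the paper.

        import numpy as np

        def correlation_sum(series, r, dim=3, delay=1):
            # C(r): fraction of pairs of delay-embedded points closer than r.
            # The correlation exponent is the slope of log C(r) vs log r.
            n = len(series) - (dim - 1) * delay
            X = np.column_stack([series[i * delay:i * delay + n]
                                 for i in range(dim)])
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            iu = np.triu_indices(n, k=1)
            return np.mean(d[iu] < r)

        rng = np.random.default_rng(1)
        x = np.sin(np.linspace(0, 60, 1500)) + 0.05 * rng.standard_normal(1500)
        radii = np.logspace(-2, 0, 8)
        C = np.array([correlation_sum(x, r) for r in radii])
        slope = np.polyfit(np.log(radii[C > 0]), np.log(C[C > 0]), 1)[0]
        print(slope)   # estimate of the correlation exponent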

  10. Performance of Firth-and logF-type penalized methods in risk prediction for small or sparse binary data.

    Science.gov (United States)

    Rahman, M Shafiqur; Sultana, Mahbuba

    2017-02-23

    When developing risk models for binary data with small or sparse data sets, standard maximum likelihood estimation (MLE) based logistic regression faces several problems, including biased or infinite estimates of the regression coefficients and frequent convergence failure of the likelihood due to separation. The problem of separation occurs commonly even if the sample size is large but there is a sufficient number of strong predictors. In the presence of separation, even if one develops the model, it produces an overfitted model with poor predictive performance. Firth- and logF-type penalized regression methods are popular alternatives to MLE, particularly for solving the separation problem. Despite their attractive advantages, their use in risk prediction is very limited. This paper evaluated these methods in risk prediction in comparison with MLE and other commonly used penalized methods such as ridge. The predictive performance of the methods was evaluated through assessing calibration, discrimination and overall predictive performance using an extensive simulation study. Further, an illustration of the methods was provided using a real data example with low prevalence of the outcome. The MLE showed poor performance in risk prediction in small or sparse data sets. All penalized methods offered some improvements in calibration, discrimination and overall predictive performance. Although the Firth- and logF-type methods showed almost equal amounts of improvement, Firth-type penalization produces some bias in the average predicted probability, and the amount of bias is even larger than that produced by MLE. Of the logF(1,1) and logF(2,2) penalizations, logF(2,2) provides slight bias in the estimate of the regression coefficient of a binary predictor, and logF(1,1) performed better in all aspects. Similarly, ridge performed well in discrimination and overall predictive performance but it often produces an underfitted model and has a high rate of convergence failure (even the rate is higher than that
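
    A compact sketch of Firth-type penalization (Jeffreys prior) in the standard iteratively reweighted formulation, on a toy separated dataset where plain MLE would diverge; this follows the usual hat-diagonal score adjustment, not the authors' specific code.

        import numpy as np

        def firth_logistic(X, y, max_iter=100, tol=1e-8):
            # The score is adjusted with the hat-matrix diagonal h, which
            # keeps the estimates finite even under separation.
            n, p = X.shape
            beta = np.zeros(p)
            for _ in range(max_iter):
                mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
                W = mu * (1.0 - mu)
                XtWX = X.T @ (W[:, None] * X)
                # Diagonal of H = W^{1/2} X (X'WX)^{-1} X' W^{1/2}
                h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(XtWX), X) * W
                score = X.T @ (y - mu + h * (0.5 - mu))   # Firth-adjusted score
                step = np.linalg.solve(XtWX, score)
                beta += step
                if np.max(np.abs(step)) < tol:
                    break
            return beta

        # A separated toy dataset: plain MLE diverges, Firth does not.
        X = np.column_stack([np.ones(6), [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]])
        y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
        print(firth_logistic(X, y))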

  11. A comparative performance study of sound zoning methods in a reflective environment

    DEFF Research Database (Denmark)

    Olik, Marek; Francombe, Jon; Coleman, Philip

    2013-01-01

    Whilst sound zoning methods have typically been studied under anechoic conditions, it is desirable to evaluate the performance of various methods in a real room. Three control methods were implemented (delay and sum, DS; acoustic contrast control, ACC; and pressure matching, PM) on two regular 24...... monophonic auralisations from measured system responses to collect ratings of perceived distraction due to the alternate audio programme. Distraction ratings were affected by control method and programme material.

  12. PERFORMANCE EVALUATION OF TURKISH TYPE A MUTUAL FUNDS AND PENSION STOCK FUNDS BY USING TOPSIS METHOD

    Directory of Open Access Journals (Sweden)

    Nesrin ALPTEKIN

    2009-07-01

    Full Text Available In this paper, the performance of Turkish Type A mutual funds and pension stock funds is evaluated by using the TOPSIS method, a multicriteria decision-making approach. Both fund types hold stocks in their portfolios, so they can be compared with each other. Generally, mutual or pension funds are evaluated according to their risk and return. For this, traditional fund performance measurement techniques such as the Sharpe ratio, Sortino ratio, Treynor index and Jensen's alpha are used. The TOPSIS method takes all of these fund performance measurement techniques into consideration and provides a more reasonable performance measurement.

  13. A new method to assess the sustainability performance of events: Application to the 2014 World Orienteering Championship

    International Nuclear Information System (INIS)

    Scrucca, Flavio; Severi, Claudio; Galvan, Nicola; Brunori, Antonio

    2016-01-01

    Nowadays, increasing attention from public and private agencies to the sustainability performance of events is observed, since it is recognized as a key issue in the context of sustainable development. Assessing the sustainability performance of events involves environmental, social and economic aspects; their impacts are complex and a quantitative assessment is often difficult. This paper presents a new quali-quantitative method developed to measure the sustainability of events, taking into account all their potential impacts. The 2014 World Orienteering Championship, held in Italy, was selected to test the proposed evaluation methodology. The total carbon footprint of the event was 165.34 tCO_2eq and the avoided emissions were estimated at 46 tCO_2eq. The adopted quali-quantitative method proved to be efficient in assessing the sustainability impacts and can be applied to the evaluation of similar events. - Highlights: • A quali-quantitative method to assess events' sustainability is presented. • All the methodological issues related to the method are explained. • The method is used to evaluate the sustainability of an international sports event. • The method proved valid to assess the event's sustainability level. • The carbon footprint of the event has been calculated.

  14. A new method to assess the sustainability performance of events: Application to the 2014 World Orienteering Championship

    Energy Technology Data Exchange (ETDEWEB)

    Scrucca, Flavio; Severi, Claudio [ECOCONGRESS Scientific Department, Via N. Cristofani 4/a, 06135 Perugia (Italy); Galvan, Nicola [WOC (World Orienteering Championship) Organising Committee 2014, Loc. Parco 3, 38056 Levico Terme (Italy); Brunori, Antonio, E-mail: info@pefc.it [PEFC Italy (Programme for Endorsement of Forest Certification schemes), Strada dei Loggi 22, 06135 Perugia (Italy)

    2016-01-15

    Nowadays, increasing attention from public and private agencies to the sustainability performance of events is observed, since it is recognized as a key issue in the context of sustainable development. Assessing the sustainability performance of events involves environmental, social and economic aspects; their impacts are complex and a quantitative assessment is often difficult. This paper presents a new quali-quantitative method developed to measure the sustainability of events, taking into account all their potential impacts. The 2014 World Orienteering Championship, held in Italy, was selected to test the proposed evaluation methodology. The total carbon footprint of the event was 165.34 tCO2eq and the avoided emissions were estimated at 46 tCO2eq. The adopted quali-quantitative method proved to be efficient in assessing the sustainability impacts and can be applied to the evaluation of similar events. - Highlights: • A quali-quantitative method to assess events' sustainability is presented. • All the methodological issues related to the method are explained. • The method is used to evaluate the sustainability of an international sports event. • The method proved valid to assess the event's sustainability level. • The carbon footprint of the event has been calculated.

  15. Validation of a simple method for predicting the disinfection performance in a flow-through contactor.

    Science.gov (United States)

    Pfeiffer, Valentin; Barbeau, Benoit

    2014-02-01

    Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted to five tracer test results derived from four Water Treatment Plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure of the Pseg method was suggested for the calculation of disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
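
    To make the role of hydraulics concrete, a sketch of an N-CSTR survival calculation assuming simple Chick-Watson kinetics and first-order disinfectant decay; this is an illustration of the tanks-in-series idea, not the validated Pseg implementation.

        def survival_n_cstr(k, c0, kd, tau, n):
            # Surviving fraction after n equal CSTRs in series: each tank at
            # steady state contributes a factor 1/(1 + k*C*tau/n), with the
            # disinfectant concentration decaying from tank to tank.
            s, c, dt = 1.0, c0, tau / n
            for _ in range(n):
                c = c / (1.0 + kd * dt)          # decayed concentration in tank
                s = s / (1.0 + k * c * dt)       # microbial survival in tank
            return s

        # One tank behaves like a CSTR; many tanks approach plug flow,
        # which achieves far more inactivation for the same contact time.
        for n in (1, 3, 10):
            print(n, survival_n_cstr(k=1.2, c0=1.0, kd=0.1, tau=10.0, n=n))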

  16. Reforming High School Science for Low-Performing Students Using Inquiry Methods and Communities of Practice

    Science.gov (United States)

    Bolden, Marsha Gail

    Some schools fall short of the high demand to increase science scores on state exams because low-performing students enter high school unprepared for high school science. Low-performing students are not successful in high school for many reasons. However, using inquiry methods has improved students' understanding of science concepts. The purpose of this qualitative research study was to investigate teachers' lived experiences with using inquiry methods to motivate low-performing high school science students in an inquiry-based program called Xtreem Science. Fifteen teachers were selected from the Xtreem Science program, a program designed to assist teachers in motivating struggling science students. The research questions involved understanding (a) teachers' experiences in using inquiry methods, (b) challenges teachers face in using inquiry methods, and (c) how teachers describe students' responses to inquiry methods. The strategy of data collection and analysis included capturing and understanding the teachers' feelings, perceptions, and attitudes in their lived experience of teaching using inquiry methods and their experience in motivating struggling students. Analysis of interview responses revealed that teachers had some good experiences with inquiry, expressed that inquiry impacted their teaching style and approach to topics, and felt that using inquiry methods impacted student learning for the better. Inquiry gave low-performing students opportunities to catch up and learn information that moved them to the next level of science courses. Implications for positive social change include providing teachers and school district leaders with information to help improve the performance of low-performing science students.

  17. Modeling Nanoscale FinFET Performance by a Neural Network Method

    Directory of Open Access Journals (Sweden)

    Jin He

    2017-07-01

    Full Text Available This paper presents a neural network method to model nanometer FinFET performance. The principle of the method is first introduced and its application in modeling the DC and conductance characteristics of a nanoscale FinFET transistor is demonstrated in detail. It is shown that this method does not need a parameter extraction routine, while its prediction of the transistor performance has a small relative error, within 1%, compared with measured data; thus this new method is as accurate as the physics-based surface potential model.

  18. A simple high performance liquid chromatography method for analyzing paraquat in soil solution samples.

    Science.gov (United States)

    Ouyang, Ying; Mansell, Robert S; Nkedi-Kizza, Peter

    2004-01-01

    A high performance liquid chromatography (HPLC) method with UV detection was developed to analyze paraquat (1,1'-dimethyl-4,4'-dipyridinium dichloride) herbicide content in soil solution samples. The analytical method was compared with the liquid scintillation counting (LSC) method using 14C-paraquat. Agreement obtained between the two methods was reasonable. However, the detection limit for paraquat analysis was 0.5 mg L(-1) by the HPLC method and 0.05 mg L(-1) by the LSC method. The LSC method was, therefore, 10 times more precise than the HPLC method for solution concentrations less than 1 mg L(-1). In spite of the higher detection limit, the UV (nonradioactive) HPLC method provides an inexpensive and environmentally safe means for determining paraquat concentration in soil solution compared with the 14C-LSC method.

  19. Systematic characterizations of text similarity in full text biomedical publications.

    Science.gov (United States)

    Sun, Zhaohui; Errami, Mounir; Long, Tara; Renard, Chris; Choradia, Nishant; Garner, Harold

    2010-09-15

    Computational methods have been used to find duplicate biomedical publications in MEDLINE. Full text articles are becoming increasingly available, yet the similarities among them have not been systematically studied. Here, we quantitatively investigated the full text similarity of biomedical publications in PubMed Central. 72,011 full text articles from PubMed Central (PMC) were parsed to generate three different datasets: full texts, sections, and paragraphs. Text similarity comparisons were performed on these datasets using the text similarity algorithm eTBLAST. We measured the frequency of similar text pairs and compared it among different datasets. We found that high abstract similarity can be used to predict high full text similarity with a specificity of 20.1% (95% CI [17.3%, 23.1%]) and sensitivity of 99.999%. Abstract similarity and full text similarity have a moderate correlation (Pearson correlation coefficient: -0.423) when the similarity ratio is above 0.4. Among pairs of articles in PMC, method sections are found to be the most repetitive (frequency of similar pairs, methods: 0.029, introduction: 0.0076, results: 0.0043). In contrast, among a set of manually verified duplicate articles, results are the most repetitive sections (frequency of similar pairs, results: 0.94, methods: 0.89, introduction: 0.82). Repetition of introduction and methods sections is more likely to be committed by the same authors (odds of a highly similar pair having at least one shared author, introduction: 2.31, methods: 1.83, results: 1.03). There is also significantly more similarity in pairs of review articles than in pairs containing one review and one nonreview paper (frequency of similar pairs: 0.0167 and 0.0023, respectively). While quantifying abstract similarity is an effective approach for finding duplicate citations, a comprehensive full text analysis is necessary to uncover all potential duplicate citations in the scientific literature and is helpful when
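
    A toy illustration of pairwise text similarity using difflib as a stand-in scorer; the study itself used the alignment-based eTBLAST algorithm, which is not reproduced here.

        from difflib import SequenceMatcher

        def similarity_ratio(text_a, text_b):
            # Symmetric similarity ratio in [0, 1] between two passages.
            return SequenceMatcher(None, text_a, text_b).ratio()

        methods_1 = "Cells were cultured in DMEM with 10% FBS at 37 C."
        methods_2 = "Cells were grown in DMEM supplemented with 10% FBS at 37 C."
        print(round(similarity_ratio(methods_1, methods_2), 2))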

  20. Prioritization of candidate disease genes by combining topological similarity and semantic similarity.

    Science.gov (United States)

    Liu, Bin; Jin, Min; Zeng, Pan

    2015-10-01

    The identification of gene-phenotype relationships is very important for the treatment of human diseases. Studies have shown that genes causing the same or similar phenotypes tend to interact with each other in a protein-protein interaction (PPI) network. Thus, many identification methods based on the PPI network model have achieved good results. However, in the PPI network, some interactions between the proteins encoded by candidate gene and the proteins encoded by known disease genes are very weak. Therefore, some studies have combined the PPI network with other genomic information and reported good predictive performances. However, we believe that the results could be further improved. In this paper, we propose a new method that uses the semantic similarity between the candidate gene and known disease genes to set the initial probability vector of a random walk with a restart algorithm in a human PPI network. The effectiveness of our method was demonstrated by leave-one-out cross-validation, and the experimental results indicated that our method outperformed other methods. Additionally, our method can predict new causative genes of multifactor diseases, including Parkinson's disease, breast cancer and obesity. The top predictions were good and consistent with the findings in the literature, which further illustrates the effectiveness of our method. Copyright © 2015 Elsevier Inc. All rights reserved.
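
    A minimal sketch of the core idea: a random walk with restart whose initial probability vector is seeded with semantic-similarity scores rather than a uniform distribution; the network and scores below are hypothetical.

        import numpy as np

        def rwr(adj, p0, restart=0.7, tol=1e-10):
            # Random walk with restart on a PPI-style network; p0 is the
            # restart vector seeded here with semantic-similarity scores
            # between candidates and known disease genes.
            W = adj / adj.sum(axis=0, keepdims=True)   # column-normalize
            p = p0.copy()
            while True:
                p_next = (1 - restart) * (W @ p) + restart * p0
                if np.abs(p_next - p).sum() < tol:
                    return p_next
                p = p_next

        adj = np.array([[0, 1, 1, 0],
                        [1, 0, 1, 0],
                        [1, 1, 0, 1],
                        [0, 0, 1, 0]], dtype=float)    # toy PPI network
        sem_sim = np.array([0.6, 0.1, 0.2, 0.1])       # hypothetical seeds
        print(rwr(adj, sem_sim / sem_sim.sum()).round(3))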

  1. [Precautions of physical performance requirements and test methods during product standard drafting process of medical devices].

    Science.gov (United States)

    Song, Jin-Zi; Wan, Min; Xu, Hui; Yao, Xiu-Jun; Zhang, Bo; Wang, Jin-Hong

    2009-09-01

    The major idea of this article is to discuss standardization and normalization of the product standards of medical devices. It analyzes problems related to physical performance requirements and test methods during the product standard drafting process and makes corresponding suggestions.

  2. Performance analysis of demodulation with diversity -- A combinatorial approach I: Symmetric function theoretical methods

    Directory of Open Access Journals (Sweden)

    Jean-Louis Dornstetter

    2002-12-01

    Full Text Available This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.

  3. Performance analysis of demodulation with diversity -- A combinatorial approach I: Symmetric function theoretical methods

    OpenAIRE

    Jean-Louis Dornstetter; Daniel Krob; Jean-Yves Thibon; Ekaterina A. Vassilieva

    2002-01-01

    This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.

  4. Similarity flows in relativistic hydrodynamics

    International Nuclear Information System (INIS)

    Blaizot, J.P.; Ollitrault, J.Y.

    1986-01-01

    In ultra-relativistic heavy ion collisions, one expects in particular to observe a deconfinement transition leading to the formation of a quark-gluon plasma. In the framework of the hydrodynamic model, experimental signatures of such a plasma may be looked for as observable consequences of a first order transition on the evolution of the system. In most of the possible scenarios, the phase transition is accompanied by discontinuities in the hydrodynamic flow, such as shock waves. The method presented in this paper has been developed to treat such discontinuous flows without too much numerical effort. It relies heavily on the use of similarity solutions of the hydrodynamic equations.

  5. Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients.

    Science.gov (United States)

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2016-01-01

    Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed, although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. A lack of statistical references and of guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and to establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. A template database of 195 (81 males, 114 females; age range 32-67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and HAMMER) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method performed best for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided results more similar to the benchmark. Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, -4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and
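
    Two of the comparison metrics are easy to state in code; a sketch computing the Dice coefficient and the symmetric Hausdorff distance on toy 2-D masks (real evaluations use 3-D hippocampal labels).

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice(a, b):
            # Dice coefficient between two binary masks.
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def hausdorff(mask_a, mask_b):
            # Symmetric Hausdorff distance between foreground coordinates.
            pa, pb = np.argwhere(mask_a), np.argwhere(mask_b)
            return max(directed_hausdorff(pa, pb)[0],
                       directed_hausdorff(pb, pa)[0])

        auto = np.zeros((32, 32), dtype=int); auto[8:20, 10:22] = 1    # automated
        manual = np.zeros((32, 32), dtype=int); manual[9:21, 11:23] = 1  # benchmark
        print(round(dice(auto, manual), 3), hausdorff(auto, manual))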

  6. Intuitionistic fuzzy based DEMATEL method for developing green practices and performances in a green supply chain

    DEFF Research Database (Denmark)

    Govindan, Kannan; Khodaverdi, Roohollah; Vafadarnikjoo, Amin

    2015-01-01

    for organizations to enhance their environmental performance and achieve competitive advantages. This study pioneers using the decision-making trial and evaluation laboratory (DEMATEL) method with intuitionistic fuzzy sets to handle the important and causal relationships between GSCM practices and performances...... to evaluate the efficiency of the proposed method. The results reveal "internal management support", "green purchasing" and "ISO 14001 certification" are the most significant GSCM practices. The practical results of this study offer useful insights for managers to become more environmentally responsible...
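
    For readers unfamiliar with DEMATEL, its crisp core is short; a sketch with hypothetical expert ratings, omitting the intuitionistic fuzzy aggregation layer that the authors add on top.

        import numpy as np

        # From a direct-influence matrix D, compute the total-relation matrix
        # T = N (I - N)^{-1}, then read off prominence (r + c) and net
        # relation (r - c) for each factor (e.g., each GSCM practice).
        D = np.array([[0, 3, 2],
                      [1, 0, 3],
                      [2, 1, 0]], dtype=float)    # hypothetical ratings

        N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())   # normalize
        T = N @ np.linalg.inv(np.eye(len(D)) - N)               # total relation
        r, c = T.sum(axis=1), T.sum(axis=0)
        print("prominence:", (r + c).round(2), "relation:", (r - c).round(2))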

  7. Abundance estimation of spectrally similar minerals

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available This paper evaluates a spectral unmixing method for estimating the partial abundance of spectrally similar minerals in complex mixtures. The method requires the formulation of a linear function of the spectra of the individual minerals. The first...

  8. The use of mixed-methods research to diagnose the organisational performance of a local government

    Directory of Open Access Journals (Sweden)

    Benjamin H. Olivier

    2017-07-01

    Full Text Available Orientation: The majority of local governments in South Africa are underperforming; a first step to improving their performance is to accurately diagnose their current functioning. The utilisation of a mixed-methods approach for this diagnosis, based on a valid model of organisational performance, will provide a better and more holistic understanding of how a local government is performing. Research purpose: The aim of this study is to investigate the utility of mixed-methods research as a diagnostic approach for determining the organisational performance of a local government in South Africa. Motivation for the study: The use of either quantitative or qualitative data gathering in isolation as part of an organisational diagnosis can lead to biased information and to failure to identify the root causes of problems. The use of mixed-methods research, in which both quantitative and qualitative data gathering methods are utilised, has been shown to produce numerous benefits, such as confirmation of gathered data, richer detail and the initiation of new lines of thinking. Such multiple methodologies are recognised as an essential component of any organisational diagnosis and can be an effective means of eliminating biases in singular data gathering methods. Research design, approach and method: A concurrent transformative mixed-methods strategy based on the Burke–Litwin model of organisational performance, with triangulation of results and findings to determine convergence validity, was used. A convenience sample of 116 (N = 203) permanent officials in a rural district municipality in South Africa completed a survey questionnaire and were also individually interviewed. Main findings: Results indicate that mixed-methods research is a valid technique for establishing the integrity of survey data and for providing a better and holistic understanding of the functioning of an organisation. The results also indicate that the Burke–Litwin model is a useful and valid

  9. On different forms of self similarity

    International Nuclear Information System (INIS)

    Aswathy, R.K.; Mathew, Sunil

    2016-01-01

    Fractal geometry is mainly based on the idea of self-similar forms. To be self-similar, a shape must be able to be divided into parts that are smaller copies, which are more or less similar to the whole. There are different forms of self similarity in nature and mathematics. In this paper, some of the topological properties of super self similar sets are discussed. It is proved that, in a complete metric space with two or more elements, the set of all non super self similar sets is dense in the set of all non-empty compact subsets. It is also proved that the product of self similar sets is super self similar in product metric spaces and that super self similarity is preserved under isometry. A characterization of super self similar sets using contracting sub self similarity is also presented. Some relevant counterexamples are provided. The concepts of exact super and sub self similarity are introduced and a necessary and sufficient condition for a set to be exact super self similar in terms of condensation iterated function systems (condensation IFSs) is obtained. A method to generate exact sub self similar sets using condensation IFSs and the denseness of exact super self similar sets are also discussed.

  10. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    Science.gov (United States)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds quantitative changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters such as temperatures, pressures, rotational speeds and fuel consumption against clean-engine performance parameters, free of any faults, calculated by a base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) have been studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems due to their good learning performance, but they have drawbacks: low accuracy and a long learning time to build the learning database when there is a large amount of learning data. In addition, they require a very complex structure for effectively finding single-type or multiple-type faults of gas path components. This work inversely builds a base performance model of a turboprop engine, to be used for a high-altitude operation UAV, from measured performance data, and proposes a fault diagnostic system using the base engine performance model and artificial intelligence methods such as Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using the NN trained with a fault learning database, which is obtained from the developed base performance model. In training the NN, the Feed Forward Back Propagation (FFBP) method is used. Finally, it is verified through several test examples that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.

  11. Good performance of an immunoassay based method for nevirapine measurements in human breast milk

    DEFF Research Database (Denmark)

    Salado-Rasmussen, Kirsten; Persson Theilgaard, Zahra; Chiduo, Mercy

    2011-01-01

    , developed and validated for plasma use. In this study, the ARK NVP-test was evaluated for measurement of nevirapine concentrations in breast milk. High performance liquid chromatography (HPLC) is the method currently used to determine nevirapine in breast milk. This method, however, requires complicated...

  12. Performance of different methods for reference evapotranspiration estimation in Jaíba, Brazil

    Directory of Open Access Journals (Sweden)

    Gustavo H. da Silva

    Full Text Available ABSTRACT FAO Penman-Monteith (FAO-PM) is considered the standard method for the estimation of reference evapotranspiration (ET0) but requires various meteorological data, which are often not available. The objective of this work was to evaluate the performance of the FAO-PM method with limited meteorological data, and of other methods, as alternatives to estimate ET0 in Jaíba-MG. The study used daily meteorological data from 2007 to 2016 from the National Institute of Meteorology's station. Daily ET0 values were randomized, and 70% of these were used to determine the calibration parameters of the ET0 equations of each method under study. The remaining data were used to test the calibration against the standard method. Performance evaluation was based on Willmott's index of agreement, the confidence coefficient and the root-mean-square error. When one meteorological variable was missing, either solar radiation, relative air humidity or wind speed, or in the simultaneous absence of wind speed and relative air humidity, the FAO-PM method showed the best performance and was, therefore, recommended for Jaíba. The FAO-PM method with two missing variables, one of them being solar radiation, showed intermediate performance. Methods that use only air temperature data are not recommended for the region.
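
    The equation behind the FAO-PM method fits in a few lines; a sketch following the FAO-56 formulation, with illustrative inputs rather than the Jaíba station data.

        import numpy as np

        def et0_fao_pm(t_mean, rn, g, u2, es, ea, elevation=0.0):
            # FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
            # t_mean [degC], rn/g [MJ m-2 day-1], u2 [m/s], es/ea [kPa].
            delta = 4098 * (0.6108 * np.exp(17.27 * t_mean / (t_mean + 237.3))) \
                    / (t_mean + 237.3) ** 2              # slope of SVP curve
            pressure = 101.3 * ((293 - 0.0065 * elevation) / 293) ** 5.26
            gamma = 0.000665 * pressure                  # psychrometric constant
            return (0.408 * delta * (rn - g)
                    + gamma * 900 / (t_mean + 273) * u2 * (es - ea)) \
                   / (delta + gamma * (1 + 0.34 * u2))

        # Illustrative inputs only (not the Jaíba data).
        print(round(et0_fao_pm(t_mean=25.0, rn=14.0, g=0.0, u2=2.0,
                               es=3.168, ea=2.0, elevation=450.0), 2))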

  13. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  14. Development of methods of key performance indicators formation for corporate planning

    International Nuclear Information System (INIS)

    Chebotarev, A.N.

    2011-01-01

    A theoretical proposition, a model of enterprise performance management, and a concept of balanced key performance indicators as a method to control the enterprise strategy have been systematized and presented. An algorithm that increases the efficiency of action plan formation has been developed and implemented. In particular, a set of criteria for the selection of the events and parameters necessary for the formation of an action plan has been created. A method of controlling the business processes, allowing experts to establish the relationship between business process performance indicators and the enterprise's key indicators, has been developed [ru]

  15. Development and validation of ultra-high performance supercritical fluid chromatography method for determination of illegal dyes and comparison to ultra-high performance liquid chromatography method.

    Science.gov (United States)

    Khalikova, Maria A; Šatínský, Dalibor; Solich, Petr; Nováková, Lucie

    2015-05-18

    A novel, simple, fast and efficient ultra-high performance supercritical fluid chromatography (UHPSFC) method was developed and validated for the separation and quantitative determination of eleven illegal dyes in chili-containing spices. The method involved a simple ultrasound-assisted liquid extraction of the illegal compounds with tetrahydrofuran. The separation was performed using a supercritical fluid chromatography system and a CSH Fluoro-Phenyl stationary phase at 70°C. The mobile phase was carbon dioxide and a mixture of methanol:acetonitrile (1:1, v/v) with 2.5% formic acid as an additive at a flow rate of 2.0 mL min(-1). UV-vis detection was accomplished at 500 nm for seven compounds and at 420 nm for Sudan Orange G, Butter Yellow, Fast Garnet GBC and Methyl Red due to their absorbance maxima. All eleven compounds were separated in less than 5 min. The method was successfully validated and applied to three commercial samples of chili-containing spices - Chili sauce (Indonesia), Feferony sauce (Slovakia) and Mojo sauce (Spain). The linearity range of the proposed method was 0.50-9.09 mg kg(-1) (r ≥ 0.995). The detection limits were determined at a signal-to-noise ratio of 3 and ranged from 0.15 mg kg(-1) to 0.60 mg kg(-1) (1.80 mg kg(-1) for Fast Garnet) for standard solutions and from 0.25 mg kg(-1) to 1.00 mg kg(-1) (2.50 mg kg(-1) for Fast Garnet, 1.50 mg kg(-1) for Sudan Red 7B) for chili-containing samples. The recovery values were in the range of 73.5-107.2%, with relative standard deviations ranging from 0.1% to 8.2% for within-day precision and from 0.5% to 8.8% for between-day precision. The method showed potential for monitoring forbidden dyes in food constituents. The developed UHPSFC method was compared to a UHPLC-UV method. The orthogonality of the Sudan dye separations by these two methods was demonstrated. Benefits and drawbacks were discussed, showing the reliability of both methods for monitoring the studied illegal dyes in real

  16. A validated high performance thin layer chromatography method for determination of yohimbine hydrochloride in pharmaceutical preparations

    OpenAIRE

    Jihan M Badr

    2013-01-01

    Background: Yohimbine is an indole alkaloid used as a promising therapy for erectile dysfunction. A number of methods were reported for the analysis of yohimbine in the bark or in pharmaceutical preparations. Materials and Method: In the present work, a simple and sensitive high performance thin layer chromatographic method is developed for determination of yohimbine (occurring as yohimbine hydrochloride) in pharmaceutical preparations and validated according to International Conference of Ha...

  17. Why Did the Bear Cross the Road? Comparing the Performance of Multiple Resistance Surfaces and Connectivity Modeling Methods

    Directory of Open Access Journals (Sweden)

    Samuel A. Cushman

    2014-12-01

    Full Text Available There have been few assessments of the performance of alternative resistance surfaces, and little is known about how connectivity modeling approaches differ in their ability to predict organism movements. In this paper, we evaluate the performance of four connectivity modeling approaches applied to two resistance surfaces in predicting the locations of highway crossings by American black bears in the northern Rocky Mountains, USA. We found that a resistance surface derived directly from movement data greatly outperformed a resistance surface produced from analysis of genetic differentiation, despite their heuristic similarities. Our analysis also suggested differences in the performance of different connectivity modeling approaches. Factorial least cost paths appeared to slightly outperform other methods on the movement-derived resistance surface, but had very poor performance on the resistance surface obtained from multi-model landscape genetic analysis. Cumulative resistant kernels appeared to offer the best combination of high predictive performance and sensitivity to differences in resistance surface parameterization. Our analysis highlights that even when two resistance surfaces include the same variables and have a high spatial correlation of resistance values, they may perform very differently in predicting animal movement and population connectivity.

  18. Systems, methods and computer-readable media for modeling cell performance fade of rechargeable electrochemical devices

    Science.gov (United States)

    Gering, Kevin L

    2013-08-27

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell and analyzes the mechanistic level model to estimate performance fade characteristics over the aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model is also based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing the second exchange current density.

  19. A High-Performance Parallel FDTD Method Enhanced by Using SSE Instruction Set

    Directory of Open Access Journals (Sweden)

    Dau-Chyrh Chang

    2012-01-01

    Full Text Available We introduce a hardware acceleration technique for the parallel finite difference time domain (FDTD) method using the SSE (Streaming SIMD (single instruction multiple data) Extensions) instruction set. The implementation of the SSE instruction set in the parallel FDTD method has achieved a significant improvement in simulation performance. Benchmarks of the SSE acceleration on both a multi-CPU workstation and a computer cluster have demonstrated the advantages of vector arithmetic logic unit (VALU) acceleration over GPU acceleration. Several engineering applications are employed to demonstrate the performance of the parallel FDTD method enhanced by the SSE instruction set.
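
    Python cannot express SSE intrinsics directly, but the structure of the kernel being accelerated is easy to show; in this 1-D sketch (normalized units), NumPy's vectorized slice arithmetic stands in for the SIMD lanes.

        import numpy as np

        # A 1-D FDTD update loop: each statement updates a whole array of
        # cells per "instruction", which is the pattern that SSE/VALU
        # intrinsics exploit in the paper's C implementation.
        nz, nt = 400, 800
        ez = np.zeros(nz)          # electric field
        hy = np.zeros(nz - 1)      # magnetic field
        c = 0.5                    # Courant number

        for n in range(nt):
            hy += c * (ez[1:] - ez[:-1])                 # H-field update
            ez[1:-1] += c * (hy[1:] - hy[:-1])           # E-field update
            ez[nz // 2] += np.exp(-((n - 40) / 12) ** 2) # soft Gaussian source

        print(float(ez.max()))     # pulse has propagated along the grid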

  20. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service...... oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...

  1. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    Science.gov (United States)

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.

  2. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Javier Cubas

    2015-01-01

    Full Text Available A new, simple, quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance, when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.

  3. Study on Performance Shaping Factors (PSFs) Quantification Method in Human Reliability Analysis (HRA)

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Jang, Inseok Jang; Seong, Poong Hyun; Park, Jinkyun; Kim, Jong Hyun

    2015-01-01

    The purpose of HRA implementation is 1) to achieve the human factors engineering (HFE) design goal of providing operator interfaces that will minimize personnel errors and 2) to conduct an integrated activity to support probabilistic risk assessment (PRA). For these purposes, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), the simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM), and so on. In performing HRA, the conditions that influence human performance have been represented via several context factors called performance shaping factors (PSFs). PSFs are aspects of the human's individual characteristics, environment, organization, or task that specifically decrement or improve human performance, thus respectively increasing or decreasing the likelihood of human errors. Most HRA methods evaluate the weightings of PSFs by expert judgment, and explicit guidance for evaluating the weightings is not provided. It has been widely known that the performance of the human operator is one of the critical factors determining the safe operation of NPPs. HRA methods have been developed to identify the possibility and mechanism of human errors. In performing HRA methods, the effect of PSFs which may increase or decrease human error should be investigated. To date, however, the effects of PSFs have been estimated by expert judgment. Accordingly, in order to estimate the effect of PSFs objectively, a quantitative framework to estimate PSFs by using PSF profiles is introduced in this paper.

  4. Delineating species with DNA barcodes: a case of taxon dependent method performance in moths.

    Directory of Open Access Journals (Sweden)

    Mari Kekkonen

    Full Text Available The accelerating loss of biodiversity has created a need for more effective ways to discover species. Novel algorithmic approaches for analyzing sequence data combined with rapidly expanding DNA barcode libraries provide a potential solution. While several analytical methods are available for the delineation of operational taxonomic units (OTUs), few studies have compared their performance. This study compares the performance of one morphology-based and four DNA-based (BIN, parsimony networks, ABGD, GMYC) methods on two groups of gelechioid moths. It examines 92 species of Finnish Gelechiinae and 103 species of Australian Elachistinae which were delineated by traditional taxonomy. The results reveal a striking difference in performance between the two taxa with all four DNA-based methods. OTU counts in the Elachistinae showed a wider range and a relatively low (ca. 65%) OTU match with reference species while OTU counts were more congruent and performance was higher (ca. 90%) in the Gelechiinae. Performance rose when only monophyletic species were compared, but the taxon-dependence remained. None of the DNA-based methods produced a correct match with non-monophyletic species, but singletons were handled well. A simulated test of morphospecies-grouping performed very poorly in revealing taxon diversity in these small, dull-colored moths. Despite the strong performance of analyses based on DNA barcodes, species delineated using single-locus mtDNA data are best viewed as OTUs that require validation by subsequent integrative taxonomic work.

  5. Comparing performances of Clements, Box-Cox, Johnson methods with Weibull distributions for assessing process capability

    Energy Technology Data Exchange (ETDEWEB)

    Senvar, O.; Sennaroglu, B.

    2016-07-01

    This study examines the Clements’ Approach (CA), Box-Cox transformation (BCT), and Johnson transformation (JT) methods for process capability assessments of Weibull-distributed data with different parameters, to figure out the effects of tail behaviour on process capability, and compares their estimation performances in terms of accuracy and precision. Design/methodology/approach: The process performance index (PPI) Ppu is used for process capability analysis (PCA) because the comparisons are performed on generated Weibull data without subgroups. Box plots, descriptive statistics, the root-mean-square deviation (RMSD), which is used as a measure of error, and a radar chart are utilized together to evaluate the performances of the methods. In addition, the bias of the estimated values is as important as the efficiency measured by the mean square error. In this regard, the Relative Bias (RB) and the Relative Root Mean Square Error (RRMSE) are also considered. Findings: The results reveal that the performance of a method depends on its capability to fit the tail behavior of the Weibull distribution and on the targeted values of the PPIs. It is observed that the effect of tail behavior is more significant when the process is more capable. Research limitations/implications: Some other methods, such as the Weighted Variance method, which also give good results, were conducted as well. However, we later realized that it would be confusing in terms of comparison issues between the methods for consistent interpretations... (Author)
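
    As a hedged sketch of the Box-Cox route to a Ppu estimate for Weibull data (the Weibull parameters and the upper specification limit below are illustrative, not those studied in the paper):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      x = 2.0 * rng.weibull(1.5, size=1000)   # skewed process data
      usl = 6.0                               # hypothetical upper specification limit

      xt, lam = stats.boxcox(x)               # transform data toward normality
      usl_t = stats.boxcox(usl, lmbda=lam)    # transform the spec limit the same way
      ppu = (usl_t - xt.mean()) / (3.0 * xt.std(ddof=1))
      print(f"lambda = {lam:.3f}, estimated Ppu = {ppu:.3f}")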

  6. A simplified method for evaluating thermal performance of unglazed transpired solar collectors under steady state

    International Nuclear Information System (INIS)

    Wang, Xiaoliang; Lei, Bo; Bi, Haiquan; Yu, Tao

    2017-01-01

    Highlights: • A simplified method for evaluating thermal performance of UTC is developed. • Experiments, numerical simulations, dimensional analysis and data fitting are used. • The correlation of absorber plate temperature for UTC is established. • The empirical correlation of heat exchange effectiveness for UTC is proposed. - Abstract: Due to the advantages of low investment and high energy efficiency, unglazed transpired solar collectors (UTC) have been widely used for heating in buildings. However, it is difficult for designers to quickly evaluate the thermal performance of UTC based on the conventional methods such as experiments and numerical simulations. Therefore, a simple and fast method to determine the thermal performance of UTC is indispensable. The objective of this work is to provide a simplified calculation method to easily evaluate the thermal performance of UTC under steady state. Different parameters are considered in the simplified method, including pitch, perforation diameter, solar radiation, solar absorptivity, approach velocity, ambient air temperature, absorber plate temperature, and so on. Based on existing design parameters and operating conditions, correlations for the absorber plate temperature and the heat exchange effectiveness are developed using dimensional analysis and data fitting, respectively. Results show that the proposed simplified method has a high accuracy and can be employed to evaluate the collector efficiency, the heat exchange effectiveness and the air temperature rise. The proposed method in this paper is beneficial to directly determine design parameters and operating status for UTC.

  7. Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model

    Science.gov (United States)

    Kim, Sangjo; Kim, Kuisoon; Son, Changmin

    2018-04-01

    An adaptation method was proposed to improve the modeling accuracy of the overall and local performances of a gas turbine engine. The adaptation method was divided into two steps. First, the overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps, and second, the local performance parameters such as the temperature at component intersections and the shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of adaptation factors for the compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of the local adaptation factors were generated based on the difference between the first adapted engine model and performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on the performance test data in the sea-level static condition. In flight condition at 20,000 ft and 0.9 Mach number, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the comparison of low-pressure turbine exit temperature (a local performance parameter), as the difference was reduced from 3.2 to 0.4%.

  8. Research on the Risk Early Warning Method of Material Supplier Performance in Power Industry

    Science.gov (United States)

    Chen, Peng; Zhang, Xi

    2018-01-01

    Early warning of supplier performance risk is still at an initial stage domestically, and there is little research on early warning mechanisms to identify, analyze, and prevent performance risk. In this paper, a new method aimed at material supplier performance risk in the power industry is proposed. First, a set of risk early warning indexes is established; the ECM method is then used to classify the indexes into different risk grades. Next, the Crockford risk quantization model is improved by considering three indicators (the stability of the power system, economic losses, and successful bid ratio) to form a predictive risk grade, and ultimately the short-board (weakest-link) principle is used to form the ultimate risk grade so as to truly reflect the supplier performance risk. Finally, an empirical analysis of supplier performance is made, and countermeasures and prevention strategies for the different risks are put forward.

  9. Performance evaluation of the zero-multipole summation method in modern molecular dynamics software.

    Science.gov (United States)

    Sakuraba, Shun; Fukuda, Ikuo

    2018-05-04

    The zero-multipole summation method (ZMM) is a cutoff-based method for calculating electrostatic interactions in molecular dynamics simulations, utilizing an electrostatic neutralization principle as a physical basis. Since the accuracy of the ZMM has been shown to be sufficient in previous studies, it is highly desirable to clarify its practical performance. In this paper, the performance of the ZMM is compared with that of the smooth particle mesh Ewald method (SPME), where both methods are implemented in the molecular dynamics software package GROMACS. Extensive performance comparisons against a highly optimized, parameter-tuned SPME implementation are performed for water systems of various sizes and two protein-water systems. We analyze in detail the dependence of the performance on the potential parameters and the number of CPU cores. Even though the ZMM uses a larger cutoff distance than the SPME does, the performance of the ZMM is comparable to or better than that of the SPME. This is because the ZMM does not require a time-consuming electrostatic convolution and because the ZMM gains short neighbor-list distances due to the smooth damping feature of the pairwise potential function near the cutoff length. We found, in particular, that the ZMM with quadrupole or octupole cancellation and no damping factor is an excellent candidate for the fast calculation of electrostatic interactions. © 2018 Wiley Periodicals, Inc.

  10. Standard Test Method for Electrical Performance of Concentrator Terrestrial Photovoltaic Modules and Systems Under Natural Sunlight

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the determination of the electrical performance of photovoltaic concentrator modules and systems under natural sunlight using a normal incidence pyrheliometer. 1.2 The test method is limited to module assemblies and systems where the geometric concentration ratio specified by the manufacturer is greater than 5. 1.3 This test method applies to concentrators that use passive cooling where the cell temperature is related to the air temperature. 1.4 Measurements under a variety of conditions are allowed; results are reported under a select set of concentrator reporting conditions to facilitate comparison of results. 1.5 This test method applies only to concentrator terrestrial modules and systems. 1.6 This test method assumes that the module or system electrical performance characteristics do not change during the period of test. 1.7 The performance rating determined by this test method applies only at the period of the test, and implies no past or future performance level. 1.8...

  11. Performance-based parameter tuning method of model-driven PID control systems.

    Science.gov (United States)

    Zhao, Y M; Xie, W F; Tu, X W

    2012-05-01

    In this paper, a performance-based parameter tuning method for the model-driven Two-Degree-of-Freedom PID (MD TDOF PID) control system is proposed to enhance the control performance of a process. Known for its ability to stabilize unstable processes, track set-point changes quickly, and reject disturbances, the MD TDOF PID has gained research interest recently. The reported tuning methods for the MD TDOF PID are based on the internal model control (IMC) method rather than on optimizing performance indices. In this paper, an Integral of Time Absolute Error (ITAE) zero-position-error optimal tuning and noise-effect-minimizing method is proposed for tuning the two parameters in the MD TDOF PID control system to achieve the desired regulating and disturbance rejection performance. The comparison with a Two-Degree-of-Freedom control scheme by modified Smith predictor (TDOF CS MSP) and the designed MD TDOF PID tuned by the IMC tuning method demonstrates the effectiveness of the proposed tuning method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
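
    The ITAE criterion itself is simple to compute; the hedged sketch below evaluates it for the unit-step response of an arbitrary example closed loop (not the paper's MD TDOF PID system).

      import numpy as np
      from scipy import signal

      # example closed-loop transfer function G(s) = 9 / (s^2 + 3s + 9)
      sys = signal.TransferFunction([9.0], [1.0, 3.0, 9.0])
      t, y = signal.step(sys, T=np.linspace(0.0, 10.0, 2001))

      e = 1.0 - y                        # set-point error for a unit step
      itae = np.trapz(t * np.abs(e), t)  # Integral of Time-weighted Absolute Error
      print(f"ITAE = {itae:.4f}")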

  12. Walking on a user similarity network towards personalized recommendations.

    Directory of Open Access Journals (Sweden)

    Mingxin Gan

    Full Text Available Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the world-wide-web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance.
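
    A minimal random-walk-with-restart iteration on a toy user similarity network is sketched below; the similarity matrix is invented for illustration, and the paper's three network-adjustment strategies are omitted.

      import numpy as np

      S = np.array([[0.0, 0.8, 0.1, 0.0],       # toy pairwise user similarities
                    [0.8, 0.0, 0.5, 0.2],
                    [0.1, 0.5, 0.0, 0.9],
                    [0.0, 0.2, 0.9, 0.0]])
      W = S / S.sum(axis=0, keepdims=True)      # column-normalized transition matrix
      restart = np.array([1.0, 0.0, 0.0, 0.0])  # walk restarts at the target user
      c = 0.15                                  # restart probability

      p = restart.copy()
      for _ in range(100):                      # iterate toward the stationary vector
          p = (1.0 - c) * W @ p + c * restart
      print("affinities relative to user 0:", np.round(p, 3))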

  13. Walking on a user similarity network towards personalized recommendations.

    Science.gov (United States)

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the world-wide-web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance.

  14. Self-similar gravitational clustering

    International Nuclear Information System (INIS)

    Efstathiou, G.; Fall, S.M.; Hogan, C.

    1979-01-01

    The evolution of gravitational clustering is considered and several new scaling relations are derived for the multiplicity function. These include generalizations of the Press-Schechter theory to different densities and cosmological parameters. The theory is then tested against multiplicity function and correlation function estimates for a series of 1000-body experiments. The results are consistent with the theory and show some dependence on initial conditions and cosmological density parameter. The statistical significance of the results, however, is fairly low because of several small number effects in the experiments. There is no evidence for a non-linear bootstrap effect or a dependence of the multiplicity function on the internal dynamics of condensed groups. Empirical estimates of the multiplicity function by Gott and Turner have a feature near the characteristic luminosity predicted by the theory. The scaling relations allow the inference from estimates of the galaxy luminosity function that galaxies must have suffered considerable dissipation if they originally formed from a self-similar hierarchy. A method is also developed for relating the multiplicity function to similar measures of clustering, such as those of Bhavsar, for the distribution of galaxies on the sky. These are shown to depend on the luminosity function in a complicated way. (author)
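
    For reference, the standard Press-Schechter multiplicity (mass) function that such scaling relations generalize can be written as follows; this is the textbook form, quoted here as background rather than the paper's generalized expression:

      n(M)\,dM \;=\; \sqrt{\frac{2}{\pi}}\,
        \frac{\bar{\rho}}{M^{2}}\,
        \frac{\delta_{c}}{\sigma(M)}
        \left|\frac{d\ln\sigma}{d\ln M}\right|
        \exp\!\left(-\frac{\delta_{c}^{2}}{2\sigma^{2}(M)}\right) dM

    where \bar{\rho} is the mean background density, \sigma(M) is the rms linear density fluctuation on mass scale M, and \delta_{c} is the critical linear overdensity for collapse.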

  15. An exploratory survey of methods used to develop measures of performance

    Science.gov (United States)

    Hamner, Kenneth L.; Lafleur, Charles A.

    1993-09-01

    Nonmanufacturing organizations are being challenged to provide high-quality products and services to their customers, with an emphasis on continuous process improvement. Measures of performance, referred to as metrics, can be used to foster process improvement. The application of performance measurement to nonmanufacturing processes can be very difficult. This research explored methods used to develop metrics in nonmanufacturing organizations. Several methods were formally defined in the literature, and the researchers used a two-step screening process to determine that the OMB Generic Method was most likely to produce high-quality metrics. The OMB Generic Method was then used to develop metrics. A few other metric development methods were found in use at nonmanufacturing organizations. The researchers interviewed participants in metric development efforts to determine their satisfaction and to have them identify the strengths and weaknesses of, and recommended improvements to, the metric development methods used. Analysis of participants' responses allowed the researchers to identify the key components of a sound metrics development method. Those components were incorporated into a proposed metric development method that was based on the OMB Generic Method, and should be more likely to produce high-quality metrics that will result in continuous process improvement.

  16. Seniority bosons from similarity transformations

    International Nuclear Information System (INIS)

    Geyer, H.B.

    1986-01-01

    The requirement of associating in the boson space seniority with twice the number of non-s bosons defines a similarity transformation which re-expresses the Dyson pair boson images in terms of seniority bosons. In particular the fermion S-pair creation operator is mapped onto an operator which, unlike the pair boson image, does not change the number of non-s bosons. The original results of Otsuka, Arima and Iachello are recovered by this procedure while at the same time they are generalized to include g-bosons or even bosons with J>4 as well as any higher order boson terms. Furthermore the seniority boson images are valid for an arbitrary number of d- or g-bosons - a result which is not readily obtainable within the framework of the usual Marumori- or OAI-method

  17. Standard Test Methods for Electrical Performance of Nonconcentrator Terrestrial Photovoltaic Modules and Arrays Using Reference Cells

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 These test methods cover the electrical performance of photovoltaic modules and arrays under natural or simulated sunlight using a calibrated reference cell. 1.1.1 These test methods allow a reference module to be used instead of a reference cell provided the reference module has been calibrated using these test methods against a calibrated reference cell. 1.2 Measurements under a variety of conditions are allowed; results are reported under a select set of reporting conditions (RC) to facilitate comparison of results. 1.3 These test methods apply only to nonconcentrator terrestrial modules and arrays. 1.4 The performance parameters determined by these test methods apply only at the time of the test, and imply no past or future performance level. 1.5 These test methods apply to photovoltaic modules and arrays that do not contain series-connected photovoltaic multijunction devices; such module and arrays should be tested according to Test Methods E 2236. 1.6 The values stated in SI units are to be re...

  18. Description of JNC's analytical method and its performance for FBR cores

    International Nuclear Information System (INIS)

    Ishikawa, M.

    2000-01-01

    The description of JNC's analytical method and its performance for FBR cores includes: an outline of JNC's analytical system compared with ERANOS; a standard database for FBR nuclear design in JNC; the JUPITER critical experiment; details of the analytical method and its effects on JUPITER; the performance of the JNC analytical system (effective multiplication factor k_eff, control rod worth, and sodium void reactivity); and the design accuracy of a 600 MWe-class FBR core. JNC developed a consistent analytical system for FBR core evaluation, based on the JENDL library, the f-table method, and three-dimensional diffusion/transport theory, which includes comprehensive sensitivity tools to improve the prediction accuracy of core parameters. The JNC system was verified by analysis of the JUPITER critical experiment and other facilities. Its performance can be judged quite satisfactory for FBR core design work, though there is room for further improvement, such as more detailed treatment of cross-section resonance regions.

  19. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
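
    A hedged sketch of the SPAR-H quantification arithmetic is given below: a nominal human error probability (NHEP) is multiplied by the PSF multipliers, and the documented adjustment factor is applied when three or more negative PSFs are present. The task and multiplier values used here are purely illustrative.

      def spar_h_hep(nhep, psf_multipliers):
          # composite PSF is the product of the individual multipliers
          composite = 1.0
          for m in psf_multipliers:
              composite *= m
          negative = sum(1 for m in psf_multipliers if m > 1.0)
          if negative >= 3:
              # adjustment factor keeps the probability bounded below 1.0
              return nhep * composite / (nhep * (composite - 1.0) + 1.0)
          return min(nhep * composite, 1.0)

      # illustrative diagnosis task: NHEP = 1e-2 with multipliers for
      # available time, stress, and complexity
      print(spar_h_hep(1e-2, [10.0, 2.0, 2.0]))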

  20. Analyzing the Performances of Automotive Companies Using Entropy Based MAUT and SAW Methods

    Directory of Open Access Journals (Sweden)

    Nuri Ömürbek

    2016-06-01

    Full Text Available In this study, the performances of automotive companies traded on the BİST (Istanbul Stock Exchange) and operating in Turkey have been compared using multi-criteria decision making techniques. Data of the most important automotive companies operating in Turkey have been analyzed based on capital, stock certificate, market value, sales revenue, number of employees, net profit margin, current ratio, net profit/capital, net profit/sales, and net sales/number of employees. The criteria applied in the performance measurement were obtained from the companies' 2014 operating reports. The entropy method has been used to determine the weights of the criteria. Those weights have then been used in the MAUT (Multi-Attribute Utility Theory) and SAW (Simple Additive Weighting) methods to rank the automotive companies' performances. The findings highlight that the same companies took the first three places in both methods.
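
    A compact sketch of the entropy-weighting plus SAW pipeline on a toy decision matrix follows (rows are companies, columns are benefit-type criteria); the figures are invented, not the BİST data analyzed in the study.

      import numpy as np

      X = np.array([[3.0, 200.0, 0.12],
                    [5.0, 150.0, 0.08],
                    [4.0, 300.0, 0.10]])

      P = X / X.sum(axis=0)                 # share of each alternative per criterion
      k = 1.0 / np.log(X.shape[0])
      E = -k * (P * np.log(P)).sum(axis=0)  # entropy of each criterion
      w = (1.0 - E) / (1.0 - E).sum()       # entropy weights

      N = X / X.max(axis=0)                 # SAW normalization for benefit criteria
      scores = N @ w                        # simple additive weighting
      print("weights:", np.round(w, 3), "ranking:", np.argsort(-scores) + 1)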

  1. In-depth performance evaluation of PFP and ESG sequence-based function prediction methods in CAFA 2011 experiment

    Directory of Open Access Journals (Sweden)

    Chitale Meghana

    2013-02-01

    Full Text Available Background: Many Automatic Function Prediction (AFP) methods were developed to cope with the increasing growth of the number of gene sequences that are available from high-throughput sequencing experiments. To support the development of AFP methods, it is essential to have community-wide experiments for evaluating the performance of existing AFP methods. Critical Assessment of Function Annotation (CAFA) is one such community experiment. The CAFA meeting was held as a Special Interest Group (SIG) meeting at the Intelligent Systems in Molecular Biology (ISMB) conference in 2011. Here, we perform a detailed analysis of two sequence-based function prediction methods, PFP and ESG, which were developed in our lab, using the predictions submitted to CAFA. Results: We evaluate PFP and ESG using four different measures in comparison with BLAST, Prior, and GOtcha. In addition to the predictions submitted to CAFA, we further investigate the performance of a different scoring function to rank-order predictions by PFP, as well as PFP/ESG predictions enriched with Priors that simply add frequently occurring Gene Ontology terms as a part of the predictions. Prediction accuracies of each method were also evaluated separately for different functional categories. Successful and unsuccessful predictions by PFP and ESG are also discussed in comparison with BLAST. Conclusion: The in-depth analysis discussed here will complement the overall assessment by the CAFA organizers. Since PFP and ESG are based on sequence database search results, our analyses are not only useful for PFP and ESG users but will also shed light on the relationship between the sequence similarity space and the functions that can be inferred from the sequences.

  2. Introduction into scientific work methods - a necessity when performance-based codes are introduced

    DEFF Research Database (Denmark)

    Dederichs, Anne; Sørensen, Lars Schiøtt

    The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in the industry and the public sector. This abstract presents an attempt at reducing problems with handling and analysing the mathematical methods...... and CFD models when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the department of Civil Engineering at the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new...

  3. Performance Evaluation of Multivariate Analysis Methods on the $Z \\gamma$ Final State

    CERN Document Server

    Amos, Kieran Robert

    2017-01-01

    The performance of various machine learning algorithms is evaluated for their separation power of the $Z\gamma$ electroweak process (with $Z\rightarrow\ell\ell$ and $\ell=e,\mu$) against the various backgrounds that populate the selection. The Boosted Decision Tree method is found to deliver the best performance and is compared to that of neural net analysis and previously used methods using $36.1\,\text{fb}^{-1}$ of data obtained at $\sqrt{s}=13\,\text{TeV}$ from the ATLAS detector in 2015 and 2016.

  4. 3D Blade Hydraulic Design Method of the Rotodynamic Multiphase Pump Impeller and Performance Research

    Directory of Open Access Journals (Sweden)

    Yongxue Zhang

    2014-02-01

    Full Text Available A hydraulic design method for three-dimensional blades was presented to design the blades of a rotodynamic multiphase pump. Numerical simulations and a bench test were conducted to investigate the performance of the example impeller designed by the presented method. The results obtained from the bench test were in good agreement with the simulation results, which indicated the reasonability of the simulation. The distributions of pressure and gas volume fraction were analyzed, and the results showed that the designed impeller was well suited to the transportation of mixtures composed of gas and liquid. In addition, the impeller designed by the presented method was advantageous for use in large volume rate conditions, as reflected by the comparison of head performance between this three-dimensional design method and another one.

  5. Performance of some numerical Laplace inversion methods on American put option formula

    Science.gov (United States)

    Octaviano, I.; Yuniar, A. R.; Anisa, L.; Surjanto, S. D.; Putri, E. R. M.

    2018-03-01

    Numerical inversion of the Laplace transform is used to obtain semianalytic solutions. Mathematical inversion methods such as Durbin-Crump, Widder, and Papoulis can be used to calculate American put options through the optimal exercise price in Laplace space. The methods are first compared on some simple functions to establish their accuracy and the parameters to be used in the calculation of American put options. The result obtained is the performance of each method in terms of accuracy and computational speed. The Durbin-Crump method has an average relative error of 2.006e-004 with a computational speed of 0.04871 seconds, the Widder method has an average relative error of 0.0048 with a computational speed of 3.100181 seconds, and the Papoulis method has an average relative error of 9.8558e-004 with a computational speed of 0.020793 seconds.
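
    A hedged sketch of a Durbin-style Fourier-series inversion is shown below, checked on F(s) = 1/(s+1), whose exact inverse is exp(-t); the contour shift a, window T, and term count N are illustrative tuning parameters, and the epsilon-algorithm acceleration of the full Durbin-Crump method is omitted.

      import numpy as np

      def laplace_invert(F, t, a=0.5, T=20.0, N=10000):
          # trapezoidal discretization of the Bromwich integral, valid for 0 < t < 2T
          k = np.arange(1, N + 1)
          Fk = F(a + 1j * k * np.pi / T)
          series = F(a + 0j).real / 2.0 + np.sum(
              Fk.real * np.cos(k * np.pi * t / T) - Fk.imag * np.sin(k * np.pi * t / T))
          return np.exp(a * t) / T * series

      F = lambda s: 1.0 / (s + 1.0)
      for t in (0.5, 1.0, 2.0):
          print(t, laplace_invert(F, t), np.exp(-t))  # numeric vs exact inverse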

  6. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.
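
    A toy 1D electrostatic PIC step illustrating the two discretizations named above (a grid-based field solve plus Monte-Carlo-style particle sampling) is sketched here; it is a minimal teaching example, not the GTC code.

      import numpy as np

      ng, npart, L, dt = 64, 10000, 2.0 * np.pi, 0.1
      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, L, npart)     # particle positions
      v = rng.normal(0.0, 1.0, npart)    # particle velocities
      dx = L / ng

      for step in range(10):
          # deposit charge on the grid (uniform neutralizing background subtracted)
          rho = np.histogram(x, bins=ng, range=(0.0, L))[0] / (npart * dx) - 1.0 / L
          k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
          rho_k = np.fft.fft(rho)
          E_k = np.where(k != 0, -1j * rho_k / k, 0.0)  # spectral Gauss-law solve
          E = np.fft.ifft(E_k).real
          idx = np.rint(x / dx).astype(int) % ng        # nearest-grid-point gather
          v += dt * E[idx]                              # push particles
          x = (x + dt * v) % L                          # advance with periodicity

      print("field energy:", 0.5 * np.sum(E**2) * dx)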

  7. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  8. Evaluation of the long-term performance of six alternative disposal methods for LLRW

    Energy Technology Data Exchange (ETDEWEB)

    Kossik, R.; Sharp, G. [Golder Associates, Inc., Redmond, WA (United States); Chau, T. [Rogers & Associates Engineering Corp., Salt Lake City, UT (United States)

    1995-12-31

    The State of New York has carried out a comparison of six alternative disposal methods for low-level radioactive waste (LLRW). An important part of these evaluations involved quantitatively analyzing the long-term (10,000 yr) performance of the methods with respect to dose to humans, radionuclide concentrations in the environment, and cumulative release from the facility. Four near-surface methods (covered above-grade vault, uncovered above-grade vault, below-grade vault, augered holes) and two mine methods (vertical shaft mine and drift mine) were evaluated. Each method was analyzed for several generic site conditions applicable for the state. The evaluations were carried out using RIP (Repository Integration Program), an integrated, total system performance assessment computer code which has been applied to radioactive waste disposal facilities both in the U.S. (Yucca Mountain, WIPP) and worldwide. The evaluations indicate that mines in intact low-permeability rock and near-surface facilities with engineered covers generally have a high potential to perform well (within regulatory limits). Uncovered above-grade vaults and mines in highly fractured crystalline rock, however, have a high potential to perform poorly, exceeding regulatory limits.

  9. A systematic method for characterizing the time-range performance of ground penetrating radar

    International Nuclear Information System (INIS)

    Strange, A D

    2013-01-01

    The fundamental performance of ground penetrating radar (GPR) is linked to the ability to measure the signal time-of-flight in order to provide an accurate radar-to-target range estimate. Having knowledge of the actual time range and timing nonlinearities of a trace is therefore important when seeking to make quantitative range estimates. However, very few practical methods have been formally reported in the literature to characterize GPR time-range performance. This paper describes a method to accurately measure the true time range of a GPR to provide a quantitative assessment of the timing system performance and detect and quantify the effects of timing nonlinearity due to timing jitter. The effect of varying the number of samples per trace on the true time range has also been investigated and recommendations on how to minimize the effects of timing errors are described. The approach has been practically applied to characterize the timing performance of two commercial GPR systems. The importance of the method is that it provides the GPR community with a practical method to readily characterize the underlying accuracy of GPR systems. This in turn leads to enhanced target depth estimation as well as facilitating the accuracy of more sophisticated GPR signal processing methods. (paper)

  10. Evaluation between ultrahigh pressure liquid chromatography and high-performance liquid chromatography analytical methods for characterizing natural dyestuffs.

    Science.gov (United States)

    Serrano, Ana; van Bommel, Maarten; Hallett, Jessica

    2013-11-29

    An evaluation was undertaken of ultrahigh pressure liquid chromatography (UHPLC) in comparison to high-performance liquid chromatography (HPLC) for characterizing natural dyes in cultural heritage objects. A new UHPLC method was optimized by testing several analytical parameters adapted from prior UHPLC studies developed in diverse fields of research. Different gradient elution programs were tested on seven UHPLC columns with different dimensions and stationary phase compositions, by applying several mobile phases, flow rates, temperatures, and runtimes. The UHPLC method successfully provided better data than the HPLC method. Indeed, even though carminic acid showed circa 146% higher resolution with HPLC, UHPLC resulted in an increase of 41-61% in resolution and a decrease of 91-422% in limit of detection, depending on the dye compound. The optimized method was subsequently used to analyse 59 natural reference materials, in which 85 different components with different physicochemical properties were ascribed, in order to create a spectral database for future characterization of dyes in cultural heritage objects. The majority of these reference samples could be successfully distinguished with one single method through the examination of these compounds' retention times and their spectra acquired with a photodiode array detector. These results demonstrate that UHPLC analyses are extremely valuable for the acquisition of more precise chromatographic information concerning natural dyes with complex mixtures of different and/or closely related physicochemical properties, essential for distinguishing similar species of plants and animals used to colour cultural heritage objects. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Improved inference in the evaluation of mutual fund performance using panel bootstrap methods

    OpenAIRE

    Blake, David; Caulfield, Tristan; Ioannidis, Christos; Tonks, I P

    2014-01-01

    Two new methodologies are introduced to improve inference in the evaluation of mutual fund performance against benchmarks. First, the benchmark models are estimated using panel methods with both fund and time effects. Second, the non-normality of individual mutual fund returns is accounted for by using panel bootstrap methods. We also augment the standard benchmark factors with fund-specific characteristics, such as fund size. Using a dataset of UK equity mutual fund returns, we find that fun...

  12. IMPORTANCE OF EARNED VALUE METHOD (EVA) IN THE PERFORMANCE ANALYSIS OF PROJECTS

    Directory of Open Access Journals (Sweden)

    MARGARITA JANESKA

    2016-04-01

    Full Text Available Project economics is an important part of project management. Effective control of projects is based on appropriate methods for monitoring the entire project: its timetable, cost, quality, and risk. Unlike accounting, project management looks for information that allows insight into the status and trends of the financial progress of the project. Such information is provided by the earned value method. It is a key component for assessing the actual performance of a project. The earned value method is a better indicator of project progress than the classical method of comparing planned and actual costs, since it relates the value earned to the actual costs at a specific point in time. The method is the basis for calculating the project performance indices. In this sense, MS Project provides many opportunities for quality control and monitoring of the realization of projects. This study describes the architecture of the method, outlines the benefits of its application, and sets out the basic criteria that project management should satisfy in order to apply it.
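
    A minimal earned-value computation is sketched below with the standard indicators derived from planned value (PV), earned value (EV), and actual cost (AC); the figures are illustrative only.

      # illustrative status at a reporting date
      pv, ev, ac = 100_000.0, 80_000.0, 90_000.0

      cv, sv = ev - ac, ev - pv      # cost variance and schedule variance
      cpi, spi = ev / ac, ev / pv    # performance indices (>1 means under cost / ahead)
      bac = 200_000.0                # budget at completion (illustrative)
      eac = bac / cpi                # a common estimate-at-completion formula
      print(f"CV={cv:.0f} SV={sv:.0f} CPI={cpi:.2f} SPI={spi:.2f} EAC={eac:.0f}")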

  13. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods.

    Science.gov (United States)

    Schaffter, Thomas; Marbach, Daniel; Floreano, Dario

    2011-08-15

    Over the last decade, numerous methods have been developed for inference of regulatory networks from gene expression data. However, accurate and systematic evaluation of these methods is hampered by the difficulty of constructing adequate benchmarks and the lack of tools for a differentiated analysis of network predictions on such benchmarks. Here, we describe a novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods available to the community as an open-source software called GeneNetWeaver (GNW). In addition to the generation of detailed dynamical models of gene regulatory networks to be used as benchmarks, GNW provides a network motif analysis that reveals systematic prediction errors, thereby indicating potential ways of improving inference methods. The accuracy of network inference methods is evaluated using standard metrics such as precision-recall and receiver operating characteristic curves. We show how GNW can be used to assess the performance and identify the strengths and weaknesses of six inference methods. Furthermore, we used GNW to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5). GNW is available at http://gnw.sourceforge.net along with its Java source code, user manual and supporting data. Supplementary data are available at Bioinformatics online. dario.floreano@epfl.ch.
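
    The standard scoring GNW applies to network predictions can be reproduced in a few lines: rank candidate edges by confidence and compare them with the gold-standard network using precision-recall and ROC summaries. The toy vectors below are invented; scikit-learn supplies the metrics.

      import numpy as np
      from sklearn.metrics import average_precision_score, roc_auc_score

      true_edges = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # flattened gold-standard adjacency
      confidence = np.array([0.9, 0.4, 0.75, 0.6, 0.3, 0.55, 0.8, 0.1])

      print("AUROC:", roc_auc_score(true_edges, confidence))
      print("AUPR :", average_precision_score(true_edges, confidence))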

  14. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.

  15. Review of Methods Related to Assessing Human Performance in Nuclear Power Plant Control Room Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Katya L Le Blanc; Ronald L Boring; David I Gertman

    2001-11-01

    With the increased use of digital systems in Nuclear Power Plant (NPP) control rooms comes a need to thoroughly understand the human performance issues associated with digital systems. A common way to evaluate human performance is to test operators and crews in NPP control room simulators. However, it is often challenging to characterize human performance in meaningful ways when measuring performance in NPP control room simulations. A review of the literature in NPP simulator studies reveals a variety of ways to measure human performance in NPP control room simulations including direct observation, automated computer logging, recordings from physiological equipment, self-report techniques, protocol analysis and structured debriefs, and application of model-based evaluation. These methods and the particular measures used are summarized and evaluated.

  16. Web-based application on employee performance assessment using exponential comparison method

    Science.gov (United States)

    Maryana, S.; Kurnia, E.; Ruyani, A.

    2017-02-01

    Employee performance assessment, also called a performance review or performance evaluation, is an effort to assess staff performance with the aim of increasing the productivity of employees and companies. This application assists in the assessment of employee performance using five criteria: presence, quality of work, quantity of work, discipline, and teamwork. The system uses the Exponential Comparison Method with Eckenrode weighting. Calculation results are presented with graphs so that the assessment of each employee can be viewed. The system was implemented using Notepad++ as the editor, with a MySQL database. Testing shows that the application corresponds to the design and runs properly; the tests conducted were structural testing, functional testing, validation, sensitivity analysis, and SUMI testing.
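
    A hedged sketch of an exponential-comparison-style score is given below, using the commonly published form in which each criterion rating is raised to the power of its weight and the results are summed per employee; the ratings and weights are illustrative, not those of the deployed application.

      import numpy as np

      # rows = employees; columns = presence, quality of work, quantity of work,
      # discipline, teamwork (illustrative ratings on a 1-5 scale)
      ratings = np.array([[4, 5, 3, 4, 5],
                          [5, 3, 4, 4, 3],
                          [3, 4, 5, 3, 4]], dtype=float)
      weights = np.array([2.5, 2.5, 2.0, 1.5, 1.5])  # illustrative exponent weights

      scores = (ratings ** weights).sum(axis=1)      # exponential comparison totals
      print("scores:", np.round(scores, 1), "best employee:", scores.argmax() + 1)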

  17. DNA barcoding of recently diverged species: relative performance of matching methods.

    Directory of Open Access Journals (Sweden)

    Robin van Velzen

    Full Text Available Recently diverged species are challenging for identification, yet they are frequently of special interest scientifically as well as from a regulatory perspective. DNA barcoding has proven instrumental in species identification, especially in insects and vertebrates, but for the identification of recently diverged species it has been reported to be problematic in some cases. Problems are mostly due to incomplete lineage sorting or simply lack of a 'barcode gap' and probably related to large effective population size and/or low mutation rate. Our objective was to compare six methods in their ability to correctly identify recently diverged species with DNA barcodes: neighbor joining and parsimony (both tree-based), nearest neighbor and BLAST (similarity-based), and the diagnostic methods DNA-BAR and BLOG. We analyzed simulated data assuming three different effective population sizes as well as three selected empirical data sets from published studies. Results show, as expected, that success rates are significantly lower for recently diverged species (∼75%) than for older species (∼97%) (P<0.00001). Similarity-based and diagnostic methods significantly outperform tree-based methods when applied to simulated DNA barcode data (P<0.00001). The diagnostic method BLOG had the highest correct query identification rate based on simulated (86.2%) as well as empirical data (93.1%), indicating that it is a consistently better method overall. Another advantage of BLOG is that it offers species-level information that can be used outside the realm of DNA barcoding, for instance in species description or molecular detection assays. Even though we can confirm that identification success based on DNA barcoding is generally high in our data, recently diverged species remain difficult to identify. Nevertheless, our results contribute to improved solutions for their accurate identification.
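
    The nearest-neighbor criterion (one of the similarity-based methods compared above) reduces to assigning a query to the species of the reference barcode at the smallest uncorrected p-distance; the toy sequences below are invented fragments, far shorter than real barcodes.

      def p_distance(a, b):
          # uncorrected proportion of differing sites between aligned sequences
          return sum(1 for x, y in zip(a, b) if x != y) / len(a)

      references = {"sp_A": "ACGTACGTACGT",
                    "sp_B": "ACGTTCGTACCT",
                    "sp_C": "TCGTACGAACGT"}
      query = "ACGTACGTACGA"

      best = min(references, key=lambda sp: p_distance(query, references[sp]))
      print("query assigned to:", best)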

  18. Method for improving performance of high temperature superconductors within a magnetic field

    Science.gov (United States)

    Wang, Haiyan; Foltyn, Stephen R.; Maiorov, Boris A.; Civale, Leonardo

    2010-01-05

    The present invention provides articles including a base substrate including a layer of an oriented cubic oxide material having a rock-salt-like structure thereon; and, a buffer layer upon the oriented cubic oxide material having a rock-salt-like structure layer, the buffer layer having an outwardly facing surface with a surface morphology including particulate outgrowths of from 10 nm to 500 nm in size at the surface, such particulate outgrowths serving as flux pinning centers whereby the article maintains higher performance within magnetic fields than similar articles without the necessary density of such outgrowths.

  19. APPLICATION OF THE PERFORMANCE SELECTION INDEX METHOD FOR SOLVING MACHINING MCDM PROBLEMS

    Directory of Open Access Journals (Sweden)

    Dušan Petković

    2017-04-01

    Full Text Available The complex nature of machining processes requires the use of different methods and techniques for process optimization. Over the past few years a number of different optimization methods have been proposed for solving continuous machining optimization problems. In the manufacturing environment, engineers also face a number of discrete machining optimization problems. In order to help decision makers in solving this type of optimization problem, a number of multi criteria decision making (MCDM) methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, i.e. the performance selection index (PSI) method, for solving machining MCDM problems. The main motivation for using the PSI method is that it is not necessary to determine criteria weights as in other MCDM methods. The applicability and effectiveness of the PSI method have been demonstrated by solving two case studies dealing with the machinability of materials and the selection of the most suitable cutting fluid for a given machining application. The obtained rankings correlate well with those derived by past researchers using other MCDM methods, which validates the usefulness of this method for solving machining MCDM problems.
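
    A hedged sketch of the PSI steps as commonly published follows: normalize the decision matrix, measure the preference variation of each criterion, derive weights from it (so no expert judgment is needed), and rank by the weighted sum. The toy data assume benefit-type criteria only.

      import numpy as np

      X = np.array([[70.0, 6.5, 0.9],    # rows = alternatives, columns = criteria
                    [85.0, 5.0, 0.7],
                    [60.0, 7.0, 0.8]])

      N = X / X.max(axis=0)                          # benefit-type normalization
      pv = ((N - N.mean(axis=0)) ** 2).sum(axis=0)   # preference variation values
      phi = 1.0 - pv                                 # overall preference values
      w = phi / phi.sum()                            # criterion weights
      psi = N @ w                                    # selection index per alternative
      print("PSI:", np.round(psi, 3), "best alternative:", psi.argmax() + 1)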

  20. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    Full Text Available We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  1. Numerical investigations on coupled heat transfer and synthetical performance of a pressurized volumetric receiver with MCRT–FVM method

    International Nuclear Information System (INIS)

    Cheng, Z.D.; He, Y.L.; Cui, F.Q.

    2013-01-01

    This paper presents an axisymmetric steady-state computational fluid dynamics model and further studies of the complex coupled radiation–convection–conduction heat transfer in the pressurized volumetric receiver (PVR), combining the Finite Volume Method (FVM) and the Monte Carlo Ray-Trace (MCRT) method. On this basis, the effects of the geometric parameters of the compound parabolic concentrator (CPC) and of the properties of the porous absorber on the synthetical characteristics and performance of the photo-thermal conversion process in the PVR are analyzed and discussed in detail. It is found that the solar flux density distributions are always very heterogeneous, with large nonuniformities, and that the variation trends of the corresponding temperature distributions are very similar but of a much lower order of magnitude. The CPC shape, determined by the CPC exit aperture, has a much larger effect on the synthetical characteristics and performance of the PVR than the CPC entry aperture with a constant acceptance angle. A suitable or optimal thickness of the porous absorber can be determined by examining where drastic decreasing trends occur in the curves of the synthetical characteristics and performance against porosity. - Highlights: ► An axisymmetric steady-state CFD model of the PVR is presented with the MCRT–FVM method. ► The complex coupled heat transfer and synthetical performance of the PVR are studied. ► The effects of geometric parameters and porous properties are analyzed and discussed. ► Solar flux and temperature in the PVR are very heterogeneous with large nonuniformities. ► An optimal absorber thickness can be determined by examining the effects of porosity.

  2. Mining Diagnostic Assessment Data for Concept Similarity

    Science.gov (United States)

    Madhyastha, Tara; Hunt, Earl

    2009-01-01

    This paper introduces a method for mining multiple-choice assessment data for similarity of the concepts represented by the multiple choice responses. The resulting similarity matrix can be used to visualize the distance between concepts in a lower-dimensional space. This gives an instructor a visualization of the relative difficulty of concepts…

  3. Similarity Structure of Wave-Collapse

    DEFF Research Database (Denmark)

    Rypdal, Kristoffer; Juul Rasmussen, Jens; Thomsen, Kenneth

    1985-01-01

    Similarity transformations of the cubic Schrödinger equation (CSE) are investigated. The transformations are used to remove the explicit time variation in the CSE and reduce it to differential equations in the spatial variables only. Two different methods for similarity reduction are employed and...

  4. Comparison of simple additive weighting (SAW) and composite performance index (CPI) methods in employee remuneration determination

    Science.gov (United States)

    Karlitasari, L.; Suhartini, D.; Benny

    2017-01-01

    The process of determining employee remuneration at PT Sepatu Mas Idaman currently still uses a Microsoft Excel-based spreadsheet in which criterion values must be calculated for every employee. This can introduce doubt during the assessment process and therefore makes the process take much longer. The determination of employee remuneration is conducted by the assessment team based on several predetermined criteria. The criteria used in the assessment process are the ability to work, human relations, job responsibility, discipline, creativity, work, achievement of targets, and absence. To make the determination of employee remuneration more efficient and effective, the Simple Additive Weighting (SAW) method is used. The SAW method can help in decision making for a given case, and the calculation that generates the greatest value is chosen as the best alternative. In addition to SAW, the CPI method, a decision-making calculation based on a performance index, was also applied; the SAW method was faster by 89-93% compared to the CPI method. Therefore, it is expected that this application can serve as evaluation material for the training and development needs of employees so that their performance becomes more optimal.

  5. Critical assessment of three high performance liquid chromatography analytical methods for food carotenoid quantification

    NARCIS (Netherlands)

    Dias, M.G.; Oliveira, L.; Camoes, M.F.G.F.C.; Nunes, B.; Versloot, P.; Hulshof, P.J.M.

    2010-01-01

    Three sets of extraction/saponification/HPLC conditions for food carotenoid quantification were technically and economically compared. Samples were analysed for the carotenoids α-carotene, β-carotene, β-cryptoxanthin, lutein, lycopene, and zeaxanthin. All methods demonstrated good performance in the

  6. An Assessment of the Effects of Teaching Methods on Academic Performance of Students in Accounting Courses

    Science.gov (United States)

    Hosal-Akman, Nazli; Simga-Mugan, Can

    2010-01-01

    This study explores the effect of teaching methods on the academic performance of students in accounting courses. The study was carried out over two semesters at a well-known university in Turkey in principles of financial accounting and managerial accounting courses. Students enrolled in the courses were assigned to treatment and control groups.…

  7. [Testing method research for key performance indicator of imaging acousto-optic tunable filter (AOTF)].

    Science.gov (United States)

    Hu, Shan-Zhou; Chen, Fen-Fei; Zeng, Li-Bo; Wu, Qiong-Shui

    2013-01-01

    Imaging AOTF is an important optical filter component for new spectral imaging instruments developed in recent years. The principle of the imaging AOTF component is demonstrated, and a set of testing methods for key performance indicators is presented, covering diffraction efficiency, wavelength shift with temperature, spatial homogeneity of diffraction efficiency, image shift, etc.

  8. Critical factors in the empirical performance of temporal difference and evolutionary methods for reinforcement learning

    NARCIS (Netherlands)

    Whiteson, S.; Taylor, M.E.; Stone, P.

    2010-01-01

    Temporal difference and evolutionary methods are two of the most common approaches to solving reinforcement learning problems. However, there is little consensus on their relative merits and there have been few empirical studies that directly compare their performance. This article aims to address

  9. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    Science.gov (United States)

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  10. Common genetic variants associated with cognitive performance identified using the proxy-phenotype method

    NARCIS (Netherlands)

    C.A. Rietveld (Niels); T. Esko (Tõnu); G. Davies (Gail); T.H. Pers (Tune); P. Turley (Patrick); B. Benyamin (Beben); C.F. Chabris (Christopher F.); V. Emilsson (Valur); A.D. Johnson (Andrew); J.J. Lee (James J.); C. de Leeuw (Christiaan); R.E. Marioni (Riccardo); S.E. Medland (Sarah Elizabeth); M. Miller (Mike); O. Rostapshova (Olga); S.J. van der Lee (Sven); A.A.E. Vinkhuyzen (Anna A.); N. Amin (Najaf); D. Conley (Dalton); J. Derringer; C.M. van Duijn (Cornelia); R.S.N. Fehrmann (Rudolf); L. Franke (Lude); E.L. Glaeser (Edward L.); N.K. Hansell (Narelle); C. Hayward (Caroline); W.G. Iacono (William); C.A. Ibrahim-Verbaas (Carla); V.W.V. Jaddoe (Vincent); J. Karjalainen (Juha); D. Laibson (David); P. Lichtenstein (Paul); D.C. Liewald (David C.); P.K. Magnusson (Patrik); N.G. Martin (Nicholas); M. McGue (Matt); G. Mcmahon (George); N.L. Pedersen (Nancy); S. Pinker (Steven); D.J. Porteous (David J.); D. Posthuma (Danielle); F. Rivadeneira Ramirez (Fernando); B.H. Smith (Blair H.); J.M. Starr (John); H.W. Tiemeier (Henning); N.J. Timpson (Nicholas J.); M. Trzaskowski (Maciej); A.G. Uitterlinden (André); F.C. Verhulst (Frank); M.E. Ward (Mary); M.J. Wright (Margaret); G.D. Smith; I.J. Deary (Ian J.); M. Johannesson (Magnus); R. Plomin (Robert); P.M. Visscher (Peter); D.J. Benjamin (Daniel J.); D. Cesarini (David); Ph.D. Koellinger (Philipp)

    2014-01-01

    We identify common genetic variants associated with cognitive performance using a two-stage approach, which we call the proxy-phenotype method. First, we conduct a genome-wide association study of educational attainment in a large sample (n = 106,736), which produces a set of 69

  11. The performance of workload control concepts in job shops : Improving the release method

    NARCIS (Netherlands)

    Land, MJ; Gaalman, GJC

    1998-01-01

    A specific class of production control concepts for job shops is based on the principles of workload control. Practitioners emphasise the importance of workload control. However, order release methods that reduce the workload on the shop floor show poor due date performance in job shop simulations.

  12. Personality and Student Performance on Evaluation Methods Used in Business Administration Courses

    Science.gov (United States)

    Lakhal, Sawsen; Sévigny, Serge; Frenette, Éric

    2015-01-01

    The objective of this study was to verify whether personality (Big Five model) influences performance on the evaluation methods used in business administration courses. A sample of 169 students enrolled in two compulsory undergraduate business courses responded to an online questionnaire. As it is difficult within the same course to assess…

  13. Aligning Order Picking Methods, Incentive Systems, and Regulatory Focus to Increase Performance

    NARCIS (Netherlands)

    de Vries, J.; de Koster, R.; Stam, D.

    2016-01-01

    A controlled field experiment investigates order picking performance in terms of productivity. We examined three manual picker-to-parts order picking methods (parallel, zone, and dynamic zone picking) under two different incentive systems (competition-based vs. cooperation-based) for pickers with

  14. Aligning order picking methods, incentive systems, and regulatory focus to increase performance

    NARCIS (Netherlands)

    J. de Vries (Jelle); M.B.M. de Koster (René); D.A. Stam (Daan)

    2015-01-01

    A unique controlled field experiment investigates order picking performance (in terms of productivity and quality). We examined three manual picker-to-parts order picking methods (parallel, zone, and dynamic zone picking) under two different incentive systems (competition-based versus

  15. Method determination of aerodynamic performances of profile in the plane airfoil cascade

    Directory of Open Access Journals (Sweden)

    Л. Г. Волянська

    2003-03-01

    Full Text Available A method for determining aerodynamic forces by direct measurement with a three-component aerodynamic balance is given in the article. The article also presents a schematic model of the facility for determining airfoil cascade aerodynamic performance, together with a drawing and description of the slewing pack of blades.

  16. Academic Procrastination and the Performance of Graduate-Level Cooperative Groups in Research Methods Courses

    Science.gov (United States)

    Jiao, Qun G.; DaRos-Voseles, Denise A.; Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.

    2011-01-01

    This study examined the extent to which academic procrastination predicted the performance of cooperative groups in graduate-level research methods courses. A total of 28 groups was examined (n = 83 students), ranging in size from 2 to 5 (M = 2.96, SD = 1.10). Multiple regression analyses revealed that neither within-group mean nor within-group…

  17. Performance Analysis and Experimental Validation of the Direct Strain Imaging Method

    Science.gov (United States)

    Athanasios Iliopoulos; John G. Michopoulos; John C. Hermanson

    2013-01-01

    Direct Strain Imaging accomplishes full field measurement of the strain tensor on the surface of a deforming body, by utilizing arbitrarily oriented engineering strain measurements originating from digital imaging. In this paper an evaluation of the method’s performance with respect to its operating parameter space is presented along with a preliminary...

  18. Performance improvement of extended boundary node method for solving elliptic boundary-value problems

    International Nuclear Information System (INIS)

    Saitoh, Ayumu; Kamitani, Atsushi; Takayama, Teruou; Nakamura, Hiroaki

    2016-01-01

    The extended boundary-node method (X-BNM) with the hierarchical-matrix (H-matrix) method has been developed and its performance has been investigated numerically. The results of computations show that the solver speed of the X-BNM with the H-matrix method is much faster than that of the standard X-BNM for the case where the number of boundary nodes exceeds a certain limit. Furthermore, the accuracy of the X-BNM with the H-matrix method is almost equal to that of the standard X-BNM. From these results, it is found that the H-matrix method is useful as the acceleration technique of the X-BNM. (author)

  19. HYPOTHESIS TESTING WITH THE SIMILARITY INDEX

    Science.gov (United States)

    Multilocus DNA fingerprinting methods have been used extensively to address genetic issues in wildlife populations. Hypotheses concerning population subdivision and differing levels of diversity can be addressed through the use of the similarity index (S), a band-sharing coeffic...
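
    The record is truncated, but the band-sharing similarity index is conventionally defined as S = 2·n_ab / (n_a + n_b), where n_ab is the number of bands two individuals share and n_a, n_b are their individual band counts. A minimal sketch under that standard definition (the set-based representation is an illustrative assumption):

    ```python
    def band_sharing_similarity(bands_a: set, bands_b: set) -> float:
        """Band-sharing similarity index S = 2*n_ab / (n_a + n_b)."""
        shared = len(bands_a & bands_b)
        return 2 * shared / (len(bands_a) + len(bands_b))

    # toy fingerprints: band positions scored as present in each individual
    a = {110, 150, 200, 320, 410}
    b = {110, 150, 210, 320}
    print(band_sharing_similarity(a, b))  # 2*3 / (5+4) = 0.667
    ```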

  20. Sleep-spindle detection: crowdsourcing and evaluating performance of experts, non-experts and automated methods

    DEFF Research Database (Denmark)

    Warby, Simon C.; Wendt, Sabrina Lyngbye; Welinder, Peter

    2014-01-01

    to crowdsource spindle identification by human experts and non-experts, and we compared their performance with that of automated detection algorithms in data from middle- to older-aged subjects from the general population. We also refined methods for forming group consensus and evaluating the performance… of event detectors in physiological data such as electroencephalographic recordings from polysomnography. Compared to the expert group consensus gold standard, the highest performance was by individual experts and the non-expert group consensus, followed by automated spindle detectors. This analysis showed… that crowdsourcing the scoring of sleep data is an efficient method to collect large data sets, even for difficult tasks such as spindle identification. Further refinements to spindle detection algorithms are needed for middle- to older-aged subjects…

  1. Performance Evaluation of Sub-manufacturing Sectors Using TOPSIS and ELECTRE Methods

    Directory of Open Access Journals (Sweden)

    Nuri ÖMÜRBEK

    2014-06-01

    Full Text Available Performance analysis is defined as the process of collecting, analyzing, and reporting data systematically and regularly so that a business can monitor the resources it has used, the products and services it has produced, and the results it has gained. For operators, it means the quantitative expression of actions performed by a business or maintained in a program. In this study, the financial performance of manufacturing sectors is analyzed with the TOPSIS and ELECTRE methods using the criteria current ratio, cash ratio, total debt / total assets, inventory turnover rate, equity turnover rate, net profit / equity, operating income / net sales, net profit / sales, and cost of goods sold / net sales. The findings suggest that the coal and refined petroleum product manufacturing industry ranks first under both methods.
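
    As a hedged sketch of the TOPSIS procedure named above (generic textbook form, not the authors' exact computation; the ratios, weights, and benefit/cost labels below are invented):

    ```python
    import numpy as np

    def topsis(X: np.ndarray, w: np.ndarray, benefit: np.ndarray) -> np.ndarray:
        """Rank alternatives by relative closeness to the ideal solution.
        X[i, j]: value of alternative i on criterion j; w: criterion weights;
        benefit[j]: True if higher is better on criterion j."""
        R = X / np.linalg.norm(X, axis=0)          # vector normalization
        V = R * w                                  # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus = np.linalg.norm(V - ideal, axis=1)     # distance to ideal
        d_minus = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal
        return d_minus / (d_plus + d_minus)            # relative closeness

    # toy data: 3 sectors x 3 financial ratios
    X = np.array([[1.8, 0.4, 0.55],
                  [2.1, 0.6, 0.70],
                  [1.2, 0.3, 0.40]])
    w = np.array([0.4, 0.3, 0.3])
    benefit = np.array([True, True, False])   # e.g. a debt ratio is a cost
    print(topsis(X, w, benefit))              # higher closeness = better rank
    ```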

  2. Gait Recognition Using Image Self-Similarity

    Directory of Open Access Journals (Sweden)

    Chiraz BenAbdelkader

    2004-04-01

    Full Text Available Gait is one of the few biometrics that can be measured at a distance, and it is hence useful for passive surveillance as well as biometric applications. Gait recognition research is still in its infancy, however, and we have yet to solve the fundamental issue of finding gait features that at once have sufficient discriminative power and can be extracted robustly and accurately from low-resolution video. This paper describes a novel gait recognition technique based on the image self-similarity of a walking person. We contend that the similarity plot encodes a projection of gait dynamics. It is also correspondence-free, robust to segmentation noise, and works well with low-resolution video. The method is tested on multiple data sets of varying sizes and degrees of difficulty. Performance is best for fronto-parallel viewpoints, whereby a recognition rate of 98% is achieved for a data set of 6 people, and 70% for a data set of 54 people.
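
    A minimal sketch of the self-similarity plot idea (the distance measure, normalization, and toy data are illustrative assumptions; the paper's exact preprocessing is not reproduced here):

    ```python
    import numpy as np

    def self_similarity_plot(frames: np.ndarray) -> np.ndarray:
        """frames: (T, H, W) silhouette images of one walking sequence.
        Returns a T x T matrix whose (i, j) entry is the similarity between
        frames i and j; periodic gait shows up as diagonal banding."""
        T = frames.shape[0]
        F = frames.reshape(T, -1).astype(float)
        S = np.zeros((T, T))
        for i in range(T):
            for j in range(T):
                S[i, j] = -np.abs(F[i] - F[j]).sum()   # negated L1 distance
        return (S - S.min()) / (S.max() - S.min() + 1e-12)  # scale to [0, 1]

    # toy sequence: a "walker" oscillating with a period of 8 frames
    t = np.arange(32)
    frames = np.sin(2 * np.pi * t / 8)[:, None, None] * np.ones((32, 4, 4))
    print(self_similarity_plot(frames).round(2)[:4, :8])
    ```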

  3. A New Hybrid Method for Improving the Performance of Myocardial Infarction Prediction

    Directory of Open Access Journals (Sweden)

    Hojatollah Hamidi

    2016-06-01

    Full Text Available Abstract Introduction: Myocardial infarction, also known as heart attack, normally occurs due to causes such as smoking, family history, diabetes, and so on. It is recognized as one of the leading causes of death in the world. The present study therefore aimed to evaluate the performance of classification models for predicting myocardial infarction, using a feature selection method that combines Forward Selection and a Genetic Algorithm. Materials & Methods: The myocardial infarction data set used in this study contains information on 519 visitors to Shahid Madani Specialized Hospital of Khorramabad, Iran, and includes 33 features. The proposed hybrid feature selection method is intended to enhance the performance of classification algorithms. The first step selects features using Forward Selection; in the second step, the selected features are passed to a genetic algorithm, which selects the best features. The classification algorithms AdaBoost, Naïve Bayes, J48 decision tree, and simpleCART are then applied to the data set with the selected features to predict myocardial infarction. Results: The best results were achieved after applying the proposed feature selection method, obtained with the simpleCART and J48 algorithms at accuracies of 96.53% and 96.34%, respectively. Conclusion: Based on the results, the performance of the classification algorithms improved, so applying the proposed feature selection method along with classification algorithms can be considered a reliable approach to predicting myocardial infarction.
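
    A hedged sketch of the forward-selection stage of the hybrid method (a generic greedy wrapper around a user-supplied scoring function; the stopping rule and toy scorer are illustrative assumptions):

    ```python
    from typing import Callable, List, Set

    def forward_selection(n_features: int,
                          score: Callable[[Set[int]], float],
                          max_features: int) -> List[int]:
        """Greedily add the feature that most improves the score of the
        current subset; stop when no feature helps or the budget is reached."""
        selected: Set[int] = set()
        best_score = float("-inf")
        while len(selected) < max_features:
            candidates = [
                (score(selected | {f}), f)
                for f in range(n_features) if f not in selected
            ]
            cand_score, cand_f = max(candidates)
            if cand_score <= best_score:       # no improvement -> stop
                break
            selected.add(cand_f)
            best_score = cand_score
        return sorted(selected)

    # toy scorer: pretend features 3 and 7 are the informative ones,
    # with a small penalty per extra feature
    useful = {3, 7}
    print(forward_selection(10, lambda s: len(s & useful) - 0.01 * len(s), 5))
    ```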

  4. A proposal on evaluation method of neutron absorption performance to substitute conventional neutron attenuation test

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Je Hyun; Shim, Chang Ho [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Kim, Sung Hyun [Nuclear Fuel Cycle Waste Treatment Research Division, Research Reactor Institute, Kyoto University, Osaka (Japan); Choe, Jung Hun; Cho, In Hak; Park, Hwan Seo [Ionizing Radiation Center, Nuclear Fuel Cycle Waste Treatment Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Park, Hyun Seo; Kim, Jung Ho; Kim, Yoon Ho [Ionizing Radiation Center, Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2016-12-15

    For the verification of newly developed neutron absorbers, one of the guidelines on their qualification and acceptance is the neutron attenuation test. However, this approach has a qualification problem: it cannot distinguish how the neutrons are attenuated by the material. In this study, a method for estimating the neutron absorption performance of materials is proposed that detects both directly penetrating and back-scattered neutrons. To verify the proposed method, MCNP simulations of the experimental system designed in this study were carried out for polyethylene, iron, normal glass, and the vitrified form. The results show that neutron absorption ability can easily be tested using a single-absorber model. From the simulation results of the single-absorber and double-absorber models, it is also verified that the proposed method can evaluate not only the direct thermal neutrons passing through a material but also the scattered neutrons reflected from it. The neutron absorption performance can therefore be estimated more accurately with the proposed method than with the conventional neutron attenuation test. It is expected that the proposed method can contribute to increasing the reliability of the performance evaluation of neutron absorbers.

  5. Determination of Urine Albumin by New Simple High-Performance Liquid Chromatography Method.

    Science.gov (United States)

    Klapkova, Eva; Fortova, Magdalena; Prusa, Richard; Moravcova, Libuse; Kotaska, Karel

    2016-11-01

    A simple high-performance liquid chromatography (HPLC) method was developed for the determination of albumin in patients' urine samples without coeluting proteins, and it was compared with the immunoturbidimetric determination of albumin. Urine albumin is an important biomarker in diabetic patients, but part of it is immuno-nonreactive. Albumin was determined by HPLC with UV detection at 280 nm on a Zorbax 300SB-C3 column. Immunoturbidimetric analysis was performed using a commercial kit on a COBAS INTEGRA® 400 automatic biochemistry analyzer (Roche Diagnostics GmbH, Mannheim, Germany). The HPLC method was fully validated. No significant interference from other proteins (transferrin, α-1-acid glycoprotein, α-1-antichymotrypsin, antitrypsin, hemopexin) was found. The results from 301 urine samples were compared with the immunochemical determination, and a statistically significant difference between the methods was found (P = 0.0001, Mann-Whitney test). The new, simple HPLC method thus determines urine albumin without coeluting proteins, and our data indicate that it is highly specific and more sensitive than immunoturbidimetry. © 2016 Wiley Periodicals, Inc.

  6. Methods for determining unimpeded aircraft taxiing time and evaluating airport taxiing performance

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2017-04-01

    Full Text Available The objective of this study is to improve methods of determining unimpeded (nominal) taxiing time, the reference time used for estimating taxiing delay, a widely accepted performance indicator of airport surface movement. After reviewing the existing methods widely used by different air navigation service providers (ANSPs), new methods relying on computer software and statistical tools, as well as econometric regression models, are proposed. The regression models are highly recommended because they require less detailed data and can serve the needs of general performance analysis of airport surface operations. The proposed econometric model outperforms existing ones by introducing more explanatory variables, in particular by taking aircraft passing and over-passing into account in the queue length calculation and by including runway configuration, ground delay programs, and weather factors. The length of the aircraft queue in the taxiway system and the interaction between queues are major contributors to long taxi-out times. The proposed method provides a consistent and more accurate way of calculating taxiing delay, and it can be used for ATM-related performance analysis and international comparison.
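
    A hedged sketch of the econometric idea (an ordinary-least-squares toy with an invented variable set; the authors' actual specification includes more factors such as runway configuration, ground delay programs, and weather):

    ```python
    import numpy as np

    # toy taxi-out records: [queue_length, n_passings, weather_flag] per flight
    X = np.array([[3, 0, 0],
                  [7, 1, 0],
                  [2, 0, 1],
                  [9, 2, 1],
                  [5, 1, 0]], dtype=float)
    taxi_out_min = np.array([11.0, 17.5, 10.2, 22.8, 14.1])

    # OLS fit: taxi_out = b0 + b1*queue + b2*passings + b3*weather
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    beta, *_ = np.linalg.lstsq(A, taxi_out_min, rcond=None)

    # the intercept approximates the unimpeded (zero-queue) taxi time, and
    # observed minus unimpeded gives a rough taxiing-delay estimate per flight
    unimpeded = beta[0]
    delay = taxi_out_min - unimpeded
    print(round(unimpeded, 2), delay.round(2))
    ```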

  7. A proposal on evaluation method of neutron absorption performance to substitute conventional neutron attenuation test

    International Nuclear Information System (INIS)

    Kim, Je Hyun; Shim, Chang Ho; Kim, Sung Hyun; Choe, Jung Hun; Cho, In Hak; Park, Hwan Seo; Park, Hyun Seo; Kim, Jung Ho; Kim, Yoon Ho

    2016-01-01

    For the verification of newly developed neutron absorbers, one of the guidelines on their qualification and acceptance is the neutron attenuation test. However, this approach has a qualification problem: it cannot distinguish how the neutrons are attenuated by the material. In this study, a method for estimating the neutron absorption performance of materials is proposed that detects both directly penetrating and back-scattered neutrons. To verify the proposed method, MCNP simulations of the experimental system designed in this study were carried out for polyethylene, iron, normal glass, and the vitrified form. The results show that neutron absorption ability can easily be tested using a single-absorber model. From the simulation results of the single-absorber and double-absorber models, it is also verified that the proposed method can evaluate not only the direct thermal neutrons passing through a material but also the scattered neutrons reflected from it. The neutron absorption performance can therefore be estimated more accurately with the proposed method than with the conventional neutron attenuation test. It is expected that the proposed method can contribute to increasing the reliability of the performance evaluation of neutron absorbers.

  8. Stego Keys Performance on Feature Based Coding Method in Text Domain

    Directory of Open Access Journals (Sweden)

    Din Roshidi

    2017-01-01

    Full Text Available A critical factor in the embedding process of any text steganography method is the key used, known as the stego key. This factor influences the success of the embedding process in hiding a message from third parties or any adversary. One important aspect of the embedding process is the fitness performance of the stego key, for which three parameters have been identified: capacity ratio, embedded fitness ratio, and saving space ratio. The better the capacity ratio, embedded fitness ratio, and saving space ratio a stego key offers, the more message content can be hidden. The main objective of this paper is therefore to analyze the stego keys of three feature-based coding methods, namely CALP, VERT, and QUAD, for text steganography in terms of their capacity ratio, embedded fitness ratio, and saving space ratio. It is found that the CALP method gives good performance compared to the VERT and QUAD methods.

  9. Optical performance of multifocal soft contact lenses via a single-pass method.

    Science.gov (United States)

    Bakaraju, Ravi C; Ehrmann, Klaus; Falk, Darrin; Ho, Arthur; Papas, Eric

    2012-08-01

    A physical model eye capable of carrying soft contact lenses (CLs) was used as a platform to evaluate the optical performance of several commercial multifocal CLs (MFCLs) with high and low add powers and of a single-vision control. Optical performance was evaluated at three pupil sizes, six target vergences, and five CL-correcting positions using a spatially filtered monochromatic (632.8 nm) light source; the various target vergences were achieved with negative trial lenses. A photosensor in the retinal plane recorded the image point-spread, which enabled the computation of visual Strehl ratios, and the centration of the CLs was monitored by an additional integrated en face camera. Hydration of the correcting lens was maintained with a humidity chamber and repeated instillations of rewetting saline drops. All the MFCLs reduced performance for distance but considerably improved performance across the range of distance-to-near target vergences, relative to the single-vision CL. Performance depended on add power, design, pupil, and centration of the correcting CLs. The Proclear (D) design produced good performance for intermediate vision, whereas the Proclear (N) design performed well at near vision for pupils larger than 4 mm in diameter. The Acuvue Oasys bifocal produced performance comparable with the single-vision CL for most vergences. Direct measurement of single-pass images at the retinal plane of a physical model eye used in conjunction with various MFCLs is demonstrated; this method may have utility in evaluating the relative effectiveness of commercial and prototype designs.

  10. Three-dimensional vision enhances task performance independently of the surgical method.

    Science.gov (United States)

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach; performance was better with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  11. A novel simulation method to evaluate the collection performance of a monolithic active pixel sensor

    International Nuclear Information System (INIS)

    Fu Min; Tang Zhen'an

    2011-01-01

    A novel simulation method is presented in this paper to evaluate the collection performance of monolithic active pixel sensor (MAPS) devices for minimum ionizing particle tracking. A simplified 3D matrix pixel structure is built using the computer-aided design software Sentaurus. The virtual device is then divided into hundreds of parts, and an independent customized X-ray photon model is applied in each part to simulate the conditions under 55Fe radiation. After data processing and analysis, the charge collection efficiency, collection time, and diffusion conditions can be estimated in detail. To verify the reliability of the method, comparisons are made between the simulations and experiments. Although there are some shortcomings, it can be concluded that the proposed approach is a feasible method for evaluating the MAPS collection performance. (authors)

  12. Model Multi Criteria Decision Making with Fuzzy ANP Method for Performance Measurement Small Medium Enterprise (SME)

    Science.gov (United States)

    Rahmanita, E.; Widyaningrum, V. T.; Kustiyahningsih, Y.; Purnama, J.

    2018-04-01

    SMEs play a very important role in the development of the economy in Indonesia: they assist the government in creating new jobs and can support household income. The large number of SMEs in Madura, and of measurement indicators in SME mapping, calls for a systematic method, so this research uses the Fuzzy Analytic Network Process (FANP) method for SME performance measurement. The FANP method can handle data that contain uncertainty, and a consistency index supports the determination of decisions. Performance measurement in this study is based on the perspectives of the Balanced Scorecard; the research approach integrates the internal business perspective, the learning and growth perspective, and the FANP. The result of this research is a framework of priority weightings for SME assessment indicators.

  13. Research on design methods and aerodynamics performance of CQUDTU-B21 airfoil

    DEFF Research Database (Denmark)

    Chen, Jin; Cheng, Jiangtao; Wen, Zhong Shen

    2012-01-01

    This paper presents the design method of the CQU-DTU-B21 airfoil for wind turbines. Compared with the traditional method of inverse design, the new method is described directly by a compound objective function that balances several conflicting requirements for wind turbine airfoil design, based on the design theory of airfoil profiles, blade element momentum (BEM) theory, and an airfoil self-noise prediction model. An optimization model with the target of maximum power performance of a 2D airfoil and low noise emission over the design range of angles of attack has then been developed for designing the CQU-DTU-B21 airfoil. To validate the optimization results, a comparison of the aerodynamic performance, obtained by XFOIL and by wind tunnel tests at Re = 3×10⁶, is made between the CQU-DTU-B21 and the DU93-W-210, which is widely used in wind turbines. © (2012) Trans Tech Publications, Switzerland.

  14. Performance of NAA methods in an International Interlaboratory Reference Material Characterization Campaign

    International Nuclear Information System (INIS)

    Ihnat, M.

    2000-01-01

    An extensive database of analytical results from a recent biological-matrix Reference Material Characterization Campaign permitted an intercomparison of the performance of various methods, both among each other and against the 'true' best-estimate concentration values established for these materials. Six variants of neutron activation analysis (NAA) were employed: instrumental neutron activation analysis, instrumental neutron activation analysis with acid digestion, neutron activation analysis with radiochemical separation, neutron capture prompt gamma activation analysis, epithermal instrumental neutron activation analysis, and neutron activation analysis with preconcentration. The precision and accuracy of the NAA-based analytical methods are compared with those of three other major techniques, atomic absorption spectrometry (AAS), atomic emission spectrometry (AES), and mass spectrometry (MS), for 28 elements in 10 natural-matrix materials. (author)

  15. PERFORMANCE CRITERIA of RUBBER UKM BASED ON COOPERATIVE USING BALANCED SCORECARD METHOD

    Directory of Open Access Journals (Sweden)

    Dessy Iriani Putri

    2017-01-01

    Full Text Available Small and Medium Enterprises (UKM) Usaha Mandiri is the only UKM in Putri Hijau district, North Bengkulu, that carries out its activities based on cooperative principles. The UKM was established in 2010 out of rubber farmers' concern about their future. In the last few years the rubber price has declined, resulting in decreasing performance of UKM Usaha Mandiri. The objective of this research is to analyze the performance of UKM Usaha Mandiri. The methods used are the balanced scorecard (BSC) and focus group discussions (FGD) to identify the UKM's strategic aims and the BSC value. The results show that the performance of UKM Usaha Mandiri started to decline over 2013-2015. The BSC scores from 2011-2015 were 112.93 (excellent), 141.84 (excellent), 95.41 (excellent), 47.42 (average), and 81.62 (very good). The decline in performance was due to the decline of the natural rubber price. The conclusion of the research is that in 2015 the performance of UKM Usaha Mandiri was categorized as very good (BSC method), with the largest contribution from the growth and learning perspective. Keywords: Balanced scorecard, BSC, cooperation, performance measurement, UKM

  16. ANALYSIS AND PERFORMANCE MEASUREMENT OF EXISTING SOLUTION METHODS OF QUADRATIC ASSIGNMENT PROBLEM

    Directory of Open Access Journals (Sweden)

    Morteza KARAMI

    2014-01-01

    Full Text Available The Quadratic Assignment Problem (QAP) is known as one of the most difficult combinatorial optimization problems and is classified as NP-hard. The Quadratic Assignment Problem Library (QAPLIB) is a comprehensive database of QAPs containing problems from different authors and of different sizes. Many exact and meta-heuristic solution methods have been introduced to solve the QAP. In this study we focus on previously introduced solution methods, e.g., Branch and Bound (B&B), the Simulated Annealing (SA) algorithm, and the Greedy Randomized Adaptive Search Procedure (GRASP), for dense and sparse QAPs. The FORTRAN codes for these methods were downloaded from QAPLIB, and all problems in QAPLIB were solved by the above-mentioned methods. Several results were obtained from the computational experiments. They show that the Branch and Bound method is able to produce a feasible solution for all problems, while the Simulated Annealing algorithm and GRASP are not able to find any solution for some problems. On the other hand, Simulated Annealing and GRASP have shorter run times than the Branch and Bound method. In addition, the performance of the methods with respect to objective function value is discussed.
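
    For reference, the QAP asks for a permutation p of n facilities onto n locations minimizing Σ_i Σ_j F[i][j]·D[p(i)][p(j)] for a flow matrix F and a distance matrix D. A minimal sketch of this objective plus one simulated-annealing swap move (illustrative only, not the QAPLIB FORTRAN codes):

    ```python
    import random
    import numpy as np

    def qap_cost(F: np.ndarray, D: np.ndarray, p: list) -> float:
        """Cost of assigning facility i to location p[i]:
        sum over i, j of F[i][j] * D[p[i]][p[j]]."""
        idx = np.array(p)
        return float((F * D[np.ix_(idx, idx)]).sum())

    def sa_swap_step(F, D, p, temperature: float) -> list:
        """One simulated-annealing move: swap two assignments and accept
        worse solutions with Boltzmann probability."""
        i, j = random.sample(range(len(p)), 2)
        q = p[:]
        q[i], q[j] = q[j], q[i]
        delta = qap_cost(F, D, q) - qap_cost(F, D, p)
        if delta <= 0 or random.random() < np.exp(-delta / temperature):
            return q
        return p

    # toy 4x4 instance with a simple cooling schedule
    rng = np.random.default_rng(0)
    F = rng.integers(0, 10, (4, 4))
    D = rng.integers(0, 10, (4, 4))
    p = list(range(4))
    for t in range(100):
        p = sa_swap_step(F, D, p, temperature=10.0 / (1 + t))
    print(p, qap_cost(F, D, p))
    ```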

  17. Performance of wave function and density functional methods for water hydrogen bond spin-spin coupling constants.

    Science.gov (United States)

    García de la Vega, J M; Omar, S; San Fabián, J

    2017-04-01

    Spin-spin coupling constants in the water monomer and dimer have been calculated using several wave function and density functional based methods. The CCSD, MCSCF, and SOPPA wave function methods yield similar results, especially when an additive approach is used with the MCSCF. Several functionals were used to analyze their performance along the Jacob's ladder, and a set of functionals with different fractions of HF exchange was tested. Functionals with a large fraction of HF exchange appropriately predict the 1J(OH), 2J(HH) and 2hJ(OO) couplings, while 1hJ(OH) is better calculated with functionals that include a reduced fraction of HF exchange. Accurate functionals for 1J(OH) and 2J(HH) were tested in a tetramer water model. The hydrogen bond effects on these intramolecular couplings are additive when they are calculated by the SOPPA(CCSD) wave function and DFT methods. Graphical Abstract: Evaluation of the additive effect of the hydrogen bond on spin-spin coupling constants of water using WF and DFT methods.

  18. Advanced nuclear power plant regulation using risk-informed and performance-based methods

    International Nuclear Information System (INIS)

    Modarres, Mohammad

    2009-01-01

    This paper proposes and discusses the implications of a largely probabilistic regulatory framework using best-estimate, goal-driven, risk-informed, and performance-based methods. The framework relies on continuous probabilistic assessment of the performance of a set of time-dependent, safety-critical systems, structures, components, and procedures that assure attainment of a broad set of overarching technology-neutral protective, mitigative, and preventive goals under all phases of plant operation. In this framework, acceptable levels of performance are set through formal apportionment so that they are commensurate with the overarching goals. Regulatory acceptance would then be based on the confidence level with which the plant conforms to these goals and performance objectives. The proposed framework retains the traditional defense-in-depth design and operation regulatory philosophy when uncertainty about conformance to specific goals and objectives is high. Finally, the paper discusses the steps needed to develop a corresponding technology-neutral regulatory approach from the proposed framework.

  19. Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements

    Science.gov (United States)

    Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)

    2016-01-01

    The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. The probability of successfully completing the task is made inversely proportional to the physiological difference value by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.
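
    As a hedged illustration of the feedback rule described (a minimal sketch; the signal, gain constant, and linear difficulty mapping are invented assumptions, not the patented method):

    ```python
    def physiological_difference(target_signal: float, measured_signal: float) -> float:
        """Absolute difference between the optimal and the measured signal."""
        return abs(target_signal - measured_signal)

    def task_difficulty(diff: float, gain: float = 1.0) -> float:
        """Map the difference value to an environmental difficulty attribute so
        that success probability falls as the trainee drifts from the target
        state. The specific mapping (linear in diff) is an illustrative choice."""
        return gain * diff

    # toy loop: a trainee's heart-rate-like signal approaching a target of 60
    target = 60.0
    for measured in (75.0, 68.0, 62.0, 60.5):
        d = physiological_difference(target, measured)
        print(f"diff={d:4.1f} -> difficulty={task_difficulty(d, gain=0.2):.2f}")
    ```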

  20. Complex Hand Dexterity: A Review of Biomechanical Methods for Measuring Musical Performance

    Directory of Open Access Journals (Sweden)

    Cheryl Diane Metcalf

    2014-05-01

    Full Text Available Complex hand dexterity is fundamental to our interactions with the physical, social, and cultural environment. Dexterity can be an expression of creativity and precision in a range of activities, including musical performance. Little is understood about complex hand dexterity or how virtuoso expertise is acquired, owing to the versatility of the movement combinations available to complete any given task. This has historically limited progress in the field because of the difficulty of measuring movements of the hand. Recent developments in methods of motion capture and analysis mean it is now possible to explore the intricate movements of the hand and fingers. These methods give us insights into the neurophysiological mechanisms underpinning complex hand dexterity and motor learning, and they also allow investigation into the key factors that contribute to injury, recovery, and functional compensation. The application of such analytical techniques to musical performance provides a multidisciplinary framework for purposeful investigation into the process of learning and skill acquisition in instrumental performance, since these highly skilled manual and cognitive tasks represent the ultimate achievement in complex hand dexterity. This paper reviews methods of assessing instrumental performance in music, focusing specifically on biomechanical measurement and the associated technical challenges faced when measuring highly dexterous activities.