WorldWideScience

Sample records for classification schemes based

  1. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    Science.gov (United States)

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals, which causes classification performance to deteriorate over experimental sessions. Adaptive classification techniques are therefore required for EEG-based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data, and a dictionary modification method using the incoherence measure of the training data, are investigated. The proposed methods are very simple, and no additional computation for re-training the classifier is needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets and assessed by comparing classification results with conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show improved classification accuracy compared to conventional methods without requiring additional computation.
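The SRC approach summarized above represents a test feature vector as a sparse combination of training samples and assigns the class whose atoms reconstruct it with the smallest residual. A minimal NumPy sketch of that core idea follows, using a simple orthogonal matching pursuit solver; the paper's dictionary-update steps are not reproduced, and all names here are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y over dictionary D."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the current support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Assign y to the class whose training atoms reconstruct it best."""
    x = omp(D, y, k)
    errs = {}
    for c in set(labels):
        mask = np.array([lbl == c for lbl in labels])
        xc = np.where(mask, x, 0.0)       # keep only this class's coefficients
        errs[c] = np.linalg.norm(y - D @ xc)
    return min(errs, key=errs.get)
```

For example, with a dictionary whose first two columns belong to class 'A' and last two to class 'B', a test vector lying in the span of the 'A' atoms is assigned to 'A'.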

  2. A Classification Scheme for Production System Processes

    DEFF Research Database (Denmark)

    Sørensen, Daniel Grud Hellerup; Brunø, Thomas Ditlev; Nielsen, Kjeld

    2018-01-01

    Manufacturing companies often have difficulties developing production platforms, partly due to the complexity of many production systems and difficulty determining which processes constitute a platform. Understanding production processes is an important step to identifying candidate processes for a production platform based on existing production systems. Reviewing a number of existing classifications and taxonomies, a consolidated classification scheme for processes in production of discrete products has been outlined. The classification scheme helps ensure consistency during mapping of existing …

  3. Small-scale classification schemes

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2004-01-01

    Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main effects remained largely implicit. The requirements classification contributed to constraining the requirements-engineering process by supporting the software engineers in maintaining some level of control over the process. This way, the requirements classification provided the software engineers …

  4. Classification schemes for knowledge translation interventions: a practical resource for researchers.

    Science.gov (United States)

    Slaughter, Susan E; Zimmermann, Gabrielle L; Nuspl, Megan; Hanson, Heather M; Albrecht, Lauren; Esmail, Rosmin; Sauro, Khara; Newton, Amanda S; Donald, Maoliosa; Dyson, Michele P; Thomson, Denise; Hartling, Lisa

    2017-12-06

    As implementation science advances, the number of interventions to promote the translation of evidence into healthcare, health systems, or health policy is growing. Accordingly, classification schemes for these knowledge translation (KT) interventions have emerged. A recent scoping review identified 51 classification schemes of KT interventions to integrate evidence into healthcare practice; however, the review did not evaluate the quality of the classification schemes or provide detailed information to assist researchers in selecting a scheme for their context and purpose. This study aimed to further examine and assess the quality of these classification schemes of KT interventions, and provide information to aid researchers when selecting a classification scheme. We abstracted the following information from each of the original 51 classification scheme articles: authors' objectives; purpose of the scheme and field of application; socioecologic level (individual, organizational, community, system); adaptability (broad versus specific); target group (patients, providers, policy-makers); intent (policy, education, practice); and purpose (dissemination versus implementation). Two reviewers independently evaluated the methodological quality of the development of each classification scheme using an adapted version of the AGREE II tool. Based on these assessments, two independent reviewers reached consensus about whether or not to recommend each scheme for researcher use. Of the 51 original classification schemes, we excluded seven that were not specific classification schemes, were not accessible, or were duplicates. Of the remaining 44 classification schemes, nine were not recommended. Of the 35 recommended classification schemes, ten focused on behaviour change and six focused on population health. Many schemes (n = 29) addressed practice considerations. Fewer schemes addressed educational or policy objectives. Twenty-five classification schemes had broad applicability

  5. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    Directory of Open Access Journals (Sweden)

    Vijay G. S.

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using the Fisher's Criterion (FC). Extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
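The Fisher's Criterion feature selection mentioned above ranks features by between-class separation relative to within-class spread. A small NumPy sketch of that score (the wavelet denoising and ANN/SVM stages of the paper are not reproduced; this is only the feature-ranking step):

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher criterion per feature: between-class scatter over within-class scatter."""
    X, y = np.asarray(X, float), np.asarray(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])   # between-class scatter per feature
    den = np.zeros(X.shape[1])   # within-class scatter per feature
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)   # small epsilon guards against zero variance
```

Features with the highest scores would be the "sensitive" ones retained for classifier training.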

  6. A classification scheme for LWR fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Moore, R.S.; Williamson, D.A.; Notz, K.J.

    1988-11-01

    With over 100 light water nuclear reactors operating nationwide, representing designs by four primary vendors, and with reload fuel manufactured by these vendors and additional suppliers, a wide variety of fuel assembly types are in existence. At Oak Ridge National Laboratory, both the Systems Integration Program and the Characteristics Data Base project required a classification scheme for these fuels. This scheme can be applied to other areas and is expected to be of value to many Office of Civilian Radioactive Waste Management programs. To develop the classification scheme, extensive information on the fuel assemblies that have been and are being manufactured by the various nuclear fuel vendors was compiled, reviewed, and evaluated. It was determined that it is possible to characterize assemblies in a systematic manner, using a combination of physical factors. A two-stage scheme was developed consisting of 79 assembly types, which are grouped into 22 assembly classes. The assembly classes are determined by the general design of the reactor cores in which the assemblies are, or were, used. The general BWR and PWR classes are divided differently but both are based on reactor core configuration. 2 refs., 15 tabs.

  8. Proposal of a new classification scheme for periocular injuries

    Directory of Open Access Journals (Sweden)

    Devi Prasad Mohapatra

    2017-01-01

    Background: Eyelids are important structures that protect the globe from trauma and brightness, maintain the integrity of the tear film, move tears towards the lacrimal drainage system, and contribute to the aesthetic appearance of the face. Ophthalmic trauma is an important cause of morbidity among individuals and is also responsible for additional healthcare costs. Periocular trauma involving the eyelids and adjacent structures has increased recently, probably due to the increased pace of life and increased dependence on machinery. A comprehensive classification of periocular trauma would help in stratifying these injuries as well as studying outcomes. Materials and Methods: This study was carried out at our institute from June 2015 to December 2015. We searched multiple English language databases for existing classification systems for periocular trauma and designed a system of classification of periocular soft tissue injuries based on clinico-anatomical presentations. This classification was applied prospectively to patients presenting with periocular soft tissue injuries to our department. Results: A comprehensive classification scheme was designed, consisting of five types of periocular injuries. A total of 38 eyelid injuries in 34 patients were evaluated in this study. According to the System for Peri-Ocular Trauma (SPOT) classification, Type V injuries were most common; SPOT Type II injuries were the most common isolated injuries among all zones. Discussion: Classification systems are necessary to provide a framework in which to scientifically study the etiology, pathogenesis, and treatment of diseases in an orderly fashion. The SPOT classification takes into account the periocular soft tissue injuries, i.e., upper eyelid, lower eyelid, and medial and lateral canthus injuries, based on observed clinico-anatomical patterns of eyelid injuries. Conclusion: The SPOT classification seems to be a reliable

  9. A Classification Scheme for Literary Characters

    Directory of Open Access Journals (Sweden)

    Matthew Berry

    2017-10-01

    There is no established classification scheme for literary characters in narrative theory short of generic categories like protagonist vs. antagonist or round vs. flat. This is so despite the ubiquity of stock characters that recur across media, cultures, and historical time periods. We present here a proposal for a systematic psychological scheme for classifying characters from the literary and dramatic fields, based on a modification of the Thomas-Kilmann (TK) Conflict Mode Instrument used in applied studies of personality. The TK scheme classifies personality along the two orthogonal dimensions of assertiveness and cooperativeness. To examine the validity of a modified version of this scheme, we had 142 participants provide personality ratings for 40 characters using two of the Big Five personality traits as well as assertiveness and cooperativeness from the TK scheme. The results showed that assertiveness and cooperativeness were orthogonal dimensions, thereby supporting the validity of using a modified version of TK's two-dimensional scheme for classifying characters.
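The TK scheme's two orthogonal dimensions can be turned into a toy classifier that maps a character's ratings onto the five conflict-handling modes (four quadrants plus a central "compromising" zone). The thresholds below are illustrative assumptions, not values from the study:

```python
def tk_mode(assertiveness, cooperativeness, mid=0.5):
    """Map (assertiveness, cooperativeness) ratings in [0, 1] onto the five
    Thomas-Kilmann conflict modes. The 0.15 band around the midpoint that
    defines 'compromising' is an arbitrary illustrative choice."""
    if abs(assertiveness - mid) < 0.15 and abs(cooperativeness - mid) < 0.15:
        return "compromising"
    if assertiveness >= mid:
        return "collaborating" if cooperativeness >= mid else "competing"
    return "accommodating" if cooperativeness >= mid else "avoiding"
```

A highly assertive, uncooperative villain would land in "competing"; a character low on both dimensions in "avoiding".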

  10. A scheme for a flexible classification of dietary and health biomarkers

    DEFF Research Database (Denmark)

    Gao, Qian; Pratico, Giulia; Scalbert, Augustin

    2017-01-01

    … in the scientific literature. However, the existing concepts for classification of biomarkers in the dietary and health area may be ambiguous, leading to uncertainty about their application. In order to better understand the potential of biomarkers and to communicate their use and application, it is imperative to have a solid scheme for biomarker classification that will provide a well-defined ontology for the field. In this manuscript, we provide an improved scheme for biomarker classification based on their intended use rather than the technology or outcomes (six subclasses are suggested: food compound intake …) … with previous biomarker classification for this field of research.

  11. A new classification scheme of European cold-water coral habitats: Implications for ecosystem-based management of the deep sea

    Science.gov (United States)

    Davies, J. S.; Guillaumont, B.; Tempera, F.; Vertino, A.; Beuck, L.; Ólafsdóttir, S. H.; Smith, C. J.; Fosså, J. H.; van den Beld, I. M. J.; Savini, A.; Rengstorf, A.; Bayle, C.; Bourillet, J.-F.; Arnaud-Haond, S.; Grehan, A.

    2017-11-01

    Cold-water corals (CWC) can form complex structures which provide refuge, nursery grounds and physical support for a diversity of other living organisms. However, irrespective of such ecological significance, CWCs are still vulnerable to human pressures such as fishing, pollution, ocean acidification and global warming. Providing coherent and representative conservation of vulnerable marine ecosystems including CWCs is one of the aims of the Marine Protected Areas networks being implemented across European seas and oceans under the EC Habitats Directive, the Marine Strategy Framework Directive and the OSPAR Convention. In order to adequately represent ecosystem diversity, these initiatives require a standardised habitat classification that organises the variety of biological assemblages and provides consistent and functional criteria to map them across European Seas. One such classification system, EUNIS, enables a broad level classification of the deep sea based on abiotic and geomorphological features. More detailed lower biotope-related levels are currently under-developed, particularly with regards to deep-water habitats (>200 m depth). This paper proposes a hierarchical CWC biotope classification scheme that could be incorporated by existing classification schemes such as EUNIS. The scheme was developed within the EU FP7 project CoralFISH to capture the variability of CWC habitats identified using a wealth of seafloor imagery datasets from across the Northeast Atlantic and Mediterranean. Depending on the resolution of the imagery being interpreted, this hierarchical scheme allows data to be recorded from broad CWC biotope categories down to detailed taxonomy-based levels, thereby providing a flexible yet valuable information level for management. The CWC biotope classification scheme identifies 81 biotopes and highlights the limitations of the classification framework and guidance provided by EUNIS, the EC Habitats Directive, OSPAR and FAO, which largely

  12. A classification scheme for risk assessment methods.

    Energy Technology Data Exchange (ETDEWEB)

    Stamp, Jason Edwin; Campbell, Philip LaRoche

    2004-08-01

    This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail and approach. The resulting structure is shown in Table 1 and explained in the body of the report. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses, and those arrangements shift gradually as one moves through the table, each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation given. The matrix, with type names in the cells, is introduced in Table 2 on page 13. Unless otherwise stated, we use the word 'method' in this report to refer to a 'risk assessment method'. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In
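The report's 3x3 structure amounts to a lookup from (abstraction level, approach) to a method type. A sketch with hypothetical placeholder names follows; the real row, column, and cell names are those given in the report's Tables 1 and 2, not these:

```python
# Hypothetical stand-ins for the report's three abstraction levels (rows)
# and three approaches (columns); the actual names come from the report.
LEVELS = ("high", "medium", "low")
APPROACHES = ("first", "second", "third")

# One method-type entry per cell of the 3x3 matrix (placeholder labels).
MATRIX = {(lvl, app): f"{lvl}/{app} method family"
          for lvl in LEVELS for app in APPROACHES}

def lookup(level, approach):
    """Return the method type occupying one cell of the matrix."""
    return MATRIX[(level, approach)]
```

Choosing a method then reduces to locating the cell whose strengths fit the situation at hand.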

  13. Establishment of water quality classification scheme: a case study of ...

    African Journals Online (AJOL)

    A water quality classification scheme based on 11 routinely measured physicochemical variables has been developed for the Calabar River Estuary. The variables considered include water temperature, pH, Eh, DO, DO saturation, BOD5, COD, TSS, turbidity, NH4-N and electrical conductivity. Classification of water source ...

  14. Construction of a knowledge classification scheme for sharing and usage of knowledge

    International Nuclear Information System (INIS)

    Yoo, Jae Bok; Oh, Jeong Hoon; Lee, Ji Ho; Ko, Young Chul

    2003-12-01

    To efficiently share knowledge among our members on the basis of a knowledge management system, we first need to systematically design a knowledge classification scheme under which this knowledge can be classified well. The objective of this project is to construct the knowledge classification scheme most suitable for sharing across the Korea Atomic Energy Research Institute (KAERI). To construct a knowledge classification scheme covering the whole organization, we established a few design principles and examined many related classification schemes. We then carried out three steps to complete the most desirable KAERI knowledge classification scheme: 1) designing a draft of the scheme, 2) revising the draft, and 3) verifying the revised scheme and deciding on its final form. The scheme completed as a result of this project consists of 218 items in total: 8 sections, 43 classes and 167 sub-classes. We expect that the knowledge classification scheme designed in this project will play an important role as the frame for sharing knowledge among our members when we introduce a knowledge management system in our organization. In addition, we expect that both the scheme itself and the methods used to design it can be applied when designing knowledge classification schemes at other R and D institutes and enterprises
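A three-level section/class/sub-class scheme like the one described (8 sections, 43 classes, 167 sub-classes) can be represented as a nested mapping from codes to labels. The codes and labels below are invented placeholders for illustration, not KAERI's actual scheme:

```python
# Each section maps to (label, classes); each class to (label, sub-classes).
# All codes and labels here are hypothetical examples.
scheme = {
    "R": ("Reactor technology", {
        "R1": ("Reactor physics", {"R1a": "Core analysis", "R1b": "Shielding"}),
    }),
    "S": ("Safety", {
        "S1": ("Risk assessment", {"S1a": "PSA", "S1b": "Human factors"}),
    }),
}

def classify(path):
    """Resolve a (section, class, sub-class) code path to its sub-class label."""
    section, cls, sub = path
    return scheme[section][1][cls][1][sub]
```

A knowledge item tagged ("S", "S1", "S1b") would thus file under Safety > Risk assessment > Human factors.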

  15. Development and test of a classification scheme for human factors in incident reports

    International Nuclear Information System (INIS)

    Miller, R.; Freitag, M.; Wilpert, B.

    1997-01-01

    The Research Center System Safety of the Berlin University of Technology conducted a research project on the analysis of Human Factors (HF) aspects in incidents reported by German nuclear power plants. Based on psychological theories and empirical studies, a classification scheme was developed that permits the identification of human involvement in incidents. The classification scheme was applied in an epidemiological study to a selection of more than 600 HF-relevant incidents. The results allow insights into HF-related problem areas. An additional study proved that the application of the classification scheme produces results which are reliable and independent of raters. (author). 13 refs, 1 fig

  16. A NEW SAR CLASSIFICATION SCHEME FOR SEDIMENTS ON INTERTIDAL FLATS BASED ON MULTI-FREQUENCY POLARIMETRIC SAR IMAGERY

    Directory of Open Access Journals (Sweden)

    W. Wang

    2017-11-01

    We present a new classification scheme for muddy and sandy sediments on exposed intertidal flats, which is based on synthetic aperture radar (SAR) data, and use ALOS-2 (L-band), Radarsat-2 (C-band) and TerraSAR-X (X-band) fully polarimetric SAR imagery to demonstrate its effectiveness. Four test sites on the German North Sea coast were chosen, which represent typical surface compositions of different sediments, vegetation, and habitats, and of which a large amount of SAR data is used for our analyses. Both Freeman-Durden and Cloude-Pottier polarimetric decompositions are utilized, and an additional descriptor called the Double-Bounce Eigenvalue Relative Difference (DERD) is introduced into the feature sets instead of the original polarimetric intensity channels. The classification is conducted following Random Forest theory, and the results are verified using ground truth data from field campaigns and an existing classification based on optical imagery. In addition, the use of Kennaugh elements for classification purposes is demonstrated using both fully and dual-polarization multi-frequency and multi-temporal SAR data. Our results show that the proposed classification scheme can be applied for the discrimination of muddy and sandy sediments using L-, C-, and X-band SAR images, while SAR imagery acquired at short wavelengths (C- and X-band) can also be used to detect more detailed features such as bivalve beds on intertidal flats.
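One standard descriptor in Cloude-Pottier-based feature sets of this kind, the polarimetric entropy H, is computed directly from the eigenvalues of the coherency matrix. A NumPy sketch of that well-known formula is shown below (the paper's DERD descriptor is not reproduced, as its definition is not given in the abstract):

```python
import numpy as np

def polarimetric_entropy(eigvals):
    """Cloude-Pottier entropy from the three coherency-matrix eigenvalues:
    H = -sum(p_i * log3(p_i)) with p_i = lam_i / sum(lam). H is 0 for a
    single dominant scattering mechanism and 1 for fully random scattering."""
    lam = np.asarray(eigvals, float)
    p = lam / lam.sum()
    p = np.clip(p, 1e-12, 1.0)          # avoid log(0) for zero eigenvalues
    return float(-(p * (np.log(p) / np.log(3))).sum())
```

Such per-pixel descriptors would then be stacked into the feature vectors handed to the Random Forest classifier.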

  17. Sound classification of dwellings - Comparison of schemes in Europe

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2009-01-01

    National sound classification schemes for dwellings exist in nine countries in Europe, and proposals are under preparation in more countries. The schemes specify class criteria concerning several acoustic aspects, the main criteria being about airborne and impact sound insulation between dwellings, facade sound insulation and installation noise. The quality classes reflect different levels of acoustical comfort. The paper presents and compares the sound classification schemes in Europe. The schemes have been implemented and revised gradually since the 1990s. However, due to lack of coordination …

  18. A hierarchical classification scheme of psoriasis images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from …

  19. A risk-based classification scheme for genetically modified foods. I: Conceptual development.

    Science.gov (United States)

    Chao, Eunice; Krewski, Daniel

    2008-12-01

    The predominant paradigm for the premarket assessment of genetically modified (GM) foods reflects heightened public concern by focusing on foods modified by recombinant deoxyribonucleic acid (rDNA) techniques, while foods modified by other methods of genetic modification are generally not assessed for safety. To determine whether a GM product requires less or more regulatory oversight and testing, we developed and evaluated a risk-based classification scheme (RBCS) for crop-derived GM foods. The results of this research are presented in three papers. This paper describes the conceptual development of the proposed RBCS that focuses on two categories of adverse health effects: (1) toxic and antinutritional effects, and (2) allergenic effects. The factors that may affect the level of potential health risks of GM foods are identified. For each factor identified, criteria for differentiating health risk potential are developed. The extent to which a GM food satisfies applicable criteria for each factor is rated separately. A concern level for each category of health effects is then determined by aggregating the ratings for the factors using predetermined aggregation rules. An overview of the proposed scheme is presented, as well as the application of the scheme to a hypothetical GM food.
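The final step described above, aggregating per-factor ratings into a concern level using predetermined aggregation rules, can be sketched as follows. The two rules shown (a conservative maximum and a rounded mean) are illustrative assumptions, not the RBCS's actual rules:

```python
def concern_level(factor_ratings, rule="max"):
    """Aggregate per-factor risk ratings (1 = low ... 3 = high) into a single
    concern level for one category of health effects. 'max' is conservative:
    one high-rated factor drives the overall concern level."""
    ratings = list(factor_ratings)
    if rule == "max":
        return max(ratings)
    return round(sum(ratings) / len(ratings))   # rounded-mean alternative
```

Under the conservative rule, a food rated low on most factors but high on one (e.g. a known allergenic motif) would still receive the highest concern level.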

  20. International proposal for an acoustic classification scheme for dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2014-01-01

    Acoustic classification schemes specify different quality levels for acoustic conditions. Regulations and classification schemes for dwellings typically include criteria for airborne and impact sound insulation, façade sound insulation and service equipment noise. However, although important...... classes, implying also trade barriers. Thus, a harmonized classification scheme would be useful, and the European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", running 2009-2013 with members from 32 countries, including three overseas...... for quality of life, information about acoustic conditions is rarely available, neither for new or existing housing. Regulatory acoustic requirements will, if enforced, ensure a corresponding quality for new dwellings, but satisfactory conditions for occupants are not guaranteed. Consequently, several...

  1. A risk-based classification scheme for genetically modified foods. III: Evaluation using a panel of reference foods.

    Science.gov (United States)

    Chao, Eunice; Krewski, Daniel

    2008-12-01

    This paper presents an exploratory evaluation of four functional components of a proposed risk-based classification scheme (RBCS) for crop-derived genetically modified (GM) foods in a concordance study. Two independent raters assigned concern levels to 20 reference GM foods using a rating form based on the proposed RBCS. The four components of evaluation were: (1) degree of concordance, (2) distribution across concern levels, (3) discriminating ability of the scheme, and (4) ease of use. At least one of the 20 reference foods was assigned to each of the possible concern levels, demonstrating the ability of the scheme to identify GM foods of different concern with respect to potential health risk. There was reasonably good concordance between the two raters for the three separate parts of the RBCS. The raters agreed that the criteria in the scheme were sufficiently clear in discriminating reference foods into different concern levels, and that with some experience, the scheme was reasonably easy to use. Specific issues and suggestions for improvements identified in the concordance study are discussed.
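Concordance between two independent raters, as assessed in this study, is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The abstract does not state which concordance measure was used, so the sketch below is a generic illustration:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Expected agreement if each rater assigned labels independently
    # according to their own marginal frequencies.
    expected = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (observed - expected) / (1 - expected)
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance (the sketch does not guard against the degenerate case where expected agreement equals 1).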

  2. State of the Art in the Cramer Classification Scheme and ...

    Science.gov (United States)

    Slide presentation at the SOT FDA Colloquium on State of the Art in the Cramer Classification Scheme and Threshold of Toxicological Concern in College Park, MD.

  3. A new classification scheme of plastic wastes based upon recycling labels

    Energy Technology Data Exchange (ETDEWEB)

    Özkan, Kemal, E-mail: kozkan@ogu.edu.tr [Computer Engineering Dept., Eskişehir Osmangazi University, 26480 Eskişehir (Turkey); Ergin, Semih, E-mail: sergin@ogu.edu.tr [Electrical Electronics Engineering Dept., Eskişehir Osmangazi University, 26480 Eskişehir (Turkey); Işık, Şahin, E-mail: sahini@ogu.edu.tr [Computer Engineering Dept., Eskişehir Osmangazi University, 26480 Eskişehir (Turkey); Işıklı, İdil, E-mail: idil.isikli@bilecik.edu.tr [Electrical Electronics Engineering Dept., Bilecik University, 11210 Bilecik (Turkey)

    2015-01-15

    Highlights: • PET, HDPE or PP types of plastics are considered. • An automated classification of plastic bottles based on feature extraction and classification methods is performed. • The decision mechanism consists of PCA, Kernel PCA, FLDA, SVD and Laplacian Eigenmaps methods. • SVM is selected to achieve the classification task and a majority voting technique is used. - Abstract: Since recycling of materials is widely assumed to be environmentally and economically beneficial, reliable sorting and processing of waste packaging materials such as plastics is very important for recycling with high efficiency. An automated system that can quickly categorize these materials is certainly needed for obtaining maximum classification while maintaining high throughput. In this paper, first of all, the photographs of the plastic bottles were taken and several preprocessing steps were carried out. The first preprocessing step is to extract the plastic area of a bottle from the background. Then, morphological image operations are implemented: edge detection, noise removal, hole removal, image enhancement, and image segmentation. These morphological operations can generally be defined in terms of combinations of erosion and dilation. The effects of bottle color and label are eliminated using these operations. Secondly, the pixel-wise intensity values of the plastic bottle images have been used together with the most popular subspace and statistical feature extraction methods to construct the feature vectors in this study. Only three types of plastics are considered, due to their higher prevalence than the other plastic types in the world. The decision mechanism consists of five different feature extraction methods, including Principal Component Analysis (PCA), Kernel PCA (KPCA), Fisher's Linear Discriminant Analysis (FLDA), Singular Value Decomposition (SVD) and Laplacian Eigenmaps (LEMAP), and uses a simple
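Of the subspace methods listed, PCA is the simplest to sketch: center the data, project onto the leading right singular vectors, then combine the labels produced by the parallel feature-extraction pipelines by majority vote, as the decision mechanism described above does. A hedged NumPy sketch, not the paper's implementation:

```python
from collections import Counter

import numpy as np

def pca_fit_transform(X, n_components):
    """Project feature vectors onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)                       # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]                # leading right singular vectors
    return Xc @ components.T, components

def majority_vote(predictions):
    """Combine labels from several pipelines into one final decision."""
    return Counter(predictions).most_common(1)[0][0]
```

In the full system each pipeline (PCA, KPCA, FLDA, SVD, LEMAP) would feed its features to an SVM, and the five SVM outputs would be fused by `majority_vote`.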

  4. A new classification scheme of plastic wastes based upon recycling labels

    International Nuclear Information System (INIS)

    Özkan, Kemal; Ergin, Semih; Işık, Şahin; Işıklı, İdil

    2015-01-01

    Highlights: • PET, HDPE or PP types of plastics are considered. • An automated classification of plastic bottles based on feature extraction and classification methods is performed. • The decision mechanism consists of PCA, Kernel PCA, FLDA, SVD and Laplacian Eigenmaps methods. • SVM is selected to achieve the classification task and a majority voting technique is used. - Abstract: Since recycling of materials is widely assumed to be environmentally and economically beneficial, reliable sorting and processing of waste packaging materials such as plastics is very important for recycling with high efficiency. An automated system that can quickly categorize these materials is certainly needed for obtaining maximum classification while maintaining high throughput. In this paper, photographs of plastic bottles were first taken and several preprocessing steps carried out. The first preprocessing step is to extract the plastic area of a bottle from the background. Then, morphological image operations are implemented: edge detection, noise removal, hole removal, image enhancement, and image segmentation. These morphological operations can generally be defined in terms of combinations of erosion and dilation. The effects of bottle color and label are eliminated by these operations. Secondly, the pixel-wise intensity values of the plastic bottle images are used together with the most popular subspace and statistical feature extraction methods to construct the feature vectors in this study. Only three types of plastics are considered because they are far more prevalent worldwide than other plastic types. The decision mechanism consists of five different feature extraction methods, Principal Component Analysis (PCA), Kernel PCA (KPCA), Fisher’s Linear Discriminant Analysis (FLDA), Singular Value Decomposition (SVD) and Laplacian Eigenmaps (LEMAP), and uses a simple
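
    The final voting step of such a multi-method decision mechanism can be sketched as follows (a minimal illustration; the per-method predictions and plastic labels are hypothetical, not taken from the paper's dataset):

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by most classifiers (ties go to the
    label seen first)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical SVM outputs for one bottle image, one per feature space
# (PCA, KPCA, FLDA, SVD, LEMAP).
votes = ["PET", "PET", "PP", "PET", "HDPE"]
print(majority_vote(votes))  # PET
```

A single feature space can be fooled by lighting or label artifacts; voting across five subspace projections makes the decision more robust.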

  5. A Classification Scheme for Glaciological AVA Responses

    Science.gov (United States)

    Booth, A.; Emir, E.

    2014-12-01

    A classification scheme is proposed for amplitude vs. angle (AVA) responses as an aid to the interpretation of seismic reflectivity in glaciological research campaigns. AVA responses are a powerful tool in characterising the material properties of glacier ice and its substrate. However, before interpreting AVA data, careful true amplitude processing is required to constrain basal reflectivity and compensate for amplitude decay mechanisms, including anelastic attenuation and spherical divergence. These fundamental processing steps can be difficult to design in cases of noisy data, e.g. where a target reflection is contaminated by surface wave energy (in the case of shallow glaciers) or by energy reflected from out of the survey plane. AVA methods are equally powerful in estimating the fluid fill of potential hydrocarbon reservoirs. However, such applications seldom use true amplitude data and instead consider qualitative AVA responses using a well-defined classification scheme. Such schemes are often defined in terms of the characteristics of best-fit responses to the observed reflectivity, e.g. the intercept (I) and gradient (G) of a linear approximation to the AVA data. The position of the response on a cross-plot of I and G then offers a diagnostic attribute for certain fluid types. We investigate the advantages in glaciology of emulating this practice, and develop a cross-plot based on the 3-term Shuey AVA approximation (using I, G, and a curvature term C). Model AVA curves define a clear lithification trend: AVA responses to stiff (lithified) substrates fall discretely into one quadrant of the cross-plot, with positive I and negative G, whereas those to fluid-rich substrates plot diagonally opposite (in the negative I and positive G quadrant). The remaining quadrants are unoccupied by plausible single-layer responses and may therefore be diagnostic of complex thin-layer reflectivity, and the magnitude and polarity of the C term serve as a further indicator.
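
    The cross-plot logic described above can be sketched in a few lines; the quadrant labels follow the abstract, while the function names and the example I/G/C values are illustrative assumptions:

```python
import math

def shuey_reflectivity(theta_deg, I, G, C):
    """3-term Shuey approximation: R(th) = I + G*sin^2(th) + C*(tan^2(th) - sin^2(th))."""
    th = math.radians(theta_deg)
    return I + G * math.sin(th) ** 2 + C * (math.tan(th) ** 2 - math.sin(th) ** 2)

def crossplot_quadrant(I, G):
    """Diagnostic quadrant on the intercept-gradient cross-plot."""
    if I > 0 and G < 0:
        return "stiff (lithified) substrate"
    if I < 0 and G > 0:
        return "fluid-rich substrate"
    return "possible thin-layer reflectivity"

print(crossplot_quadrant(0.25, -0.4))  # stiff (lithified) substrate
```

At normal incidence (theta = 0) the second and third terms vanish, so R reduces to the intercept I, which is why I anchors one axis of the cross-plot.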

  6. An Empirical Study on User-oriented Association Analysis of Library Classification Schemes

    Directory of Open Access Journals (Sweden)

    Hsiao-Tieh Pu

    2002-12-01

    Full Text Available Library classification schemes are mostly organized based on disciplines with a hierarchical structure. From the user point of view, some highly related yet non-hierarchical classes may not be easy to perceive in these schemes. This paper aims to discover hidden associations between classes by analyzing users’ usage of library collections. The proposed approach employs collaborative filtering techniques to discover associated classes based on the circulation patterns of similar users. Many associated classes scattered across different subject hierarchies could be discovered from the circulation patterns of similar users. The obtained association norms between classes were found to be useful in understanding users' subject preferences for a given class. Classification schemes can, therefore, be made more adaptable to changes of users and the uses of different library collections. There are implications for applications in information organization and retrieval as well. For example, catalogers could refer to the ranked associated classes when they perform multi-classification, and users could also browse the associated classes for related subjects in an enhanced OPAC system. In future research, more empirical studies will be needed to validate the findings, and methods for obtaining user-oriented associations can still be improved.[Article content in Chinese
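
    The core idea of ranking classes by how similarly they are borrowed can be sketched as follows; the circulation records and class codes are hypothetical, and cosine similarity stands in for whatever association measure the paper actually uses:

```python
from math import sqrt

# Hypothetical circulation records: user -> classes they borrowed from.
circulation = {
    "u1": {"QA76", "Z699", "HD30"},
    "u2": {"QA76", "Z699"},
    "u3": {"QA76", "HD30"},
    "u4": {"QA76", "HD30"},
}

def class_vector(cls):
    """Binary usage vector of a class across all users."""
    return [1 if cls in borrowed else 0 for borrowed in circulation.values()]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Rank candidate classes by similarity of their usage pattern to QA76's.
ranked = sorted(["Z699", "HD30"],
                key=lambda c: cosine(class_vector("QA76"), class_vector(c)),
                reverse=True)
print(ranked)  # ['HD30', 'Z699']
```

Classes that sit far apart in the hierarchy but are borrowed by the same users surface near the top of such a ranking, which is exactly the kind of non-hierarchical association the paper describes.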

  7. Modern radiosurgical and endovascular classification schemes for brain arteriovenous malformations.

    Science.gov (United States)

    Tayebi Meybodi, Ali; Lawton, Michael T

    2018-05-04

    Stereotactic radiosurgery (SRS) and endovascular techniques are commonly used for treating brain arteriovenous malformations (bAVMs). They are usually used as ancillary techniques to microsurgery but may also be used as solitary treatment options. Careful patient selection requires a clear estimate of the treatment efficacy and complication rates for the individual patient. As such, classification schemes are an essential part of patient selection paradigm for each treatment modality. While the Spetzler-Martin grading system and its subsequent modifications are commonly used for microsurgical outcome prediction for bAVMs, the same system(s) may not be easily applicable to SRS and endovascular therapy. Several radiosurgical- and endovascular-based grading scales have been proposed for bAVMs. However, a comprehensive review of these systems including a discussion on their relative advantages and disadvantages is missing. This paper is dedicated to modern classification schemes designed for SRS and endovascular techniques.

  8. Dissimilarity-based classification of anatomical tree structures

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Lo, Pechin Chien Pau; Dirksen, Asger

    2011-01-01

    A novel method for classification of abnormality in anatomical tree structures is presented. A tree is classified based on direct comparisons with other trees in a dissimilarity-based classification scheme. The pair-wise dissimilarity measure between two trees is based on a linear assignment betw...

  9. Dissimilarity-based classification of anatomical tree structures

    DEFF Research Database (Denmark)

    Sørensen, Lauge Emil Borch Laurs; Lo, Pechin Chien Pau; Dirksen, Asger

    2011-01-01

    A novel method for classification of abnormality in anatomical tree structures is presented. A tree is classified based on direct comparisons with other trees in a dissimilarity-based classification scheme. The pair-wise dissimilarity measure between two trees is based on a linear assignment...

  10. A Classification Scheme for Adult Education. Education Libraries Bulletin, Supplement Twelve.

    Science.gov (United States)

    Greaves, Monica A., Comp.

    This classification scheme, based on the 'facet formula' theory of Ranganathan, is designed primarily for the library of the National Institute of Adult Education in London, England. Kinds of persons being educated (educands), methods and problems of education, specific countries, specific organizations, and forms in which the information is…

  11. Sound classification schemes in Europe - Quality classes intended for renovated housing

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2010-01-01

    exposure in the home included in the proposed main objectives for a housing policy. In most countries in Europe, building regulations specify minimum requirements concerning acoustical conditions for new dwellings. In addition, several countries have introduced sound classification schemes with classes...... intended to reflect different levels of acoustical comfort. Consequently, acoustic requirements for a dwelling can be specified as the legal minimum requirements or as a specific class in a classification scheme. Most schemes have both higher classes than corresponding to the regulatory requirements...

  12. Transporter taxonomy - a comparison of different transport protein classification schemes.

    Science.gov (United States)

    Viereck, Michael; Gaulton, Anna; Digles, Daniela; Ecker, Gerhard F

    2014-06-01

    Currently, there are more than 800 well characterized human membrane transport proteins (including channels and transporters) and there are estimates that about 10% (approx. 2000) of all human genes are related to transport. Membrane transport proteins are of interest as potential drug targets, for drug delivery, and as a cause of side effects and drug–drug interactions. In light of the development of Open PHACTS, which provides an open pharmacological space, we analyzed selected membrane transport protein classification schemes (Transporter Classification Database, ChEMBL, IUPHAR/BPS Guide to Pharmacology, and Gene Ontology) for their ability to serve as a basis for pharmacology driven protein classification. A comparison of these membrane transport protein classification schemes by using a set of clinically relevant transporters as use-case reveals the strengths and weaknesses of the different taxonomy approaches.

  13. A New Feature Ensemble with a Multistage Classification Scheme for Breast Cancer Diagnosis

    Directory of Open Access Journals (Sweden)

    Idil Isikli Esener

    2017-01-01

    Full Text Available A new and effective feature ensemble with a multistage classification is proposed to be implemented in a computer-aided diagnosis (CAD system for breast cancer diagnosis. A publicly available mammogram image dataset collected during the Image Retrieval in Medical Applications (IRMA project is utilized to verify the suggested feature ensemble and multistage classification. In achieving the CAD system, feature extraction is performed on the mammogram region of interest (ROI images which are preprocessed by applying a histogram equalization followed by a nonlocal means filtering. The proposed feature ensemble is formed by concatenating the local configuration pattern-based, statistical, and frequency domain features. The classification process of these features is implemented in three cases: a one-stage study, a two-stage study, and a three-stage study. Eight well-known classifiers are used in all cases of this multistage classification scheme. Additionally, the results of the classifiers that provide the top three performances are combined via a majority voting technique to improve the recognition accuracy on both two- and three-stage studies. A maximum of 85.47%, 88.79%, and 93.52% classification accuracies are attained by the one-, two-, and three-stage studies, respectively. The proposed multistage classification scheme is more effective than the single-stage classification for breast cancer diagnosis.
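
    The multistage idea, deciding first whether a mammogram ROI is normal and only then refining abnormal cases, can be sketched as a cascade; the stand-in classifiers, feature names, and thresholds below are purely illustrative:

```python
def multistage_classify(features, stage1, stage2):
    """Cascade: stage 1 separates normal from abnormal; stage 2 refines
    only the abnormal cases."""
    label = stage1(features)
    return label if label == "normal" else stage2(features)

# Toy stand-in classifiers; in the paper each stage would be a trained
# classifier over the concatenated LCP/statistical/frequency features.
def stage1(f):
    return "normal" if f["mean_intensity"] < 0.5 else "abnormal"

def stage2(f):
    return "benign" if f["texture_energy"] < 0.3 else "malignant"

print(multistage_classify({"mean_intensity": 0.7, "texture_energy": 0.6},
                          stage1, stage2))  # malignant
```

Splitting the decision this way lets each stage specialize on an easier two-class problem, which is consistent with the accuracy gains the abstract reports from one- to three-stage studies.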

  14. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    Science.gov (United States)

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
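
    The APCC measure at the heart of the scheme is straightforward to compute over two flattened blocks; a minimal sketch (block contents are illustrative):

```python
from math import sqrt

def apcc(block_a, block_b):
    """Absolute value of Pearson's correlation coefficient between two
    equally sized image blocks (flattened to pixel lists)."""
    n = len(block_a)
    ma = sum(block_a) / n
    mb = sum(block_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(block_a, block_b))
    va = sqrt(sum((a - ma) ** 2 for a in block_a))
    vb = sqrt(sum((b - mb) ** 2 for b in block_b))
    return abs(cov / (va * vb)) if va and vb else 0.0

# An affine copy (b = 2a + 5) of a block is perfectly similar: APCC = 1,
# illustrating why affine similarity in FIC reduces to APCC.
a = [10, 20, 30, 40]
b = [2 * x + 5 for x in a]
print(round(apcc(a, b), 6))  # 1.0
```

Because APCC is invariant to the scale and offset of an affine map, sorting domain blocks by APCC against a preset block narrows the search to candidates likely to match, which is the speed-up the paper exploits.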

  15. Monitoring nanotechnology using patent classifications: an overview and comparison of nanotechnology classification schemes

    Energy Technology Data Exchange (ETDEWEB)

    Jürgens, Björn, E-mail: bjurgens@agenciaidea.es [Agency of Innovation and Development of Andalusia, CITPIA PATLIB Centre (Spain); Herrero-Solana, Victor, E-mail: victorhs@ugr.es [University of Granada, SCImago-UGR (SEJ036) (Spain)

    2017-04-15

    Patents are an essential information source used to monitor, track, and analyze nanotechnology. When it comes to searching nanotechnology-related patents, a keyword search is often incomplete and struggles to cover such an interdisciplinary discipline. Patent classification schemes can reveal far better results since they are assigned by experts who classify the patent documents according to their technology. In this paper, we present the most important classifications to search nanotechnology patents and analyze how nanotechnology is covered in the main patent classification systems used in search systems nowadays: the International Patent Classification (IPC), the United States Patent Classification (USPC), and the Cooperative Patent Classification (CPC). We conclude that nanotechnology has a significantly better patent coverage in the CPC since considerably more nanotechnology documents were retrieved than by using other classifications, and thus, recommend its use for all professionals involved in nanotechnology patent searches.

  16. Monitoring nanotechnology using patent classifications: an overview and comparison of nanotechnology classification schemes

    International Nuclear Information System (INIS)

    Jürgens, Björn; Herrero-Solana, Victor

    2017-01-01

    Patents are an essential information source used to monitor, track, and analyze nanotechnology. When it comes to searching nanotechnology-related patents, a keyword search is often incomplete and struggles to cover such an interdisciplinary discipline. Patent classification schemes can reveal far better results since they are assigned by experts who classify the patent documents according to their technology. In this paper, we present the most important classifications to search nanotechnology patents and analyze how nanotechnology is covered in the main patent classification systems used in search systems nowadays: the International Patent Classification (IPC), the United States Patent Classification (USPC), and the Cooperative Patent Classification (CPC). We conclude that nanotechnology has a significantly better patent coverage in the CPC since considerably more nanotechnology documents were retrieved than by using other classifications, and thus, recommend its use for all professionals involved in nanotechnology patent searches.

  17. The Nutraceutical Bioavailability Classification Scheme: Classifying Nutraceuticals According to Factors Limiting their Oral Bioavailability.

    Science.gov (United States)

    McClements, David Julian; Li, Fang; Xiao, Hang

    2015-01-01

    The oral bioavailability of a health-promoting dietary component (nutraceutical) may be limited by various physicochemical and physiological phenomena: liberation from food matrices, solubility in gastrointestinal fluids, interaction with gastrointestinal components, chemical degradation or metabolism, and epithelium cell permeability. Nutraceutical bioavailability can therefore be improved by designing food matrices that control their bioaccessibility (B*), absorption (A*), and transformation (T*) within the gastrointestinal tract (GIT). This article reviews the major factors influencing the gastrointestinal fate of nutraceuticals, and then uses this information to develop a new scheme to classify the major factors limiting nutraceutical bioavailability: the nutraceutical bioavailability classification scheme (NuBACS). This new scheme is analogous to the biopharmaceutical classification scheme (BCS) used by the pharmaceutical industry to classify drug bioavailability, but it contains additional factors important for understanding nutraceutical bioavailability in foods. The article also highlights potential strategies for increasing the oral bioavailability of nutraceuticals based on their NuBACS designation (B*A*T*).
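
    If the three NuBACS factors are expressed as fractions, overall oral bioavailability is often modeled as their product; the following toy sketch illustrates that simplification (the formula and the numbers are illustrative assumptions, not figures from the article):

```python
def oral_bioavailability(b_star, a_star, t_star):
    """Overall bioavailable fraction as the product of bioaccessibility
    (B*), absorption (A*), and survival of transformation (T*), each a
    fraction in [0, 1]. A common simplification, not a quote from the
    article."""
    return b_star * a_star * t_star

# A nutraceutical that is 50% bioaccessible, 80% absorbed, and 90%
# untransformed ends up roughly 36% bioavailable.
print(round(oral_bioavailability(0.5, 0.8, 0.9), 2))  # 0.36
```

The multiplicative form makes the classification logic clear: whichever factor is smallest dominates, so the NuBACS designation points at the limiting step worth engineering around.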

  18. Sound classification of dwellings – A diversity of national schemes in Europe

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Sound classification schemes for dwellings exist in ten countries in Europe, typically prepared and published as national standards. The schemes define quality classes intended to reflect different levels of acoustical comfort. The main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. This paper presents the sound classification schemes in Europe and compares the class criteria for sound insulation between dwellings. The schemes have been implemented and revised gradually since the early 1990s. However, due to lack...... constructions fulfilling different classes. The current variety of descriptors and classes also causes trade barriers. Thus, there is a need to harmonize characteristics of the schemes, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing...

  19. Mammogram classification scheme using 2D-discrete wavelet and local binary pattern for detection of breast cancer

    Science.gov (United States)

    Adi Putra, Januar

    2018-04-01

    In this paper, we propose a new mammogram classification scheme to classify the breast tissues as normal or abnormal. A feature matrix is generated by applying the Local Binary Pattern to all the detailed coefficients from the 2D-DWT of the region of interest (ROI) of a mammogram. Feature selection is done by selecting the relevant features that affect the classification; it is used to reduce the dimensionality of the data and discard features that are not relevant. In this paper the F-test and T-test are performed on the results of the feature extraction dataset to reduce and select the relevant features. The best features are used in a Neural Network classifier for classification. In this research we use the MIAS and DDSM databases. In addition to the suggested scheme, competing schemes are also simulated for comparative analysis. It is observed that the proposed scheme performs better in terms of accuracy, specificity and sensitivity. Based on experiments, the proposed scheme can produce a high accuracy of 92.71%, while the lowest accuracy obtained is 77.08%.
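
    The T-test step of such a feature-selection stage can be sketched as scoring each feature by a two-sample t-statistic and keeping the most discriminative one; the LBP feature values below are hypothetical:

```python
from math import sqrt

def t_statistic(xs, ys):
    """Two-sample t-statistic (pooled-variance form) for one feature."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    sp = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / (sp * sqrt(1 / nx + 1 / ny))

# Hypothetical LBP features for normal vs. abnormal ROIs; keep the
# feature with the largest |t| as the most discriminative.
normal   = {"f0": [0.10, 0.20, 0.15], "f1": [0.50, 0.52, 0.48]}
abnormal = {"f0": [0.12, 0.18, 0.16], "f1": [0.90, 0.88, 0.92]}
scores = {f: abs(t_statistic(normal[f], abnormal[f])) for f in normal}
best = max(scores, key=scores.get)
print(best)  # f1
```

Feature f1 separates the two classes cleanly while f0 barely differs between them, so the t-statistic ranks f1 far higher; in the paper the same idea prunes the 2D-DWT/LBP feature set before the Neural Network sees it.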

  20. Acoustic classification schemes in Europe – Applicability for new, existing and renovated housing

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2016-01-01

    The first acoustic classification schemes for dwellings were published in the 1990’es as national standards with the main purpose to introduce the possibility of specifying easily stricter acoustic criteria for new-build than the minimum requirements found in building regulations. Since then, more...... countries have introduced acoustic classification schemes, the first countries updated more times and some countries introduced acoustic classification also for other building categories. However, the classification schemes continued to focus on new buildings and have in general limited applicability...... for existing buildings from before implementation of acoustic regulations, typically in the 1950’es or later. The paper will summarize main characteristics, differences and similarities of the current national quality classes for housing in ten countries in Europe. In addition, the status and challenges...

  1. A new classification scheme of plastic wastes based upon recycling labels.

    Science.gov (United States)

    Özkan, Kemal; Ergin, Semih; Işık, Şahin; Işıklı, Idil

    2015-01-01

    results agree on. The proposed classification scheme provides a high accuracy rate, and it is also able to run in real-time applications. It can automatically classify the plastic bottle types with approximately 90% recognition accuracy. Besides this, the proposed methodology yields approximately 96% classification rate for the separation of PET or non-PET plastic types. It also gives 92% accuracy for the categorization of non-PET plastic types into HDPE or PP. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Mental Task Classification Scheme Utilizing Correlation Coefficient Extracted from Interchannel Intrinsic Mode Function.

    Science.gov (United States)

    Rahman, Md Mostafizur; Fattah, Shaikh Anowarul

    2017-01-01

    In view of the recent increase in brain-computer interface (BCI) based applications, the importance of efficient classification of various mental tasks has increased greatly. In order to obtain effective classification, an efficient feature extraction scheme is necessary, for which, in the proposed method, the interchannel relationship among electroencephalogram (EEG) data is utilized. It is expected that the correlation obtained from different combinations of channels will be different for different mental tasks, which can be exploited to extract distinctive features. The empirical mode decomposition (EMD) technique is employed on a test EEG signal obtained from a channel, which provides a number of intrinsic mode functions (IMFs), and correlation coefficients are extracted from interchannel IMF data. Simultaneously, different statistical features are also obtained from each IMF. Finally, the feature matrix is formed utilizing interchannel correlation features and intrachannel statistical features of the selected IMFs of the EEG signal. Different kernels of the support vector machine (SVM) classifier are used to carry out the classification task. An EEG dataset containing ten different combinations of five different mental tasks is utilized to demonstrate the classification performance, and a very high level of accuracy is achieved by the proposed scheme compared to existing methods.
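
    The interchannel correlation feature can be sketched as follows, assuming the IMFs have already been computed by EMD (the channel names and sample values below are hypothetical; the EMD step itself is omitted):

```python
from math import sqrt
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical first IMFs of three EEG channels; the feature vector is
# the correlation of every channel pair.
imfs = {
    "C3": [0.1, 0.4, -0.2, 0.3, -0.1],
    "C4": [0.2, 0.5, -0.1, 0.2, -0.2],
    "Cz": [-0.3, 0.1, 0.4, -0.2, 0.2],
}
features = [pearson(imfs[a], imfs[b]) for a, b in combinations(sorted(imfs), 2)]
print([round(f, 3) for f in features])
```

With k channels this yields k·(k−1)/2 correlation features per IMF level, which are then stacked with the intrachannel statistical features before the SVM.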

  3. New Course Design: Classification Schemes and Information Architecture.

    Science.gov (United States)

    Weinberg, Bella Hass

    2002-01-01

    Describes a course developed at St. John's University (New York) in the Division of Library and Information Science that relates traditional classification schemes to information architecture and Web sites. Highlights include functional aspects of information architecture, that is, the way content is structured; assignments; student reactions; and…

  4. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme.

    Science.gov (United States)

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of changes in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data are divided into groups and analyzed by separate processing procedures. As a result, the interactive effects are ignored when different types of BCI tasks are executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experiment results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI.

  5. Mapping of the Universe of Knowledge in Different Classification Schemes

    Directory of Open Access Journals (Sweden)

    M. P. Satija

    2017-06-01

    Full Text Available Given the variety of approaches to mapping the universe of knowledge that have been presented and discussed in the literature, the purpose of this paper is to systematize their main principles and their applications in the major general modern library classification schemes. We conducted an analysis of the literature on classification and the main classification systems, namely Dewey/Universal Decimal Classification, Cutter’s Expansive Classification, Subject Classification of J.D. Brown, Colon Classification, Library of Congress Classification, Bibliographic Classification, Rider’s International Classification, Bibliothecal Bibliographic Klassification (BBK), and Broad System of Ordering (BSO). We conclude that the arrangement of the main classes can be done following four principles that are not mutually exclusive: ideological principle, social purpose principle, scientific order, and division by discipline. The paper provides examples and analysis of each system. We also conclude that as knowledge is ever-changing, classifications also change and present a different structure of knowledge depending upon the society and time of their design.

  6. Medical X-ray Image Hierarchical Classification Using a Merging and Splitting Scheme in Feature Space.

    Science.gov (United States)

    Fesharaki, Nooshin Jafari; Pourghassem, Hossein

    2013-07-01

    Due to the daily mass production and the widespread variation of medical X-ray images, it is necessary to classify these for searching and retrieving purposes, especially for content-based medical image retrieval systems. In this paper, a medical X-ray image hierarchical classification structure based on a novel merging and splitting scheme and using shape and texture features is proposed. In the first level of the proposed structure, to improve the classification performance, similar classes with regard to shape contents are grouped based on merging measures and shape features into general overlapped classes. In the next levels of this structure, the overlapped classes are split into smaller classes based on the classification performance of the combination of shape and texture features or texture features only. Ultimately, in the last levels, this procedure continues until all the classes are formed separately. Moreover, to optimize the feature vector in the proposed structure, we use an orthogonal forward selection algorithm according to the Mahalanobis class separability measure as a feature selection and reduction algorithm. In other words, according to the complexity and inter-class distance of each class, a sub-space of the feature space is selected in each level and then a supervised merging and splitting scheme is applied to form the hierarchical classification. The proposed structure is evaluated on a database consisting of 2158 medical X-ray images of 18 classes (IMAGECLEF 2005 database) and an accuracy rate of 93.6% in the last level of the hierarchical structure for an 18-class classification problem is obtained.
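
    The first-level merging step can be sketched as grouping classes whose mean shape-feature vectors lie close together; Euclidean distance below is a toy stand-in for the paper's merging measure, and the class names, feature vectors, and threshold are hypothetical:

```python
def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def merge_similar_classes(class_means, threshold):
    """Single-link grouping: a class joins the first existing group that
    contains a member within `threshold` of it."""
    groups = []
    for name, mean in class_means.items():
        for g in groups:
            if any(dist(mean, class_means[m]) <= threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

# Hypothetical mean shape features per X-ray class.
means = {"skull": (0.9, 0.1), "cranium": (0.88, 0.12), "hand": (0.2, 0.8)}
print(merge_similar_classes(means, 0.1))  # [['skull', 'cranium'], ['hand']]
```

Classes that merge at this level form the "general overlapped classes" of the first tier; later tiers would then split them apart again using texture features.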

  7. A Classification Scheme for Career Education Resource Materials.

    Science.gov (United States)

    Koontz, Ronald G.

    The introductory section of the paper expresses its purpose: to devise a classification scheme for career education resource material, which will be used to develop the USOE Office of Career Education Resource Library and will be disseminated to interested State departments of education and local school districts to assist them in classifying…

  8. A risk-based classification scheme for genetically modified foods. II: Graded testing.

    Science.gov (United States)

    Chao, Eunice; Krewski, Daniel

    2008-12-01

    This paper presents a graded approach to the testing of crop-derived genetically modified (GM) foods based on concern levels in a proposed risk-based classification scheme (RBCS) and currently available testing methods. A graded approach offers the potential for more efficient use of testing resources by focusing less on lower concern GM foods, and more on higher concern foods. In this proposed approach to graded testing, products that are classified as Level I would have met baseline testing requirements that are comparable to what is widely applied to premarket assessment of GM foods at present. In most cases, Level I products would require no further testing, or very limited confirmatory analyses. For products classified as Level II or higher, additional testing would be required, depending on the type of the substance, prior dietary history, estimated exposure level, prior knowledge of toxicity of the substance, and the nature of the concern related to unintended changes in the modified food. Level III testing applies only to the assessment of toxic and antinutritional effects from intended changes and is tailored to the nature of the substance in question. Since appropriate test methods are not currently available for all effects of concern, future research to strengthen the testing of GM foods is discussed.

  9. Developing a contributing factor classification scheme for Rasmussen's AcciMap: Reliability and validity evaluation.

    Science.gov (United States)

    Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F

    2017-10-01

    One factor potentially limiting the uptake of Rasmussen's (1997) Accimap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of Accimap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M_T1 = 68.8%; M_T2 = 73.9%), and were poor at the descriptor level (M_T1 = 58.5%; M_T2 = 64.1%). Mean criterion-referenced validity scores at the system level were acceptable (M_T1 = 73.9%; M_T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M_T1 = 67.6%; M_T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements, and that further work is required. The implications for the design and development of contributing factor classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
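
    The reliability scores reported above are percent-agreement figures; computing one is trivial, as this sketch shows (the code labels are hypothetical):

```python
def percent_agreement(codes_a, codes_b):
    """Share of items assigned the same code across two coding passes,
    as a percentage."""
    same = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * same / len(codes_a)

# One practitioner coding the same five contributing factors twice
# (intra-rater case); codes are hypothetical system-level labels.
t1 = ["S1", "S3", "S3", "S7", "S2"]
t2 = ["S1", "S3", "S4", "S7", "S2"]
print(percent_agreement(t1, t2))  # 80.0
```

Percent agreement does not correct for chance agreement (unlike Cohen's kappa), which is one reason thresholds for "acceptable" scores matter in studies like this.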

  10. Waste-acceptance criteria and risk-based thinking for radioactive-waste classification

    International Nuclear Information System (INIS)

    Lowenthal, M.D.

    1998-01-01

    The US system of radioactive-waste classification and its development provide a reference point for the discussion of risk-based thinking in waste classification. The official US system is described, and waste-acceptance criteria for disposal sites are introduced because they constitute a form of de facto waste classification. Risk-based classification is explored, and it is found that a truly risk-based system is context-dependent: risk depends not only on the waste-management activity but, for some activities such as disposal, on the specific physical context. Some elements of the official US system incorporate risk-based thinking, but, as in many proposed alternative schemes, the physical context of disposal is ignored. The waste-acceptance criteria for disposal sites do account for this context dependence and could be used as a risk-based classification scheme for disposal. While different classes would be necessary for different management activities, the waste-acceptance criteria would obviate the need for the current system and could better match wastes to disposal environments, saving money, improving safety, or both.

  11. Social Constructivism: Botanical Classification Schemes of Elementary School Children.

    Science.gov (United States)

    Tull, Delena

    The assertion that there is a social component to children's construction of knowledge about natural phenomena is supported by evidence from an examination of children's classification schemes for plants. An ethnographic study was conducted with nine sixth grade children in central Texas. The children classified plants in the outdoors, in a…

  12. A New Well Classification Scheme For The Nigerian Oil Industry

    International Nuclear Information System (INIS)

    Ojoh, K.

    2002-01-01

    Oil was discovered in the Niger Delta Basin in 1956, with Oloibiri 1, after 21 wildcats had been drilled without success. In the 46 years since, 25 companies have discovered 52 billion barrels, of which 20 billion have been produced, leaving proven reserves of 32 billion barrels. Between now and 2010, the country would like to add 15 billion barrels of oil to these reserves. The target is 40 billion barrels. The national aspiration is to be able to obtain an OPEC quota to produce 4 million barrels of oil per day. A large percentage of the reserves additions will definitely come from the deepwater segment of the basin, where fields of over 500 million barrels are expected. Exploration also continues on the shelf and on land, but the rate of discovery in these areas is - after 46 years of constant effort - constrained by the relative maturity of the basin. The challenges are that few, small, untested structures remain on the shelf and on land, whereas most undiscovered reserves are in stratigraphic accumulations within known producing areas. These are only visible on 3-D seismic after it is processed using state-of-the-art, high-technology attribute analyses. In the deepwater province, the stratigraphy throws up problems of reservoir continuity. Channels and lobe fans have complex spatial distributions which systematically require more than the classical two appraisal wells of conventional classification. The industry agrees that the current well classification scheme, which came into place in 1977, needs to be overhauled to take cognisance of these challenges. At a workshop last May, a Well Classification Committee comprising members from OPTS, DEWOG, NAIPEC as well as the DPR was mandated to produce a well classification scheme for the industry. This paper examines the current scheme and comes up with a technically sound, widely accepted alternative, complete with exhaustive illustrations.

  13. Acoustic classification of buildings in Europe – Main characteristics of national schemes for housing, schools, hospitals and office buildings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2018-01-01

    Building regulations specify minimum requirements, and more than ten countries in Europe have published national acoustic classification schemes with quality classes, the main purpose being to introduce easy specification of stricter acoustic criteria than defined in regulations. The very first classification schemes were published in the mid-1990s, and for dwellings only. Since then, more countries have introduced such schemes, some also including other building categories such as schools, hospitals and office buildings, and the first countries have made updates several times. Acoustic classification schemes define limit values for a number of acoustic performance areas, typically airborne and impact sound insulation, service equipment noise, traffic noise and reverberation time, i.e. the same as in regulations. Comparative studies of the national acoustic classification schemes in Europe show main…

  14. A new scheme for urban impervious surface classification from SAR images

    Science.gov (United States)

    Zhang, Hongsheng; Lin, Hui; Wang, Yunpeng

    2018-05-01

    Urban impervious surfaces have been recognized as a significant indicator for various environmental and socio-economic studies. There is an increasingly urgent demand for timely and accurate monitoring of impervious surfaces with satellite technology from local to global scales. In the past decades, optical remote sensing has been widely employed for this task with various techniques. However, there are still a range of challenges, e.g. handling cloud contamination on optical data. Therefore, Synthetic Aperture Radar (SAR) was introduced for this challenging task because it is uniquely all-time- and all-weather-capable. Nevertheless, even as more SAR data have been applied, the methodology used for impervious surface classification has remained unchanged from the methods used for optical datasets. This shortcoming has prevented the community from fully exploring the potential of SAR data for impervious surface classification. We propose a new scheme that is comparable to the well-known and fundamental Vegetation-Impervious surface-Soil (V-I-S) model for mapping urban impervious surfaces. Three scenes of fully polarimetric Radarsat-2 data for the cities of Shenzhen, Hong Kong and Macau were employed to test and validate the proposed methodology. Experimental results indicated that the overall accuracy and Kappa coefficient were 96.00% and 0.8808 in Shenzhen, 93.87% and 0.8307 in Hong Kong, and 97.48% and 0.9354 in Macau, indicating the applicability and great potential of the new scheme for impervious surface classification using polarimetric SAR data. Comparison with the traditional scheme indicated that the new scheme was able to improve the overall accuracy by up to 4.6% and the Kappa coefficient by up to 0.18.
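
    The accuracy figures quoted in this record pair overall accuracy with the Kappa coefficient, the standard chance-corrected agreement measure for classification maps. A self-contained sketch of Cohen's kappa on toy labels (not the Radarsat-2 data) follows.

```python
from collections import Counter

def cohens_kappa(reference, predicted):
    """Chance-corrected agreement between reference and predicted class labels."""
    n = len(reference)
    po = sum(r == p for r, p in zip(reference, predicted)) / n  # observed agreement
    ref_counts, pred_counts = Counter(reference), Counter(predicted)
    # expected agreement from the marginal class frequencies
    pe = sum(ref_counts[c] * pred_counts.get(c, 0) for c in ref_counts) / n ** 2
    return (po - pe) / (1 - pe)

ref  = [0, 0, 0, 0, 1, 1, 1, 1]   # toy ground-truth classes
pred = [0, 0, 0, 1, 1, 1, 1, 0]   # toy classified map
kappa = cohens_kappa(ref, pred)   # po = 0.75, pe = 0.5 -> kappa = 0.5
```

    Kappa discounts the agreement expected by chance, which is why it is reported alongside the raw overall accuracy.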

  15. Inventory classification based on decoupling points

    Directory of Open Access Journals (Sweden)

    Joakim Wikner

    2015-01-01

    Full Text Available The ideal state of continuous one-piece flow may never be achieved. Still the logistics manager can improve the flow by carefully positioning inventory to buffer against variations. Strategies such as lean, postponement, mass customization, and outsourcing all rely on strategic positioning of decoupling points to separate forecast-driven from customer-order-driven flows. Planning and scheduling of the flow are also based on classification of decoupling points as master scheduled or not. A comprehensive classification scheme for these types of decoupling points is introduced. The approach rests on identification of flows as being either demand based or supply based. The demand or supply is then combined with exogenous factors, classified as independent, or endogenous factors, classified as dependent. As a result, eight types of strategic as well as tactical decoupling points are identified resulting in a process-based framework for inventory classification that can be used for flow design.
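
    One plausible reading of the framework described above combines three binary dimensions (demand- vs supply-based flow, independent/exogenous vs dependent/endogenous factors, and strategic vs tactical level) into the eight decoupling-point types. The enumeration below uses our own illustrative labels, not the paper's terminology.

```python
from itertools import product

# Three binary dimensions read from the abstract; the combined naming is ours.
flows   = ["demand-based", "supply-based"]
factors = ["independent (exogenous)", "dependent (endogenous)"]
levels  = ["strategic", "tactical"]

# 2 x 2 x 2 = 8 decoupling-point types
types = [f"{lv} / {fl} / {fa}"
         for lv, fl, fa in product(levels, flows, factors)]
```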

  16. Maxillectomy defects: a suggested classification scheme.

    Science.gov (United States)

    Akinmoladun, V I; Dosumu, O O; Olusanya, A A; Ikusika, O F

    2013-06-01

    The term "maxillectomy" has been used to describe a variety of surgical procedures for a spectrum of diseases involving a diverse anatomical site. Hence, classifications of maxillectomy defects have often made communication difficult. This article highlights this problem, emphasises the need for a uniform system of classification and suggests a classification system which is simple and comprehensive. Articles related to this subject, especially those with specified classifications of maxillary surgical defects were sourced from the internet through Google, Scopus and PubMed using the search terms maxillectomy defects classification. A manual search through available literature was also done. The review of the materials revealed many classifications and modifications of classifications from the descriptive, reconstructive and prosthodontic perspectives. No globally acceptable classification exists among practitioners involved in the management of diseases in the mid-facial region. There were over 14 classifications of maxillary defects found in the English literature. Attempts made to address the inadequacies of previous classifications have tended to result in cumbersome and relatively complex classifications. A single classification that is based on both surgical and prosthetic considerations is most desirable and is hereby proposed.

  17. Physiotherapy movement based classification approaches to low back pain: comparison of subgroups through review and developer/expert survey

    Directory of Open Access Journals (Sweden)

    Karayannis Nicholas V

    2012-02-01

    Full Text Available Abstract Background Several classification schemes, each with its own philosophy and categorizing method, subgroup low back pain (LBP) patients with the intent to guide treatment. Physiotherapy-derived schemes usually have a movement impairment focus, but the extent to which other biological, psychological, and social factors of pain are encompassed requires exploration. Furthermore, within the prevailing 'biological' domain, the overlap of subgrouping strategies within the orthopaedic examination remains unexplored. The aim of this study was "to review and clarify through developer/expert survey, the theoretical basis and content of physical movement classification schemes, determine their relative reliability and similarities/differences, and to consider the extent of incorporation of the bio-psycho-social framework within the schemes". Methods A database search for relevant articles related to LBP and subgrouping or classification was conducted. Five dominant movement-based schemes were identified: Mechanical Diagnosis and Treatment (MDT), Treatment Based Classification (TBC), Pathoanatomic Based Classification (PBC), Movement System Impairment Classification (MSI), and O'Sullivan Classification System (OCS) schemes. Data were extracted and a survey sent to the classification scheme developers/experts to clarify operational criteria, reliability, decision-making, and converging/diverging elements between schemes. Survey results were integrated into the review and approval obtained for accuracy. Results Considerable diversity exists between schemes in how movement informs subgrouping and in the consideration of broader neurosensory, cognitive, emotional, and behavioural dimensions of LBP. Despite differences in assessment philosophy, a common element lies in their objective to identify a movement pattern related to a pain reduction strategy. Two dominant movement paradigms emerge: (i) loading strategies (MDT, TBC, PBC) aimed at eliciting a phenomenon

  18. Development of a Hazard Classification Scheme for Substances Used in the Fraudulent Adulteration of Foods.

    Science.gov (United States)

    Everstine, Karen; Abt, Eileen; McColl, Diane; Popping, Bert; Morrison-Rowe, Sara; Lane, Richard W; Scimeca, Joseph; Winter, Carl; Ebert, Andrew; Moore, Jeffrey C; Chin, Henry B

    2018-01-01

    Food fraud, the intentional misrepresentation of the true identity of a food product or ingredient for economic gain, is a threat to consumer confidence and public health and has received increased attention from both regulators and the food industry. Following updates to food safety certification standards and publication of new U.S. regulatory requirements, we undertook a project to (i) develop a scheme to classify food fraud-related adulterants based on their potential health hazard and (ii) apply this scheme to the adulterants in a database of 2,970 food fraud records. The classification scheme was developed by a panel of experts in food safety and toxicology from the food industry, academia, and the U.S. Food and Drug Administration. Categories and subcategories were created through an iterative process of proposal, review, and validation using a subset of substances known to be associated with the fraudulent adulteration of foods. Once developed, the scheme was applied to the adulterants in the database. The resulting scheme included three broad categories: 1, potentially hazardous adulterants; 2, adulterants that are unlikely to be hazardous; and 3, unclassifiable adulterants. Categories 1 and 2 consisted of seven subcategories intended to further define the range of hazard potential for adulterants. Application of the scheme to the 1,294 adulterants in the database resulted in 45% of adulterants classified in category 1 (potentially hazardous). Twenty-seven percent of the 1,294 adulterants had a history of causing consumer illness or death, were associated with safety-related regulatory action, or were classified as allergens. These results reinforce the importance of including a consideration of food fraud-related adulterants in food safety systems. This classification scheme supports food fraud mitigation efforts and hazard identification as required in the U.S. Food Safety Modernization Act Preventive Controls Rules.

  19. A comparison between national scheme for the acoustic classification of dwellings in Europe and in the U.S

    DEFF Research Database (Denmark)

    Berardi, Umberto; Rasmussen, Birgit

    2015-01-01

    Focusing on sound insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms of descriptors, number of classes, and class intervals occurred between national schemes. However, a proposal "acoustic classification scheme for dwellings" has been developed recently in the European COST Action TU0901 with 32 member countries. This proposal has been accepted as an ISO… Such a common scheme may facilitate exchanging experiences about constructions fulfilling different classes, reducing trade barriers, and finally increasing the sound insulation of dwellings.

  20. A cancelable biometric scheme based on multi-lead ECGs.

    Science.gov (United States)

    Peng-Tzu Chen; Shun-Chi Wu; Jui-Hsuan Hsieh

    2017-07-01

    Biometric technologies offer great advantages over other recognition methods, but there are concerns that they may compromise the privacy of individuals. In this paper, an electrocardiogram (ECG)-based cancelable biometric scheme is proposed to relieve such concerns. In this scheme, distinct biometric templates for a given beat bundle are constructed via "subspace collapsing." To determine the identity of any unknown beat bundle, the multiple signal classification (MUSIC) algorithm, incorporating a "suppression and poll" strategy, is adopted. Unlike the existing cancelable biometric schemes, knowledge of the distortion transform is not required for recognition. Experiments with real ECGs from 285 subjects are presented to illustrate the efficacy of the proposed scheme. The best recognition rate of 97.58% was achieved under the test condition N_train = 10 and N_test = 10.
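
    The MUSIC algorithm named in this record is a subspace method: it splits the eigenvectors of a sample correlation matrix into signal and noise subspaces and scores candidates by their orthogonality to the noise subspace. The sketch below shows MUSIC in its classic frequency-estimation form on synthetic data, not the paper's beat-bundle identification setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record with one real tone at f0 (two complex exponentials).
f0, N, M = 0.12, 400, 24
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n) + 0.05 * rng.standard_normal(N)

# Sample correlation matrix from overlapping length-M snapshots.
snapshots = np.stack([x[i:i + M] for i in range(N - M + 1)])
R = snapshots.T @ snapshots / len(snapshots)

# Noise subspace: all eigenvectors beyond the signal-subspace dimension (2).
w, E = np.linalg.eigh(R)            # eigenvalues in ascending order
En = E[:, :-2]                      # drop the two largest (signal) directions

# MUSIC pseudospectrum: peaks where the steering vector is orthogonal to En.
freqs = np.linspace(0.01, 0.49, 481)
m = np.arange(M)
pseudo = [1.0 / np.linalg.norm(En.conj().T @ np.exp(2j * np.pi * f * m)) ** 2
          for f in freqs]
f_est = freqs[int(np.argmax(pseudo))]   # should recover f0 = 0.12
```

    The paper's "suppression and poll" strategy operates on the same subspace machinery, but over ECG beat-bundle templates rather than sinusoid steering vectors.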

  1. CANDELS Visual Classifications: Scheme, Data Release, and First Results

    Science.gov (United States)

    Kartaltepe, Jeyhan S.; Mozena, Mark; Kocevski, Dale; McIntosh, Daniel H.; Lotz, Jennifer; Bell, Eric F.; Faber, Sandy; Ferguson, Henry; Koo, David; Bassett, Robert

    2014-01-01

    We have undertaken an ambitious program to visually classify all galaxies in the five CANDELS fields down to H < 24.5, involving the dedicated efforts of 65 individual classifiers. Once completed, we expect to have detailed morphological classifications for over 50,000 galaxies spanning 0 < z < 4 over all the fields. Here, we present our detailed visual classification scheme, which was designed to cover a wide range of CANDELS science goals. This scheme includes the basic Hubble sequence types, but also includes a detailed look at mergers and interactions, the clumpiness of galaxies, k-corrections, and a variety of other structural properties. In this paper, we focus on the first field to be completed - GOODS-S, which has been classified at various depths. The wide area coverage spanning the full field (wide+deep+ERS) includes 7634 galaxies that have been classified by at least three different people. In the deep area of the field, 2534 galaxies have been classified by at least five different people at three different depths. With this paper, we release to the public all of the visual classifications in GOODS-S along with the Perl/Tk GUI that we developed to classify galaxies. We present our initial results here, including an analysis of our internal consistency and comparisons among multiple classifiers as well as a comparison to the Sersic index. We find that the level of agreement among classifiers is quite good and depends on both the galaxy magnitude and the galaxy type, with disks showing the highest level of agreement and irregulars the lowest. A comparison of our classifications with the Sersic index and rest-frame colors shows a clear separation between disk and spheroid populations. Finally, we explore morphological k-corrections between the V-band and H-band observations and find that a small fraction (84 galaxies in total) are classified as being very different between these two bands. These galaxies typically have very clumpy and extended morphology or

  2. Ship Classification with High Resolution TerraSAR-X Imagery Based on Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Zhi Zhao

    2013-01-01

    Full Text Available Ship surveillance using space-borne synthetic aperture radar (SAR, taking advantages of high resolution over wide swaths and all-weather working capability, has attracted worldwide attention. Recent activity in this field has concentrated mainly on the study of ship detection, but the classification is largely still open. In this paper, we propose a novel ship classification scheme based on analytic hierarchy process (AHP in order to achieve better performance. The main idea is to apply AHP on both feature selection and classification decision. On one hand, the AHP based feature selection constructs a selection decision problem based on several feature evaluation measures (e.g., discriminability, stability, and information measure and provides objective criteria to make comprehensive decisions for their combinations quantitatively. On the other hand, we take the selected feature sets as the input of KNN classifiers and fuse the multiple classification results based on AHP, in which the feature sets’ confidence is taken into account when the AHP based classification decision is made. We analyze the proposed classification scheme and demonstrate its results on a ship dataset that comes from TerraSAR-X SAR images.
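
    The core AHP step referenced in this record derives priority weights from a pairwise comparison matrix via its principal eigenvector, then checks a consistency ratio before the weights are trusted. A minimal sketch with an invented three-criterion matrix (e.g. weighing discriminability vs. stability vs. information measure) follows.

```python
import numpy as np

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale:
# A[i, j] states how much criterion i is preferred over criterion j.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, i].real)
w = w / w.sum()                      # priority weights, summing to 1

n = A.shape[0]
lam_max = eigvals[i].real
CI = (lam_max - n) / (n - 1)         # consistency index
RI = 0.58                            # Saaty's random index for n = 3
CR = CI / RI                         # consistency ratio; < 0.1 is acceptable
```

    In the record's scheme this weighting is applied twice: once to rank feature-evaluation measures for feature selection, and once to fuse the KNN classification results.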

  3. Standard land-cover classification scheme for remote-sensing applications in South Africa

    CSIR Research Space (South Africa)

    Thompson, M

    1996-01-01

    Full Text Available For large areas, satellite remote-sensing techniques have now become the single most effective method for land-cover and land-use data acquisition. However, the majority of land-cover (and land-use) classification schemes used have been developed...

  4. Polsar Land Cover Classification Based on Hidden Polarimetric Features in Rotation Domain and Svm Classifier

    Science.gov (United States)

    Tao, C.-S.; Chen, S.-W.; Li, Y.-Z.; Xiao, S.-P.

    2017-09-01

    Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / ᾱ / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR images difficult to understand and interpret. Using only the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets' scattering mechanisms and limit the subsequent classification accuracy. To address this problem, this work firstly focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight, using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of the polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining both the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that, compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification

  5. POLSAR LAND COVER CLASSIFICATION BASED ON HIDDEN POLARIMETRIC FEATURES IN ROTATION DOMAIN AND SVM CLASSIFIER

    Directory of Open Access Journals (Sweden)

    C.-S. Tao

    2017-09-01

    Full Text Available Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / α / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR images difficult to understand and interpret. Using only the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets’ scattering mechanisms and limit the subsequent classification accuracy. To address this problem, this work firstly focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight, using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of the polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining both the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that, compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification accuracy
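
    The roll-invariant features named in this record (entropy H, anisotropy, mean alpha angle, Span) come from the eigen-decomposition of the 3x3 Hermitian polarimetric coherency matrix. A hedged numpy sketch follows; the diagonal test matrix is synthetic, not PolSAR data.

```python
import numpy as np

def h_a_alpha(T):
    """Entropy H, anisotropy A, and mean alpha angle (deg) from a 3x3
    Hermitian coherency matrix T, via its eigen-decomposition."""
    lam, E = np.linalg.eigh(T)                  # ascending eigenvalues
    lam = lam[::-1].clip(min=0)                 # descending, non-negative
    E = E[:, ::-1]
    p = lam / lam.sum()                         # pseudo-probabilities
    pn = np.where(p > 0, p, 1.0)                # avoid log(0)
    H = float(-(p * np.log(pn) / np.log(3)).sum())       # entropy, base 3
    A = float((lam[1] - lam[2]) / (lam[1] + lam[2]))     # anisotropy
    alphas = np.arccos(np.clip(np.abs(E[0, :]), 0.0, 1.0))
    alpha = float(np.degrees((p * alphas).sum()))        # mean alpha angle
    return H, A, alpha

# Synthetic surface-like coherency matrix: one dominant scattering mechanism.
H, A, alpha = h_a_alpha(np.diag([1.0, 0.1, 0.05]))
```

    For such a dominant single mechanism, H is low and the mean alpha angle is small; distributed volume scattering would push H toward 1.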

  6. Segmentation of Clinical Endoscopic Images Based on the Classification of Topological Vector Features

    Directory of Open Access Journals (Sweden)

    O. A. Dunaeva

    2013-01-01

    Full Text Available In this work, we describe a prototype of an automatic segmentation and annotation system for endoscopy images. The algorithm used is based on the classification of vectors of topological features of the original image. We use an image processing scheme which includes image preprocessing, calculation of vector descriptors defined for every point of the source image, and the subsequent classification of descriptors. Image preprocessing includes finding and selecting artifacts and equalizing the image brightness. In this work, we give the detailed algorithm for the construction of topological descriptors and the classifier construction procedure, based on combining the AdaBoost scheme with a naive Bayes classifier. In the final section, we show the results of the classification of real endoscopic images.
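
    The record combines the AdaBoost scheme with a naive Bayes classifier. As an illustration of the boosting half only, here is a minimal discrete AdaBoost using decision stumps (a stand-in weak learner, not the paper's naive Bayes) on synthetic two-class data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian clusters, labels in {-1, +1}.
X = np.vstack([rng.normal(0.0, 1.0, (40, 2)), rng.normal(3.0, 1.0, (40, 2))])
y = np.r_[-np.ones(40), np.ones(40)]

def stump_predict(X, feat, thr, sign):
    return sign * np.where(X[:, feat] <= thr, -1.0, 1.0)

def fit_stump(X, y, w):
    """Weak learner: the axis-aligned threshold minimising weighted error."""
    best = (np.inf, 0, 0.0, 1.0)
    for feat in range(X.shape[1]):
        for thr in X[:, feat]:
            for sign in (1.0, -1.0):
                err = np.sum(w[stump_predict(X, feat, thr, sign) != y])
                if err < best[0]:
                    best = (err, feat, thr, sign)
    return best

def adaboost_fit(X, y, rounds=20):
    w = np.full(len(y), 1.0 / len(y))           # uniform sample weights
    ensemble = []
    for _ in range(rounds):
        err, feat, thr, sign = fit_stump(X, y, w)
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        pred = stump_predict(X, feat, thr, sign)
        w = w * np.exp(-alpha * y * pred)       # up-weight mistakes
        w = w / w.sum()
        ensemble.append((alpha, feat, thr, sign))
    return ensemble

def adaboost_predict(ensemble, X):
    score = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    return np.where(score >= 0, 1.0, -1.0)

model = adaboost_fit(X, y)
train_acc = float(np.mean(adaboost_predict(model, X) == y))
```

    Swapping the stump for a naive Bayes learner trained on the boosted sample weights recovers the combination the record describes.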

  7. Classification of childhood epilepsies in a tertiary pediatric neurology clinic using a customized classification scheme from the international league against epilepsy 2010 report.

    Science.gov (United States)

    Khoo, Teik-Beng

    2013-01-01

    In its 2010 report, the International League Against Epilepsy Commission on Classification and Terminology made a number of changes to the organization, terminology, and classification of seizures and epilepsies. This study aims to test the usefulness of this revised classification scheme on children with epilepsy aged 0 to 18 years. Of 527 patients, 75.1% had only one type of seizure, and the commonest was focal seizure (61.9%). A specific electroclinical syndrome diagnosis could be made in 27.5%. Only 2.1% had a distinctive constellation. In this cohort, 46.9% had an underlying structural, metabolic, or genetic etiology. Among the important causes were pre-/perinatal insults, malformation of cortical development, intracranial infections, and neurocutaneous syndromes. However, 23.5% of the patients in our cohort were classified as having "epilepsies of unknown cause." The revised classification scheme is generally useful for pediatric patients. To make it more inclusive and clinically meaningful, some local customizations are required.

  8. DREAM: Classification scheme for dialog acts in clinical research query mediation.

    Science.gov (United States)

    Hoxha, Julia; Chandar, Praveen; He, Zhe; Cimino, James; Hanauer, David; Weng, Chunhua

    2016-02-01

    Clinical data access involves complex but opaque communication between medical researchers and query analysts. Understanding such communication is indispensable for designing intelligent human-machine dialog systems that automate query formulation. This study investigates email communication and proposes a novel scheme for classifying dialog acts in clinical research query mediation. We analyzed 315 email messages exchanged in the communication for 20 data requests obtained from three institutions. The messages were segmented into 1333 utterance units. Through a rigorous process, we developed a classification scheme and applied it for dialog act annotation of the extracted utterances. Evaluation results with high inter-annotator agreement demonstrate the reliability of this scheme. This dataset is used to contribute preliminary understanding of dialog acts distribution and conversation flow in this dialog space. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Computer-aided diagnostic scheme for the detection of lung nodules on chest radiographs: Localized search method based on anatomical classification

    International Nuclear Information System (INIS)

    Shiraishi, Junji; Li Qiang; Suzuki, Kenji; Engelmann, Roger; Doi, Kunio

    2006-01-01

    We developed an advanced computer-aided diagnostic (CAD) scheme for the detection of various types of lung nodules on chest radiographs intended for implementation in clinical situations. We used 924 digitized chest images (992 noncalcified nodules) which had a 500x500 matrix size with a 1024 gray scale. The images were divided randomly into two sets which were used for training and testing of the computerized scheme. In this scheme, the lung field was first segmented by use of a ribcage detection technique, and then a large search area (448x448 matrix size) within the chest image was automatically determined by taking into account the locations of a midline and a top edge of the segmented ribcage. In order to detect lung nodule candidates based on a localized search method, we divided the entire search area into 7x7 regions of interest (ROIs: 64x64 matrix size). In the next step, each ROI was classified anatomically into apical, peripheral, hilar, and diaphragm/heart regions by use of its image features. Identification of lung nodule candidates and extraction of image features were applied for each localized region (128x128 matrix size), each having its central part (64x64 matrix size) located at a position corresponding to a ROI that was classified anatomically in the previous step. Initial candidates were identified by use of the nodule-enhanced image obtained with the average radial-gradient filtering technique, in which the filter size was varied adaptively depending on the location and the anatomical classification of the ROI. We extracted 57 image features from the original and nodule-enhanced images based on geometric, gray-level, background structure, and edge-gradient features. In addition, 14 image features were obtained from the corresponding locations in the contralateral subtraction image. A total of 71 image features were employed for three sequential artificial neural networks (ANNs) in order to reduce the number of false-positive candidates. All

  10. KNN BASED CLASSIFICATION OF DIGITAL MODULATED SIGNALS

    Directory of Open Access Journals (Sweden)

    Sajjad Ahmed Ghauri

    2016-11-01

    Full Text Available The demodulation process without knowledge of the modulation scheme requires Automatic Modulation Classification (AMC). When the receiver has limited information about the received signal, AMC becomes an essential process. AMC finds important applications in many civil and military fields, such as modern electronic warfare, interfering source recognition, frequency management and link adaptation. In this paper we explore the use of the K-nearest neighbor (KNN) classifier for modulation classification with different distance measurement methods. Five modulation schemes are used for classification: Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), Quadrature Amplitude Modulation (QAM), 16-QAM and 64-QAM. Higher order cumulants (HOC) are used as the input feature set to the classifier. Simulation results show that the proposed classification method provides better results for the considered modulation formats.
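
    Higher-order cumulants separate constellations because each modulation has distinct theoretical values; for unit-power signals, C40 and C42 are (-2, -2) for BPSK and (-1, -1) for QPSK. A sketch of a nearest-neighbor classifier over such features follows, restricted to BPSK/QPSK for brevity; it is an illustration of the approach, not the paper's exact feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

def cumulant_features(x):
    """Fourth-order cumulants C40, C42 of a unit-power complex symbol stream."""
    c20 = np.mean(x ** 2)
    c21 = np.mean(np.abs(x) ** 2)
    c40 = np.mean(x ** 4) - 3 * c20 ** 2
    c42 = np.mean(np.abs(x) ** 4) - np.abs(c20) ** 2 - 2 * c21 ** 2
    return np.array([c40.real, c42.real])

def generate(mod, n, snr_db=20.0):
    """Noisy baseband symbol stream for a given modulation."""
    if mod == "BPSK":
        s = rng.choice([-1.0, 1.0], n).astype(complex)
    else:  # QPSK, unit-power constellation
        s = (rng.choice([-1.0, 1.0], n)
             + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return s + noise * 10 ** (-snr_db / 20.0)

# Theoretical (noise-free) feature values act as the nearest-neighbor templates.
templates = {"BPSK": np.array([-2.0, -2.0]), "QPSK": np.array([-1.0, -1.0])}

def nearest_neighbor(x):
    f = cumulant_features(x)
    return min(templates, key=lambda m: np.linalg.norm(f - templates[m]))

pred_bpsk = nearest_neighbor(generate("BPSK", 4096))
pred_qpsk = nearest_neighbor(generate("QPSK", 4096))
```

    Because Gaussian noise has zero fourth-order cumulants, the estimated features stay close to the templates at moderate SNR; alternative distance measures, as studied in the record, slot in by replacing the Euclidean norm.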

  11. Adaptive PCA based fault diagnosis scheme in imperial smelting process.

    Science.gov (United States)

    Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin

    2014-09-01

    In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms due to normal process changes in a real process. A fault isolation approach is further developed based on the Generalized Likelihood Ratio (GLR) test and Singular Value Decomposition (SVD), one of the general techniques of PCA, with which offset and scaling faults can be easily isolated with explicit offset fault direction and scaling fault classification. The identification of offset and scaling faults is also addressed, and the complete PCA-based fault diagnosis procedure is proposed. The scheme is applied to the Imperial Smelting Process, and the results show that the proposed strategies are able to mitigate false alarms and isolate faults efficiently. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
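
The core of any PCA-based fault detection scheme is: fit PCA on normal operating data, then flag samples whose residual (the squared prediction error, or Q statistic) exceeds a control limit. The sketch below is a minimal, hypothetical illustration of that core only; it implements neither the paper's recursive/adaptive update nor its GLR/SVD isolation, and the two-variable data are a synthetic stand-in for process measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal operating data: two correlated variables plus noise.
t = rng.standard_normal((200, 1))
X = np.hstack([t, 2 * t]) + 0.1 * rng.standard_normal((200, 2))

mu, sigma = X.mean(0), X.std(0)
Xs = (X - mu) / sigma

# PCA via SVD on the standardized data; retain one principal component.
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:1].T  # loading vector(s) of the retained subspace

def spe(x):
    # Squared prediction error (Q statistic): squared norm of the residual
    # after projecting the standardized sample onto the retained subspace.
    xs = (x - mu) / sigma
    resid = xs - P @ (P.T @ xs)
    return float(resid @ resid)

# Empirical 99% control limit from the training data.
limit = np.quantile([spe(row) for row in X], 0.99)

normal_sample = np.array([1.0, 2.0])   # consistent with the x2 = 2*x1 correlation
faulty_sample = np.array([1.0, -2.0])  # breaks the correlation structure
print(spe(normal_sample) <= limit, spe(faulty_sample) > limit)  # True True
```

The recursive variant in the paper would additionally update `mu`, `sigma`, and `P` as new normal samples arrive, which is what suppresses false alarms under slow process drift.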

  12. Sound classification of dwellings in the Nordic countries – Differences and similarities between the five national schemes

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    In all five Nordic countries, sound classification schemes for dwellings have been published in national standards, implemented and revised gradually since the late 1990s. The national classification criteria for dwellings originate from a common Nordic INSTA-B proposal from the 1990s, thus having several similarities. In 2012, the status is that the number and denotations of classes for dwellings are identical in the Nordic countries, but the structures of the standards and several details are quite different. Also the issues dealt with are different. Examples of differences are sound insulation ... for classification of such buildings. This paper presents and compares the main class criteria for sound insulation of dwellings and summarizes differences and similarities in criteria and in structures of standards. Classification schemes for dwellings also exist in several other countries in Europe ...

  13. Using two classification schemes to develop vegetation indices of biological integrity for wetlands in West Virginia, USA.

    Science.gov (United States)

    Veselka, Walter; Rentch, James S; Grafton, William N; Kordek, Walter S; Anderson, James T

    2010-11-01

    Bioassessment methods for wetlands, and other bodies of water, have been developed worldwide to measure and quantify changes in "biological integrity." These assessments are based on a classification system, meant to ensure appropriate comparisons between wetland types. Using a local site-specific disturbance gradient, we built vegetation indices of biological integrity (Veg-IBIs) based on two commonly used wetland classification systems in the USA: One based on vegetative structure and the other based on a wetland's position in a landscape and sources of water. The resulting class-specific Veg-IBIs were comprised of 1-5 metrics that varied in their sensitivity to the disturbance gradient (R2=0.14-0.65). Moreover, the sensitivity to the disturbance gradient increased as metrics from each of the two classification schemes were combined (added). Using this information to monitor natural and created wetlands will help natural resource managers track changes in biological integrity of wetlands in response to anthropogenic disturbance and allows the use of vegetative communities to set ecological performance standards for mitigation banks.

  14. Kernel Clustering with a Differential Harmony Search Algorithm for Scheme Classification

    Directory of Open Access Journals (Sweden)

    Yu Feng

    2017-01-01

    Full Text Available This paper presents kernel fuzzy clustering with a novel differential harmony search algorithm for diversion scheduling scheme classification. First, we employed a self-adaptive solution generation strategy and a differential evolution-based population update strategy to improve the classical harmony search. Second, we applied the differential harmony search algorithm to kernel fuzzy clustering to help the clustering method obtain better solutions. Finally, the combination of kernel fuzzy clustering and differential harmony search was applied to water diversion scheduling in East Lake. A comparison of the proposed method with other methods was carried out. The results show that kernel clustering with the differential harmony search algorithm performs well on water diversion scheduling problems.
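
For reference, the clustering core can be illustrated with classical fuzzy c-means, which alternates membership and centre updates; the paper's kernelized objective and differential-harmony-search optimization are not reproduced here, and the 2-D data are synthetic.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    # Classical fuzzy c-means: alternate between centre updates
    # (membership-weighted means) and membership updates (inverse-distance).
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(0)                       # columns are per-sample memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2 / (m - 1))
        U /= U.sum(0)
    return centers, U

# Two well-separated groups of 2-D points (synthetic stand-in for
# diversion-scheme feature vectors).
rng = np.random.default_rng(2)
X = np.vstack([np.zeros((20, 2)), np.ones((20, 2)) * 5]) + 0.1 * rng.standard_normal((40, 2))
centers, U = fuzzy_c_means(X)
labels = U.argmax(0)
print(sorted(centers.mean(1)))  # roughly one centre near 0 and one near 5
```

A kernel variant replaces the Euclidean distance `d` with a distance induced by a kernel function, and the harmony search in the paper replaces the random initialization with a population-based search over cluster centres.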

  15. A proposed radiographic classification scheme for congenital thoracic vertebral malformations in brachycephalic "screw-tailed" dog breeds.

    Science.gov (United States)

    Gutierrez-Quintana, Rodrigo; Guevar, Julien; Stalin, Catherine; Faller, Kiterie; Yeamans, Carmen; Penderis, Jacques

    2014-01-01

    Congenital vertebral malformations are common in brachycephalic "screw-tailed" dog breeds such as French bulldogs, English bulldogs, Boston terriers, and pugs. The aim of this retrospective study was to determine whether a radiographic classification scheme developed for use in humans would be feasible for use in these dog breeds. Inclusion criteria were hospital admission between September 2009 and April 2013, neurologic examination findings available, diagnostic quality lateral and ventro-dorsal digital radiographs of the thoracic vertebral column, and at least one congenital vertebral malformation. Radiographs were retrieved and interpreted by two observers who were unaware of neurologic status. Vertebral malformations were classified based on a classification scheme modified from a previous human study and a consensus of both observers. Twenty-eight dogs met inclusion criteria (12 with neurologic deficits, 16 with no neurologic deficits). Congenital vertebral malformations affected 85/362 (23.5%) of thoracic vertebrae. Vertebral body formation defects were the most common (butterfly vertebrae 6.6%, ventral wedge-shaped vertebrae 5.5%, dorsal hemivertebrae 0.8%, and dorso-lateral hemivertebrae 0.5%). No lateral hemivertebrae or lateral wedge-shaped vertebrae were identified. The T7 vertebra was the most commonly affected (11/28 dogs), followed by T8 (8/28 dogs) and T12 (8/28 dogs). The number and type of vertebral malformations differed between groups (P = 0.01). Based on MRI, dorsal and dorso-lateral hemivertebrae were the cause of spinal cord compression in 5/12 (41.6%) of dogs with neurologic deficits. Findings indicated that a modified human radiographic classification system of vertebral malformations is feasible for use in future studies of brachycephalic "screw-tailed" dogs. © 2014 American College of Veterinary Radiology.

  16. Task Classification Based Energy-Aware Consolidation in Clouds

    Directory of Open Access Journals (Sweden)

    HeeSeok Choi

    2016-01-01

    Full Text Available We consider a cloud data center, in which the service provider supplies virtual machines (VMs) on hosts or physical machines (PMs) to its subscribers for computation in an on-demand fashion. For the cloud data center, we propose a task consolidation algorithm based on task classification (i.e., computation-intensive and data-intensive) and resource utilization (e.g., CPU and RAM). Furthermore, we design a VM consolidation algorithm to balance task execution time and energy consumption without violating a predefined service level agreement (SLA). Unlike the existing research on VM consolidation or scheduling that applies no or single threshold schemes, we focus on a double threshold (upper and lower) scheme, which is used for VM consolidation. More specifically, when a host operates with resource utilization below the lower threshold, all the VMs on the host will be scheduled to be migrated to other hosts and then the host will be powered down, while when a host operates with resource utilization above the upper threshold, a VM will be migrated to avoid using 100% of resource utilization. Based on experimental performance evaluations with real-world traces, we prove that our task classification based energy-aware consolidation algorithm (TCEA) achieves a significant energy reduction without incurring predefined SLA violations.
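
The double-threshold policy described above reduces to a simple per-host decision rule. The following schematic sketch uses hypothetical utilization figures and threshold values (0.2/0.8 are illustrative, not the paper's parameters), and abstracts the actual migration target selection away into action labels.

```python
def plan_migrations(hosts, lower=0.2, upper=0.8):
    """Double-threshold consolidation sketch. `hosts` maps host name -> utilization.

    Below `lower`:  migrate every VM away and power the host down (save energy).
    Above `upper`:  migrate one VM away to relieve the hot spot (avoid 100% load).
    Otherwise:      leave the host alone.
    """
    actions = []
    for host, util in hosts.items():
        if util < lower:
            actions.append((host, "evacuate-and-power-down"))
        elif util > upper:
            actions.append((host, "migrate-one-vm"))
        else:
            actions.append((host, "keep"))
    return actions

print(plan_migrations({"pm1": 0.1, "pm2": 0.5, "pm3": 0.9}))
# -> [('pm1', 'evacuate-and-power-down'), ('pm2', 'keep'), ('pm3', 'migrate-one-vm')]
```

A full consolidation algorithm would additionally choose destination hosts for the displaced VMs so that no destination is pushed above the upper threshold, which is where the task-classification information (computation- vs data-intensive) comes in.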

  17. Joint efforts to harmonize sound insulation descriptors and classification schemes in Europe (COST TU0901)

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2010-01-01

    Sound insulation descriptors, regulatory requirements and classification schemes in Europe represent a high degree of diversity. One implication is very little exchange of experience on housing design and construction details for different levels of sound insulation; another is trade barriers ... for building systems and products. Unfortunately, there is evidence for a development in the "wrong" direction. For example, sound classification schemes for dwellings exist in nine countries. There is no sign of increasing harmonization, rather the contrary, as more countries are preparing proposals with new ... New housing must meet the needs of the people and offer comfort. Also for existing housing, sound insulation aspects should be taken into account when renovating; otherwise the renovation is not "sustainable". A joint European Action, COST TU0901 "Integrating and Harmonizing Sound Insulation ...

  18. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

    The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level radioactive waste (HLW) as: (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel....that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission....determines....requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph (B). The approach also results in definitions of other waste classes, i.e., transuranic (TRU) and low-level waste (LLW). A basic waste classification scheme results from the quantitative definitions

  19. Comprehensive Evaluation of Car-Body Light-Weighting Scheme Based on LCC Theory

    Directory of Open Access Journals (Sweden)

    Han Qing-lan

    2016-01-01

    Full Text Available In this paper, a comprehensive evaluation model for light-weighting schemes is established, based on three dimensions: the life cycle cost of the resources consumed by the designed object (LCC), the willingness to pay for the environmental effect of resource consumption (WTP), and performance (P). First, the cost of each stage is determined. Then, based on a resource classification built on cost elements, the material list needed is determined, and the WTP weight coefficient is applied to monetize the life cycle environmental impact and obtain the life cycle comprehensive cost of the designed scheme (TCC). In the next step, the performance (P) index is calculated to measure the value of the life cycle costs by applying the AHP and SAW methods, and (TCC) and (P) are integrated to achieve a comprehensive evaluation of the light-weighting scheme. Finally, the effectiveness of the evaluation model is verified by the example of a car engine hood.
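
The SAW (Simple Additive Weighting) step mentioned above can be made concrete. Below is a generic SAW scorer applied to two hypothetical hood designs scored on a cost criterion (TCC, lower is better) and a performance criterion (P, higher is better); the weights and figures are invented for illustration and are not taken from the paper.

```python
def saw_scores(matrix, weights, benefit):
    # Simple Additive Weighting: normalize each criterion column to [0, 1]
    # (benefit criterion: v / max; cost criterion: min / v), then take the
    # weighted sum per alternative (row).
    cols = list(zip(*matrix))
    scores = []
    for row in matrix:
        norm = [v / max(cols[j]) if benefit[j] else min(cols[j]) / v
                for j, v in enumerate(row)]
        scores.append(sum(w * n for w, n in zip(weights, norm)))
    return scores

# Two hypothetical hood designs: (TCC = life cycle comprehensive cost,
# lower is better; P = performance index, higher is better).
schemes = [(120.0, 0.7), (150.0, 0.9)]
scores = saw_scores(schemes, weights=[0.5, 0.5], benefit=[False, True])
best = max(range(len(scores)), key=scores.__getitem__)
print(best, [round(s, 3) for s in scores])  # -> 1 [0.889, 0.9]
```

In the paper's model the weights would come from the AHP pairwise-comparison step, and the cost column would be the monetized TCC combining LCC with the WTP-weighted environmental impact.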

  20. Energy-efficiency based classification of the manufacturing workstation

    Science.gov (United States)

    Frumuşanu, G.; Afteni, C.; Badea, N.; Epureanu, A.

    2017-08-01

    EU Directive 92/75/EC established for the first time an energy consumption labelling scheme, further implemented by several other directives. As a consequence, many products (e.g. home appliances, tyres, light bulbs, houses) nowadays carry an EU Energy Label when offered for sale or rent. Several energy consumption models of manufacturing equipment have also been developed. This paper proposes an energy-efficiency-based classification of the manufacturing workstation, aiming to characterize its energetic behaviour. The concept of energy efficiency of the manufacturing workstation is defined. On this basis, a classification methodology has been developed. It refers to specific criteria and their evaluation modalities, together with the definition and delimitation of energy efficiency classes. The position of the energy class is defined by the amount of energy needed by the workstation at the middle point of its operating domain, while its extension is determined by the value of the first coefficient of the Taylor series that approximates the dependence between the energy consumption and the chosen parameter of the working regime. The main domain of interest for this classification appears to be the optimization of manufacturing planning and programming. A case study on classifying an actual lathe from the energy efficiency point of view, based on two different approaches (analytical and numerical), is also included.
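
Assigning a label from the midpoint energy reduces to a threshold lookup, in the spirit of the EU Energy Label bands. The class boundaries below are hypothetical values chosen for illustration; the paper derives the actual band positions and widths from the workstation's operating domain and Taylor coefficient.

```python
def energy_class(energy_at_midpoint, bounds=(1.0, 2.0, 4.0, 8.0)):
    # Map the energy needed at the middle of the operating domain (kWh,
    # hypothetical units) onto a class label; "A" is the most efficient.
    labels = "ABCDE"
    for label, bound in zip(labels, bounds):
        if energy_at_midpoint <= bound:
            return label
    return labels[-1]  # anything above the last bound falls in the worst class

print([energy_class(e) for e in (0.5, 3.0, 20.0)])  # -> ['A', 'C', 'E']
```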

  1. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    Science.gov (United States)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been extensively used in the recent past for gear fault diagnosis. The vibration signals extracted are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals helps fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme in order to prevent the loss of signal information along with the noise. This work shows the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD), and NeighCoeff (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the four denoising schemes mentioned above. The fault identification capability as well as the SNR, kurtosis and RMSE of the four denoising schemes have been compared. Features extracted from the denoised signals have been used to train and test artificial neural network (ANN) models. The performances of the four denoising schemes have been evaluated based on the performance of the ANN models, and the best denoising scheme has been identified based on the classification accuracy results. PCA is effective in all these regards and is the best denoising scheme.
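
The SNR and RMSE criteria used above to rank denoising schemes are easy to state in code. The sketch below applies them to a stand-in denoiser (an 11-point moving average, not one of the paper's wavelet schemes) on a synthetic tone playing the role of a gear-mesh component; a good denoiser should raise SNR and lower RMSE relative to the raw noisy signal.

```python
import numpy as np

def snr_db(clean, estimate):
    # Signal-to-noise ratio of an estimate against the known clean signal.
    err = clean - estimate
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))

def rmse(clean, estimate):
    # Root mean square error against the known clean signal.
    return np.sqrt(np.mean((clean - estimate) ** 2))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 25 * t)              # stand-in for a gear-mesh tone
noisy = clean + 0.5 * rng.standard_normal(t.size)

# Stand-in denoiser: 11-point moving average (NOT a wavelet scheme).
kernel = np.ones(11) / 11
denoised = np.convolve(noisy, kernel, mode="same")

print(snr_db(clean, denoised) > snr_db(clean, noisy))  # denoising raises SNR
print(rmse(clean, denoised) < rmse(clean, noisy))      # ... and lowers RMSE
```

In the paper, these same metrics (plus kurtosis and downstream ANN classification accuracy) are computed for the PCA, EMD, NC, and AT schemes to decide which denoiser to use.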

  2. A Classification Scheme for Analyzing Mobile Apps Used to Prevent and Manage Disease in Late Life

    Science.gov (United States)

    Wang, Aiguo; Lu, Xin; Chen, Hongtu; Li, Changqun; Levkoff, Sue

    2014-01-01

    Background There are several mobile apps that offer tools for disease prevention and management among older adults, and promote health behaviors that could potentially reduce or delay the onset of disease. A classification scheme that categorizes apps could be useful to both older adult app users and app developers. Objective The objective of our study was to build and evaluate the effectiveness of a classification scheme that classifies mobile apps available for older adults in the “Health & Fitness” category of the iTunes App Store. Methods We constructed a classification scheme for mobile apps according to three dimensions: (1) the Precede-Proceed Model (PPM), which classifies mobile apps in terms of predisposing, enabling, and reinforcing factors for behavior change; (2) health care process, specifically prevention versus management of disease; and (3) health conditions, including physical health and mental health. Content analysis was conducted by the research team on health and fitness apps designed specifically for older adults, as well as those applicable to older adults, released during the months of June and August 2011 and August 2012. Face validity was assessed by a different group of individuals, who were not related to the study. A reliability analysis was conducted to confirm the accuracy of the coding scheme of the sample apps in this study. Results After applying sample inclusion and exclusion criteria, a total of 119 apps were included in the study sample, of which 26/119 (21.8%) were released in June 2011, 45/119 (37.8%) in August 2011, and 48/119 (40.3%) in August 2012. Face validity was determined by interviewing 11 people, who agreed that this scheme accurately reflected the nature of this application. The entire study sample was successfully coded, demonstrating satisfactory inter-rater reliability by two independent coders (95.8% initial concordance and 100% concordance after consensus was reached). The apps included in the study sample

  3. Prototype-based Models for the Supervised Learning of Classification Schemes

    Science.gov (United States)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2017-06-01

    An introduction is given to the use of prototype-based models in supervised machine learning. The main concept of the framework is to represent previously observed data in terms of so-called prototypes, which reflect typical properties of the data. Together with a suitable, discriminative distance or dissimilarity measure, prototypes can be used for the classification of complex, possibly high-dimensional data. We illustrate the framework in terms of the popular Learning Vector Quantization (LVQ). Most frequently, standard Euclidean distance is employed as a distance measure. We discuss how LVQ can be equipped with more general dissimilarities. Moreover, we introduce relevance learning as a tool for the data-driven optimization of parameterized distances.
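
A bare-bones LVQ1 illustrates the prototype-update idea: the prototype closest to a training sample is attracted to it when their classes match and repelled otherwise. The data, initial prototypes, and learning rate below are synthetic choices for illustration; the paper's relevance learning and general dissimilarities are not included.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    # LVQ1 with Euclidean distance: for each sample, move the winning
    # prototype toward it (same class) or away from it (different class).
    W = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = int(np.argmin(np.linalg.norm(W - x, axis=1)))
            sign = 1.0 if proto_labels[i] == label else -1.0
            W[i] += sign * lr * (x - W[i])
    return W

rng = np.random.default_rng(4)
# Two synthetic 2-D classes centred at (0, 0) and (3, 3).
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
proto_labels = [0, 1]
W = lvq1_train(X, y, prototypes=np.array([[1.0, 1.0], [2.0, 2.0]]), proto_labels=proto_labels)

def predict(x):
    # Nearest-prototype classification.
    return proto_labels[int(np.argmin(np.linalg.norm(W - np.asarray(x), axis=1)))]

print(predict([0.1, 0.0]), predict([2.9, 3.1]))  # -> 0 1
```

Replacing `np.linalg.norm` with a parameterized dissimilarity, and adapting its parameters during training, is exactly the relevance-learning extension the paper discusses.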

  4. On the Feature Selection and Classification Based on Information Gain for Document Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Asriyanti Indah Pratiwi

    2018-01-01

    Full Text Available Sentiment analysis of movie reviews is a need of today's lifestyle. Unfortunately, an enormous number of features makes sentiment analysis slow and less sensitive. Finding the optimal feature selection and classification is still a challenge. In order to handle an enormous number of features and provide better sentiment classification, information-gain-based feature selection and classification are proposed. The proposed method reduces more than 90% of unnecessary features, while the proposed classification scheme achieves 96% accuracy of sentiment classification. From the experimental results, it can be concluded that the combination of the proposed feature selection and classification achieves the best performance so far.
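
Information-gain feature ranking follows directly from its definition, IG(Y; X) = H(Y) − Σ_v p(X=v) H(Y | X=v): a feature is kept when knowing it removes much of the label entropy. The toy binary word-presence features below are invented for illustration and are not the paper's dataset.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy H(Y) in bits.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    # IG(Y; X) = H(Y) - sum over feature values v of p(X=v) * H(Y | X=v).
    total = entropy(labels)
    n = len(labels)
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        total -= len(subset) / n * entropy(subset)
    return total

# Toy sentiment data: does the review contain a given word (1/0)?
labels = ["pos", "pos", "neg", "neg"]
contains_great = [1, 1, 0, 0]   # perfectly predictive -> IG = 1 bit
contains_movie = [1, 0, 1, 0]   # uninformative        -> IG = 0 bits
print(information_gain(contains_great, labels),
      information_gain(contains_movie, labels))  # -> 1.0 0.0
```

Ranking all word features by this score and keeping only the top fraction is the mechanism by which the paper discards over 90% of the features.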

  5. [Object-oriented stand type classification based on the combination of multi-source remote sen-sing data].

    Science.gov (United States)

    Mao, Xue Gang; Wei, Jing Yu

    2017-11-01

    The recognition of forest type is one of the key problems in forest resource monitoring. Radarsat-2 data and a QuickBird remote sensing image were used for object-based classification to study object-based forest type classification and recognition based on the combination of multi-source remote sensing data. In the process of object-based classification, three segmentation schemes were adopted (segmentation with the QuickBird remote sensing image only, segmentation with the Radarsat-2 data only, and segmentation with the combination of QuickBird and Radarsat-2). For the three segmentation schemes, ten segmentation scale parameters were adopted (25-250, step 25), and the modified Euclidean distance 3 index was further used to evaluate the segmented results to determine the optimal segmentation scheme and segmentation scale. Based on the optimal segmented result, three forest types of Chinese fir, Masson pine and broad-leaved forest were classified and recognized using a Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel according to different feature combinations of topography, height, spectrum and common features. The results showed that the combination of Radarsat-2 data and QuickBird remote sensing imagery had advantages for object-based forest type classification over using Radarsat-2 data or QuickBird remote sensing imagery only. The optimal scale parameter for QuickBird-Radarsat-2 segmentation was 100, and at the optimal scale, the accuracy of object-based forest type classification was the highest (OA=86%, Kappa=0.86) when using all features extracted from the two data sources. This study not only provides a reference for forest type recognition using multi-source remote sensing data, but also has practical significance for forest resource investigation and monitoring.

  6. PARALLEL IMPLEMENTATION OF MORPHOLOGICAL PROFILE BASED SPECTRAL-SPATIAL CLASSIFICATION SCHEME FOR HYPERSPECTRAL IMAGERY

    Directory of Open Access Journals (Sweden)

    B. Kumar

    2016-06-01

    Full Text Available The extended morphological profile (EMP) is a good technique for extracting spectral-spatial information from images, but the large size of hyperspectral images is an important concern when creating EMPs. With the availability of modern multi-core processors and commodity parallel processing systems like graphics processing units (GPUs) at the desktop level, parallel computing provides a viable option to significantly accelerate execution of such computations. In this paper, a parallel implementation of an EMP based spectral-spatial classification method for hyperspectral imagery is presented. The parallel implementation is done both on a multi-core CPU and on a GPU. The impact of parallelization on speed up and classification accuracy is analyzed. For the GPU, the implementation is done in Compute Unified Device Architecture (CUDA) C. The experiments are carried out on two well-known hyperspectral images. It is observed from the experimental results that the GPU implementation provides a speed up of about 7 times, while the parallel implementation on the multi-core CPU results in a speed up of about 3 times. It is also observed that the parallel implementation has no adverse impact on classification accuracy.

  7. Use of Ecohydraulic-Based Mesohabitat Classification and Fish Species Traits for Stream Restoration Design

    Directory of Open Access Journals (Sweden)

    John S. Schwartz

    2016-11-01

    Full Text Available Stream restoration practice typically relies on a geomorphological design approach in which the integration of ecological criteria is limited and generally qualitative, although the most commonly stated project objective is to restore biological integrity by enhancing habitat and water quality. Restoration has achieved mixed results in terms of ecological success, and it is evident that improved methodologies for assessment and design are needed. A design approach is suggested for mesohabitat restoration based on a review and integration of fundamental processes associated with: (1) lotic ecological concepts; (2) applied geomorphic processes for mesohabitat self-maintenance; (3) multidimensional hydraulics and habitat suitability modeling; (4) species functional traits correlated with fish mesohabitat use; and (5) multi-stage ecohydraulics-based mesohabitat classification. The classification of mesohabitat units demonstrated in this article is based on fish preferences specifically linked to functional trait strategies (i.e., feeding, resting, evasion, spawning, and flow refugia), recognizing that habitat preferences shift by season and flow stage. A multi-stage classification scheme developed under this premise provides the basic "building blocks" of ecological design criteria for stream restoration. The scheme was developed for Midwest US prairie streams, but the conceptual framework for mesohabitat classification and functional traits analysis can be applied to other ecoregions.

  8. Estimating persistence of brominated and chlorinated organic pollutants in air, water, soil, and sediments with the QSPR-based classification scheme.

    Science.gov (United States)

    Puzyn, T; Haranczyk, M; Suzuki, N; Sakurai, T

    2011-02-01

    We estimated the degradation half-lives of brominated and chlorinated dibenzo-p-dioxins (PBDDs and PCDDs), furans (PBDFs and PCDFs), biphenyls (PBBs and PCBs), naphthalenes (PBNs and PCNs), and diphenyl ethers (PBDEs and PCDEs), as well as selected unsubstituted polycyclic aromatic hydrocarbons (PAHs), in air, surface water, surface soil, and sediments (a total of 1,431 compounds in four compartments). Next, we compared the persistence between chloro- (relatively well-studied) and bromo- (less studied) analogs. The predictions were performed with the quantitative structure-property relationship (QSPR) scheme using a k-nearest neighbors (kNN) classifier and a semi-quantitative system of persistence classes. The classification models used principal components derived from a principal component analysis of a set of 24 constitutional and quantum mechanical descriptors as input variables. Classification accuracies (based on external validation) were 86, 85, 87, and 75% for air, surface water, surface soil, and sediments, respectively. The persistence of all chlorinated species increased with increasing halogenation degree. In the case of brominated organic pollutants (Br-OPs), the trend was the same for air and sediments; however, we noticed the opposite trend for persistence in surface water and soil. The results suggest that, due to the high photoreactivity of C-Br bonds, photolytic processes occurring in surface water and soil can play a significant role in transforming and removing Br-OPs from these compartments. This contribution is the first attempt to classify Br-OPs and Cl-OPs together according to their persistence in particular environmental compartments.
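
The described pipeline (molecular descriptors → principal components → kNN vote on a persistence class) can be sketched end to end. The descriptors, class labels, and data below are hypothetical stand-ins, not the paper's 24-descriptor set or its persistence-class system.

```python
import numpy as np

def pca_knn_predict(X_train, y_train, x_new, n_components=2, k=3):
    # Centre the training descriptors, project everything onto the leading
    # principal components (via SVD), then take a k-nearest-neighbour vote
    # in the reduced space.
    mu = X_train.mean(0)
    Xc = X_train - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T
    Z = Xc @ P
    z = (x_new - mu) @ P
    nearest = np.argsort(np.linalg.norm(Z - z, axis=1))[:k]
    votes = [y_train[i] for i in nearest]
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(5)
# Hypothetical 5-descriptor "compounds" in two persistence classes.
X = np.vstack([rng.normal(0, 1, (15, 5)), rng.normal(4, 1, (15, 5))])
y = ["short-lived"] * 15 + ["persistent"] * 15
x_new = rng.normal(4, 1, 5)  # new compound near the persistent cluster
print(pca_knn_predict(X, y, x_new))
```

In the paper, one such classifier is trained per environmental compartment, with external-validation accuracies of 75-87%.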

  9. A Region-Based GeneSIS Segmentation Algorithm for the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Stelios K. Mylonas

    2015-03-01

    Full Text Available This paper proposes an object-based segmentation/classification scheme for remotely sensed images, based on a novel variant of the recently proposed Genetic Sequential Image Segmentation (GeneSIS) algorithm. GeneSIS segments the image in an iterative manner, whereby at each iteration a single object is extracted via a genetic-based object extraction algorithm. Contrary to the previous pixel-based GeneSIS, where the candidate objects to be extracted were evaluated through the fuzzy content of their included pixels, in the newly developed region-based GeneSIS algorithm, a watershed-driven fine segmentation map is initially obtained from the original image, which serves as the basis for the subsequent GeneSIS segmentation. Furthermore, in order to enhance the spatial search capabilities, we introduce a more descriptive encoding scheme in the object extraction algorithm, where the structural search modules are represented by polygonal shapes. Our objectives in the new framework are as follows: enhance the flexibility of the algorithm in extracting more flexible object shapes, assure high classification accuracies, and reduce the execution time of the segmentation, while at the same time preserving all the inherent attributes of the GeneSIS approach. Finally, exploiting the inherent ability of GeneSIS to produce multiple segmentations, we also propose two segmentation fusion schemes that operate on the ensemble of segmentations generated by GeneSIS. Our approaches are tested on one urban and two agricultural images. The results show that region-based GeneSIS has considerably lower computational demands than the pixel-based one. Furthermore, the suggested methods achieve higher classification accuracies and good segmentation maps compared to a series of existing algorithms.

  10. Object-Based Crop Species Classification Based on the Combination of Airborne Hyperspectral Images and LiDAR Data

    Directory of Open Access Journals (Sweden)

    Xiaolong Liu

    2015-01-01

    Full Text Available Identification of crop species is an important issue in agricultural management. In recent years, many studies have explored this topic using multi-spectral and hyperspectral remote sensing data. In this study, we propose a framework for mapping crop species by combining hyperspectral and Light Detection and Ranging (LiDAR) data in an object-based image analysis (OBIA) paradigm. The aims of this work were the following: (i) to understand the performance of different spectral dimension-reduced features from hyperspectral data, and their combination with LiDAR-derived height information, in image segmentation; (ii) to understand what classification accuracies of crop species can be achieved by combining hyperspectral and LiDAR data in an OBIA paradigm, especially in regions that have a fragmented agricultural landscape and a complicated crop planting structure; and (iii) to understand the contributions of the crop height derived from LiDAR data, as well as the geometric and textural features of image objects, to the separability of crop species. The study region was an irrigated agricultural area in the central Heihe river basin, which is characterized by many crop species, complicated crop planting structures, and a fragmented landscape. The airborne hyperspectral data acquired by the Compact Airborne Spectrographic Imager (CASI) with a 1 m spatial resolution and the Canopy Height Model (CHM) derived from data acquired by the airborne Leica ALS70 LiDAR system were used for this study. The image segmentation accuracies of different feature combination schemes (very high-resolution imagery (VHR); VHR/CHM; and minimum noise fraction (MNF) transformed data/CHM) were evaluated and analyzed. The results showed that VHR/CHM outperformed the other two combination schemes with a segmentation accuracy of 84.8%. The object-based crop species classification results of different feature integrations indicated that

  11. Asteroid taxonomic classifications

    International Nuclear Information System (INIS)

    Tholen, D.J.

    1989-01-01

    This paper reports on three taxonomic classification schemes developed and applied to the body of available color and albedo data. Asteroid taxonomic classifications according to two of these schemes are reproduced

  12. An approach toward a combined scheme for the petrographic classification of fly ash: Revision and clarification

    Science.gov (United States)

    Hower, J.C.; Suarez-Ruiz, I.; Mastalerz, Maria

    2005-01-01

    Hower and Mastalerz's classification scheme for fly ash is modified to make it more widely acceptable. First, proper consideration is given to the potential role of anthracite in the development of isotropic and anisotropic chars. Second, the role of low-reflectance inertinite in producing vesicular chars is noted. It is shown that non-coal components in the fuel can produce chars that stretch the limits of the classification. With care, it is possible to classify certain biomass chars as being distinct from coal-derived chars.

  13. Fused man-machine classification schemes to enhance diagnosis of breast microcalcifications

    International Nuclear Information System (INIS)

    Andreadis, Ioannis; Sevastianos, Chatzistergos; Konstantina, Nikita; George, Spyrou

    2017-01-01

    Computer-aided diagnosis (CADx) approaches are developed towards the effective discrimination between benign and malignant clusters of microcalcifications. Different sources of information are exploited, such as features extracted from image analysis of the region of interest, features related to the location of the cluster inside the breast, the age of the patient, and descriptors provided by the radiologists while performing their diagnostic task. A series of different CADx schemes are implemented, each of which uses a different category of features and adopts a variety of machine learning algorithms and alternative image processing techniques. A novel framework is introduced where these independent diagnostic components are properly combined, according to features critical to a radiologist, in an attempt to identify the most appropriate CADx schemes for the case under consideration. An open access database (the Digital Database for Screening Mammography (DDSM)) has been elaborated to construct a large dataset with cases of varying subtlety, in order to ensure the development of schemes with high generalization ability, as well as extensive evaluation of their performance. The obtained results indicate that the proposed framework succeeds in improving the diagnostic procedure, as the achieved overall classification performance outperforms all the independent single diagnostic components, as well as the radiologists who assessed the same cases, in terms of accuracy, sensitivity, specificity and area under the curve following receiver operating characteristic analysis.

  14. Fused man-machine classification schemes to enhance diagnosis of breast microcalcifications

    Science.gov (United States)

    Andreadis, Ioannis; Chatzistergos, Sevastianos; Spyrou, George; Nikita, Konstantina

    2017-11-01

    Computer aided diagnosis (CADx) approaches are developed towards the effective discrimination between benign and malignant clusters of microcalcifications. Different sources of information are exploited, such as features extracted from the image analysis of the region of interest, features related to the location of the cluster inside the breast, age of the patient and descriptors provided by the radiologists while performing their diagnostic task. A series of different CADx schemes are implemented, each of which uses a different category of features and adopts a variety of machine learning algorithms and alternative image processing techniques. A novel framework is introduced where these independent diagnostic components are properly combined according to features critical to a radiologist in an attempt to identify the most appropriate CADx schemes for the case under consideration. An open access database (Digital Database of Screening Mammography (DDSM)) has been elaborated to construct a large dataset with cases of varying subtlety, in order to ensure the development of schemes with high generalization ability, as well as extensive evaluation of their performance. The obtained results indicate that the proposed framework succeeds in improving the diagnostic procedure, as the achieved overall classification performance outperforms all the independent single diagnostic components, as well as the radiologists that assessed the same cases, in terms of accuracy, sensitivity, specificity and area under the curve following receiver operating characteristic analysis.

  15. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

    The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level (radioactive) waste (HLW) as (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel...that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission...determines...requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph B. The approach also results in definitions of other waste classes, i.e., transuranic (TRU) and low-level waste (LLW). The basic waste classification scheme that results from the quantitative definitions of 'highly radioactive' and 'requires permanent isolation' is depicted. The concentrations of radionuclides that correspond to these two boundaries, and that may be used to classify radioactive wastes, are given.
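The two-boundary logic can be sketched as a pair of independent tests; a waste above both boundaries is HLW. All numeric limits below are placeholders, and the simplified mapping of the remaining combinations to TRU and LLW is an assumption for illustration, not the quantitative limits derived by the authors.

```python
# Illustrative two-axis classification: one boundary tests whether a
# waste is "highly radioactive" (high near-term hazard), the other
# whether it "requires permanent isolation" (long-lived content).
# Concentration limits are invented placeholders.

def classify_waste(short_lived_ci_m3, long_lived_ci_m3,
                   highly_radioactive_limit=1e4, isolation_limit=1e2):
    highly_radioactive = short_lived_ci_m3 >= highly_radioactive_limit
    requires_isolation = long_lived_ci_m3 >= isolation_limit
    if highly_radioactive and requires_isolation:
        return "HLW"
    if requires_isolation:
        return "TRU"   # long-lived but not highly radioactive
    return "LLW"

label = classify_waste(5e4, 3e2)   # above both boundaries
```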

  16. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    Science.gov (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution image airborne sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. The DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms simple classifiers used for the combination and some combined strategies, including a scheme based on the fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
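The annealing idea behind DSA can be illustrated on a toy problem: soft class probabilities at each pixel are updated from a data term (simple classifier scores) plus a neighbour-consistency term while a temperature parameter is lowered, which helps the labelling escape poor local minima. The energy function, coupling, and cooling schedule below are invented for the example; they are not the authors' DSA formulation.

```python
import math

# Toy deterministic-annealing label relaxation on a 1-D "image".
def anneal(scores, coupling=1.0, t0=2.0, t_min=0.05, cooling=0.7):
    n, k = len(scores), len(scores[0])
    p = [[1.0 / k] * k for _ in range(n)]   # uniform soft labels
    t = t0
    while t > t_min:
        for i in range(n):
            new = []
            for c in range(k):
                neigh = 0.0
                if i > 0:
                    neigh += p[i - 1][c]
                if i < n - 1:
                    neigh += p[i + 1][c]
                # lower energy = higher classifier score + neighbour agreement
                energy = -(scores[i][c] + coupling * neigh)
                new.append(math.exp(-energy / t))
            z = sum(new)
            p[i] = [v / z for v in new]
        t *= cooling
    return [row.index(max(row)) for row in p]

# The noisy middle pixel is pulled to its neighbours' class.
labels = anneal([[0.9, 0.1], [0.45, 0.55], [0.9, 0.1]])
```

The middle pixel's data term weakly favours class 1, but the annealed neighbour term overrules it, yielding a smooth labelling.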

  17. An improved fault detection classification and location scheme based on wavelet transform and artificial neural network for six phase transmission line using single end data only.

    Science.gov (United States)

    Koley, Ebha; Verma, Khushaboo; Ghosh, Subhojit

    2015-01-01

    Restrictions on right of way and increasing power demand have boosted the development of six phase transmission. It offers a viable alternative for transmitting more power without major modification of the existing structure of the three phase double circuit transmission system. In spite of these advantages, the low acceptance of the six phase system is attributed to the unavailability of a proper protection scheme. The complexity arising from the large number of possible faults in six phase lines makes protection quite challenging. The proposed work presents a hybrid wavelet transform and modular artificial neural network based fault detector, classifier and locator for six phase lines using single end data only. The standard deviations of the approximate coefficients of voltage and current signals obtained using the discrete wavelet transform are applied as input to the modular artificial neural network for fault classification and location. The proposed scheme has been tested for all 120 types of shunt faults with variation in location, fault resistance and fault inception angle. The variation in power system parameters, viz. short circuit capacity of the source and its X/R ratio, voltage, frequency and CT saturation, has also been investigated. The results confirm the effectiveness and reliability of the proposed protection scheme, which makes it ideal for real time implementation.
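The feature-extraction step can be sketched as a one-level discrete wavelet decomposition of a sampled signal followed by the standard deviation of the approximation coefficients. The Haar filter and the coarse sample values below are simplifying assumptions; the paper's exact mother wavelet and decomposition level may differ.

```python
import math

def haar_approx(signal):
    """One-level Haar DWT approximation coefficients."""
    s = math.sqrt(2.0)
    return [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]

def std_dev(values):
    """Population standard deviation."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

# One cycle of a faulted-phase current, coarsely sampled (illustrative).
current = [0.0, 5.0, 9.0, 5.0, 0.0, -5.0, -9.0, -5.0]
feature = std_dev(haar_approx(current))   # one ANN input feature
```

In the paper this scalar would be computed per phase for both voltage and current over one post-fault cycle, giving the input vector of the modular ANN.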

  18. Toward functional classification of neuronal types.

    Science.gov (United States)

    Sharpee, Tatyana O

    2014-09-17

    How many types of neurons are there in the brain? This basic neuroscience question remains unsettled despite many decades of research. Classification schemes have been proposed based on anatomical, electrophysiological, or molecular properties. However, different schemes do not always agree with each other. This raises the question of whether one can classify neurons based on their function directly. For example, among sensory neurons, can a classification scheme be devised that is based on their role in encoding sensory stimuli? Here, theoretical arguments are outlined for how this can be achieved using information theory by looking at optimal numbers of cell types and paying attention to two key properties: correlations between inputs and noise in neural responses. This theoretical framework could help to map the hierarchical tree relating different neuronal classes within and across species. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. The Net Enabled Waste Management Database in the context of radioactive waste classification

    International Nuclear Information System (INIS)

    Csullog, G.W.; Burcl, R.; Tonkay, D.; Petoe, A.

    2002-01-01

    There is an emerging, international consensus that a common, comprehensive radioactive waste classification system is needed, which derives from the fact that the implementation of radioactive waste classification within countries is highly diverse. Within IAEA Member States, implementation ranges from none to complex systems that vary a great deal from one another. Both the IAEA and the European Commission have recommended common classification schemes, but only for the purpose of facilitating communication with the public and national- and international-level organizations and to serve as the basis for developing comprehensive, national waste classification schemes. In the context described above, the IAEA's newly developed Net Enabled Waste Management Database (NEWMDB) contains a feature, the Waste Class Matrix, that Member States use to describe the waste classification schemes they use and to compare them with the IAEA's proposed waste classification scheme. Member States then report waste inventories to the NEWMDB according to their own waste classification schemes, allowing traceability back to nationally based reports. The IAEA uses the information provided in the Waste Class Matrix to convert radioactive waste inventory data reported according to a wide variety of classifications into a single inventory according to the IAEA's proposed scheme. This approach allows the international community time to develop a comprehensive, common classification scheme and allows Member States time to develop and implement effective, operational waste classification schemes while, at the same time, the IAEA can collect the information needed to compile a comprehensive, international radioactive waste inventory. (author)
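The Waste Class Matrix idea amounts to a per-state mapping from national waste classes onto the proposed IAEA classes, through which reported inventories are converted and consolidated. The state names, class names, and volumes below are invented purely to show the mapping mechanism.

```python
# Hypothetical per-state mappings from national classes to IAEA classes.
national_to_iaea = {
    "StateA": {"category-1": "LILW-SL", "category-2": "HLW"},
    "StateB": {"type-X": "LILW-SL", "type-Y": "LILW-LL"},
}

# Inventories reported in each state's own classification (m3, invented).
reported = {
    "StateA": {"category-1": 120.0, "category-2": 30.0},
    "StateB": {"type-X": 80.0, "type-Y": 40.0},
}

def consolidate(mapping, inventories):
    """Convert nationally classified volumes into one IAEA-class total."""
    totals = {}
    for state, classes in inventories.items():
        for national_class, volume in classes.items():
            iaea_class = mapping[state][national_class]
            totals[iaea_class] = totals.get(iaea_class, 0.0) + volume
    return totals

totals = consolidate(national_to_iaea, reported)
```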

  20. Knowledge-based sea ice classification by polarimetric SAR

    DEFF Research Database (Denmark)

    Skriver, Henning; Dierking, Wolfgang

    2004-01-01

    Polarimetric SAR images acquired at C- and L-band over sea ice in the Greenland Sea, Baltic Sea, and Beaufort Sea have been analysed with respect to their potential for ice type classification. The polarimetric data were gathered by the Danish EMISAR and the US AIRSAR, which both are airborne systems. A hierarchical classification scheme was chosen for sea ice because our knowledge about magnitudes, variations, and dependences of sea ice signatures can be directly considered. The optimal sequence of classification rules and the rules themselves depend on the ice conditions/regimes. The use of the polarimetric phase information improves the classification only in the case of thin ice types but is not necessary for thicker ice (above about 30 cm thickness)...
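A hierarchical, knowledge-based classifier of this kind is essentially a fixed sequence of rules, with polarimetric phase consulted only where it helps (thin ice). The sketch below conveys that structure; the thresholds, channel names, and class set are invented, not the paper's rules.

```python
# Hypothetical hierarchical rule sequence for sea ice classification.
def classify_ice(hh_db, hv_db, copol_phase_deg):
    if hh_db < -20.0:                      # low backscatter: open water or thin ice
        if abs(copol_phase_deg) > 10.0:    # phase information helps only here
            return "thin ice"
        return "open water"
    if hv_db > -18.0:                      # strong depolarisation: deformed ice
        return "deformed ice"
    return "level first-year ice"

label = classify_ice(-25.0, -25.0, 20.0)
```

Because the rules are ordered, domain knowledge about which signature separates which ice regime is encoded directly in the decision sequence rather than learned from training data.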

  1. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    Science.gov (United States)

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

    In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as input to train the ANN. Such an approach has never been used in any earlier fault analysis algorithm in the last few decades. The proposed scheme uses only single end measurements of the MOV energy signals in all 3 phases over one cycle from the occurrence of a fault. Thereafter, these MOV energy signals are fed as input to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten types of fault in the test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.

  2. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion vector is reliable enough to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.
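The core reliability idea can be sketched very simply: a motion vector that deviates too far from the consensus of its neighbours is flagged unreliable and replaced. The component-wise median, neighbourhood, and threshold below are assumptions for the sketch, not the paper's exact correlation criterion.

```python
def median(values):
    """Middle element of the sorted values (odd-length lists)."""
    s = sorted(values)
    return s[len(s) // 2]

def correct_mv(mv, neighbours, threshold=2):
    """Replace mv by the neighbours' component-wise median if it deviates."""
    med = (median([v[0] for v in neighbours]),
           median([v[1] for v in neighbours]))
    deviation = abs(mv[0] - med[0]) + abs(mv[1] - med[1])
    return med if deviation > threshold else mv

# An outlier vector amid coherent rightward motion is replaced.
corrected = correct_mv((9, -7), [(1, 0), (1, 1), (2, 0), (1, 0), (2, 1)])
```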

  3. Movie Popularity Classification based on Inherent Movie Attributes using C4.5, PART and Correlation Coefficient

    DEFF Research Database (Denmark)

    Ibnal Asad, Khalid; Ahmed, Tanvir; Rahman, Md. Saiedur

    2012-01-01

    Abundance of movie data across the internet makes it an obvious candidate for machine learning and knowledge discovery. But most research is directed towards bi-polar classification of movies or generation of a movie recommendation system based on reviews given by viewers on various internet sites. Classification of movie popularity based solely on attributes of a movie, i.e. actor, actress, director rating, language, country, budget etc., has been less highlighted due to the large number of attributes that are associated with each movie and their differences in dimensions. In this paper, we propose a classification scheme of pre-release movie popularity based on inherent attributes using the C4.5 and PART classifier algorithms and define the relation between attributes of post-release movies using the correlation coefficient.
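C4.5 grows its decision tree by repeatedly splitting on the attribute that best separates the classes (by gain ratio; plain information gain is shown here for brevity). The toy "budget vs. popularity" table below is invented to show the computation.

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    total = len(labels)
    e = 0.0
    for c in set(labels):
        p = labels.count(c) / total
        e -= p * math.log2(p)
    return e

def information_gain(rows, labels, attribute_index):
    """Entropy reduction from splitting on one attribute."""
    gain = entropy(labels)
    for value in set(r[attribute_index] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attribute_index] == value]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# Invented table: budget level perfectly separates popular from flop.
rows = [("high",), ("high",), ("low",), ("low",)]
labels = ["popular", "popular", "flop", "flop"]
gain = information_gain(rows, labels, 0)
```

Here the split is perfect, so the gain equals the full one bit of label entropy; C4.5 would pick this attribute for the root node.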

  4. Dense Iterative Contextual Pixel Classification using Kriging

    DEFF Research Database (Denmark)

    Ganz, Melanie; Loog, Marco; Brandt, Sami

    2009-01-01

    In medical applications, segmentation has become an ever more important task. One of the competitive schemes to perform such segmentation is by means of pixel classification. Simple pixel-based classification schemes can be improved by incorporating contextual label information. Various methods have been proposed to this end, e.g., iterative contextual pixel classification, iterated conditional modes, and other approaches related to Markov random fields. A problem of these methods, however, is their computational complexity, especially when dealing with high-resolution images in which relatively long range interactions may play a role. We propose a new method based on Kriging that makes it possible to include such long range interactions, while keeping the computations manageable when dealing with large medical images.
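Kriging predicts a value at an unobserved location as a covariance-weighted combination of nearby observations, which is how long-range interactions enter cheaply. The sketch below solves the simple-kriging system for two observations in one dimension; the exponential covariance, its range, and the zero known mean are assumptions for the example, not the paper's model.

```python
import math

def cov(d, sill=1.0, rng=3.0):
    """Exponential covariance model (illustrative parameters)."""
    return sill * math.exp(-abs(d) / rng)

def simple_krige_2pt(x1, z1, x2, z2, x0):
    """Simple kriging (known mean 0) from two observations.

    Solves the 2x2 system [c11 c12; c12 c22] w = [c10; c20]
    by Cramer's rule and returns the weighted estimate at x0.
    """
    c11, c22 = cov(0.0), cov(0.0)
    c12 = cov(x1 - x2)
    c10, c20 = cov(x1 - x0), cov(x2 - x0)
    det = c11 * c22 - c12 * c12
    w1 = (c10 * c22 - c12 * c20) / det
    w2 = (c11 * c20 - c12 * c10) / det
    return w1 * z1 + w2 * z2

estimate = simple_krige_2pt(0.0, 2.0, 4.0, 6.0, 2.0)
```

At the midpoint both weights are equal, and the estimate is shrunk towards the assumed zero mean rather than being the plain average of 2 and 6.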

  5. Automatic Hierarchical Color Image Classification

    Directory of Open Access Journals (Sweden)

    Jing Huang

    2003-02-01

    Full Text Available Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.
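The hierarchical step can be pictured as nearest-centroid assignment down a small class tree: a feature vector is first matched to a coarse semantic class, then to a subclass within it. The centroids, class names, and two-dimensional features below are invented; the paper additionally denoises features with an SVD-based reconfiguration before this stage.

```python
import math

# Invented two-level class tree with per-class feature centroids.
TREE = {
    "outdoor": {"beach": (0.9, 0.8), "forest": (0.2, 0.9)},
    "indoor":  {"office": (0.3, 0.2), "kitchen": (0.6, 0.1)},
}
COARSE = {"outdoor": (0.55, 0.85), "indoor": (0.45, 0.15)}

def nearest(vector, centroids):
    """Name of the centroid closest to the vector (Euclidean)."""
    return min(centroids, key=lambda name: math.dist(vector, centroids[name]))

def classify(vector):
    coarse = nearest(vector, COARSE)          # level 1: coarse class
    return coarse, nearest(vector, TREE[coarse])  # level 2: subclass

result = classify((0.85, 0.75))
```

Restricting the second search to one branch is what gives the tree its storage and computational advantage over flat nearest-neighbour search.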

  6. A malware detection scheme based on mining format information.

    Science.gov (United States)

    Bai, Jinrong; Wang, Junfeng; Zou, Guozhong

    2014-01-01

    Malware has become one of the most serious threats to computer information systems, and current malware detection technology still has very significant limitations. In this paper, we propose a malware detection approach by mining format information of PE (portable executable) files. Based on in-depth analysis of the static format information of PE files, we extracted 197 features from the format information of PE files and applied feature selection methods to reduce the dimensionality of the features and achieve acceptably high performance. When the selected features were trained using classification algorithms, the results of our experiments indicate that the accuracy of the top classification algorithm is 99.1% and the value of the AUC is 0.998. We designed three experiments to evaluate the performance of our detection scheme and its ability to detect unknown and new malware. Although the experimental results of identifying new malware are not perfect, our method is still able to identify 97.6% of new malware with a 1.3% false positive rate.
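The pipeline shape is: score each PE-format feature, keep the best k, and train a classifier on the reduced vectors. The scoring function below (absolute class-mean difference) stands in for the paper's feature-selection methods, and the feature values are invented.

```python
def select_top_k(samples, labels, k):
    """Rank features by |mean(malware) - mean(benign)| and keep the top k.

    samples: list of equal-length feature tuples
    labels:  1 for malware, 0 for benign
    Returns the indices of the k highest-scoring features.
    """
    n_features = len(samples[0])
    scores = []
    for j in range(n_features):
        mal = [s[j] for s, l in zip(samples, labels) if l == 1]
        ben = [s[j] for s, l in zip(samples, labels) if l == 0]
        scores.append(abs(sum(mal) / len(mal) - sum(ben) / len(ben)))
    ranked = sorted(range(n_features), key=lambda j: -scores[j])
    return ranked[:k]

# Invented toy features (e.g. section count, entropy, import count).
samples = [(12, 0.1, 5), (11, 0.2, 6), (2, 0.15, 5), (3, 0.1, 6)]
labels = [1, 1, 0, 0]
kept = select_top_k(samples, labels, 1)
```

Only the first feature separates the two classes here, so it alone survives the selection; the reduced vectors would then feed the classification algorithms compared in the paper.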

  7. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Science.gov (United States)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.
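The final AC-map step described above amounts to mapping each classified voxel label to a linear attenuation coefficient. The values below are typical published figures near 511 keV (in cm⁻¹), used here for illustration only; they are not taken from the paper.

```python
# Illustrative linear attenuation coefficients at 511 keV (cm^-1).
MU_511KEV = {
    "air": 0.0,
    "csf": 0.096,
    "gray_matter": 0.099,
    "white_matter": 0.099,
    "bone": 0.151,
}

def attenuation_map(label_image):
    """Replace each tissue label in a 2-D label image by its mu value."""
    return [[MU_511KEV[label] for label in row] for row in label_image]

labels = [["air", "bone"], ["gray_matter", "csf"]]
mu_map = attenuation_map(labels)
```

The resulting mu map is what the ordered-subsets expectation-maximization reconstruction consumes as its AC factors.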

  8. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Fei, Baowei, E-mail: bfei@emory.edu [Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road Northeast, Atlanta, Georgia 30329 (United States); Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322 (United States); Department of Mathematics and Computer Sciences, Emory University, Atlanta, Georgia 30322 (United States); Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Aarsvold, John N. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Nuclear Medicine Service, Atlanta Veterans Affairs Medical Center, Atlanta, Georgia 30033 (United States); Cervo, Morgan; Stark, Rebecca [The Medical Physics Graduate Program in the George W. Woodruff School, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Meltzer, Carolyn C. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Department of Neurology and Department of Psychiatry and Behavior Sciences, Emory University School of Medicine, Atlanta, Georgia 30322 (United States)

    2012-10-15

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC.
Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.

  9. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    International Nuclear Information System (INIS)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R.; Aarsvold, John N.; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC.
Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.

  10. A Classification-based Review Recommender

    Science.gov (United States)

    O'Mahony, Michael P.; Smyth, Barry

    Many online stores encourage their users to submit product/service reviews in order to guide future purchasing decisions. These reviews are often listed alongside product recommendations but, to date, limited attention has been paid as to how best to present these reviews to the end-user. In this paper, we describe a supervised classification approach that is designed to identify and recommend the most helpful product reviews. Using the TripAdvisor service as a case study, we compare the performance of several classification techniques using a range of features derived from hotel reviews. We then describe how these classifiers can be used as the basis for a practical recommender that automatically suggests the most helpful contrasting reviews to end-users. We present an empirical evaluation which shows that our approach achieves a statistically significant improvement over alternative review ranking schemes.
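The recommendation step can be sketched as follows: given per-review helpfulness scores produced by a trained classifier, suggest the most helpful positive and the most helpful negative review as a contrasting pair. The review IDs, star ratings, and scores below are invented for the example.

```python
def contrasting_pair(reviews):
    """Pick the most helpful positive and negative review.

    reviews: list of (review_id, star_rating, helpfulness_score)
    Returns (best_positive_id, best_negative_id), splitting on rating.
    """
    positive = [r for r in reviews if r[1] >= 4]
    negative = [r for r in reviews if r[1] <= 2]
    best = lambda group: max(group, key=lambda r: r[2])[0]
    return best(positive), best(negative)

pair = contrasting_pair([
    ("r1", 5, 0.91), ("r2", 4, 0.72), ("r3", 1, 0.83), ("r4", 2, 0.40),
])
```

Surfacing one review from each side of the rating scale gives the end-user a balanced view rather than a single top-ranked list.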

  11. Classification of High-Rise Residential Building Facilities: A Descriptive Survey on 170 Housing Scheme in Klang Valley

    Directory of Open Access Journals (Sweden)

    Abd Wahab Siti Rashidah Hanum

    2016-01-01

    Full Text Available High-rise residential building is a type of housing that has multi-dwelling units built on the same land. This type of housing has become more popular each year in urban areas due to the increasing cost of land. There are several common facilities provided in high-rise residential buildings, for example playgrounds, swimming pools, gymnasiums, and 24-hour security systems such as CCTV and access cards. Thus, maintenance works for the common facilities must be well organised. The purpose of this paper is to identify the classification of facilities provided in high-rise residential buildings. The survey was done on 170 high-rise residential schemes using a stratified random sampling technique. The scope of this research is within the Klang Valley area, which is rapidly being developed with high-rise residential buildings. The objective of this survey is to list all the facilities provided in each sampled scheme. As a result, nine classifications of facilities provided for high-rise residential buildings were identified.

  12. Classification of Flotation Frothers

    Directory of Open Access Journals (Sweden)

    Jan Drzymala

    2018-02-01

    Full Text Available In this paper, a scheme of flotation frother classification is presented. The scheme first indicates the physical system in which a frother is present and distinguishes four of them, i.e., the pure state, aqueous solution, aqueous solution/gas system and aqueous solution/gas/solid system. It results from the paper that a meaningful classification of frothers relies on choosing the physical system and next the feature, trend, parameter or parameters according to which the classification is performed. As a result, numerous classifications of flotation frothers are possible, all of which can be organized within the scheme described in detail in this paper. The proposed classification can play a useful role in characterizing and evaluating flotation frothers.

  13. The Performance-based Funding Scheme of Universities

    Directory of Open Access Journals (Sweden)

    Juha KETTUNEN

    2016-05-01

    Full Text Available The purpose of this study is to analyse the effectiveness of the performance-based funding scheme of the Finnish universities that was adopted at the beginning of 2013. The political decision-makers expect that the funding scheme will create incentives for the universities to improve performance, but these funding schemes have largely failed in many other countries, primarily because public funding is only a small share of the total funding of universities. This study is interesting because Finnish universities have no tuition fees, unlike in many other countries, and the state allocates funding based on the objectives achieved. The empirical evidence of the graduation rates indicates that graduation rates increased when a new scheme was adopted, especially among male students, who have more room for improvement than female students. The new performance-based funding scheme allocates the funding according to the output-based indicators and limits the scope of strategic planning and the autonomy of the university. The performance-based funding scheme is transformed to the strategy map of the balanced scorecard. The new funding scheme steers universities in many respects but leaves the research and teaching skills to the discretion of the universities. The new scheme has also diminished the importance of the performance agreements between the university and the Ministry. The scheme increases the incentives for universities to improve the processes and structures in order to attain as much public funding as possible. It is optimal for the central administration of the university to allocate resources to faculties and other organisational units following the criteria of the performance-based funding scheme. The new funding scheme has made the universities compete with each other, because the total funding to the universities is allocated to each university according to the funding scheme. 
There is a tendency that the funding schemes are occasionally

  14. A novel transferable individual tree crown delineation model based on Fishing Net Dragging and boundary classification

    Science.gov (United States)

    Liu, Tao; Im, Jungho; Quackenbush, Lindi J.

    2015-12-01

    This study provides a novel approach to individual tree crown delineation (ITCD) using airborne Light Detection and Ranging (LiDAR) data in dense natural forests using two main steps: crown boundary refinement based on a proposed Fishing Net Dragging (FiND) method, and segment merging based on boundary classification. FiND starts with approximate tree crown boundaries derived using a traditional watershed method with Gaussian filtering and refines these boundaries using an algorithm that mimics how a fisherman drags a fishing net. Random forest machine learning is then used to classify boundary segments into two classes: boundaries between trees and boundaries between branches that belong to a single tree. Three groups of LiDAR-derived features, two from the pseudo waveform generated along with crown boundaries and one from a canopy height model (CHM), were used in the classification. The proposed ITCD approach was tested using LiDAR data collected over a mountainous region in the Adirondack Park, NY, USA. The overall accuracy of boundary classification was 82.4%. Features derived from the CHM were generally more important in the classification than the features extracted from the pseudo waveform. A comprehensive accuracy assessment scheme for ITCD was also introduced by considering both the area of crown overlap and crown centroids. Accuracy assessment using this new scheme shows the proposed ITCD approach achieved overall accuracies of 74% and 78% for deciduous and mixed forests, respectively.

  15. An enhanced forest classification scheme for modeling vegetation-climate interactions based on national forest inventory data

    Science.gov (United States)

    Majasalmi, Titta; Eisner, Stephanie; Astrup, Rasmus; Fridman, Jonas; Bright, Ryan M.

    2018-01-01

Forest management affects the distribution of tree species and the age class of a forest, shaping its overall structure and functioning and in turn the surface-atmosphere exchanges of mass, energy, and momentum. In order to attribute climate effects to anthropogenic activities like forest management, good accounts of forest structure are necessary. Here, using Fennoscandia as a case study, we make use of Fennoscandic National Forest Inventory (NFI) data to systematically classify forest cover into groups of similar aboveground forest structure. An enhanced forest classification scheme and related lookup table (LUT) of key forest structural attributes (i.e., maximum growing season leaf area index (LAImax), basal-area-weighted mean tree height, tree crown length, and total stem volume) was developed, and the classification was applied to multisource NFI (MS-NFI) maps from Norway, Sweden, and Finland. To provide a complete surface representation, our product was integrated with the European Space Agency Climate Change Initiative Land Cover (ESA CCI LC) map of present-day land cover (v.2.0.7). Comparison of the ESA LC and our enhanced LC products (https://doi.org/10.21350/7zZEy5w3) showed that forest extent differed notably between the two products (κ = 0.55, accuracy 0.64). To demonstrate the potential of our enhanced LC product to improve the description of the maximum growing season LAI (LAImax) of managed forests in Fennoscandia, we compared our LAImax map with reference LAImax maps created using the ESA LC product (and related cross-walking table) and PFT-dependent LAImax values used in three leading land models. Comparison of the LAImax maps showed that our product provides a spatially more realistic description of LAImax in managed Fennoscandian forests compared to the reference maps. This study presents an approach to account for the transient nature of forest structural attributes due to human intervention in different land models.

  16. Hyperspectral Image Classification Based on the Combination of Spatial-spectral Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    YANG Zhaoxia

    2015-07-01

Full Text Available In order to avoid over-dependence on high-dimensional spectral features in traditional hyperspectral image classification, a novel approach based on the combination of spatial-spectral features and sparse representation is proposed in this paper. Firstly, we extract the spatial-spectral feature by reorganizing the local image patch with the first d principal components (PCs) into a vector representation, followed by a sorting scheme to make the vector invariant to local image rotation. Secondly, we learn the dictionary through a supervised method, and use it to code the features from test samples afterwards. Finally, we embed the resulting sparse feature coding into the support vector machine (SVM) for hyperspectral image classification. Experiments using three hyperspectral datasets show that the proposed method can effectively improve the classification accuracy compared with traditional classification methods.
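The sorting scheme that makes the patch vector rotation invariant can be sketched in a few lines. The patch size, number of PCs, and per-band sorting below are assumptions about the paper's procedure, not a faithful reimplementation:

```python
import numpy as np

def spatial_spectral_feature(pc_cube, row, col, d=3, w=5):
    """Rotation-invariant spatial-spectral feature (sketch).

    pc_cube: H x W x D array of principal-component scores.
    The w x w neighbourhood of (row, col) over the first d PCs is taken,
    and the w*w spatial samples of each PC band are sorted. Sorting
    discards spatial order, so any rotation of the local patch that
    permutes the samples yields the same vector."""
    half = w // 2
    patch = pc_cube[row - half:row + half + 1, col - half:col + half + 1, :d]
    return np.concatenate([np.sort(patch[:, :, k].ravel()) for k in range(d)])
```

Rotation invariance holds because a rotation of the neighbourhood only permutes the samples within each band, and sorting cancels any permutation.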

  17. MeMoVolc report on classification and dynamics of volcanic explosive eruptions

    Science.gov (United States)

    Bonadonna, C.; Cioni, R.; Costa, A.; Druitt, T.; Phillips, J.; Pioli, L.; Andronico, D.; Harris, A.; Scollo, S.; Bachmann, O.; Bagheri, G.; Biass, S.; Brogi, F.; Cashman, K.; Dominguez, L.; Dürig, T.; Galland, O.; Giordano, G.; Gudmundsson, M.; Hort, M.; Höskuldsson, A.; Houghton, B.; Komorowski, J. C.; Küppers, U.; Lacanna, G.; Le Pennec, J. L.; Macedonio, G.; Manga, M.; Manzella, I.; Vitturi, M. de'Michieli; Neri, A.; Pistolesi, M.; Polacci, M.; Ripepe, M.; Rossi, E.; Scheu, B.; Sulpizio, R.; Tripoli, B.; Valade, S.; Valentine, G.; Vidal, C.; Wallenstein, N.

    2016-11-01

Classifications of volcanic eruptions were first introduced in the early twentieth century mostly based on qualitative observations of eruptive activity, and over time, they have gradually been developed to incorporate more quantitative descriptions of the eruptive products from both deposits and observations of active volcanoes. Progress in physical volcanology, and increased capability in monitoring, measuring and modelling of explosive eruptions, have highlighted shortcomings in the way we classify eruptions and triggered a debate around the need for eruption classification and the advantages and disadvantages of existing classification schemes. Here, we (i) review and assess existing classification schemes, focussing on subaerial eruptions; (ii) summarize the fundamental processes that drive and parameters that characterize explosive volcanism; (iii) identify and prioritize the main research that will improve the understanding, characterization and classification of volcanic eruptions and (iv) provide a roadmap for producing a rational and comprehensive classification scheme. In particular, classification schemes need to be objective-driven and simple enough to permit scientific exchange and promote transfer of knowledge beyond the scientific community. Schemes should be comprehensive and encompass a variety of products, eruptive styles and processes, including, for example, lava flows, pyroclastic density currents, gas emissions and cinder cone or caldera formation. Open questions, processes and parameters that need to be addressed and better characterized in order to develop more comprehensive classification schemes and to advance our understanding of volcanic eruptions include conduit processes and dynamics, abrupt transitions in eruption regime, unsteadiness, eruption energy and energy balance.

  18. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high-accuracy classifier. Hence, classification techniques are very useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  19. Time-and-ID-Based Proxy Reencryption Scheme

    Directory of Open Access Journals (Sweden)

    Kambombo Mtonga

    2014-01-01

Full Text Available A time- and ID-based proxy reencryption scheme is proposed in this paper. Type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust in the proxy. However, in some applications, the time within which the data was sampled or collected is very critical. In such applications, for example, healthcare and criminal investigations, the delegatee may be interested in only some of the messages with some types sampled within some time bound instead of the entire subset. Hence, in order to cater for such situations, in this paper, we propose a time-and-identity-based proxy reencryption scheme that takes into account the time within which the data was collected as a factor to consider when categorizing data in addition to its type. Our scheme is based on the Boneh and Boyen identity-based scheme (BB-IBE) and Matsuo's proxy reencryption scheme for identity-based encryption (IBE) to IBE. We prove that our scheme is semantically secure in the standard model.

  20. Functional Basis of Microorganism Classification.

    Science.gov (United States)

    Zhu, Chengsheng; Delmont, Tom O; Vogel, Timothy M; Bromberg, Yana

    2015-08-01

    Correctly identifying nearest "neighbors" of a given microorganism is important in industrial and clinical applications where close relationships imply similar treatment. Microbial classification based on similarity of physiological and genetic organism traits (polyphasic similarity) is experimentally difficult and, arguably, subjective. Evolutionary relatedness, inferred from phylogenetic markers, facilitates classification but does not guarantee functional identity between members of the same taxon or lack of similarity between different taxa. Using over thirteen hundred sequenced bacterial genomes, we built a novel function-based microorganism classification scheme, functional-repertoire similarity-based organism network (FuSiON; flattened to fusion). Our scheme is phenetic, based on a network of quantitatively defined organism relationships across the known prokaryotic space. It correlates significantly with the current taxonomy, but the observed discrepancies reveal both (1) the inconsistency of functional diversity levels among different taxa and (2) an (unsurprising) bias towards prioritizing, for classification purposes, relatively minor traits of particular interest to humans. Our dynamic network-based organism classification is independent of the arbitrary pairwise organism similarity cut-offs traditionally applied to establish taxonomic identity. Instead, it reveals natural, functionally defined organism groupings and is thus robust in handling organism diversity. Additionally, fusion can use organism meta-data to highlight the specific environmental factors that drive microbial diversification. Our approach provides a complementary view to cladistic assignments and holds important clues for further exploration of microbial lifestyles. Fusion is a more practical fit for biomedical, industrial, and ecological applications, as many of these rely on understanding the functional capabilities of the microbes in their environment and are less concerned with
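The core of such a phenetic scheme is a quantitative pairwise relationship between functional repertoires. As a minimal sketch, a Jaccard overlap of function-identifier sets can serve as the pairwise measure; FuSiON's actual similarity measure and network construction are more involved:

```python
def functional_similarity(rep_a, rep_b):
    """Jaccard similarity of two functional repertoires, i.e. sets of
    function identifiers annotated to each organism's genome.
    Returns a value in [0, 1]; identical repertoires score 1.0."""
    a, b = set(rep_a), set(rep_b)
    return len(a & b) / len(a | b) if a | b else 1.0
```

Edges of the organism network would then connect organisms whose repertoire similarity is high, without any fixed taxonomic cut-off.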

  1. Application of a 5-tiered scheme for standardized classification of 2,360 unique mismatch repair gene variants in the InSiGHT locus-specific database

    DEFF Research Database (Denmark)

    Thompson, Bryony A; Spurdle, Amanda B; Plazzer, John-Paul

    2014-01-01

    and apply a standardized classification scheme to constitutional variants in the Lynch syndrome-associated genes MLH1, MSH2, MSH6 and PMS2. Unpublished data submission was encouraged to assist in variant classification and was recognized through microattribution. The scheme was refined by multidisciplinary...... are now possible for 1,370 variants that were not obviously protein truncating from nomenclature. This large-scale endeavor will facilitate the consistent management of families suspected to have Lynch syndrome and demonstrates the value of multidisciplinary collaboration in the curation......The clinical classification of hereditary sequence variants identified in disease-related genes directly affects clinical management of patients and their relatives. The International Society for Gastrointestinal Hereditary Tumours (InSiGHT) undertook a collaborative effort to develop, test...
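The 5-tiered scheme referred to here maps a variant's posterior probability of pathogenicity to a class. A sketch using the commonly cited quantitative thresholds (Plon et al.); the InSiGHT committee's criteria also include qualitative rules not captured below:

```python
def iarc_class(p_pathogenic):
    """Map a posterior probability of pathogenicity to the 5-tiered
    IARC class. Thresholds follow the widely cited quantitative scheme;
    treat them as illustrative of the approach, not as InSiGHT's full
    classification criteria."""
    if p_pathogenic > 0.99:
        return 5   # pathogenic
    if p_pathogenic >= 0.95:
        return 4   # likely pathogenic
    if p_pathogenic >= 0.05:
        return 3   # uncertain significance
    if p_pathogenic >= 0.001:
        return 2   # likely not pathogenic
    return 1       # not pathogenic
```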

  2. SPITZER IRS SPECTRA OF LUMINOUS 8 μm SOURCES IN THE LARGE MAGELLANIC CLOUD: TESTING COLOR-BASED CLASSIFICATIONS

    International Nuclear Information System (INIS)

    Buchanan, Catherine L.; Kastner, Joel H.; Hrivnak, Bruce J.; Sahai, Raghvendra

    2009-01-01

We present archival Spitzer Infrared Spectrograph (IRS) spectra of 19 luminous 8 μm selected sources in the Large Magellanic Cloud (LMC). The object classes derived from these spectra and from an additional 24 spectra in the literature are compared with classifications based on Two Micron All Sky Survey (2MASS)/MSX (J, H, K, and 8 μm) colors in order to test the 'JHK8' (Kastner et al.) classification scheme. The IRS spectra confirm the classifications of 22 of the 31 sources that can be classified under the JHK8 system. The spectroscopic classification of 12 objects that were unclassifiable in the JHK8 scheme allows us to characterize regions of the color-color diagrams that previously lacked spectroscopic verification, enabling refinements to the JHK8 classification system. The results of these new classifications are consistent with previous results concerning the identification of the most infrared-luminous objects in the LMC. In particular, while the IRS spectra reveal several new examples of asymptotic giant branch (AGB) stars with O-rich envelopes, such objects are still far outnumbered by carbon stars (C-rich AGB stars). We show that Spitzer IRAC/MIPS color-color diagrams provide improved discrimination between red supergiants and oxygen-rich and carbon-rich AGB stars relative to those based on 2MASS/MSX colors. These diagrams will enable the most luminous IR sources in Local Group galaxies to be classified with high confidence based on their Spitzer colors. Such characterizations of stellar populations will continue to be possible during Spitzer's warm mission through the use of IRAC [3.6]-[4.5] and 2MASS colors.

  3. SoFoCles: feature filtering for microarray classification based on gene ontology.

    Science.gov (United States)

    Papachristoudis, Georgios; Diplaris, Sotiris; Mitkas, Pericles A

    2010-02-01

    Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.

  4. Classification Scheme for Diverse Sedimentary and Igneous Rocks Encountered by MSL in Gale Crater

    Science.gov (United States)

    Schmidt, M. E.; Mangold, N.; Fisk, M.; Forni, O.; McLennan, S.; Ming, D. W.; Sumner, D.; Sautter, V.; Williams, A. J.; Gellert, R.

    2015-01-01

The Curiosity Rover landed in a lithologically and geochemically diverse region of Mars. We present a recommended rock classification framework based on terrestrial schemes, and adapted for the imaging and analytical capabilities of MSL as well as for rock types distinctive to Mars (e.g., high Fe sediments). After interpreting rock origin from textures, i.e., sedimentary (clastic, bedded), igneous (porphyritic, glassy), or unknown, the overall classification procedure (Fig 1) involves: (1) the characterization of rock type according to grain size and texture; (2) the assignment of geochemical modifiers according to Figs 3 and 4; and if applicable, in-depth study of (3) mineralogy and (4) geologic/stratigraphic context. Sedimentary rock types are assigned by measuring grains in the best available resolution image (Table 1) and classifying according to the coarsest resolvable grains as conglomerate/breccia, (coarse, medium, or fine) sandstone, siltstone, or mudstone. If grains are not resolvable in MAHLI images, grains in the rock are assumed to be silt sized or smaller than surface dust particles. Rocks with low color contrast between grains (e.g., Dismal Lakes, sol 304) are classified according to minimum size of apparent grains from surface roughness or shadows outlining apparent grains. Igneous rocks are described as intrusive or extrusive depending on crystal size and fabric. Igneous textures may be described as granular, porphyritic, phaneritic, aphyric, or glassy depending on crystal size. Further descriptors may include terms such as vesicular or cumulate textures.
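Step (1) for sedimentary rocks, classification by coarsest resolvable grain, can be sketched with a simplified Wentworth-style scale. The thresholds below are the standard textbook grain-size boundaries, not necessarily MSL's exact cutoffs:

```python
def sedimentary_rock_type(max_grain_mm):
    """Map the coarsest resolvable grain size (mm) to a sedimentary
    rock name, using simplified Wentworth-scale boundaries. Very coarse
    sand is lumped into 'coarse sandstone', and fine/very fine sand
    into 'fine sandstone', for brevity."""
    if max_grain_mm >= 2.0:
        return "conglomerate/breccia"
    if max_grain_mm >= 0.5:
        return "coarse sandstone"
    if max_grain_mm >= 0.25:
        return "medium sandstone"
    if max_grain_mm >= 0.0625:
        return "fine sandstone"
    if max_grain_mm >= 0.0039:
        return "siltstone"
    return "mudstone"
```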

  5. A new approach to develop computer-aided diagnosis scheme of breast mass classification using deep learning technology.

    Science.gov (United States)

    Qiu, Yuchen; Yan, Shiju; Gundreddy, Rohith Reddy; Wang, Yunzhi; Cheng, Samuel; Liu, Hong; Zheng, Bin

    2017-01-01

To develop and test a deep learning based computer-aided diagnosis (CAD) scheme of mammograms for classifying between malignant and benign masses. An image dataset involving 560 regions of interest (ROIs) extracted from digital mammograms was used. After down-sampling each ROI from 512×512 to 64×64 pixel size, we applied an 8-layer deep learning network that involves 3 pairs of convolution-max-pooling layers for automatic feature extraction and a multiple layer perceptron (MLP) classifier for feature categorization to process ROIs. The 3 pairs of convolution layers contain 20, 10, and 5 feature maps, respectively. Each convolution layer is connected with a max-pooling layer to improve the feature robustness. The output of the sixth layer is fully connected with an MLP classifier, which is composed of one hidden layer and one logistic regression layer. The network then generates a classification score to predict the likelihood of the ROI depicting a malignant mass. A four-fold cross validation method was applied to train and test this deep learning network. The results revealed that this CAD scheme yields an area under the receiver operating characteristic curve (AUC) of 0.696±0.044, 0.802±0.037, 0.836±0.036, and 0.822±0.035 for fold 1 to 4 testing datasets, respectively. The overall AUC of the entire dataset is 0.790±0.019. This study demonstrates the feasibility of applying a deep learning based CAD scheme to classify between malignant and benign breast masses without a lesion segmentation, image feature computation and selection process.

  6. A New Approach to Develop Computer-aided Diagnosis Scheme of Breast Mass Classification Using Deep Learning Technology

    Science.gov (United States)

    Qiu, Yuchen; Yan, Shiju; Gundreddy, Rohith Reddy; Wang, Yunzhi; Cheng, Samuel; Liu, Hong; Zheng, Bin

    2017-01-01

PURPOSE To develop and test a deep learning based computer-aided diagnosis (CAD) scheme of mammograms for classifying between malignant and benign masses. METHODS An image dataset involving 560 regions of interest (ROIs) extracted from digital mammograms was used. After down-sampling each ROI from 512×512 to 64×64 pixel size, we applied an 8-layer deep learning network that involves 3 pairs of convolution-max-pooling layers for automatic feature extraction and a multiple layer perceptron (MLP) classifier for feature categorization to process ROIs. The 3 pairs of convolution layers contain 20, 10, and 5 feature maps, respectively. Each convolution layer is connected with a max-pooling layer to improve the feature robustness. The output of the sixth layer is fully connected with an MLP classifier, which is composed of one hidden layer and one logistic regression layer. The network then generates a classification score to predict the likelihood of the ROI depicting a malignant mass. A four-fold cross validation method was applied to train and test this deep learning network. RESULTS The results revealed that this CAD scheme yields an area under the receiver operating characteristic curve (AUC) of 0.696±0.044, 0.802±0.037, 0.836±0.036, and 0.822±0.035 for fold 1 to 4 testing datasets, respectively. The overall AUC of the entire dataset is 0.790±0.019. CONCLUSIONS This study demonstrates the feasibility of applying a deep learning based CAD scheme to classify between malignant and benign breast masses without a lesion segmentation, image feature computation and selection process. PMID:28436410
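The spatial dimensions through the three convolution-max-pooling pairs can be traced with a short helper. The 5×5 kernels, valid padding, and 2×2 pooling below are assumptions, since the abstract does not state them:

```python
def conv_pool_output(size, n_pairs=3, kernel=5, pool=2):
    """Spatial size after n_pairs of (valid convolution + max-pooling).

    Assumes square inputs, square kernels, no padding, stride-1
    convolution, and non-overlapping pool x pool max-pooling."""
    for _ in range(n_pairs):
        size = (size - kernel + 1) // pool  # conv shrinks, pool halves
    return size
```

Under these assumptions a 64×64 ROI shrinks to 60→30, 26→13, then 9→4, so 4×4 feature maps reach the fully connected MLP.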

  7. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary...... classification research focus on contextual information as the guide for the design and construction of classification schemes....

  8. Cheese Classification, Characterization, and Categorization: A Global Perspective.

    Science.gov (United States)

    Almena-Aliste, Montserrat; Mietton, Bernard

    2014-02-01

    Cheese is one of the most fascinating, complex, and diverse foods enjoyed today. Three elements constitute the cheese ecosystem: ripening agents, consisting of enzymes and microorganisms; the composition of the fresh cheese; and the environmental conditions during aging. These factors determine and define not only the sensory quality of the final cheese product but also the vast diversity of cheeses produced worldwide. How we define and categorize cheese is a complicated matter. There are various approaches to cheese classification, and a global approach for classification and characterization is needed. We review current cheese classification schemes and the limitations inherent in each of the schemes described. While some classification schemes are based on microbiological criteria, others rely on descriptions of the technologies used for cheese production. The goal of this review is to present an overview of comprehensive and practical integrative classification models in order to better describe cheese diversity and the fundamental differences within cheeses, as well as to connect fundamental technological, microbiological, chemical, and sensory characteristics to contribute to an overall characterization of the main families of cheese, including the expanding world of American artisanal cheeses.

  9. Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.

    Science.gov (United States)

    Chen, Shizhi; Yang, Xiaodong; Tian, Yingli

    2015-09-01

A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. Learning-based classifiers achieve state-of-the-art accuracies, but have been criticized for computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree only grows sublinearly with the number of categories, which is much better than the recent hierarchical support vector machines-based methods. The memory requirement is an order of magnitude less than that of the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies, with significantly lower computation cost and memory requirements.
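The sublinear cost comes from descending a tree of cluster centers instead of scanning every class. A bare hierarchical k-means skeleton illustrates this; the discriminative re-weighting that distinguishes D-HKTree is omitted:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Lloyd's k-means with deterministic farthest-first initialisation
    (helper for the tree sketch)."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers, labels

def build_hktree(X, idx, k=2, leaf_size=4):
    """Recursive hierarchical k-means tree; leaves hold sample indices."""
    if len(idx) <= leaf_size:
        return {"leaf": idx}
    centers, labels = kmeans(X[idx], k)
    return {"centers": centers,
            "children": [build_hktree(X, idx[labels == j], k, leaf_size)
                         for j in range(k)]}

def query(tree, x):
    """Descend by nearest center at each level: cost grows with tree
    depth, i.e. sublinearly in the number of stored samples."""
    while "leaf" not in tree:
        j = int(np.argmin(((tree["centers"] - x) ** 2).sum(-1)))
        tree = tree["children"][j]
    return tree["leaf"]
```

A query touches only k centers per level rather than every stored sample, which is the source of the sublinear scaling claimed above.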

  10. A vegetation-based hierarchical classification for seasonally pulsed ...

    African Journals Online (AJOL)

    A classification scheme is presented for seasonal floodplains of the Boro-Xudum distributary of the Okavango Delta, Botswana. This distributary is subject to an annual flood-pulse, the inundated area varying from a mean low of 3 600 km2 to a mean high of 5 400 km2 between 2000 and 2006. A stratified random sample of ...

  11. Functional Basis of Microorganism Classification

    Science.gov (United States)

    Zhu, Chengsheng; Delmont, Tom O.; Vogel, Timothy M.; Bromberg, Yana

    2015-01-01

    Correctly identifying nearest “neighbors” of a given microorganism is important in industrial and clinical applications where close relationships imply similar treatment. Microbial classification based on similarity of physiological and genetic organism traits (polyphasic similarity) is experimentally difficult and, arguably, subjective. Evolutionary relatedness, inferred from phylogenetic markers, facilitates classification but does not guarantee functional identity between members of the same taxon or lack of similarity between different taxa. Using over thirteen hundred sequenced bacterial genomes, we built a novel function-based microorganism classification scheme, functional-repertoire similarity-based organism network (FuSiON; flattened to fusion). Our scheme is phenetic, based on a network of quantitatively defined organism relationships across the known prokaryotic space. It correlates significantly with the current taxonomy, but the observed discrepancies reveal both (1) the inconsistency of functional diversity levels among different taxa and (2) an (unsurprising) bias towards prioritizing, for classification purposes, relatively minor traits of particular interest to humans. Our dynamic network-based organism classification is independent of the arbitrary pairwise organism similarity cut-offs traditionally applied to establish taxonomic identity. Instead, it reveals natural, functionally defined organism groupings and is thus robust in handling organism diversity. Additionally, fusion can use organism meta-data to highlight the specific environmental factors that drive microbial diversification. Our approach provides a complementary view to cladistic assignments and holds important clues for further exploration of microbial lifestyles. Fusion is a more practical fit for biomedical, industrial, and ecological applications, as many of these rely on understanding the functional capabilities of the microbes in their environment and are less concerned

  12. Land Cover - Minnesota Land Cover Classification System

    Data.gov (United States)

    Minnesota Department of Natural Resources — Land cover data set based on the Minnesota Land Cover Classification System (MLCCS) coding scheme. This data was produced using a combination of aerial photograph...

  13. Classification of Ship Routing and Scheduling Problems in Liner Shipping

    DEFF Research Database (Denmark)

    Kjeldsen, Karina Hjortshøj

    2011-01-01

This article provides a classification scheme for ship routing and scheduling problems in liner shipping in line with the current and future operational conditions of the liner shipping industry. Based on the classification, the literature is divided into groups whose main characteristics …

  14. Acoustic classification of dwellings

    DEFF Research Database (Denmark)

    Berardi, Umberto; Rasmussen, Birgit

    2014-01-01

    insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms...... exchanging experiences about constructions fulfilling different classes, reducing trade barriers, and finally increasing the sound insulation of dwellings.......Schemes for the classification of dwellings according to different building performances have been proposed in the last years worldwide. The general idea behind these schemes relates to the positive impact a higher label, and thus a better performance, should have. In particular, focusing on sound...

  15. Classification Formula and Generation Algorithm of Cycle Decomposition Expression for Dihedral Groups

    Directory of Open Access Journals (Sweden)

    Dakun Zhang

    2013-01-01

Full Text Available The necessity of classification research on the common formulae for the cycle decomposition expressions of the dihedral group is illustrated. Covering both reflection and rotation conversions, six common formulae for the cycle decomposition expressions of the group are derived, and a generation algorithm for the cycle decomposition expressions of the group is designed, based on the method of replacement conversion and the classification formulae. Algorithm analysis and experimental results show that the generation algorithm based on the classification formulae outperforms the general algorithm based on replacement conversion. This has great significance for solving the enumeration of necklace combinational schemes, especially the structural problems of such schemes, using group theory and computers.
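For the rotations of D_n acting on n vertices (and, for odd n, its reflections), the cycle structure follows standard closed forms, which classification formulae of this kind encode. A sketch of the textbook versions:

```python
from math import gcd

def rotation_cycle_type(n, k):
    """Cycle structure of the rotation by k steps in the dihedral group
    D_n acting on n vertices: gcd(n, k) cycles, each of length
    n // gcd(n, k). (k = 0 gives the identity: n fixed points.)"""
    g = gcd(n, k % n)  # math.gcd(n, 0) == n, so the identity works too
    return g, n // g   # (number of cycles, cycle length)

def reflection_cycle_type(n):
    """Cycle structure of a reflection in D_n for odd n: every
    reflection fixes one vertex and pairs the rest into (n - 1) // 2
    transpositions. (Even n has two reflection types, not covered.)"""
    assert n % 2 == 1
    return 1, (n - 1) // 2  # (fixed points, 2-cycles)
```

Summing over all 2n group elements with such formulae is what makes Burnside-style necklace enumeration practical, as the abstract suggests.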

  16. Cost-based droop scheme for DC microgrid

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Wang, Peng; Loh, Poh Chiang

    2014-01-01

    voltage level, less on optimized operation and control of generation sources. The latter theme is perused in this paper, where cost-based droop scheme is proposed for distributed generators (DGs) in DC microgrids. Unlike traditional proportional power sharing based droop scheme, the proposed scheme......-connected operation. Most importantly, the proposed scheme can reduce overall total generation cost in DC microgrids without centralized controller and communication links. The performance of the proposed scheme has been verified under different load conditions.......DC microgrids are gaining interest due to higher efficiencies of DC distribution compared with AC. The benefits of DC systems have been widely researched for data centers, IT facilities and residential applications. The research focus, however, has been more on system architecture and optimal...
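The idea of a cost-based droop can be illustrated by scaling each DG's droop gain with its marginal generation cost, so cheaper sources droop less and pick up a larger share of load. The linear cost-to-gain mapping below is an illustrative assumption, not the paper's exact control law:

```python
def droop_voltage(v_nominal, power_out, marginal_cost, k=0.01):
    """Cost-based droop sketch for one DG in a DC microgrid: output
    voltage falls with delivered power, and the droop gain is scaled
    by the source's marginal generation cost (hypothetical units)."""
    return v_nominal - k * marginal_cost * power_out
```

At a common bus voltage, sources with lower marginal cost then naturally supply more power, reducing total generation cost without a centralized controller.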

  17. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    Science.gov (United States)

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. Then the scatter is added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected from laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83 % and 6.49 % respectively.
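One plausible reading of "adding the within-class scatter into the RBF kernel" is to set the kernel width from the scatter of the training features. The sketch below follows that reading; the paper's exact construction may differ:

```python
import numpy as np

def within_class_scatter(X, y):
    """Mean within-class scatter (trace form) of features X (n x d)
    with integer labels y."""
    s = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        s += ((Xc - Xc.mean(0)) ** 2).sum()
    return s / len(X)

def kf_rbf_kernel(x1, x2, scatter):
    """RBF kernel whose width is derived from the within-class scatter;
    this kernel would then replace the standard RBF inside the SVM."""
    return np.exp(-((x1 - x2) ** 2).sum() / (2.0 * scatter))
```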

  18. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to result in a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
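The second-stage fusion of soft results into a hard decision can be as simple as a weighted sum of per-model log-likelihoods followed by an argmax. Equal weights below are an assumption; the paper's fusion rule may weight the feature models differently:

```python
import numpy as np

def fuse_likelihoods(loglik_per_model, weights=None):
    """Fuse soft first-stage outputs into one hard genre decision.

    loglik_per_model: (n_models x n_genres) array of log-likelihoods,
    one row per feature model (e.g. timbre, rhythm, temporal variation).
    Returns the index of the winning genre."""
    L = np.asarray(loglik_per_model, dtype=float)
    if weights is None:
        weights = np.ones(L.shape[0])  # equal-weight fusion (assumed)
    return int(np.argmax(weights @ L))
```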

  19. Classification of proteins: available structural space for molecular modeling.

    Science.gov (United States)

    Andreeva, Antonina

    2012-01-01

    The wealth of available protein structural data provides an unprecedented opportunity to study and better understand the underlying principles of protein folding and protein structure evolution. A key to achieving this lies in the ability to analyse these data and to organize them in a coherent classification scheme. Over the past years several protein classifications have been developed that aim to group proteins based on their structural relationships. Some of these classification schemes explore the concept of structural neighbourhood (structural continuum), whereas others utilize the notion of protein evolution and thus provide a discrete rather than a continuum view of protein structure space. This chapter presents a strategy for classification of proteins with known three-dimensional structure. Steps in the classification process along with basic definitions are introduced. Examples illustrating some fundamental concepts of protein folding and evolution, with a special focus on the exceptions to them, are presented.

  20. Entropy-based gene ranking without selection bias for the predictive classification of microarray data

    Directory of Open Access Journals (Sweden)

    Serafini Maria

    2003-11-01

    Full Text Available Abstract Background We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as selection bias), in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process. Results With E-RFE, we speed up recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Conclusions Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
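The entropy-guided chunk elimination can be illustrated with a toy step; the entropy measure and the chunk-size rule below are simplified assumptions, not the paper's exact formulation:

```python
import numpy as np

def weight_entropy(w, bins=10):
    """Shannon entropy (bits) of the histogram of |SVM weights|."""
    hist, _ = np.histogram(np.abs(w), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def erfe_eliminate(weights, genes, frac_low_entropy=0.5, entropy_thresh=1.5):
    """Eliminate a chunk of low-|weight| genes in one step.

    When the weight distribution has low entropy (many genes are clearly
    uninteresting), a large chunk is dropped; otherwise fall back to
    one-at-a-time elimination, as in standard RFE.
    """
    order = np.argsort(np.abs(weights))              # ascending |weight|
    h = weight_entropy(weights)
    frac = frac_low_entropy if h < entropy_thresh else 1.0 / len(genes)
    k = max(1, int(frac * len(genes)))
    keep = np.sort(order[k:])
    return [genes[i] for i in keep]
```

In the full method this step is repeated inside each cross-validation run, retraining the SVM after every elimination.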

  1. AN ENSEMBLE TEMPLATE MATCHING AND CONTENT-BASED IMAGE RETRIEVAL SCHEME TOWARDS EARLY STAGE DETECTION OF MELANOMA

    Directory of Open Access Journals (Sweden)

    Spiros Kostopoulos

    2016-12-01

    Full Text Available Malignant melanoma represents the most dangerous type of skin cancer. In this study we present an ensemble classification scheme, employing mutual information, cross-correlation and clustering based on proximity of image features, for early stage assessment of melanomas on plain photography images. The proposed scheme performs two main operations. First, it retrieves the image samples most similar to the unknown case from an available image database with verified benign moles and malignant melanoma cases. Second, it provides an automated estimation regarding the nature of the unknown image sample based on the majority of the most similar images retrieved from the available database. Clinical material comprised 75 melanoma and 75 benign plain photography images collected from publicly available dermatological atlases. Results showed that the ensemble scheme outperformed all other methods tested in terms of accuracy, with 94.9±1.5%, following an external cross-validation evaluation methodology. The proposed scheme may benefit patients by providing a second opinion during the self-skin examination process, and physicians by providing a second opinion regarding the nature of suspicious moles that may assist decision making, especially for ambiguous cases, safeguarding in this way against potential diagnostic misinterpretations.

  2. Feature extraction based on extended multi-attribute profiles and sparse autoencoder for remote sensing image classification

    Science.gov (United States)

    Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman

    2018-02-01

    Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Due to limitations such as feature redundancy and the high dimensionality of the data, different classification methods have been proposed for remote sensing images, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method classifies various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE and linking them to a kernel support vector machine (SVM) for classification. Experiments on the new hyperspectral image "Huston data" and the multispectral image "Washington DC data" show that this new scheme achieves better feature learning performance than primitive features, traditional classifiers and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.

  3. Sound insulation and reverberation time for classrooms - Criteria in regulations and classification schemes in the Nordic countries

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2016-01-01

    Acoustic regulations or guidelines for schools exist in all five Nordic countries. The acoustic criteria depend on room uses and deal with airborne and impact sound insulation, reverberation time, sound absorption, traffic noise, service equipment noise and other acoustic performance...... have become more extensive and stricter during the last two decades. The paper focuses on comparison of sound insulation and reverberation time criteria for classrooms in regulations and classification schemes in the Nordic countries. Limit values and changes over time will be discussed as well as how...... not identical. The national criteria for quality level C correspond to the national regulations or recommendations for new-build. The quality levels A and B are intended to define better acoustic performance than C, and D lower performance. Typically, acoustic regulations and classification criteria for schools...

  4. A Topic Space Oriented User Group Discovering Scheme in Social Network: A Trust Chain Based Interest Measuring Perspective

    Directory of Open Access Journals (Sweden)

    Wang Dong

    2016-01-01

    Full Text Available Currently, user groups have become an effective platform for information sharing and communication among users in social network sites. In the present work, we propose a single-topic user group discovering scheme, which includes three phases: topic impact evaluation, interest degree measurement, and trust chain based discovering, to enable selecting an influential topic and discovering users into a topic oriented group. Our main contributions include (1) an overview of the proposed scheme and its related definitions; (2) a topic space construction method based on topic relatedness clustering and its impact (influence degree and popularity degree) evaluation; (3) a trust chain model that takes user relation network topological information into account with a strength classification perspective; (4) an interest degree (user explicit and implicit interest degree) evaluation method based on trust chain among users; and (5) a topic space oriented user group discovering method to group core users according to their explicit interest degrees and to predict ordinary users from implicit interest and the user trust chain. Finally, experimental results are given to demonstrate the effectiveness and feasibility of our scheme.

  5. Adaptive Image Transmission Scheme over Wavelet-Based OFDM System

    Institute of Scientific and Technical Information of China (English)

    GAO Xinying; YUAN Dongfeng; ZHANG Haixia

    2005-01-01

    In this paper an adaptive image transmission scheme is proposed over a Wavelet-based OFDM (WOFDM) system with Unequal error protection (UEP) through the design of a non-uniform signal constellation in MLC. Two different data division schemes, byte-based and bit-based, are analyzed and compared. In the bit-based data division scheme, different bits are protected unequally according to their contribution to image quality, which makes UEP combined with this scheme more powerful than with the byte-based scheme. Simulation results demonstrate that image transmission by UEP with the bit-based data division scheme presents much higher PSNR values and noticeably better image quality. Furthermore, considering the tradeoff between complexity and BER performance, the Haar wavelet, with the shortest compactly supported filter length, is the most suitable one among the orthogonal Daubechies wavelet series in our proposed system.

  6. Application of Bayesian Classification to Content-Based Data Management

    Science.gov (United States)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
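The record does not give the DAAC's classifier details, so the per-pixel Bayesian classification can only be sketched generically; the Gaussian class-conditional model and the two-band cloud/ocean example below are assumptions for illustration:

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Per-class mean, variance and prior for Gaussian naive Bayes."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def classify_pixels(pixels, params):
    """Label each pixel (one row of band radiances) by maximum posterior."""
    labels = np.empty(len(pixels), dtype=int)
    for i, x in enumerate(pixels):
        best, best_lp = -1, -np.inf
        for c, (mu, var, prior) in params.items():
            lp = np.log(prior) - 0.5 * np.sum(
                np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
            if lp > best_lp:
                best, best_lp = c, lp
        labels[i] = best
    return labels
```

Content-based subsetting then reduces to keeping only the pixels whose label matches the user's criteria (e.g. clear ocean pixels).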

  7. Centrifuge: rapid and sensitive classification of metagenomic sequences.

    Science.gov (United States)

    Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L

    2016-12-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. © 2016 Kim et al.; Published by Cold Spring Harbor Laboratory Press.
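The backward-search counting an FM index supports can be shown in miniature. This toy version builds the BWT by brute-force rotation sorting and stores full occurrence tables, so it only illustrates the idea; a real engine like Centrifuge uses compressed rank structures over genome-scale text:

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations ('$' terminates)."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(r[-1] for r in rotations)

def fm_index(bwt_str):
    """C table (first-column offsets) and Occ prefix counts."""
    chars = sorted(set(bwt_str))
    C, run = {}, 0
    for c in chars:
        C[c] = run
        run += bwt_str.count(c)
    occ = {c: [0] for c in chars}
    for ch in bwt_str:
        for c in chars:
            occ[c].append(occ[c][-1] + (1 if ch == c else 0))
    return C, occ

def count_occurrences(pattern, C, occ):
    """Backward search: number of times pattern occurs in the text."""
    lo, hi = 0, len(occ[next(iter(occ))]) - 1
    for ch in reversed(pattern):
        if ch not in C:
            return 0
        lo = C[ch] + occ[ch][lo]
        hi = C[ch] + occ[ch][hi]
        if lo >= hi:
            return 0
    return hi - lo
```

Classification of a read then amounts to running this search against an index built over the concatenated reference genomes.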

  8. A Classification Methodology and Retrieval Model to Support Software Reuse

    Science.gov (United States)

    1988-01-01

    Dewey Decimal Classification (DDC 18), an enumerative scheme, occupies 40 pages [Buchanan 1979]. Langridge [1973] states that the facets listed in the ... sense of historical importance or widespread use. The schemes are: Dewey Decimal Classification (DDC), Universal Decimal Classification (UDC), ...

  9. Analysis on Target Detection and Classification in LTE Based Passive Forward Scattering Radar

    Directory of Open Access Journals (Sweden)

    Raja Syamsul Azmir Raja Abdullah

    2016-09-01

    Full Text Available The passive bistatic radar (PBR) system can utilize an illuminator of opportunity to enhance radar capability. Utilizing the forward scattering technique within a specific mode of PBR can provide an improvement in target detection and classification. The system is known as passive Forward Scattering Radar (FSR). The passive FSR system can exploit the peculiar advantage of the enhancement in forward scatter radar cross section (FSRCS) for target detection. Thus, the aim of this paper is to show the feasibility of passive FSR for moving target detection and classification by experimental analysis and results. The signal source comes from the latest technology of 4G Long-Term Evolution (LTE) base stations. A detailed explanation of the passive FSR receiver circuit, the detection scheme and the classification algorithm is given. In addition, the proposed passive FSR circuit employs the self-mixing technique at the receiver; hence the synchronization signal from the transmitter is not required. The experimental results confirm the passive FSR system’s capability for ground target detection and classification. Furthermore, this paper illustrates the first classification result in the passive FSR system. The great potential of the passive FSR system provides a new research area in passive radar that can be used for diverse remote monitoring applications.

  10. Analysis on Target Detection and Classification in LTE Based Passive Forward Scattering Radar.

    Science.gov (United States)

    Raja Abdullah, Raja Syamsul Azmir; Abdul Aziz, Noor Hafizah; Abdul Rashid, Nur Emileen; Ahmad Salah, Asem; Hashim, Fazirulhisyam

    2016-09-29

    The passive bistatic radar (PBR) system can utilize an illuminator of opportunity to enhance radar capability. Utilizing the forward scattering technique within a specific mode of PBR can provide an improvement in target detection and classification. The system is known as passive Forward Scattering Radar (FSR). The passive FSR system can exploit the peculiar advantage of the enhancement in forward scatter radar cross section (FSRCS) for target detection. Thus, the aim of this paper is to show the feasibility of passive FSR for moving target detection and classification by experimental analysis and results. The signal source comes from the latest technology of 4G Long-Term Evolution (LTE) base stations. A detailed explanation of the passive FSR receiver circuit, the detection scheme and the classification algorithm is given. In addition, the proposed passive FSR circuit employs the self-mixing technique at the receiver; hence the synchronization signal from the transmitter is not required. The experimental results confirm the passive FSR system's capability for ground target detection and classification. Furthermore, this paper illustrates the first classification result in the passive FSR system. The great potential of the passive FSR system provides a new research area in passive radar that can be used for diverse remote monitoring applications.

  11. An Efficient Homomorphic Aggregate Signature Scheme Based on Lattice

    Directory of Open Access Journals (Sweden)

    Zhengjun Jing

    2014-01-01

    Full Text Available Homomorphic aggregate signature (HAS) is a linearly homomorphic signature (LHS) for multiple users, which can be applied for a variety of purposes, such as multi-source network coding and sensor data aggregation. In order to design an efficient postquantum secure HAS scheme, we borrow the idea of the lattice-based LHS scheme over a binary field in the single-user case, and develop it into a new lattice-based HAS scheme in this paper. The security of the proposed scheme is proved by showing a reduction to the single-user case, and the signature length remains invariant. Compared with the existing lattice-based homomorphic aggregate signature scheme, our new scheme enjoys a shorter signature length and higher efficiency.

  12. Clinical presentation and outcome prediction of clinical, serological, and histopathological classification schemes in ANCA-associated vasculitis with renal involvement.

    Science.gov (United States)

    Córdova-Sánchez, Bertha M; Mejía-Vilet, Juan M; Morales-Buenrostro, Luis E; Loyola-Rodríguez, Georgina; Uribe-Uribe, Norma O; Correa-Rotter, Ricardo

    2016-07-01

    Several classification schemes have been developed for anti-neutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV), with current debate focusing on their clinical and prognostic performance. Sixty-two patients with renal biopsy-proven AAV from a single center in Mexico City, diagnosed between 2004 and 2013, were analyzed and classified under clinical (granulomatosis with polyangiitis [GPA], microscopic polyangiitis [MPA], renal limited vasculitis [RLV]), serological (proteinase 3 anti-neutrophil cytoplasmic antibodies [PR3-ANCA], myeloperoxidase anti-neutrophil cytoplasmic antibodies [MPO-ANCA], ANCA negative), and histopathological (focal, crescentic, mixed-type, sclerosing) categories. Clinical presentation parameters were compared at baseline between classification groups, and the predictive value of the different classification categories for disease and renal remission, relapse, and renal and patient survival was analyzed. Serological classification predicted relapse rate (PR3-ANCA hazard ratio for relapse 2.93, 1.20-7.17, p = 0.019). There were no differences in disease or renal remission, or in renal or patient survival, between clinical and serological categories. Histopathological classification predicted response to therapy, with a poorer renal remission rate for the sclerosing group and those with less than 25 % normal glomeruli; in addition, it adequately delimited 24-month estimated glomerular filtration rate (eGFR) evolution, but it predicted neither renal nor patient survival. On multivariate models, renal replacement therapy (RRT) requirement (HR 8.07, CI 1.75-37.4, p = 0.008) and proteinuria (HR 1.49, CI 1.03-2.14, p = 0.034) at presentation predicted renal survival, while age (HR 1.10, CI 1.01-1.21, p = 0.041) and infective events during the induction phase (HR 4.72, 1.01-22.1, p = 0.049) negatively influenced patient survival. At present, ANCA-based serological classification may predict AAV relapses, but neither clinical nor serological

  13. SAR Imagery Simulation of Ship Based on Electromagnetic Calculations and Sea Clutter Modelling for Classification Applications

    International Nuclear Information System (INIS)

    Ji, K F; Zhao, Z; Xing, X W; Zou, H X; Zhou, S L

    2014-01-01

    Ship detection and classification with space-borne SAR has many potential applications within maritime surveillance, fishery activity management, ship traffic monitoring, and military security. While ship detection techniques with SAR imagery are well established, ship classification is still an open issue. One of the main reasons may be ascribed to the difficulties in acquiring the required quantities of real vessel data under different observation and environmental conditions with precise ground truth. Therefore, simulation of SAR images with high scenario flexibility and reasonable computation costs is essential for the development of ship classification algorithms. However, the simulation of SAR imagery of a ship over the sea surface is challenging. Though great efforts have been devoted to tackling this difficult problem, it is far from being conquered. This paper proposes a novel scheme for SAR imagery simulation of a ship over the sea surface. The simulation is implemented based on the high frequency electromagnetic calculation methods of PO, MEC, PTD and GO. SAR imagery of sea clutter is modelled by the representative K-distribution clutter model. Then, the simulated SAR imagery of the ship can be produced by inserting the simulated SAR imagery chips of the ship into the SAR imagery of sea clutter. The proposed scheme has been validated with canonical and complex ship targets over a typical sea scene.
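K-distributed clutter amplitudes are commonly sampled through the compound representation, a gamma-distributed texture modulating unit-mean Rayleigh speckle; the sketch below uses that standard construction with illustrative parameters, not the paper's specific clutter settings:

```python
import numpy as np

def k_clutter(shape_nu, mean_power, n, rng):
    """Sample n K-distributed clutter amplitudes.

    Compound model: amplitude = sqrt(texture) * speckle, with
    gamma-distributed texture (shape nu) and Rayleigh speckle scaled
    to unit mean, so mean_power sets the overall clutter level.
    """
    texture = rng.gamma(shape_nu, mean_power / shape_nu, size=n)
    speckle = rng.rayleigh(scale=np.sqrt(2.0 / np.pi), size=n)  # unit mean
    return np.sqrt(texture) * speckle
```

Small `shape_nu` gives spiky (heavy-tailed) clutter; as `shape_nu` grows the distribution approaches pure Rayleigh speckle.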

  14. Classification of Metal-Deficient Dwarfs in the Vilnius Photometric System

    Directory of Open Access Journals (Sweden)

    Lazauskaitė R.

    2003-12-01

    Full Text Available Methods used for the quantitative classification of metal-deficient stars in the Vilnius photometric system are reviewed. We present a new calibration of absolute magnitudes for dwarfs and subdwarfs, based on Hipparcos parallaxes. The new classification scheme is applied to a sample of Population II visual binaries.

  15. The impact of catchment source group classification on the accuracy of sediment fingerprinting outputs.

    Science.gov (United States)

    Pulley, Simon; Foster, Ian; Collins, Adrian L

    2017-06-01

    The objective classification of sediment source groups is at present an under-investigated aspect of source tracing studies, which has the potential to statistically improve discrimination between sediment sources and reduce uncertainty. This paper investigates this potential using three different source group classification schemes. The first classification scheme was simple surface and subsurface groupings (Scheme 1). The tracer signatures were then used in a two-step cluster analysis to identify the sediment source groupings naturally defined by the tracer signatures (Scheme 2). The cluster source groups were then modified by splitting each one into a surface and subsurface component to suit catchment management goals (Scheme 3). The schemes were tested using artificial mixtures of sediment source samples. Controlled corruptions were made to some of the mixtures to mimic the potential causes of tracer non-conservatism present when using tracers in natural fluvial environments. It was determined how accurately the known proportions of sediment sources in the mixtures were identified after unmixing modelling using the three classification schemes. The cluster analysis derived source groups (Scheme 2) significantly increased tracer variability ratios (inter-/intra-source group variability) (up to 2122%, median 194%) compared to the surface and subsurface groupings (Scheme 1). As a result, the composition of the artificial mixtures was identified an average of 9.8% more accurately on the 0-100% contribution scale. It was found that the cluster groups could be reclassified into a surface and subsurface component (Scheme 3) with no significant increase in composite uncertainty (a 0.1% increase over Scheme 2). The far smaller effect of simulated tracer non-conservatism for the cluster analysis based schemes (Schemes 2 and 3) was primarily attributed to the increased inter-group variability producing a far larger sediment source signal than the non-conservatism noise.
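The unmixing step can be illustrated for the simplest two-source case, where the least-squares proportion has a closed form; real studies use more sources, more tracers and Monte Carlo uncertainty analysis, so this is only a sketch:

```python
import numpy as np

def unmix_two_sources(mixture, sig_a, sig_b):
    """Least-squares proportion p of source A in a two-source mixture.

    Minimizes ||mixture - (p * sig_a + (1 - p) * sig_b)||^2 and clips
    the solution to the physically meaningful [0, 1] range.
    """
    d = sig_a - sig_b
    p = float(np.dot(mixture - sig_b, d) / np.dot(d, d))
    return min(1.0, max(0.0, p))
```

The greater the separation between the source signatures (the inter-group variability the paper emphasizes), the more robust this estimate is to tracer noise.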

  16. A new gamma-ray burst classification scheme from GRB 060614.

    Science.gov (United States)

    Gehrels, N; Norris, J P; Barthelmy, S D; Granot, J; Kaneko, Y; Kouveliotou, C; Markwardt, C B; Mészáros, P; Nakar, E; Nousek, J A; O'Brien, P T; Page, M; Palmer, D M; Parsons, A M; Roming, P W A; Sakamoto, T; Sarazin, C L; Schady, P; Stamatikos, M; Woosley, S E

    2006-12-21

    Gamma-ray bursts (GRBs) are known to come in two duration classes, separated at approximately 2 s. Long-duration bursts originate from star-forming regions in galaxies, have accompanying supernovae when these are near enough to observe and are probably caused by massive-star collapsars. Recent observations show that short-duration bursts originate in regions within their host galaxies that have lower star-formation rates, consistent with binary neutron star or neutron star-black hole mergers. Moreover, although their hosts are predominantly nearby galaxies, no supernovae have been so far associated with short-duration GRBs. Here we report that the bright, nearby GRB 060614 does not fit into either class. Its approximately 102-s duration groups it with long-duration GRBs, while its temporal lag and peak luminosity fall entirely within the short-duration GRB subclass. Moreover, very deep optical observations exclude an accompanying supernova, similar to short-duration GRBs. This combination of a long-duration event without an accompanying supernova poses a challenge to both the collapsar and the merging-neutron-star interpretations and opens the door to a new GRB classification scheme that straddles both long- and short-duration bursts.

  17. Immunophenotype Discovery, Hierarchical Organization, and Template-based Classification of Flow Cytometry Samples

    Directory of Open Access Journals (Sweden)

    Ariful Azad

    2016-08-01

    Full Text Available We describe algorithms for discovering immunophenotypes from large collections of flow cytometry (FC) samples, and for using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (groups of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust since templates describe phenotypic signatures common to cell populations in several samples, while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several data sets, including one representing a healthy immune system and one of Acute Myeloid Leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML, and were able to distinguish Acute Promyelocytic Leukemia from other subtypes of AML.

  18. Hyperspectral Image Classification Using Discriminative Dictionary Learning

    International Nuclear Information System (INIS)

    Zongze, Y; Hao, S; Kefeng, J; Huanxin, Z

    2014-01-01

    The hyperspectral image (HSI) processing community has witnessed a surge of papers focusing on the utilization of sparse priors for effective HSI classification. In sparse representation based HSI classification, there are two phases: sparse coding with an over-complete dictionary, and classification. In this paper, we first apply a novel Fisher discriminative dictionary learning method, which captures the relative differences between classes. The competitive selection strategy ensures that atoms in the resulting over-complete dictionary are the most discriminative. Secondly, motivated by the assumption that spatially adjacent samples are statistically related and may even belong to the same material (same class), we propose a majority voting scheme incorporating contextual information to predict the category label. Experimental results show that the proposed method can effectively strengthen the relative discrimination of the constructed dictionary, and that, combined with the majority voting scheme, it generally achieves improved prediction performance.
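The contextual majority voting can be sketched as a label-map smoothing pass; the 3x3 neighbourhood and tie-breaking rule below are assumptions, since the abstract does not specify them:

```python
import numpy as np

def majority_vote_smooth(label_map):
    """Relabel each pixel by the majority label in its 3x3 neighbourhood
    (ties resolved toward the smallest label)."""
    h, w = label_map.shape
    out = label_map.copy()
    for i in range(h):
        for j in range(w):
            patch = label_map[max(0, i - 1):i + 2, max(0, j - 1):j + 2].ravel()
            vals, counts = np.unique(patch, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]
    return out
```

Applied to the per-pixel labels produced by the sparse-coding classifier, this suppresses isolated misclassifications inside homogeneous material regions.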

  19. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
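The harmonic product spectrum step can be sketched directly: multiply the magnitude spectrum by its integer-downsampled copies so that the bin where the harmonics line up dominates. Window choice and harmonic count below are illustrative:

```python
import numpy as np

def hps_pitch(signal, fs, n_harmonics=3):
    """Estimate the fundamental frequency (Hz) via the harmonic
    product spectrum of a single analysis window."""
    mag = np.abs(np.fft.rfft(signal))
    n = len(mag) // n_harmonics
    hps = mag[:n].copy()
    for r in range(2, n_harmonics + 1):
        hps *= mag[::r][:n]          # downsample by r and multiply
    peak = int(np.argmax(hps[1:])) + 1   # skip the DC bin
    return peak * fs / len(signal)
```

A pitch error measure, as used in the paper, could then compare the spectral energy at the estimated harmonics against the total energy.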

  20. Defuzzification Strategies for Fuzzy Classifications of Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Peter Hofmann

    2016-06-01

    Full Text Available The classes in fuzzy classification schemes are defined as fuzzy sets, partitioning the feature space through fuzzy rules defined by fuzzy membership functions. Applying fuzzy classification schemes in remote sensing allows each pixel or segment to be an incomplete member of more than one class simultaneously, i.e., one that does not fully meet all of the classification criteria of any single class. This can lead to fuzzy, ambiguous and uncertain class assignation, which is unacceptable for many applications, indicating the need for a reliable defuzzification method. Defuzzification in remote sensing has, to date, been performed by “crisp-assigning” each fuzzy-classified pixel or segment to the class for which it best fulfills the fuzzy classification rules, regardless of its classification fuzziness, uncertainty or ambiguity (maximum method). The defuzzification of an uncertain or ambiguous fuzzy classification leads to a more or less reliable crisp classification. In this paper the most common parameters for expressing classification uncertainty, fuzziness and ambiguity are analysed and discussed in terms of their ability to express the reliability of a crisp classification. This is done by means of a typical practical example from Object Based Image Analysis (OBIA).
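The maximum method, together with one simple ambiguity measure, can be sketched as follows; the margin-based ambiguity score is just one of the parameters of the kind the paper analyses, not its specific definition:

```python
import numpy as np

def defuzzify_max(memberships):
    """Crisp-assign each pixel/segment to its highest-membership class
    (maximum method) and report an ambiguity score: 1 minus the margin
    between the two largest memberships (rows = pixels, cols = classes)."""
    m = np.asarray(memberships, dtype=float)
    order = np.argsort(m, axis=1)
    rows = np.arange(len(m))
    labels = order[:, -1]
    margin = m[rows, order[:, -1]] - m[rows, order[:, -2]]
    return labels, 1.0 - margin
```

A high ambiguity score flags exactly the pixels whose crisp assignment the paper argues should be treated as unreliable.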

  1. Efficient Fingercode Classification

    Science.gov (United States)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm, an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates the various fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
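The record does not detail the pyramid-based search, but the common flavor of fast VQ searches, rejecting a candidate before its full distance is computed, can be illustrated with classic partial-distance elimination; this is a generic stand-in, not the authors' algorithm:

```python
import numpy as np

def partial_distance_nn(query, codebook):
    """Nearest neighbour with partial-distance elimination.

    Abandons a candidate as soon as its accumulated squared distance
    exceeds the best distance found so far; returns the same answer as
    full search while skipping many dimension computations.
    """
    best_idx, best_d = -1, np.inf
    for idx, vec in enumerate(codebook):
        d = 0.0
        for q, v in zip(query, vec):
            d += (q - v) ** 2
            if d >= best_d:          # cannot beat the current best
                break
        else:                        # survived every dimension: new best
            best_idx, best_d = idx, d
    return best_idx
```

Pyramid-based schemes push the same idea further by bounding distances with coarse (reduced-dimension) representations before touching the full fingercode vectors.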

  2. GIS/RS-based Rapid Reassessment for Slope Land Capability Classification

    Science.gov (United States)

    Chang, T. Y.; Chompuchan, C.

    2014-12-01

    Farmland resources in Taiwan are limited because about 73% of the island is mountainous slope land. Moreover, rapid urbanization and a dense population have resulted in highly developed flat areas. Therefore, the utilization of slope land for agriculture is increasingly needed. In 1976, the "Slope Land Conservation and Utilization Act" was promulgated to regulate slope land utilization. Consequently, slope land capability was categorized into Classes I-VI according to 4 criteria, i.e., average land slope, effective soil depth, degree of soil erosion, and parent rock. Slope land capability Classes I-IV are suitable for cultivation and pasture, whereas Class V should be used for forestry purposes and Class VI should be conservation land, which requires intensive conservation practices. A field survey was conducted to categorize each land unit under this classification scheme, and landowners may not use land beyond its capability limitation. In the last decade, typhoons and landslides have frequently devastated Taiwan, making rapid post-disaster reassessment of slope land capability classification necessary. However, the large scale of disasters on slope land constrains field investigation. This study focused on using satellite remote sensing and GIS as a rapid re-evaluation method. The Chenyulan watershed in Nantou County, Taiwan was selected as a case study area. Grid-based slope derivation, the topographic wetness index (TWI) and USLE soil loss calculation were used to classify slope land capability. The results showed that the GIS-based classification gives an overall accuracy of 68.32%. In addition, the post-disaster areas of Typhoon Morakot in 2009, interpreted from SPOT satellite imagery, were suggested for classification as conservation lands. These tools perform better for large-coverage post-disaster updates of slope land capability classification and reduce the time, manpower and material resources needed for field investigation.
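    A minimal sketch of the grid-based capability classification described above; the slope and soil-loss thresholds below are illustrative assumptions, not the statutory Taiwanese criteria:

```python
import numpy as np

# Toy 2x2 grids of per-cell inputs (assumed values for illustration).
slope_pct = np.array([[5.0, 18.0], [32.0, 60.0]])   # average land slope (%)
usle_loss = np.array([[8.0, 30.0], [90.0, 250.0]])  # USLE soil loss (t/ha/yr)

def capability_class(slope, loss):
    """Assign a capability class (1..6) per grid cell: steeper slopes and
    higher modelled soil loss push a cell toward Classes V-VI."""
    cls = np.ones_like(slope, dtype=int)
    cls[slope > 5] = 2
    cls[slope > 15] = 3
    cls[slope > 30] = 4
    cls[(slope > 40) | (loss > 100)] = 5
    cls[(slope > 55) & (loss > 200)] = 6
    return cls

print(capability_class(slope_pct, usle_loss))
```

In the study's workflow the slope grid comes from a DEM and the USLE factors from rainfall, soil, and cover layers; only the final thresholding step is sketched here.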

  3. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  4. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases...... the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created......, the classifier is trained on each cluster, with reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...
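    The cluster-then-classify idea can be sketched as follows, with plain k-means and a toy nearest-class-mean classifier per cluster standing in for the paper's actual components:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "documents": 2-D feature vectors forming two vocabulary clusters,
# each cluster containing training examples of both labels (0/1).
X = np.vstack([rng.normal([0, 0], 0.3, (40, 2)),
               rng.normal([5, 5], 0.3, (40, 2))])
y = np.array([0] * 20 + [1] * 20 + [0] * 20 + [1] * 20)
X[y == 1] += 0.4          # give each cluster a separable label structure

def kmeans(X, init, iters=20):
    """Plain k-means; note the clustering step ignores the labels y."""
    centers = X[list(init)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[assign == j].mean(axis=0)
                            for j in range(len(centers))])
    return centers, assign

centers, cluster_of = kmeans(X, init=(0, 40))

# One small classifier per cluster (here: nearest class mean).
class_means = {j: {c: X[(cluster_of == j) & (y == c)].mean(axis=0)
                   for c in (0, 1)}
               for j in (0, 1)}

def predict(x):
    j = int(np.argmin(((centers - x) ** 2).sum(-1)))   # route to a cluster
    m = class_means[j]
    return min(m, key=lambda c: ((m[c] - x) ** 2).sum())

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print(acc)
```

Each test example only ever touches one small per-cluster model, which is the source of the simplicity gain the abstract describes.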

  5. Atmospheric circulation classification comparison based on wildfires in Portugal

    Science.gov (United States)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not a simple description of atmospheric states but a tool to understand and interpret the atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Radan Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, promoting atmospheric circulation classification into one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for the study of the role of weather in wildfire occurrence in Portugal because the daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved to be quite useful in discriminating the occurrence and development of wildfires as well as the distribution over Portugal of surface climatic variables with impact on wildfire activity, such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological

  6. Update on diabetes classification.

    Science.gov (United States)

    Thomas, Celeste C; Philipson, Louis H

    2015-01-01

    This article highlights the difficulties in creating a definitive classification of diabetes mellitus in the absence of a complete understanding of the pathogenesis of the major forms. This brief review shows the evolving nature of the classification of diabetes mellitus. No classification scheme is ideal, and all have some overlap and inconsistencies. The only form of diabetes that can be accurately diagnosed by DNA sequencing, monogenic diabetes, remains undiagnosed in more than 90% of the individuals who have diabetes caused by one of the known gene mutations. The point of classification, or taxonomy, of disease should be to give insight into both pathogenesis and treatment. It remains a source of frustration that all schemes of diabetes mellitus continue to fall short of this goal. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Classification and global distribution of ocean precipitation types based on satellite passive microwave signatures

    Science.gov (United States)

    Gautam, Nitin

    The main objectives of this thesis are to develop a robust statistical method for the classification of ocean precipitation based on physical properties to which the SSM/I is sensitive and to examine how these properties vary globally and seasonally. A two step approach is adopted for the classification of oceanic precipitation classes from multispectral SSM/I data: (1) we subjectively define precipitation classes using a priori information about the precipitating system and its possible distinct signature on SSM/I data, such as scattering by ice particles aloft in the precipitating cloud, emission by liquid rain water below the freezing level, the difference of polarization at 19 GHz (an indirect measure of optical depth), etc.; (2) we then develop an objective classification scheme which is found to reproduce the subjective classification with high accuracy. This hybrid strategy allows us to use the characteristics of the data to define and encode classes and helps retain the physical interpretation of classes. Classification methods based on k-nearest neighbor and neural network approaches are developed to objectively classify six precipitation classes. It is found that the classification method based on the neural network yields high accuracy for all precipitation classes. An inversion method based on a minimum variance approach was used to retrieve gross microphysical properties of these precipitation classes such as column integrated liquid water path, column integrated ice water path, and column integrated rain water path. This classification method is then applied to 2 years (1991-92) of SSM/I data to examine and document the seasonal and global distribution of precipitation frequency corresponding to each of these objectively defined six classes. The characteristics of the distribution are found to be consistent with assumptions used in defining these six precipitation classes and also with well known climatological patterns of precipitation regions. The seasonal and global
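    The minimum-variance inversion step can be illustrated with a hypothetical linear forward model relating the three water paths to observations; the matrix K and the covariances below are assumptions for the sketch, not retrieved SSM/I physics:

```python
import numpy as np

# Hypothetical linear forward model y = K x relating microphysical
# properties x (liquid, ice, rain water paths) to observed channels y.
K = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.4, 1.0]])
x_true = np.array([0.8, 1.5, 0.6])
y_obs = K @ x_true

# Minimum-variance estimate with prior x_a and covariances S_a (prior)
# and S_e (observation noise).
x_a = np.zeros(3)
S_a = np.eye(3) * 100.0      # broad prior: let the data dominate
S_e = np.eye(3) * 1e-6       # nearly noise-free observations

G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)  # gain matrix
x_hat = x_a + G @ (y_obs - K @ x_a)
print(x_hat)
```

With a broad prior and low noise the gain approaches the inverse of K, so the estimate recovers the true water paths; with realistic noise the estimate is pulled toward the prior in proportion to the covariances.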

  8. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
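    The correlation-based selection over autoencoder weight vectors might look like the following sketch, where the synthetic weight matrix and the 0.95 threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical autoencoder weight matrix: 8 hidden units x 25 input weights,
# with units 4-7 built as near-copies of units 0-3 (redundant filters).
W = rng.normal(size=(8, 25))
W[4:] = W[:4] + 0.01 * rng.normal(size=(4, 25))

def select_uncorrelated(W, threshold=0.95):
    """Greedily keep a weight vector only if its absolute correlation with
    every already-kept vector stays below the threshold."""
    kept = []
    C = np.corrcoef(W)
    for i in range(len(W)):
        if all(abs(C[i, j]) < threshold for j in kept):
            kept.append(i)
    return kept

kept = select_uncorrelated(W)
print(kept)
```

Dropping near-duplicate filters before the convolutional stage is what yields the roughly 50% reduction in global feature extraction cost reported above.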

  9. PET/CT detectability and classification of simulated pulmonary lesions using an SUV correction scheme

    Science.gov (United States)

    Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven

    2008-03-01

    Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates the effects of PET acquisition mode, reconstruction method, and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images of an anthropomorphic phantom. The scheme accounts for the partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogeneous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch of the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer-drawn ROIs, scaled tumor-background ratios (TBRs) more accurately represented actual TBRs than unscaled TBRs. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes at the cost of a small decrease in specificity.
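    The PSF-blurring correction can be sketched in one dimension: blur the CT-drawn lesion mask with an assumed Gaussian PSF, measure how much signal remains inside the ROI, and use that recovery coefficient to rescale the SUV (all numbers below are illustrative, not the study's scanner parameters):

```python
import numpy as np

def gaussian_psf_1d(fwhm_mm, grid_mm):
    """Normalized 1-D Gaussian point spread function."""
    sigma = fwhm_mm / 2.355
    g = np.exp(-0.5 * (grid_mm / sigma) ** 2)
    return g / g.sum()

# A homogeneous lesion mask (the CT-drawn ROI) blurred with the scanner PSF
# estimates how much signal "spills out" of the ROI (partial volume effect).
grid = np.arange(-50.0, 50.5, 0.5)               # mm
lesion = (np.abs(grid) <= 5).astype(float)       # 10 mm lesion
psf = gaussian_psf_1d(fwhm_mm=6.0, grid_mm=grid)
blurred = np.convolve(lesion, psf, mode="same")

# Recovery coefficient: mean blurred intensity inside the ROI vs. true (1.0),
# giving the scaling factor that corrects the measured SUV.
rc = blurred[np.abs(grid) <= 5].mean()
measured_suv = 4.2 * rc            # what an uncorrected ROI would report
corrected_suv = measured_suv / rc  # scaling restores the true value
print(round(rc, 3), corrected_suv)
```

The smaller the lesion relative to the PSF, the smaller the recovery coefficient, which is why the abstract notes the correction is most sensitive to boundary and position errors for small lesions.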

  10. Comparing the performance of flat and hierarchical Habitat/Land-Cover classification models in a NATURA 2000 site

    Science.gov (United States)

    Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.

    2018-02-01

    The increasing need for high quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model at every branching point of the thematic tree, and then integrates all the different local models into a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all 3 classification schemes, both the hierarchical model and a flat model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In 2 out of 3 classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance which can shed light on "black-box" machine learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
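    The local-model-per-branching-point idea can be sketched with a toy two-level hierarchy; the hand-written rules below stand in for the per-node Random Forests of the paper, and the class names and features are invented for illustration:

```python
# A toy thematic tree: internal nodes map to their child classes.
tree = {
    "root": ["vegetated", "non-vegetated"],
    "vegetated": ["forest", "grassland"],
    "non-vegetated": ["water", "bare soil"],
}

def local_model(node, x):
    """Stand-in for a per-node classifier: picks one child of `node`
    from hand-written rules on two toy features (ndvi, wetness)."""
    ndvi, wetness = x
    if node == "root":
        return "vegetated" if ndvi > 0.3 else "non-vegetated"
    if node == "vegetated":
        return "forest" if ndvi > 0.6 else "grassland"
    return "water" if wetness > 0.5 else "bare soil"

def predict(x):
    node = "root"
    while node in tree:            # descend until a leaf class is reached
        node = local_model(node, x)
    return node

print(predict((0.8, 0.2)))   # -> forest
print(predict((0.1, 0.9)))   # -> water
```

Because every prediction descends the tree, a fine class can only be assigned inside its correct parent branch, which is the constraint that suppresses thematically distant misclassifications in the hierarchical model.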

  11. An assessment of commonly employed satellite-based remote sensors for mapping mangrove species in Mexico using an NDVI-based classification scheme.

    Science.gov (United States)

    Valderrama-Landeros, L; Flores-de-Santiago, F; Kovacs, J M; Flores-Verdugo, F

    2017-12-14

    Optimizing the classification accuracy of a mangrove forest is of utmost importance for conservation practitioners. Mangrove forest mapping using satellite-based remote sensing techniques is by far the most common method of classification currently used, given the logistical difficulties of field endeavors in these forested wetlands. However, there is now an abundance of options from which to choose with regard to satellite sensors, which has led to substantially different estimations of mangrove forest location and extent, with particular concern for degraded systems. The objective of this study was to assess the accuracy of mangrove forest classification using different remotely sensed data sources (i.e., Landsat-8, SPOT-5, Sentinel-2, and WorldView-2) for a system located along the Pacific coast of Mexico. Specifically, we examined a stressed semiarid mangrove forest which offers a variety of conditions such as dead areas, degraded stands, healthy mangroves, and very dense mangrove island formations. The results indicated that Landsat-8 (30 m per pixel) had the lowest overall accuracy at 64% and that WorldView-2 (1.6 m per pixel) had the highest at 93%. Moreover, the SPOT-5 and Sentinel-2 classifications (10 m per pixel) were very similar, with accuracies of 75 and 78%, respectively. In comparison to WorldView-2, the other sensors overestimated the extent of Laguncularia racemosa and underestimated the extent of Rhizophora mangle. For such sensors, higher spatial resolution can be particularly important in mapping the small mangrove islands that often occur in degraded mangrove systems.
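    An NDVI-based scheme of the kind assessed here reduces to computing NDVI per pixel from the red and near-infrared bands and binning the result; the reflectances and breakpoints below are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red); a toy 2x2 scene with hypothetical
# reflectances spanning dead, degraded, and healthy mangrove conditions.
nir = np.array([[0.12, 0.25], [0.40, 0.55]])
red = np.array([[0.10, 0.12], [0.08, 0.05]])
ndvi = (nir - red) / (nir + red)

# Illustrative NDVI breakpoints for the classification scheme
# (0 = dead ... 3 = healthy/dense).
labels = np.digitize(ndvi, bins=[0.2, 0.4, 0.6])
print(np.round(ndvi, 2))
print(labels)
```

The sensor comparison in the study then amounts to running the same per-pixel computation at each sensor's band placement and spatial resolution.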

  12. A Lattice-Based Identity-Based Proxy Blind Signature Scheme in the Standard Model

    Directory of Open Access Journals (Sweden)

    Lili Zhang

    2014-01-01

    Full Text Available A proxy blind signature scheme is a special form of blind signature which allows a designated person, called the proxy signer, to sign on behalf of an original signer without knowing the content of the message. It combines the advantages of proxy signatures and blind signatures. To date, most proxy blind signature schemes rely on hard number-theoretic problems, discrete logarithms, and bilinear pairings. Unfortunately, these underlying number-theoretic problems will be solvable in the postquantum era. Lattice-based cryptography is enjoying great interest these days, due to implementation simplicity and provable security reductions. Moreover, lattice-based cryptography is believed to be hard even for quantum computers. In this paper, we present a new identity-based proxy blind signature scheme from lattices without random oracles. The new scheme is proven to be strongly unforgeable under the standard hardness assumptions of the short integer solution problem (SIS) and the inhomogeneous small integer solution problem (ISIS). Furthermore, the secret key size and the signature length of our scheme are invariant and much shorter than those of previous lattice-based proxy blind signature schemes. To the best of our knowledge, our construction is the first short lattice-based identity-based proxy blind signature scheme in the standard model.

  13. Development and evaluation of a computer-aided diagnostic scheme for lung nodule detection in chest radiographs by means of two-stage nodule enhancement with support vector classification

    International Nuclear Information System (INIS)

    Chen Sheng; Suzuki, Kenji; MacMahon, Heber

    2011-01-01

    Purpose: To develop a computer-aided detection (CADe) scheme for nodules in chest radiographs (CXRs) with a high sensitivity and a low false-positive (FP) rate. Methods: The authors developed a CADe scheme consisting of five major steps, which were developed for improving the overall performance of CADe schemes. First, to segment the lung fields accurately, the authors developed a multisegment active shape model. Then, a two-stage nodule-enhancement technique was developed for improving the conspicuity of nodules. Initial nodule candidates were detected and segmented by using the clustering watershed algorithm. Thirty-one shape-, gray-level-, surface-, and gradient-based features were extracted from each segmented candidate for determining the feature space, including one new feature based on the Canny edge detector to eliminate a major FP source caused by rib crossings. Finally, a nonlinear support vector machine (SVM) with a Gaussian kernel was employed for classification of the nodule candidates. Results: To evaluate and compare the scheme to other published CADe schemes, the authors used a publicly available database containing 140 nodules in 140 CXRs and 93 normal CXRs. The CADe scheme based on the SVM classifier achieved sensitivities of 78.6% (110/140) and 71.4% (100/140) at averages of 5.0 (1165/233) FPs/image and 2.0 (466/233) FPs/image, respectively, in a leave-one-out cross-validation test, whereas the CADe scheme based on a linear discriminant analysis classifier had a sensitivity of 60.7% (85/140) at an FP rate of 5.0 FPs/image. For nodules classified as ''very subtle'' and ''extremely subtle,'' a sensitivity of 57.1% (24/42) was achieved at an FP rate of 5.0 FPs/image. When the authors used a database developed at the University of Chicago, the sensitivities were 83.3% (40/48) and 77.1% (37/48) at FP rates of 5.0 (240/48) and 2.0 (96/48) FPs/image, respectively.
Conclusions: These results compare favorably to those described for
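    The operating points reported above follow directly from the stated counts (detected nodules over total nodules, FP marks over total images); a quick check:

```python
# Sensitivity and FP-rate arithmetic behind the reported operating points.
def sensitivity(tp, total):
    """Sensitivity in percent: true positives / total positives."""
    return 100.0 * tp / total

def fp_rate(fps, images):
    """Average false-positive marks per image."""
    return fps / images

print(round(sensitivity(110, 140), 1))  # 78.6 (% at 5.0 FPs/image)
print(round(fp_rate(1165, 233), 1))     # 5.0
print(round(sensitivity(40, 48), 1))    # 83.3 (University of Chicago set)
```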

  14. Signature scheme based on bilinear pairs

    Science.gov (United States)

    Tong, Rui Y.; Geng, Yong J.

    2013-03-01

    An identity-based signature scheme is proposed by using bilinear pairing technology. The scheme uses the user's identity information, such as an email address, IP address, or telephone number, as the public key, so that it eliminates the cost of forming and managing a public key infrastructure, and, by using the CL-PKC framework to generate the user's private key, it avoids the problem of the private key generating center forging signatures.

  15. Dynamic Symmetric Key Mobile Commerce Scheme Based on Self-Verified Mechanism

    Directory of Open Access Journals (Sweden)

    Jiachen Yang

    2014-01-01

    Full Text Available In terms of the security and efficiency of mobile e-commerce, the authors summarized the advantages and disadvantages of several related schemes, especially the self-verified mobile payment scheme based on the elliptic curve cryptosystem (ECC), and then proposed a new type of dynamic symmetric key mobile commerce scheme based on a self-verified mechanism. The authors analyzed the basic algorithm based on self-verified mechanisms, detailed the complete transaction process of the proposed scheme, and analyzed the payment scheme with respect to security and efficiency. The analysis shows that the proposed scheme not only meets the efficiency requirements of mobile electronic payment but also takes security into account. The user confirmation mechanism at the end of the proposed scheme further strengthens its security. In brief, the proposed scheme is more efficient and practical than most of the existing schemes.

  16. GPGPU Accelerated Deep Object Classification on a Heterogeneous Mobile Platform

    Directory of Open Access Journals (Sweden)

    Syed Tahir Hussain Rizvi

    2016-12-01

    Full Text Available Deep convolutional neural networks achieve state-of-the-art performance in image classification. The computational and memory requirements of such networks are however huge, and that is an issue on embedded devices due to their constraints. Most of this complexity derives from the convolutional layers and in particular from the matrix multiplications they entail. This paper proposes a complete approach to image classification providing common layers used in neural networks. Namely, the proposed approach relies on a heterogeneous CPU-GPU scheme for performing convolutions in the transform domain. The Compute Unified Device Architecture (CUDA)-based implementation of the proposed approach is evaluated over three different image classification networks on a Tegra K1 CPU-GPU mobile processor. Experiments show that the presented heterogeneous scheme boasts a 50× speedup over the CPU-only reference and outperforms a GPU-based reference by 2×, while slashing the power consumption by nearly 30%.
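    Transform-domain convolution rests on the convolution theorem: a spatial convolution becomes an element-wise product after an FFT, which maps onto GPU-friendly batched multiplies. A CPU-side sketch with NumPy standing in for the CUDA kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(16, 16))
kernel = rng.normal(size=(3, 3))

# Direct (spatial) "full" convolution, implemented naively for reference.
def conv2d_full(img, k):
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(H):
        for j in range(W):
            out[i:i + kh, j:j + kw] += img[i, j] * k
    return out

# Same result via the frequency domain: zero-pad both operands to the
# full output size, multiply element-wise, and transform back.
shape = (16 + 3 - 1, 16 + 3 - 1)
freq = np.fft.rfft2(image, shape) * np.fft.rfft2(kernel, shape)
fft_conv = np.fft.irfft2(freq, shape)

assert np.allclose(conv2d_full(image, kernel), fft_conv)
```

The element-wise product replaces the sliding-window matrix multiplications the abstract identifies as the dominant cost, which is where the heterogeneous scheme gains its speedup.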

  17. Error function attack of chaos synchronization based encryption schemes.

    Science.gov (United States)

    Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu

    2004-03-01

    Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged from quality factor. Copyright 2004 American Institute of Physics.
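    The error function attack can be illustrated on a toy chaotic-masking cipher: scan trial keys, compute an error function against a known plaintext segment, and read off the key at the sharp minimum. The logistic-map cipher and known initial condition below are simplifying assumptions for the sketch, not the coupled-map-lattice scheme the paper favors:

```python
import numpy as np

# Keystream x_t from a logistic map keyed by the secret parameter r;
# ciphertext c_t = m_t + x_t (chaotic masking).
def keystream(r, x0=0.37, n=50):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)   # logistic map iteration
        xs[i] = x
    return xs

r_true = 3.91
msg = np.sin(np.linspace(0.0, 6.0, 50))   # known plaintext segment
cipher = msg + keystream(r_true)

# Error function attack: the mean decryption error over trial keys has a
# sharp minimum at the true key, because chaotic orbits diverge quickly
# for even slightly wrong parameters.
trial_keys = np.arange(3.80, 4.00, 0.001)
errors = [np.mean(np.abs(cipher - keystream(r) - msg)) for r in trial_keys]
recovered = trial_keys[int(np.argmin(errors))]
print(recovered)
```

The sharpness of this minimum is one ingredient of the paper's quality factor: a secure scheme should make the error landscape flat, so that scanning keys reveals nothing.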

  18. TFOS DEWS II Definition and Classification Report.

    Science.gov (United States)

    Craig, Jennifer P; Nichols, Kelly K; Akpek, Esen K; Caffery, Barbara; Dua, Harminder S; Joo, Choun-Ki; Liu, Zuguo; Nelson, J Daniel; Nichols, Jason J; Tsubota, Kazuo; Stapleton, Fiona

    2017-07-01

    The goals of the TFOS DEWS II Definition and Classification Subcommittee were to create an evidence-based definition and a contemporary classification system for dry eye disease (DED). The new definition recognizes the multifactorial nature of dry eye as a disease where loss of homeostasis of the tear film is the central pathophysiological concept. Ocular symptoms, as a broader term that encompasses reports of discomfort or visual disturbance, feature in the definition and the key etiologies of tear film instability, hyperosmolarity, and ocular surface inflammation and damage were determined to be important for inclusion in the definition. In the light of new data, neurosensory abnormalities were also included in the definition for the first time. In the classification of DED, recent evidence supports a scheme based on the pathophysiology where aqueous deficient and evaporative dry eye exist as a continuum, such that elements of each are considered in diagnosis and management. Central to the scheme is a positive diagnosis of DED with signs and symptoms, and this is directed towards management to restore homeostasis. The scheme also allows consideration of various related manifestations, such as non-obvious disease involving ocular surface signs without related symptoms, including neurotrophic conditions where dysfunctional sensation exists, and cases where symptoms exist without demonstrable ocular surface signs, including neuropathic pain. This approach is not intended to override clinical assessment and judgment but should prove helpful in guiding clinical management and research. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. TFM classification and staging of oral submucous fibrosis: A new proposal.

    Science.gov (United States)

    Arakeri, Gururaj; Thomas, Deepak; Aljabab, Abdulsalam S; Hunasgi, Santosh; Rai, Kirthi Kumar; Hale, Beverley; Fonseca, Felipe Paiva; Gomez, Ricardo Santiago; Rahimi, Siavash; Merkx, Matthias A W; Brennan, Peter A

    2018-04-01

    We have evaluated the rationale of existing grading and staging schemes of oral submucous fibrosis (OSMF) based on how they are categorized. A novel classification and staging scheme is proposed. A total of 300 OSMF patients were evaluated for agreement between functional, clinical, and histopathological staging. Bilateral biopsies were assessed in 25 patients to evaluate for any differences in histopathological staging of OSMF in the same mouth. Extent of clinician agreement for categorized staging data was evaluated using Cohen's weighted kappa analysis. Cross-tabulation was performed on categorical grading data to understand the intercorrelation, and unweighted kappa analysis was used to assess bilateral grade agreement. Probabilities of less than 0.05 were considered significant. Data were analyzed using SPSS Statistics (version 25.0, IBM, USA). A low agreement was found between all the stages, reflecting the independent nature of the trismus, clinical, and histopathological components (K = 0.312, 0.167, 0.152) in OSMF. Following analysis, a three-component classification scheme (TFM classification) was developed that describes the severity of each component independently, grouping them using a novel three-tier staging scheme as a guide to the treatment plan. The proposed classification and staging could be useful for effective communication and categorization, for recording data and prognosis, and for guiding treatment plans. Furthermore, the classification considers OSMF malignant transformation in detail. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
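    Cohen's weighted kappa used in the agreement analysis can be computed from first principles; the two toy stagings below are invented for illustration, not patient data:

```python
import numpy as np

def weighted_kappa(a, b, k, weights="linear"):
    """Cohen's weighted kappa for two raters' ordinal labels 0..k-1."""
    a, b = np.asarray(a), np.asarray(b)
    O = np.zeros((k, k))                 # observed agreement matrix
    for i, j in zip(a, b):
        O[i, j] += 1
    O /= O.sum()
    pa, pb = O.sum(axis=1), O.sum(axis=0)
    E = np.outer(pa, pb)                 # chance-expected matrix
    idx = np.arange(k)
    W = np.abs(idx[:, None] - idx[None, :]).astype(float)  # disagreement weights
    if weights == "quadratic":
        W = W ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Toy example: two stagings of the same eight patients (stages 0-2).
s1 = [0, 0, 1, 1, 2, 2, 1, 0]
s2 = [0, 1, 1, 1, 2, 1, 1, 0]
print(round(weighted_kappa(s1, s2, k=3), 3))
```

Linear weights penalize a one-stage disagreement less than a two-stage one, which is what makes the weighted statistic appropriate for ordinal staging data.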

  20. A novel secret image sharing scheme based on chaotic system

    Science.gov (United States)

    Li, Li; Abd El-Latif, Ahmed A.; Wang, Chuanjun; Li, Qiong; Niu, Xiamu

    2012-04-01

    In this paper, we propose a new secret image sharing scheme based on a chaotic system and Shamir's method. The new scheme protects the shadow images with confidentiality and loss-tolerance simultaneously. In the new scheme, we generate the key sequence based on the chaotic system and then encrypt the original image during the sharing phase. Experimental results and analysis of the proposed scheme demonstrate better performance than other schemes and confirm a strong ability to resist brute-force attacks.
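    The loss-tolerance side of the scheme comes from Shamir's (t, n) threshold method; a minimal sketch over a prime field (the chaotic-system encryption of the shadow images is omitted, and the field and parameters are illustrative):

```python
import random

P = 2 ** 31 - 1  # a Mersenne prime defining the field

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

random.seed(7)
shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares suffice
```

In an image sharing setting the secret would be applied pixel-wise (or block-wise) to the chaotically encrypted image, so that losing up to n - t shadow images is tolerable.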

  1. A unified classification of alien species based on the magnitude of their environmental impacts.

    Directory of Open Access Journals (Sweden)

    Tim M Blackburn

    2014-05-01

    Full Text Available Species moved by human activities beyond the limits of their native geographic ranges into areas in which they do not naturally occur (termed aliens) can cause a broad range of significant changes to recipient ecosystems; however, their impacts vary greatly across species and the ecosystems into which they are introduced. There is therefore a critical need for a standardised method to evaluate, compare, and eventually predict the magnitudes of these different impacts. Here, we propose a straightforward system for classifying alien species according to the magnitude of their environmental impacts, based on the mechanisms of impact used to code species in the International Union for Conservation of Nature (IUCN) Global Invasive Species Database, which are presented here for the first time. The classification system uses five semi-quantitative scenarios describing impacts under each mechanism to assign species to different levels of impact, ranging from Minimal to Massive, with assignment corresponding to the highest level of deleterious impact associated with any of the mechanisms. The scheme also includes categories for species that are Not Evaluated, have No Alien Population, or are Data Deficient, and a method for assigning uncertainty to all the classifications. We show how this classification system is applicable at different levels of ecological complexity and different spatial and temporal scales, and embraces existing impact metrics. In fact, the scheme is analogous to the already widely adopted and accepted Red List approach to categorising extinction risk, and so could conceivably be readily integrated with existing practices and policies in many regions.

  2. New guidelines for dam safety classification

    International Nuclear Information System (INIS)

    Dascal, O.

    1999-01-01

    Elements are outlined of recommended new guidelines for the safety classification of dams. Arguments are provided for the view that dam classification should comprise more than one system, as follows: (a) classification for selection of design criteria, operation procedures and emergency measures plans, based on the potential consequences of a dam failure - the hazard classification of water retaining structures; (b) classification for establishment of surveillance activities and for safety evaluation of dams, based on the probability and consequences of failure - the risk classification of water retaining structures; and (c) classification for establishment of water management plans, for safety evaluation of the entire project, for preparation of emergency measures plans, for definition of the frequency and extent of maintenance operations, and for evaluation of changes and modifications required - the hazard classification of the project. The hazard classification of the dam considers, as consequences, mainly the loss of lives or persons in jeopardy and the property damages to third parties. The difficulty in determining the risk classification of the dam lies in the fact that no tool exists to evaluate the probability of the dam's failure. To overcome this, the probability of failure can be replaced by a set of dam characteristics that express the failure potential of the dam and its foundation. The hazard classification of the entire project is based on the probable consequences of dam failure influencing loss of life, persons in jeopardy, and property and environmental damage. The classification scheme is illustrated for dam-threatening events such as earthquakes and floods. 17 refs., 5 tabs

  3. The Classification of Hysteria and Related Disorders: Historical and Phenomenological Considerations

    Science.gov (United States)

    North, Carol S.

    2015-01-01

    This article examines the history of the conceptualization of dissociative, conversion, and somatoform syndromes in relation to one another, chronicles efforts to classify these and other phenomenologically-related psychopathology in the American diagnostic system for mental disorders, and traces the subsequent divergence in opinions of dissenting sectors on classification of these disorders. This article then considers the extensive phenomenological overlap across these disorders in empirical research, and from this foundation presents a new model for the conceptualization of these disorders. The classification of disorders formerly known as hysteria and phenomenologically-related syndromes has long been contentious and unsettled. Examination of the long history of the conceptual difficulties, which remain inherent in existing classification schemes for these disorders, can help to address the continuing controversy. This review clarifies the need for a major conceptual revision of the current classification of these disorders. A new phenomenologically-based classification scheme for these disorders is proposed that is more compatible with the agnostic and atheoretical approach to diagnosis of mental disorders used by the current classification system. PMID:26561836

  4. The Classification of Hysteria and Related Disorders: Historical and Phenomenological Considerations

    Directory of Open Access Journals (Sweden)

    Carol S. North

    2015-11-01

    Full Text Available This article examines the history of the conceptualization of dissociative, conversion, and somatoform syndromes in relation to one another, chronicles efforts to classify these and other phenomenologically-related psychopathology in the American diagnostic system for mental disorders, and traces the subsequent divergence in opinions of dissenting sectors on classification of these disorders. This article then considers the extensive phenomenological overlap across these disorders in empirical research, and from this foundation presents a new model for the conceptualization of these disorders. The classification of disorders formerly known as hysteria and phenomenologically-related syndromes has long been contentious and unsettled. Examination of the long history of the conceptual difficulties, which remain inherent in existing classification schemes for these disorders, can help to address the continuing controversy. This review clarifies the need for a major conceptual revision of the current classification of these disorders. A new phenomenologically-based classification scheme for these disorders is proposed that is more compatible with the agnostic and atheoretical approach to diagnosis of mental disorders used by the current classification system.

  5. Oral epithelial dysplasia classification systems

    DEFF Research Database (Denmark)

    Warnakulasuriya, S; Reibel, J; Bouquot, J

    2008-01-01

    At a workshop coordinated by the WHO Collaborating Centre for Oral Cancer and Precancer in the United Kingdom issues related to potentially malignant disorders of the oral cavity were discussed by an expert group. The consensus views of the Working Group are presented in a series of papers....... In this report, we review the oral epithelial dysplasia classification systems. The three classification schemes [oral epithelial dysplasia scoring system, squamous intraepithelial neoplasia and Ljubljana classification] were presented and the Working Group recommended epithelial dysplasia grading for routine...... use. Although most oral pathologists possibly recognize and accept the criteria for grading epithelial dysplasia, firstly based on architectural features and then of cytology, there is great variability in their interpretation of the presence, degree and significance of the individual criteria...

  6. Application of Snyder-Dolan classification scheme to the selection of "orthogonal" columns for fast screening of illicit drugs and impurity profiling of pharmaceuticals--I. Isocratic elution.

    Science.gov (United States)

    Fan, Wenzhe; Zhang, Yu; Carr, Peter W; Rutan, Sarah C; Dumarey, Melanie; Schellinger, Adam P; Pritts, Wayne

    2009-09-18

    Fourteen judiciously selected reversed phase columns were tested with 18 cationic drug solutes under the isocratic elution conditions advised in the Snyder-Dolan (S-D) hydrophobic subtraction method of column classification. The standard errors (S.E.) of the least squares regressions of log k' vs. log k'(REF) were obtained for a given column against a reference column and used to compare and classify columns based on their selectivity. The results are consistent with those obtained with a study of the 16 test solutes recommended by Snyder and Dolan. To the extent these drugs are representative, these results show that the S-D classification scheme is also generally applicable to pharmaceuticals under isocratic conditions. That is, those columns judged to be similar based on the 16 S-D solutes were similar based on the 18 drugs; furthermore those columns judged to have significantly different selectivities based on the 16 S-D probes appeared to be quite different for the drugs as well. Given that the S-D method has been used to classify more than 400 different types of reversed phases the extension to cationic drugs is a significant finding.
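The column-comparison step described here can be sketched as follows: regress log k' on a test column against log k' on a reference column and use the standard error of the fit as a (dis)similarity measure. The retention values below are invented for illustration, not the authors' data.

```python
# Minimal sketch: standard error of a least-squares fit of log k' values
# on a test column vs. a reference column. Small S.E. means similar
# selectivity; large S.E. suggests "orthogonal" selectivity.
import math

def regression_se(logk_col, logk_ref):
    """Standard error of the least-squares fit of logk_col vs logk_ref."""
    n = len(logk_ref)
    mx = sum(logk_ref) / n
    my = sum(logk_col) / n
    sxx = sum((x - mx) ** 2 for x in logk_ref)
    sxy = sum((x - mx) * (y - my) for x, y in zip(logk_ref, logk_col))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    resid = [(y - (a + b * x)) ** 2 for x, y in zip(logk_ref, logk_col)]
    return math.sqrt(sum(resid) / (n - 2))

ref = [0.10, 0.45, 0.80, 1.20, 1.55]        # log k' on the reference column
similar = [0.12, 0.44, 0.83, 1.18, 1.57]    # nearly identical selectivity
different = [0.60, 0.20, 1.10, 0.90, 1.40]  # markedly different selectivity
print(regression_se(similar, ref) < regression_se(different, ref))  # True
```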

  7. An efficient and provable secure revocable identity-based encryption scheme.

    Directory of Open Access Journals (Sweden)

    Changji Wang

    Full Text Available Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted a lot of attention in recent years; many RIBE schemes have been proposed in the literature but have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed RIBE scheme is more efficient in terms of ciphertext size, public parameters size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
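The complete subtree method mentioned in this record can be sketched as follows: users are leaves of a binary tree, and key-update material is published for the minimal set of subtree roots that cover every non-revoked leaf. The node labelling and recursion below are a standard textbook form of the technique, not the paper's exact construction.

```python
# Complete subtree (CS) cover sketch: find labels of the maximal subtrees
# that contain no revoked leaf. Labels are bit strings ("" = root,
# "0"/"1" = children, etc.); users are the integers 0 .. 2**depth - 1.

def cover(node, depth, leaves, revoked):
    """Return labels of maximal revoked-free subtrees under `node`."""
    if not (leaves & revoked):          # no revoked leaf below: keep subtree
        return [node] if leaves else []
    if depth == 0:                      # a single revoked leaf: exclude it
        return []
    ordered = sorted(leaves)
    mid = len(ordered) // 2
    left, right = set(ordered[:mid]), set(ordered[mid:])
    return (cover(node + "0", depth - 1, left, revoked) +
            cover(node + "1", depth - 1, right, revoked))

users = set(range(8))                    # 8 users, tree of depth 3
print(cover("", 3, users, revoked={5}))  # subtrees covering all but user 5
```

Revoking user 5 yields the cover {"0", "100", "11"}: the left half of the tree, user 4's leaf, and users 6-7's subtree, i.e. everyone except the revoked leaf.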

  8. High-order asynchrony-tolerant finite difference schemes for partial differential equations

    Science.gov (United States)

    Aditya, Konduri; Donzis, Diego A.

    2017-12-01

    Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.

  9. A study for Unsafe Act classification under crew interaction during procedure-driven operation

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Park, Jinkyun; Kim, Yochan; Kim, Seunghwan; Jung, Wondea

    2016-01-01

    Highlights: • The procedure-driven operation was divided into four stages by considering crew interactions such as instructions and responses. • Ten patterns of UA occurrence paths and the related operators per path were identified. • The UA type classification scheme was proposed based on the ten patterns of UA occurrence paths. • A case study to implement the UA type classification and to define the related operators per UA was performed. • The UA type classification scheme can be practical in that it prevents bias by subjective judgment. - Abstract: In this study, a method for UA (Unsafe Act) classification under a simulated procedure-driven operation was proposed. To this end, a procedure-driven operation was divided into four stages by considering crew interactions such as instructions and responses. Based on the four stages of a procedure-driven operation, ten patterns of UA occurrence paths and the related operators per path were identified. With the ten types of UA occurrence paths and their related operators, it is possible to trace when and by whom a UA is initiated during a procedure-driven operation, and the interaction or causality among the crew after the UA is initiated. Therefore, the types of UAs were classified into ‘Instruction UA’, ‘Reporting UA’, and ‘Execution UA’ by considering the initiation time and initiator of the UA. A case study to implement the UA type classification and to define the related operators per UA was performed with the ISLOCA scenario simulator training data. The UA classification scheme proposed in this paper is practical in that it requires relatively little expertise in human performance analysis and avoids bias from subjective judgment because it is based on an observation-based approach.

  10. Cloud field classification based on textural features

    Science.gov (United States)

    Sengupta, Sailes Kumar

    1989-01-01

    An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and textural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution. These are then used as features in subsequent classification. The MaxMin textural features on the other hand are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive and thresholded local extremes separated by a given pixel distance d. Textural measures are then computed based on this matrix in much the same manner as is done in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near IR visible channel. The classification algorithm used is the well known Stepwise Discriminant Analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features, and at any given spatial resolution give approximately the same classification accuracy. A neural network based classifier with a feed-forward architecture and a back-propagation training algorithm is used to increase the classification accuracy using these two classes of textural features.
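The GLDV statistics described in this record can be sketched as follows: for a horizontal pixel offset d, histogram the absolute grey-level differences and derive summary statistics to use as texture features. The 4x4 "image", the offset, and the particular statistics are illustrative; the MMCM features are not sketched here.

```python
# Grey level difference vector (GLDV) sketch: P(D = k) over horizontal
# pixel pairs at offset d, summarized by mean, contrast, and angular
# second moment. The tiny image below is made up for illustration.
from collections import Counter

def gldv_features(img, d=1):
    diffs = [abs(row[i] - row[i + d]) for row in img for i in range(len(row) - d)]
    n = len(diffs)
    p = {k: v / n for k, v in Counter(diffs).items()}  # P(D = k)
    mean = sum(k * pk for k, pk in p.items())
    contrast = sum(k * k * pk for k, pk in p.items())
    asm = sum(pk * pk for pk in p.values())            # angular second moment
    return {"mean": mean, "contrast": contrast, "asm": asm}

img = [[0, 0, 1, 1],
       [0, 1, 1, 2],
       [2, 2, 3, 3],
       [2, 3, 3, 0]]
print(gldv_features(img, d=1))
```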

  11. Auroral arc classification scheme based on the observed arc-associated electric field pattern

    International Nuclear Information System (INIS)

    Marklund, G.

    1983-06-01

    Radar and rocket electric field observations of auroral arcs have earlier been used to identify essentially four different arc types, namely anticorrelation and correlation arcs (with, respectively, decreased and increased arc-associated field) and asymmetric and reversal arcs. In this paper rocket double probe and supplementary observations from the literature, obtained under various geophysical conditions, are used to organize the different arc types on a physical rather than morphological basis. This classification is based on the relative influence on the arc electric field pattern from the two current continuity mechanisms, polarisation electric fields and Birkeland currents. In this context the tangential electric field plays an essential role and it is thus important that it can be obtained with both high accuracy and resolution. In situ observations by sounding rockets are shown to be better suited for this specific task than monostatic radar observations. Depending on the dominating mechanism, estimated quantitatively for a number of arc-crossings, the different arc types have been grouped into the following main categories: Polarisation arcs, Birkeland current arcs and combination arcs. Finally the high altitude potential distributions corresponding to some of the different arc types are presented. (author)

  12. Classification of Radioactive Waste. General Safety Guide

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-11-15

    This publication is a revision of an earlier Safety Guide of the same title issued in 1994. It recommends revised waste management strategies that reflect changes in practices and approaches since then. It sets out a classification system for the management of waste prior to disposal and for disposal, driven by long term safety considerations. It includes a number of schemes for classifying radioactive waste that can be used to assist with planning overall national approaches to radioactive waste management and to assist with operational management at facilities. Contents: 1. Introduction; 2. The radioactive waste classification scheme; Appendix: The classification of radioactive waste; Annex I: Evolution of IAEA standards on radioactive waste classification; Annex II: Methods of classification; Annex III: Origin and types of radioactive waste.

  13. Classification of Radioactive Waste. General Safety Guide

    International Nuclear Information System (INIS)

    2009-01-01

    This publication is a revision of an earlier Safety Guide of the same title issued in 1994. It recommends revised waste management strategies that reflect changes in practices and approaches since then. It sets out a classification system for the management of waste prior to disposal and for disposal, driven by long term safety considerations. It includes a number of schemes for classifying radioactive waste that can be used to assist with planning overall national approaches to radioactive waste management and to assist with operational management at facilities. Contents: 1. Introduction; 2. The radioactive waste classification scheme; Appendix: The classification of radioactive waste; Annex I: Evolution of IAEA standards on radioactive waste classification; Annex II: Methods of classification; Annex III: Origin and types of radioactive waste

  14. Deep-learning-based classification of FDG-PET data for Alzheimer's disease categories

    Science.gov (United States)

    Singh, Shibani; Srivastava, Anant; Mi, Liang; Caselli, Richard J.; Chen, Kewei; Goradia, Dhruman; Reiman, Eric M.; Wang, Yalin

    2017-11-01

    Fluorodeoxyglucose (FDG) positron emission tomography (PET) measures the decline in the regional cerebral metabolic rate for glucose, offering a reliable metabolic biomarker even in presymptomatic Alzheimer's disease (AD) patients. PET scans provide functional information that is unique and unavailable using other types of imaging. However, the computational efficacy of FDG-PET data alone, for the classification of various Alzheimer's diagnostic categories, has not been well studied. This motivates us to correctly discriminate various AD diagnostic categories using FDG-PET data. Deep learning has improved state-of-the-art classification accuracies in the areas of speech, signal, image, video, text mining and recognition. We propose novel methods that involve probabilistic principal component analysis on max-pooled data and mean-pooled data for dimensionality reduction, and a multilayer feed-forward neural network that performs binary classification. Our experimental dataset consists of baseline data of subjects including 186 cognitively unimpaired (CU) subjects, 336 mild cognitive impairment (MCI) subjects with 158 Late MCI and 178 Early MCI, and 146 AD patients from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. We measured F1-measure, precision, recall, and negative and positive predictive values with a 10-fold cross-validation scheme. Our results indicate that our designed classifiers achieve competitive results, with max pooling achieving better classification performance than mean-pooled features. Our deep-model-based research may advance FDG-PET analysis by demonstrating its potential as an effective imaging biomarker of AD.
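The max-pooling vs. mean-pooling step this record compares can be sketched as below. The 4x4 "image" and 2x2 pool size are illustrative only; the paper applies pooling to FDG-PET feature data before probabilistic PCA, which is not reproduced here.

```python
# Pooling sketch: reduce a 2-D feature map by taking the max (or mean)
# of each non-overlapping size x size block.

def pool(img, size=2, mode="max"):
    agg = max if mode == "max" else (lambda block: sum(block) / len(block))
    out = []
    for r in range(0, len(img), size):
        row = []
        for c in range(0, len(img[0]), size):
            block = [img[r + dr][c + dc]
                     for dr in range(size) for dc in range(size)]
            row.append(agg(block))
        out.append(row)
    return out

img = [[1, 2, 0, 1],
       [3, 4, 1, 0],
       [0, 1, 2, 2],
       [1, 0, 2, 4]]
print(pool(img, mode="max"))   # -> [[4, 1], [1, 4]]
print(pool(img, mode="mean"))  # -> [[2.5, 0.5], [0.5, 2.5]]
```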

  15. Harmonization of sound insulation descriptors and classification schemes in Europe: COST Action TU0901

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    … insulation requirements seems unrealistic. However, by preparing a harmonized European classification scheme with a number of quality classes, member states could select a "harmonized" class fitting the national needs and conditions. A joint European Action, COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions" (2009-2013; public information at www.cost.eu/index.php?id=240&action_number=tu0901), … on good workmanship. The paper will summarize the background, discuss the present situation in Europe and describe the joint efforts to reduce the diversity in Europe, thus supporting and initiating – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit …

  16. Application of Snyder-Dolan Classification Scheme to the Selection of “Orthogonal” Columns for Fast Screening for Illicit Drugs and Impurity Profiling of Pharmaceuticals - I. Isocratic Elution

    Science.gov (United States)

    Fan, Wenzhe; Zhang, Yu; Carr, Peter W.; Rutan, Sarah C.; Dumarey, Melanie; Schellinger, Adam P.; Pritts, Wayne

    2011-01-01

    Fourteen judiciously selected reversed-phase columns were tested with 18 cationic drug solutes under the isocratic elution conditions advised in the Snyder-Dolan (S-D) hydrophobic subtraction method of column classification. The standard errors (S.E.) of the least squares regressions of log k′ vs. log k′REF were obtained for a given column against a reference column and used to compare and classify columns based on their selectivity. The results are consistent with those obtained with a study of the 16 test solutes recommended by Snyder and Dolan. To the extent that these drugs are representative, these results show that the S-D classification scheme is also generally applicable to pharmaceuticals under isocratic conditions. That is, those columns judged to be similar based on the 16 S-D solutes were similar based on the 18 drugs; furthermore those columns judged to have significantly different selectivities based on the 16 S-D probes appeared to be quite different for the drugs as well. Given that the S-D method has been used to classify more than 400 different types of reversed phases the extension to cationic drugs is a significant finding. PMID:19698948

  17. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). Based on a 1 km DEM of China, combinations of terrain factors extracted from the DEM are selected by correlation analysis and Sheffield's entropy method. A random forest classifier is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed to build the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference; the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.

  18. Polarisation-based coincidence event discrimination: an in silico study towards a feasible scheme for Compton-PET

    Science.gov (United States)

    Toghyani, M.; Gillam, J. E.; McNamara, A. L.; Kuncic, Z.

    2016-08-01

    Current positron emission tomography (PET) systems use temporally localised coincidence events discriminated by energy and time-of-flight information. The two annihilation photons are in an entangled polarisation state and, in principle, additional information from the polarisation correlation of photon pairs could be used to improve the accuracy of coincidence classification. In a previous study, we demonstrated that in principle, the polarisation correlation information could be transferred to an angular correlation in the distribution of scattered photon pairs in a planar Compton camera system. In the present study, we model a source-phantom-detector system using Geant4 and we develop a coincidence classification scheme that exploits the angular correlation of scattered annihilation quanta to improve the accuracy of coincidence detection. We find a 22% image quality improvement in terms of the peak signal-to-noise ratio when scattered coincidence events are discriminated solely by their angular correlation, thus demonstrating the feasibility of this novel classification scheme. By integrating scatter events (both single-single and single-only) with unscattered coincidence events discriminated using conventional methods, our results suggest that Compton-PET may be a promising candidate for optimal emission tomographic imaging.

  19. Quantum election scheme based on anonymous quantum key distribution

    International Nuclear Information System (INIS)

    Zhou Rui-Rui; Yang Li

    2012-01-01

    An unconditionally secure authority-certified anonymous quantum key distribution scheme using conjugate coding is presented, based on which we construct a quantum election scheme without the help of an entanglement state. We show that this election scheme ensures the completeness, soundness, privacy, eligibility, unreusability, fairness, and verifiability of a large-scale election in which the administrator and counter are semi-honest. This election scheme can work even if there exist loss and errors in quantum channels. In addition, any irregularity in this scheme is detectable. (general)

  20. Extending a field-based Sonoran desert vegetation classification to a regional scale using optical and microwave satellite imagery

    Science.gov (United States)

    Shupe, Scott Marshall

    2000-10-01

    Vegetation mapping in arid regions facilitates ecological studies and land management, and provides a record to which future land changes can be compared. Accurate and representative mapping of desert vegetation requires a sound field sampling program and a methodology to transform the data collected into a representative classification system. Time and cost constraints require that a remote sensing approach be used if such a classification system is to be applied on a regional scale. However, desert vegetation may be sparse and thus difficult to sense at typical satellite resolutions, especially given the problem of soil reflectance. This study was designed to address these concerns by conducting vegetation mapping research using field and satellite data from the US Army Yuma Proving Ground (USYPG) in Southwest Arizona. Line and belt transect data from the Army's Land Condition Trend Analysis (LCTA) Program were transformed into relative cover and relative density classification schemes using cluster analysis. Ordination analysis of the same data produced two- and three-dimensional graphs on which the homogeneity of each vegetation class could be examined. It was found that the combined use of correspondence analysis (CA), detrended correspondence analysis (DCA), and non-metric multidimensional scaling (NMS) ordination methods was superior to the use of any single ordination method for helping to clarify between-class and within-class relationships in vegetation composition. Analysis of these between-class and within-class relationships was of key importance in examining how well relative cover and relative density schemes characterize the USYPG vegetation. Using these two classification schemes as reference data, maximum likelihood and artificial neural net classifications were then performed on a coregistered dataset consisting of a summer Landsat Thematic Mapper (TM) image, one spring and one summer ERS-1 microwave image, and elevation, slope, and aspect layers

  1. Structural classification and a binary structure model for superconductors

    Institute of Scientific and Technical Information of China (English)

    Dong Cheng

    2006-01-01

    Based on structural and bonding features, a new classification scheme of superconductors is proposed. In this scheme, superconductors can be partitioned into two parts, a superconducting active component and a supplementary component. Partially metallic covalent bonding is found to be a common feature in all superconducting active components, and the electron states of the atoms in the active components usually make a dominant contribution to the energy band near the Fermi surface. Possible directions to explore new superconductors are discussed based on the structural classification and the binary structure model.

  2. Hydrological Climate Classification: Can We Improve on Köppen-Geiger?

    Science.gov (United States)

    Knoben, W.; Woods, R. A.; Freer, J. E.

    2017-12-01

    Classification is essential in the study of complex natural systems, yet hydrology so far has no formal way to structure the climate forcing which underlies hydrologic response. Various climate classification systems can be borrowed from other disciplines but these are based on different organizing principles than a hydrological classification might use. From gridded global data we calculate a gridded aridity index, an aridity seasonality index and a rain-vs-snow index, which we use to cluster global locations into climate groups. We then define the membership degree of nearly 1100 catchments to each of our climate groups based on each catchment's climate and investigate the extent to which streamflow responses within each climate group are similar. We compare this climate classification approach with the often-used Köppen-Geiger classification, using statistical tests based on streamflow signature values. We find that three climate indices are sufficient to distinguish 18 different climate types world-wide. Climates tend to change gradually in space and catchments can thus belong to multiple climate groups, albeit with different degrees of membership. Streamflow responses within a climate group tend to be similar, regardless of the catchments' geographical proximity. A Wilcoxon two-sample test based on streamflow signature values for each climate group shows that the new classification can distinguish different flow regimes using this classification scheme. The Köppen-Geiger approach uses 29 climate classes but is less able to differentiate streamflow regimes. Climate forcing exerts a strong control on typical hydrologic response and both change gradually in space. This makes arbitrary hard boundaries in any classification scheme difficult to defend. Any hydrological classification should thus acknowledge these gradual changes in forcing. Catchment characteristics (soil or vegetation type, land use, etc) can vary more quickly in space than climate does, which
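The three-index grouping described in this record can be sketched as a nearest-centroid assignment over an (aridity, seasonality, rain-vs-snow) triple. The index definitions, centroid values, and group names below are simplified placeholders, not the paper's actual 18 climate types; the paper also uses fuzzy membership degrees rather than a single hard assignment.

```python
# Sketch: characterize a location by three climate indices and assign it
# to the climate group with the nearest centroid. Centroids are invented.
import math

def nearest_group(indices, centroids):
    """indices: (aridity, seasonality, snow_frac); centroids: name -> triple."""
    return min(centroids, key=lambda name: math.dist(indices, centroids[name]))

centroids = {
    "wet, no snow":   (0.2, 0.1, 0.0),
    "arid, seasonal": (0.9, 0.7, 0.0),
    "snow-dominated": (0.4, 0.2, 0.8),
}
print(nearest_group((0.85, 0.6, 0.05), centroids))  # -> arid, seasonal
```

A fuzzy variant would return a membership degree for every group (e.g. inverse-distance weights) instead of only the nearest name, matching the record's point that catchments can belong to multiple climate groups.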

  3. A classification scheme of erroneous behaviors for human error probability estimations based on simulator data

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea

    2017-01-01

    Because it has been indicated that empirical data supporting the estimates used in human reliability analysis (HRA) is insufficient, several databases have been constructed recently. To generate quantitative estimates from human reliability data, it is important to appropriately sort the erroneous behaviors found in the reliability data. Therefore, this paper proposes a scheme to classify the erroneous behaviors identified by the HuREX (Human Reliability data Extraction) framework through a review of the relevant literature. A case study of the human error probability (HEP) calculations is conducted to verify that the proposed scheme can be successfully implemented for the categorization of the erroneous behaviors and to assess whether the scheme is useful for the HEP quantification purposes. Although continuously accumulating and analyzing simulator data is desirable to secure more reliable HEPs, the resulting HEPs were insightful in several important ways with regard to human reliability in off-normal conditions. From the findings of the literature review and the case study, the potential and limitations of the proposed method are discussed. - Highlights: • A taxonomy of erroneous behaviors is proposed to estimate HEPs from a database. • The cognitive models, procedures, HRA methods, and HRA databases were reviewed. • HEPs for several types of erroneous behaviors are calculated as a case study.
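The HEP quantification this record builds toward is, in its simplest form, a ratio of observed errors to observed opportunities per category of erroneous behavior. The sketch below uses that generic ratio with invented category names and counts; the HuREX framework's actual taxonomy and counting rules are richer.

```python
# Sketch: human error probability per behavior category,
# HEP = (observed errors) / (observed opportunities).
from collections import Counter

def heps(observations):
    """observations: list of (category, is_error). Returns category -> HEP."""
    opportunities = Counter(cat for cat, _ in observations)
    errors = Counter(cat for cat, is_err in observations if is_err)
    return {cat: errors[cat] / opportunities[cat] for cat in opportunities}

obs = [("omission", True), ("omission", False), ("omission", False),
       ("wrong object", False), ("wrong object", False),
       ("delayed action", True), ("delayed action", False),
       ("delayed action", False), ("delayed action", False)]
print(heps(obs))  # e.g. omission -> 1/3
```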

  4. Web Page Classification Method Using Neural Networks

    Science.gov (United States)

    Selamat, Ali; Omatu, Sigeru; Yanagimoto, Hidekazu; Fujinaka, Toru; Yoshioka, Michifumi

    Automatic categorization is the only viable method to deal with the scaling problem of the World Wide Web (WWW). In this paper, we propose a news web page classification method (WPCM). The WPCM uses a neural network whose inputs are obtained from both the principal components and class profile-based features (CPBF). Each news web page is represented by a term-weighting scheme. Because the number of unique words in the collection set is large, principal component analysis (PCA) has been used to select the most relevant features for the classification. The final output of the PCA is then combined with the feature vectors from the class profile, which contains the most regular words in each class, before being fed to the neural networks. We have manually selected the most regular words that exist in each class and weighted them using an entropy weighting scheme. A fixed number of regular words from each class is used as a feature vector together with the reduced principal components from the PCA. These feature vectors are then used as the input to the neural networks for classification. The experimental evaluation demonstrates that the WPCM method provides acceptable classification accuracy with the sports news datasets.
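The entropy weighting idea mentioned in this record can be sketched as below: a word concentrated in one class gets weight near 1, a word spread evenly across classes gets weight near 0. This is a generic entropy weighting, not necessarily the exact formula the WPCM authors used.

```python
# Entropy-based term weight sketch: 1 - H(class | word) / H_max,
# where H is the entropy of the word's distribution over classes.
import math

def entropy_weight(class_counts):
    """class_counts: occurrences of one word in each class."""
    total = sum(class_counts)
    if total == 0:
        return 0.0
    ps = [c / total for c in class_counts if c > 0]
    h = -sum(p * math.log(p, 2) for p in ps)   # entropy in bits
    h_max = math.log(len(class_counts), 2)     # uniform-spread entropy
    return 1.0 - h / h_max                     # 1 = fully class-specific

print(entropy_weight([30, 0, 0, 0]))    # word unique to one class -> 1.0
print(entropy_weight([10, 10, 10, 10])) # evenly spread -> 0.0
```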

  5. A Digital Signature Scheme Based on MST3 Cryptosystems

    Directory of Open Access Journals (Sweden)

    Haibo Hong

    2014-01-01

Full Text Available As special types of factorization of finite groups, logarithmic signatures and covers have been used as the main components of cryptographic keys for secret-key cryptosystems such as PGM and public-key cryptosystems like MST1, MST2, and MST3. Recently, Svaba et al. proposed a revised MST3 encryption scheme with greater security. Meanwhile, they put forward the idea of constructing signature schemes on the basis of logarithmic signatures and random covers. In this paper, we first design a secure digital signature scheme based on logarithmic signatures and random covers. In order to complete the task, we also devise a new encryption scheme based on MST3 cryptosystems.

  6. A combined reconstruction-classification method for diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Hiltunen, P [Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, PO Box 3310, FI-02015 TKK (Finland); Prince, S J D; Arridge, S [Department of Computer Science, University College London, Gower Street London, WC1E 6B (United Kingdom)], E-mail: petri.hiltunen@tkk.fi, E-mail: s.prince@cs.ucl.ac.uk, E-mail: s.arridge@cs.ucl.ac.uk

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.

  7. Comparison Effectiveness of Pixel Based Classification and Object Based Classification Using High Resolution Image In Floristic Composition Mapping (Study Case: Gunung Tidar Magelang City)

    Science.gov (United States)

    Ardha Aryaguna, Prama; Danoedoro, Projo

    2016-11-01

Developments in remote sensing analysis have followed developments in technology, especially in sensors and platforms. Many images now have high spatial and radiometric resolution and therefore carry much more information. Analysis of vegetation objects, such as floristic composition, benefits greatly from these developments. Floristic composition can be interpreted using several methods, including pixel-based classification and object-based classification. A problem with pixel-based methods on high-spatial-resolution imagery is the salt-and-pepper effect that appears in the classification result. The purpose of this research is to compare the effectiveness of pixel-based classification and object-based classification for vegetation composition mapping on high-resolution Worldview-2 imagery. The results show that pixel-based classification with a 5×5 majority-filter kernel gives the highest accuracy among the tested classifications: 73.32%, obtained from Worldview-2 imagery radiometrically corrected to surface reflectance. For per-class accuracy, however, the object-based method is the best among the tested methods. Judged by effectiveness, pixel-based classification is more effective than object-based classification for vegetation composition mapping in the Tidar forest.

  8. Color encryption scheme based on adapted quantum logistic map

    Science.gov (United States)

    Zaghloul, Alaa; Zhang, Tiejun; Amin, Mohamed; Abd El-Latif, Ahmed A.

    2014-04-01

This paper presents a new color image encryption scheme based on a quantum chaotic system. In this scheme, encryption is accomplished by generating an intermediate chaotic key stream with the help of the quantum chaotic logistic map. Then, each pixel is encrypted using the cipher value of the previous pixel and the adapted quantum logistic map. The results show that the proposed scheme provides adequate security for the confidentiality of color images.
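The keystream-plus-previous-pixel chaining described in the abstract can be illustrated classically. In the sketch below, a classical logistic map stands in for the quantum logistic map (whose additional state equations are omitted), and all parameters and pixel values are invented:

```python
# Classical sketch of a chaotic-keystream cipher in the spirit of the
# scheme: a logistic map generates a byte keystream, and each pixel is
# encrypted with the keystream byte XOR the previous cipher pixel,
# giving the diffusion described in the abstract.

def logistic_keystream(x0: float, r: float, n: int, burn: int = 100):
    """Iterate x <- r*x*(1-x) and quantize the orbit to bytes."""
    x, out = x0, []
    for i in range(burn + n):
        x = r * x * (1.0 - x)
        if i >= burn:
            out.append(int(x * 256) & 0xFF)
    return out

def encrypt(pixels, x0=0.3141, r=3.99):
    ks, prev, cipher = logistic_keystream(x0, r, len(pixels)), 0, []
    for p, k in zip(pixels, ks):
        prev = p ^ k ^ prev          # chain with previous cipher pixel
        cipher.append(prev)
    return cipher

def decrypt(cipher, x0=0.3141, r=3.99):
    ks, prev, plain = logistic_keystream(x0, r, len(cipher)), 0, []
    for c, k in zip(cipher, ks):
        plain.append(c ^ k ^ prev)
        prev = c
    return plain

img = [12, 200, 57, 57, 255, 0]
assert decrypt(encrypt(img)) == img   # round-trip recovers the image
```

Note that a wrong key (`x0` or `r`) produces a completely different keystream after the burn-in, which is the sensitivity property chaotic ciphers rely on.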

  9. Design Scheme of Remote Monitoring System Based on Qt

    Directory of Open Access Journals (Sweden)

    Xu Dawei

    2015-01-01

Full Text Available This paper introduces a design scheme for a remote monitoring system based on Qt. The system is built on the S3C2410 ARM platform and is designed and implemented with the aid of the cross-platform development framework Qt. The development of a remote video surveillance system based on an embedded terminal has practical significance and value.

  10. A provably-secure ECC-based authentication scheme for wireless sensor networks.

    Science.gov (United States)

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-11-06

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.

  11. A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks

    Science.gov (United States)

    Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho

    2014-01-01

    A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes. PMID:25384009

  12. A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junghyun Nam

    2014-11-01

Full Text Available A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.

  13. Semi-Supervised Classification for Fault Diagnosis in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Ma, Jian Ping; Jiang, Jin

    2014-01-01

Pattern classification methods have become important tools for fault diagnosis in industrial systems. However, it is normally difficult to obtain reliable labeled data to train a supervised pattern classification model for applications in a nuclear power plant (NPP). In contrast, unlabeled data easily become available through the increased deployment of supervisory, control, and data acquisition (SCADA) systems. In this paper, a fault diagnosis scheme based on a semi-supervised classification (SSC) method is developed with specific applications to NPPs. In this scheme, newly measured plant data are treated as unlabeled data. They are integrated with selected labeled data to train an SSC model, which is then used to estimate labels of the new data. Compared to exclusively supervised approaches, the proposed scheme requires significantly fewer labeled data points to train a classifier. Furthermore, it is shown that a higher degree of uncertainty in the labeled data can be tolerated. The developed scheme has been validated using data generated from a desktop NPP simulator and also from a physical NPP simulator using a graph-based SSC algorithm. Two case studies have been used in the validation process. In the first case study, three faults were simulated on the desktop simulator. These faults were all classified successfully with only four labeled data points per fault case. In the second case, six types of fault were simulated on the physical NPP simulator. All faults were successfully diagnosed. The results demonstrate that SSC is a promising tool for fault diagnosis.
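The graph-based SSC idea above, propagating a few labels over a similarity graph of labeled and unlabeled measurements, can be sketched as follows (toy one-dimensional data; all values invented):

```python
import numpy as np

# Toy sketch of graph-based semi-supervised classification: a single
# labeled measurement per fault class is combined with unlabeled ones,
# and labels propagate over an RBF similarity graph until they settle.

X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
y = np.array([0, -1, -1, 1, -1, -1])          # -1 means unlabeled

# RBF similarity graph, row-normalized into a transition matrix.
d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
W = np.exp(-d2)
np.fill_diagonal(W, 0.0)
T = W / W.sum(axis=1, keepdims=True)

# One-hot labels for the labeled points; zeros for the unlabeled.
F = np.zeros((len(X), 2))
F[y >= 0, y[y >= 0]] = 1.0

for _ in range(50):                           # propagate, then clamp
    F = T @ F
    F[y >= 0] = 0.0
    F[y >= 0, y[y >= 0]] = 1.0                # labeled rows stay fixed

pred = F.argmax(axis=1)
print(pred)   # unlabeled points inherit the label of their cluster
```

Clamping the labeled rows on every iteration is what lets so few labeled points anchor the whole graph, mirroring the four-labels-per-fault result in the abstract.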

  14. A physiologically-inspired model of numerical classification based on graded stimulus coding

    Directory of Open Access Journals (Sweden)

    John Pearson

    2010-01-01

    Full Text Available In most natural decision contexts, the process of selecting among competing actions takes place in the presence of informative, but potentially ambiguous, stimuli. Decisions about magnitudes—quantities like time, length, and brightness that are linearly ordered—constitute an important subclass of such decisions. It has long been known that perceptual judgments about such quantities obey Weber’s Law, wherein the just-noticeable difference in a magnitude is proportional to the magnitude itself. Current physiologically inspired models of numerical classification assume discriminations are made via a labeled line code of neurons selectively tuned for numerosity, a pattern observed in the firing rates of neurons in the ventral intraparietal area (VIP of the macaque. By contrast, neurons in the contiguous lateral intraparietal area (LIP signal numerosity in a graded fashion, suggesting the possibility that numerical classification could be achieved in the absence of neurons tuned for number. Here, we consider the performance of a decision model based on this analog coding scheme in a paradigmatic discrimination task—numerosity bisection. We demonstrate that a basic two-neuron classifier model, derived from experimentally measured monotonic responses of LIP neurons, is sufficient to reproduce the numerosity bisection behavior of monkeys, and that the threshold of the classifier can be set by reward maximization via a simple learning rule. In addition, our model predicts deviations from Weber Law scaling of choice behavior at high numerosity. Together, these results suggest both a generic neuronal framework for magnitude-based decisions and a role for reward contingency in the classification of such stimuli.
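The two-neuron analog classifier can be illustrated with a minimal sketch: one model unit whose rate increases monotonically with numerosity, one whose rate decreases, and a threshold set between the anchor stimuli. The rate functions and parameters below are invented; the logarithmic compression supplies the Weber-like behavior discussed in the abstract.

```python
import math

# Minimal sketch of the two-neuron analog classifier: a stimulus is
# called "large" when the difference of the two graded rates exceeds a
# threshold placed midway between the responses to the task anchors.

def rate_up(n):   return 10 + 20 * math.log(n)      # increasing unit
def rate_down(n): return 70 - 20 * math.log(n)      # decreasing unit

def classify(n, threshold):
    """Return 'large' if the rate difference exceeds the threshold."""
    return "large" if rate_up(n) - rate_down(n) > threshold else "small"

low, high = 2, 8                      # anchor numerosities of the task
threshold = (rate_up(low) - rate_down(low)
             + rate_up(high) - rate_down(high)) / 2.0

for n in (2, 3, 4, 5, 6, 8):
    print(n, classify(n, threshold))
```

With these log-compressed rates the decision boundary falls at the geometric mean of the anchors (4, for anchors 2 and 8), which is the bisection point typically observed behaviorally.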

  15. Sentiment classification technology based on Markov logic networks

    Science.gov (United States)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

With diverse online media emerging, there is growing concern with the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which exhibit a certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred to other domains, and the precision of sentiment classification analysis in the target domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.

  16. An enhanced dynamic ID-based authentication scheme for telecare medical information systems

    Directory of Open Access Journals (Sweden)

    Ankita Chaturvedi

    2017-01-01

Full Text Available The authentication schemes for telecare medical information systems (TMIS) try to ensure secure and authorized access. ID-based authentication schemes address secure communication, but privacy is not properly addressed. In recent times, dynamic ID-based remote user authentication schemes for TMIS have been presented to protect the user's privacy, and they do so efficiently. Unfortunately, most of the existing dynamic ID-based authentication schemes for TMIS ignore the input verifying condition. This makes the login and password change phases inefficient, and inefficiency of the password change phase may lead to a denial of service attack in the case of incorrect input. To overcome these weaknesses, we propose a new dynamic ID-based authentication scheme using a smart card. The proposed scheme can quickly detect incorrect inputs, which makes the login and password change phases efficient. We adopt this approach with the aim of protecting privacy and achieving efficient login and password change phases. The proposed scheme also resists off-line password guessing attacks and denial of service attacks. We also demonstrate the validity of the proposed scheme by utilizing the widely-accepted BAN (Burrows, Abadi, and Needham) logic. In addition, our scheme is comparable in terms of communication and computational overheads with relevant schemes for TMIS.

  17. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    Science.gov (United States)

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.

  18. Mechanism-based drug exposure classification in pharmacoepidemiological studies

    NARCIS (Netherlands)

    Verdel, B.M.

    2010-01-01

    Mechanism-based classification of drug exposure in pharmacoepidemiological studies In pharmacoepidemiology and pharmacovigilance, the relation between drug exposure and clinical outcomes is crucial. Exposure classification in pharmacoepidemiological studies is traditionally based on

  19. Application of a kernel-based online learning algorithm to the classification of nodule candidates in computer-aided detection of CT lung nodules

    International Nuclear Information System (INIS)

    Matsumoto, S.; Ohno, Y.; Takenaka, D.; Sugimura, K.; Yamagata, H.

    2007-01-01

Classification of the nodule candidates in computer-aided detection (CAD) of lung nodules in CT images was addressed by constructing a nonlinear discriminant function using a kernel-based learning algorithm called the kernel recursive least-squares (KRLS) algorithm. Using the nodule candidates derived from the processing by a CAD scheme of 100 CT datasets containing 253 non-calcified nodules of 3 mm or larger, as determined by the consensus of two thoracic radiologists, the following trial was carried out 100 times: by randomly selecting 50 datasets for training, a nonlinear discriminant function was obtained using the nodule candidates in the training datasets and tested with the remaining candidates; for comparison, a rule-based classification was tested in a similar manner. At about 5 false positives per case, the nonlinear classification method showed an improved sensitivity of 80% (mean over the 100 trials) compared with 74% for the rule-based method. (orig.)
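A batch sketch of the kernel least-squares discriminant described above; the recursive KRLS update is replaced by its batch equivalent for brevity, and the 2-D candidate features and labels are invented:

```python
import numpy as np

# Sketch of a kernel regularized least-squares discriminant separating
# nodules from false positives: fit alpha in (K + lam*I) alpha = y with
# an RBF kernel, then score new candidates by k(x, Xtr) @ alpha.
# (KRLS builds the same solution recursively, one sample at a time.)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None] - B[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# training candidates: label +1 = nodule, -1 = false positive (invented)
Xtr = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.2, 0.9]])
ytr = np.array([-1.0, -1.0, 1.0, 1.0])

lam = 1e-3                                   # ridge regularization
K = rbf(Xtr, Xtr)
alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)

def discriminant(x):
    """Signed score: negative = false positive, positive = nodule."""
    return float((rbf(np.atleast_2d(x), Xtr) @ alpha)[0])

print(discriminant([0.1, 0.0]))   # near the false-positive cluster
print(discriminant([1.1, 1.0]))   # near the nodule cluster
```

Thresholding the discriminant at values other than zero is what trades sensitivity against the false-positive rate reported in the abstract.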

  20. Search and Classification Using Multiple Autonomous Vehicles Decision-Making and Sensor Management

    CERN Document Server

    Wang, Yue

    2012-01-01

    Search and Classification Using Multiple Autonomous Vehicles provides a comprehensive study of decision-making strategies for domain search and object classification using multiple autonomous vehicles (MAV) under both deterministic and probabilistic frameworks. It serves as a first discussion of the problem of effective resource allocation using MAV with sensing limitations, i.e., for search and classification missions over large-scale domains, or when there are far more objects to be found and classified than there are autonomous vehicles available. Under such scenarios, search and classification compete for limited sensing resources. This is because search requires vehicle mobility while classification restricts the vehicles to the vicinity of any objects found. The authors develop decision-making strategies to choose between these competing tasks and vehicle-motion-control laws to achieve the proposed management scheme. Deterministic Lyapunov-based, probabilistic Bayesian-based, and risk-based decision-mak...

  1. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

This book provides new results on wavelet filter-bank-based feature extraction and classifier design in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together three strands of research (wavelets, iris image analysis, and classifiers), and it compares the performance of the presented techniques with state-of-the-art available schemes. It contains a compilation of basic material on the design of wavelets that avoids reading many different books, and therefore provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising, etc. that will...

  2. Scalable Packet Classification with Hash Tables

    Science.gov (United States)

    Wang, Pi-Chung

    In the last decade, the technique of packet classification has been widely deployed in various network devices, including routers, firewalls and network intrusion detection systems. In this work, we improve the performance of packet classification by using multiple hash tables. The existing hash-based algorithms have superior scalability with respect to the required space; however, their search performance may not be comparable to other algorithms. To improve the search performance, we propose a tuple reordering algorithm to minimize the number of accessed hash tables with the aid of bitmaps. We also use pre-computation to ensure the accuracy of our search procedure. Performance evaluation based on both real and synthetic filter databases shows that our scheme is effective and scalable and the pre-computation cost is moderate.
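The multiple-hash-table organization described above can be sketched as follows: filters are grouped by their (source, destination) prefix-length tuple, each group lives in its own hash table keyed by the truncated prefixes, and a lookup probes one table per tuple. The filters and the most-specific-match tie-break below are invented for illustration; the abstract's actual contribution, reordering which tables are probed, is not reproduced here.

```python
# Toy sketch of tuple-space packet classification with hash tables.

def prefix(addr: int, plen: int) -> int:
    """Keep the top plen bits of a 32-bit address."""
    return addr >> (32 - plen) if plen else 0

filters = [  # (src, src_len, dst, dst_len, action) -- invented rules
    (0x0A000000, 8, 0xC0A80000, 16, "permit"),
    (0x0A010000, 16, 0xC0A80100, 24, "deny"),
]

# Build one hash table per (src_len, dst_len) tuple.
tuples = {}
for src, sl, dst, dl, action in filters:
    table = tuples.setdefault((sl, dl), {})
    table[(prefix(src, sl), prefix(dst, dl))] = action

def classify(src: int, dst: int):
    """Probe each tuple's hash table; keep the most specific match."""
    best, best_len = "default", -1
    for (sl, dl), table in tuples.items():
        hit = table.get((prefix(src, sl), prefix(dst, dl)))
        if hit is not None and sl + dl > best_len:
            best, best_len = hit, sl + dl
    return best

print(classify(0x0A0101FE, 0xC0A80105))  # the /16,/24 filter wins: deny
```

The search cost is one hash probe per tuple rather than per filter, which is why minimizing the number of probed tuples (the abstract's reordering idea) pays off.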

  3. A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.

    Science.gov (United States)

    Bobka, Marilyn E.; Subramaniam, J.B.

The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme, complicated chemical structures may be expressed…

  4. Classification of Recommender Expertise in the Wikipedia Recommender System

    DEFF Research Database (Denmark)

    Jensen, Christian D.; Pilkauskas, Povilas; Lefévre, Thomas

    2011-01-01

…to the quality of articles. The Wikipedia Recommender System (WRS) was developed to help users determine the credibility of articles based on feedback from other Wikipedia users. The WRS implements a collaborative filtering system with trust metrics, i.e., it provides a rating of articles which emphasizes feedback from recommenders that the user has agreed with in the past. This exposes the problem that most recommenders are not equally competent in all subject areas. The first WRS prototype did not include an evaluation of the areas of expertise of recommenders, so the trust metric used in the article… an evaluation of four existing knowledge classification schemes with respect to these requirements. This evaluation helped us identify a classification scheme, which we have implemented in the current version of the Wikipedia Recommender System.

  5. Classification of Recommender Expertise in the Wikipedia Recommender System

    DEFF Research Database (Denmark)

    Jensen, Christian D.; Pilkauskas, Povilas; Lefevre, Thomas

    2011-01-01

…to the quality of articles. The Wikipedia Recommender System (WRS) was developed to help users determine the credibility of articles based on feedback from other Wikipedia users. The WRS implements a collaborative filtering system with trust metrics, i.e., it provides a rating of articles which emphasizes feedback from recommenders that the user has agreed with in the past. This exposes the problem that most recommenders are not equally competent in all subject areas. The first WRS prototype did not include an evaluation of the areas of expertise of recommenders, so the trust metric used in the article… an evaluation of four existing knowledge classification schemes with respect to these requirements. This evaluation helped us identify a classification scheme, which we have implemented in the current version of the Wikipedia Recommender System.

  6. An improved biometrics-based authentication scheme for telecare medical information systems.

    Science.gov (United States)

    Guo, Dianli; Wen, Qiaoyan; Li, Wenmin; Zhang, Hua; Jin, Zhengping

    2015-03-01

Telecare medical information systems (TMIS) offer healthcare delivery services, and patients can conveniently acquire their desired medical services through public networks. The protection of patients' privacy and data confidentiality is significant. Very recently, Mishra et al. proposed a biometrics-based authentication scheme for telecare medical information systems. Their scheme can protect user privacy and is believed to resist a range of network attacks. In this paper, we analyze Mishra et al.'s scheme and identify that it is insecure against known session key attacks and impersonation attacks. Therefore, we present a modified biometrics-based authentication scheme for TMIS to eliminate the aforementioned faults. Besides, we demonstrate the completeness of the proposed scheme through BAN logic. Compared to the related schemes, our protocol provides stronger security and is more practical.

  7. Quantum signature scheme based on a quantum search algorithm

    International Nuclear Information System (INIS)

    Yoon, Chun Seok; Kang, Min Sung; Lim, Jong In; Yang, Hyung Jin

    2015-01-01

    We present a quantum signature scheme based on a two-qubit quantum search algorithm. For secure transmission of signatures, we use a quantum search algorithm that has not been used in previous quantum signature schemes. A two-step protocol secures the quantum channel, and a trusted center guarantees non-repudiation that is similar to other quantum signature schemes. We discuss the security of our protocol. (paper)

  8. Contemplating case mix: A primer on case mix classification and management.

    Science.gov (United States)

    Costa, Andrew P; Poss, Jeffery W; McKillop, Ian

    2015-01-01

    Case mix classifications are the frameworks that underlie many healthcare funding schemes, including the so-called activity-based funding. Now more than ever, Canadian healthcare administrators are evaluating case mix-based funding and deciphering how they will influence their organization. Case mix is a topic fraught with technical jargon and largely relegated to government agencies or private industries. This article provides an abridged review of case mix classification as well as its implications for management in healthcare. © 2015 The Canadian College of Health Leaders.

  9. Distributed Group-Based Mobility Management Scheme in Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Moneeb Gohar

    2017-01-01

    Full Text Available For group-based mobility management in 6LoWPAN-based wireless body area networks (WBAN, some schemes using the Proxy Mobile IPv6 (PMIP have been proposed. However, the existing PMIP-based mobility schemes tend to induce large registration delay and handover delay. To overcome such limitations, we propose a new distributed group-based mobility management scheme, in which the Local Mobility Anchor (LMA function is implemented by each Mobile Access Gateway (MAG and the handover operation is performed between two neighboring MAGs without the help of LMA. Besides, each MAG maintains the information of the group of mobile sensors and aggregates the Authentication-Authorization-Accounting (AAA query messages for a group of mobile sensors as a “single” message to decrease the control overhead. By numerical analysis, it is shown that the proposed scheme can reduce the registration and handover delays, compared to the existing PMIP-based mobility schemes.

  10. Classification of phase transitions of finite Bose-Einstein condensates in power law traps by Fisher zeros

    NARCIS (Netherlands)

    Mülken, O.; Borrmann, P.; Harting, J.D.R.; Stamerjohanns, H.

    2001-01-01

    We present a detailed description of a classification scheme for phase transitions in finite systems based on the distribution of Fisher zeros of the canonical partition function in the complex temperature plane. We apply this scheme to finite Bose systems in power-law traps within a semi-analytic

  11. Fuzzy set classifier for waste classification tracking

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1992-01-01

We have developed an expert system based on fuzzy logic theory to fuse the data from multiple sensors and make classification decisions for objects in a waste reprocessing stream. Fuzzy set theory has been applied in decision and control applications with some success, particularly by the Japanese. We have found that the fuzzy logic system is rather easy to design and train, a feature that can cut development costs considerably. With proper training, the classification accuracy is quite high. We performed several tests sorting radioactive test samples using a gamma spectrometer to compare fuzzy logic to more conventional sorting schemes.
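The fuzzy fusion idea above can be illustrated with a minimal sketch: each sensor reading maps to fuzzy memberships, memberships are fused across sensors with a min (fuzzy AND) rule, and the object takes the class with the largest fused membership. The membership breakpoints and readings are invented, and this two-class, min-rule design is just one simple fuzzy inference choice.

```python
# Toy sketch of fuzzy-logic sensor fusion for waste classification.

def membership(x, lo, hi):
    """Linear ramp memberships for 'low' and 'high' activity."""
    if x <= lo:
        return {"low": 1.0, "high": 0.0}
    if x >= hi:
        return {"low": 0.0, "high": 1.0}
    t = (x - lo) / (hi - lo)
    return {"low": 1.0 - t, "high": t}

def classify(readings):
    """Fuse per-sensor memberships with min and pick the best class."""
    fused = {"low": 1.0, "high": 1.0}
    for x, lo, hi in readings:
        m = membership(x, lo, hi)
        for c in fused:
            fused[c] = min(fused[c], m[c])
    return max(fused, key=fused.get), fused

# two sensors: (reading, low-breakpoint, high-breakpoint) -- invented
label, fused = classify([(120.0, 50.0, 200.0), (0.9, 0.2, 1.0)])
print(label, fused)
```

The fused membership values also give a natural confidence measure: a small margin between classes can flag an object for manual inspection.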

  12. Quantum Watermarking Scheme Based on INEQR

    Science.gov (United States)

    Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-04-01

Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. Firstly, the watermark image is extended to meet the embedding requirements of the carrier image. Secondly, swap and XOR operations are applied to the processed pixels; since there is only one bit per pixel, the XOR operation achieves the effect of simple encryption. Thirdly, both the watermark embedding and extraction operations are described, using the key image, the swap operation, and the LSB algorithm. When the embedding is performed, the state of the binary key image is changed, indicating that the watermark has been embedded; conversely, to extract the watermark image, the key's state must be detected, and the extraction operation is carried out when the key's state is |1>. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analyzed.
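The LSB-with-key embedding described above has a direct classical analogue, sketched below: each watermark bit is XORed with a key bit and stored in the least significant bit of a carrier pixel, and extraction reverses the steps with the same key. The flat pixel lists and values are invented, and the quantum representation (INEQR) and swap operation are deliberately omitted.

```python
# Classical sketch of keyed LSB watermark embedding and extraction.

def embed(carrier, watermark_bits, key_bits):
    """Store each key-XORed watermark bit in a carrier pixel's LSB."""
    out = []
    for p, w, k in zip(carrier, watermark_bits, key_bits):
        out.append((p & ~1) | (w ^ k))   # clear LSB, insert w XOR k
    return out

def extract(stego, key_bits):
    """Read each LSB and undo the XOR with the same key."""
    return [(p & 1) ^ k for p, k in zip(stego, key_bits)]

carrier = [200, 13, 77, 64]
wm  = [1, 0, 1, 1]
key = [0, 1, 1, 0]

stego = embed(carrier, wm, key)
assert extract(stego, key) == wm     # same key recovers the watermark
print(stego)
```

Because only the LSB changes, every stego pixel differs from its carrier pixel by at most 1, which is why LSB schemes keep a high PSNR.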

  13. Searchable attribute-based encryption scheme with attribute revocation in cloud storage.

    Science.gov (United States)

    Wang, Shangping; Zhao, Duqiao; Zhang, Yaling

    2017-01-01

Attribute-based encryption (ABE) is a good way to achieve flexible and secure access control to data; attribute revocation is an extension of attribute-based encryption, and keyword search is an indispensable part of cloud storage. The combination of the two has important applications in cloud storage. In this paper, we construct a searchable attribute-based encryption scheme with attribute revocation for cloud storage. The keyword search in our scheme is attribute-based with access control: when the search succeeds, the cloud server returns the corresponding ciphertext to the user, and the user can then decrypt it. Besides, our scheme supports multiple-keyword search, which makes it more practical. Under the decisional bilinear Diffie-Hellman exponent (q-BDHE) and decisional Diffie-Hellman (DDH) assumptions in the selective security model, we prove that our scheme is secure.

  14. Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Yuan-Pin Lin

    2017-07-01

    Full Text Available Constructing a robust emotion-aware analytical framework using non-invasively recorded electroencephalogram (EEG) signals has attracted intensive attention. However, in moving a laboratory-oriented proof-of-concept study toward real-world applications, researchers face an ecological challenge: EEG patterns recorded in real life change substantially across days (i.e., day-to-day variability), arguably making a pre-defined predictive model vulnerable to the EEG signals of a separate day. The present work addressed how to mitigate the inter-day EEG variability of emotional responses in order to facilitate cross-day emotion classification, an issue little studied in the literature. This study proposed a robust principal component analysis (RPCA)-based signal-filtering strategy and validated its neurophysiological validity and machine-learning practicability on a binary emotion-classification task (happiness vs. sadness) using a five-day EEG dataset of 12 subjects who participated in a music-listening task. The empirical results showed that the RPCA-decomposed sparse signals (RPCA-S) filtered off the background EEG activity that contributed most to the inter-day variability, and predominantly captured the EEG oscillations of emotional responses that behaved relatively consistently across days. Using a realistic add-day-in classification validation scheme, the RPCA-S progressively exploited more informative features (from 12.67 ± 5.99 to 20.83 ± 7.18) and improved the cross-day binary emotion-classification accuracy (from 58.31 ± 12.33% to 64.03 ± 8.40%) as the EEG signals from one to four recording days were used for training and tested against one unseen subsequent day. The original EEG features (prior to RPCA processing) neither achieved cross-day classification (the accuracy was around chance level) nor replicated the encouraging improvement, due to the inter-day EEG variability. This result

  15. Cost-Based Droop Schemes for Economic Dispatch in Islanded Microgrids

    DEFF Research Database (Denmark)

    Chen, Feixiong; Chen, Minyou; Li, Qiang

    2017-01-01

    In this paper, cost-based droop schemes are proposed to minimize the total active power generation cost in an islanded microgrid (MG), while the simplicity and decentralized nature of droop control are retained. In cost-based droop schemes, the incremental costs of distributed generators (DGs) ...
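    As a hedged illustration of the dispatch target behind such schemes (not the paper's controller): with quadratic cost functions C_i(P_i) = a_i·P_i² + b_i·P_i, minimizing total cost while meeting demand means every DG operates at one common incremental cost λ = dC_i/dP_i = 2·a_i·P_i + b_i. The coefficients below are invented example numbers:

```python
# Equal-incremental-cost economic dispatch for quadratic DG cost functions.
# Solving sum_i P_i = demand with P_i = (lambda - b_i) / (2 a_i) gives
# lambda = (demand + sum b_i/(2 a_i)) / sum 1/(2 a_i).

def economic_dispatch(coeffs, demand):
    """coeffs: list of (a_i, b_i); returns (lam, [P_i]) with sum(P_i) == demand."""
    inv = sum(1.0 / (2 * a) for a, _ in coeffs)
    lam = (demand + sum(b / (2 * a) for a, b in coeffs)) / inv
    power = [(lam - b) / (2 * a) for a, b in coeffs]
    return lam, power

coeffs = [(0.04, 2.0), (0.06, 1.5), (0.05, 1.8)]   # example DG cost coefficients
lam, power = economic_dispatch(coeffs, demand=90.0)

assert abs(sum(power) - 90.0) < 1e-9
for (a, b), p in zip(coeffs, power):               # every DG at the same lambda
    assert abs(2 * a * p + b - lam) < 1e-9
```

    A cost-based droop controller steers the DGs toward this operating point in a decentralized way, without a central solver.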

  16. A Semisupervised Cascade Classification Algorithm

    Directory of Open Access Journals (Sweden)

    Stamatis Karlos

    2016-01-01

    Full Text Available Classification is one of the most important tasks of data mining, adopted by many modern applications. The shortage of labeled data in the majority of these applications has shifted interest towards semisupervised methods. Under such schemes, using collected unlabeled data combined with a clearly smaller set of labeled examples leads to similar or even better classification accuracy than supervised algorithms, which use labeled examples exclusively during the training phase. A novel approach for improving semisupervised classification using the Cascade Classifier technique is presented in this paper. The main characteristic of the Cascade Classifier strategy is the use of a base classifier to enlarge the feature space by adding either the predicted class or the class-probability distribution of the initial data. The classifier of the second level is supplied with the new dataset and extracts the decision for each instance. In this work, a self-trained NB∇C4.5 classifier algorithm is presented, which combines the characteristics of Naive Bayes as a base classifier with the speed of C4.5 for the final classification. We performed an in-depth comparison with other well-known semisupervised classification methods on standard benchmark datasets, concluding that the presented technique has better accuracy in most cases.
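    The feature-space enlargement at the heart of the cascade can be sketched in a few lines. In this toy stand-in (invented data; a nearest-centroid classifier and 1-NN replace the paper's Naive Bayes and C4.5), the level-1 classifier appends its class-probability estimates to each feature vector before the level-2 classifier is trained:

```python
# Toy sketch of the two-level Cascade Classifier idea: level 1 augments each
# sample with its class-probability distribution, level 2 learns on the
# enlarged feature space. Classifiers and data are illustrative stand-ins.
import math

def centroid_probs(x, centroids):
    # inverse-distance scores, normalised into a probability distribution
    scores = [1.0 / (math.dist(x, c) + 1e-9) for c in centroids]
    total = sum(scores)
    return [s / total for s in scores]

def augment(X, centroids):
    return [list(x) + centroid_probs(x, centroids) for x in X]

def knn1_predict(x, X_train, y_train):
    return min(zip(X_train, y_train), key=lambda t: math.dist(x, t[0]))[1]

# two clusters in 2-D (invented)
X = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]
y = [0, 0, 0, 1, 1, 1]

# level 1: nearest-centroid base classifier
cents = []
for label in (0, 1):
    pts = [x for x, t in zip(X, y) if t == label]
    cents.append([sum(col) / len(pts) for col in zip(*pts)])

# level 2: 1-NN trained on the augmented feature space
X_aug = augment(X, cents)
query = augment([[0.15, 0.1]], cents)[0]
assert knn1_predict(query, X_aug, y) == 0
```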

  17. Gemstones and geosciences in space and time. Digital maps to the "Chessboard classification scheme of mineral deposits"

    Science.gov (United States)

    Dill, Harald G.; Weber, Berthold

    2013-12-01

    The gemstones, covering the spectrum from jeweler's to showcase quality, are presented in a tripartite subdivision by country, geology and geomorphology, realized in 99 digital maps with more than 2600 mineralized sites. The various maps were designed based on the "Chessboard classification scheme of mineral deposits" proposed by Dill (2010a, 2010b) to reveal the interrelations between gemstone deposits and mineral deposits of other commodities and to direct our thoughts to potential new target areas for exploration. Thirty-three categories were used for these digital maps: chromium, nickel, titanium, iron, manganese, copper, tin-tungsten, beryllium, lithium, zinc, calcium, boron, fluorine, strontium, phosphorus, zirconium, silica, feldspar, feldspathoids, zeolite, amphibole (tiger's eye), olivine, pyroxenoid, garnet, epidote, sillimanite-andalusite, corundum-spinel - diaspore, diamond, vermiculite-pagodite, prehnite, sepiolite, jet, and amber. Besides the political base map (gems by country), each mineral deposit is drawn on a geological map illustrating the main lithologies, stratigraphic units and tectonic structure, to unravel the evolution of primary gemstone deposits in time and space. The geomorphological map shows the control of climate and of subaerial and submarine hydrography on the deposition of secondary gemstone deposits. The digital maps are designed so that they can be plotted as paper versions at different scales, upgraded for interactive use, and linked to gemological databases.

  18. Automatic classification for mammogram backgrounds based on bi-rads complexity definition and on a multi content analysis framework

    Science.gov (United States)

    Wu, Jie; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Clinical studies for the validation of new medical imaging devices require hundreds of images. An important step in creating and tuning the study protocol is the classification of images into "difficult" and "easy" cases. This consists of classifying the image based on features such as the complexity of the background and the visibility of the disease (lesions). Therefore, an automatic medical background classification tool for mammograms would help in such clinical studies. This classification tool is based on a multi-content analysis (MCA) framework which was first developed to recognize the image content of computer screenshots. With the implementation of new texture features and a defined breast density scale, the MCA framework is able to automatically classify digital mammograms with satisfying accuracy. The BI-RADS (Breast Imaging Reporting and Data System) density scale is used for grouping the mammograms; it standardizes the mammography reporting terminology and the assessment and recommendation categories. Selected features are input into a decision tree classification scheme in the MCA framework, which is the so-called "weak classifier" (any classifier with a global error rate below 50%). With the AdaBoost iteration algorithm, these "weak classifiers" are combined into a "strong classifier" (a classifier with a low global error rate) for classifying one category. The results of classification for one "strong classifier" show good accuracy with high true-positive rates. For the four categories the results are: TP = 90.38%, TN = 67.88%, FP = 32.12% and FN = 9.62%.
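    The AdaBoost step described above (weak classifiers with error below 50% combined into a strong classifier) can be sketched with one-dimensional threshold stumps standing in for the decision-tree weak classifiers; the data is invented:

```python
# Minimal AdaBoost: each round picks the lowest weighted-error stump,
# weights it by alpha = 0.5*ln((1-err)/err), and up-weights the samples
# that stump misclassified. The weighted vote is the "strong classifier".
import math

def stump_factory(xs):
    thresholds = [x - 0.5 for x in xs] + [max(xs) + 0.5]
    for t in thresholds:
        for sign in (+1, -1):
            yield lambda x, t=t, s=sign: s if x > t else -s

def adaboost(xs, ys, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []                                   # list of (alpha, stump)
    for _ in range(rounds):
        best, best_err = None, 1.0
        for h in stump_factory(xs):                 # exhaustive weak-learner search
            err = sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)
            if err < best_err:
                best, best_err = h, err
        eps = max(best_err, 1e-12)
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, best))
        # up-weight misclassified points, then renormalise
        w = [wi * math.exp(-alpha * y * best(x)) for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    def strong(x):
        return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1
    return strong

xs = list(range(10))
ys = [-1, -1, -1, 1, 1, 1, 1, -1, -1, -1]   # no single stump can fit this
strong = adaboost(xs, ys)
assert all(strong(x) == y for x, y in zip(xs, ys))
```

    No individual stump classifies this interval pattern correctly, yet the boosted vote does, which is exactly the weak-to-strong effect the abstract relies on.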

  19. Deep learning for EEG-Based preference classification

    Science.gov (United States)

    Teo, Jason; Hou, Chew Lin; Mountstephens, James

    2017-10-01

    Electroencephalogram (EEG)-based emotion classification is rapidly becoming one of the most intensely studied areas of brain-computer interfacing (BCI). The ability to passively identify yet accurately correlate brainwaves with our immediate emotions opens up truly meaningful and previously unattainable human-computer interactions such as in forensic neuroscience, rehabilitative medicine, affective entertainment and neuro-marketing. One particularly useful yet rarely explored area of EEG-based emotion classification is preference recognition [1], which is simply the detection of like versus dislike. Within the limited investigations into preference classification, all reported studies were based on musically-induced stimuli, except for a single study which used 2D images. The main objective of this study is to apply deep learning, which has been shown to produce state-of-the-art results in diverse hard problems such as computer vision, natural language processing and audio recognition, to 3D object preference classification over a larger group of test subjects. A cohort of 16 users was shown 60 bracelet-like objects as rotating visual stimuli on a computer display while their preferences and EEGs were recorded. After training a variety of machine learning approaches which included deep neural networks, we then attempted to classify the users' preferences for the 3D visual stimuli based on their EEGs. Here, we show that deep learning outperforms a variety of other machine learning classifiers for this EEG-based preference classification task, particularly in a highly challenging dataset with large inter- and intra-subject variability.

  20. An improved biometrics-based remote user authentication scheme with user anonymity.

    Science.gov (United States)

    Khan, Muhammad Khurram; Kumari, Saru

    2013-01-01

    The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore, the authors propose an improved scheme eradicating the flaws of An's scheme. Then a detailed security analysis of the proposed scheme is presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability.

  1. Knowledge-based approach to video content classification

    Science.gov (United States)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  2. A Generalized Weight-Based Particle-In-Cell Simulation Scheme

    International Nuclear Information System (INIS)

    Lee, W.W.; Jenkins, T.G.; Ethier, S.

    2010-01-01

    A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development, when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.

  3. A robust anonymous biometric-based remote user authentication scheme using smart cards

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Das

    2015-04-01

    Full Text Available Several biometric-based remote user authentication schemes using smart cards have been proposed in the literature in order to remedy the security weaknesses of user authentication systems. In 2012, An proposed an enhanced biometric-based remote user authentication scheme using smart cards. It was claimed that the proposed scheme is secure against the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack, and provides mutual authentication between the user and the server. In this paper, we first analyze the security of An’s scheme and we show that this scheme has three serious security flaws in its design: (i) a flaw in the user’s biometric verification during the login phase, (ii) a flaw in the user’s password verification during the login and authentication phases, and (iii) a flaw in the user’s ability to change the password locally at any time. Due to these security flaws, An’s scheme cannot support mutual authentication between the user and the server. Further, we show that An’s scheme cannot prevent insider attack. In order to remedy the security weaknesses found in An’s scheme, we propose a new robust and secure anonymous biometric-based remote user authentication scheme using smart cards. Through informal and formal security analysis, we show that our scheme is secure against all possible known attacks, including the attacks found in An’s scheme. The simulation results of our scheme using the widely-accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool ensure that our scheme is secure against passive and active attacks. In addition, our scheme is also comparable in terms of the communication and computational overheads with An’s scheme and other related existing schemes. As a result, our scheme is more appropriate for practical applications compared to other approaches.

  4. Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification

    Directory of Open Access Journals (Sweden)

    Stefan Dech

    2012-09-01

    Full Text Available We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly, and comparable derivation of LC and LU information, with minimized manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types. TWOPAC enables not only pixel-based classification, but also allows classification based on object-based characteristics. Classification is based on a Decision Tree (DT) approach, for which the well-known C5.0 code has been implemented, which builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier based on a C5.0-retrieved ASCII file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in preferably a short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC’s functionality to process geospatial raster or vector data via web resources (server, network) enables TWOPAC’s usability independent of any commercial client or desktop software and allows for large-scale data processing on servers. Furthermore, the components of TWOPAC were built up using open source code components and are implemented as a plug-in for Quantum GIS software for easy handling of the classification process from the user’s perspective.
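    The entropy criterion underlying C5.0-style decision trees can be stated in a few lines. This is a generic sketch of information gain, not TWOPAC code; the labels are invented:

```python
# Shannon entropy of a label set, and the information gain of a candidate
# split: the entropy reduction from parent to the weighted child nodes.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    n = len(parent)
    weighted = sum(len(ch) / n * entropy(ch) for ch in children)
    return entropy(parent) - weighted

parent = ["water", "water", "forest", "forest"]        # 50/50 mix: 1 bit
assert abs(entropy(parent) - 1.0) < 1e-12

# a perfect split removes all uncertainty: gain equals the parent entropy
gain = information_gain(parent, [["water", "water"], ["forest", "forest"]])
assert abs(gain - 1.0) < 1e-12
```

    A tree builder evaluates this gain for every candidate split and greedily keeps the best one at each node.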

  5. A novel grain cluster-based homogenization scheme

    International Nuclear Information System (INIS)

    Tjahjanto, D D; Eisenlohr, P; Roters, F

    2010-01-01

    An efficient homogenization scheme, termed the relaxed grain cluster (RGC), for elasto-plastic deformations of polycrystals is presented. The scheme is based on a generalization of the grain cluster concept. A volume element consisting of eight (= 2 × 2 × 2) hexahedral grains is considered. The kinematics of the RGC scheme is formulated within a finite deformation framework, where the relaxation of the local deformation gradient of each individual grain is connected to the overall deformation gradient by the so-called interface relaxation vectors. The set of relaxation vectors is determined by the minimization of the constitutive energy (or work) density of the overall cluster. An additional energy density associated with the mismatch at the grain boundaries due to relaxations is incorporated as a penalty term into the energy minimization formulation. Effectively, this penalty term represents the kinematical condition of deformation compatibility at the grain boundaries. Simulations have been performed for a dual-phase grain cluster loaded in uniaxial tension. The results of the simulations are presented and discussed in terms of the effective stress–strain response and the overall deformation anisotropy as functions of the penalty energy parameters. In addition, the prediction of the RGC scheme is compared with predictions using other averaging schemes, as well as with the result of direct finite element (FE) simulation. The comparison indicates that the present RGC scheme is able to approximate FE simulation results of relatively fine discretization at about three orders of magnitude lower computational cost.

  6. Comparative Study between Two Schemes of Active-Control-Based Mechatronic Inerter

    Directory of Open Access Journals (Sweden)

    He Lingduo

    2017-01-01

    Full Text Available Based on the force-current and velocity-voltage analogies in the theory of electromechanical analogy, the inerter is a device that corresponds exactly to the capacitor and overcomes the natural restriction of mass; moreover, improving the ratio of the inerter's inertance to its mass is significant for mechanical network synthesis. According to the principle of the active-control-based mechatronic inerter, we present two implementation schemes: one based on a linear motor, and the other based on a ball screw and rotary motor. We introduce the implementation methods, establish theoretical models of the two schemes, and then compare the ratio of the inerter's inertance to its mass for each. Finally, we conclude that the scheme based on the ball screw and rotary motor is the better of the two.
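    A rough sketch of why the ball-screw scheme wins on the inertance-to-mass ratio: a standard result in the inerter literature gives a ball-screw inerter with flywheel inertia J and screw lead p an inertance b = (2π/p)²·J, so a small lead multiplies a tiny flywheel inertia into a large apparent mass. The numbers below are illustrative assumptions, not values from the paper:

```python
# Back-of-the-envelope inertance of a ball-screw inerter:
# b = (2*pi / lead)**2 * J, with J the flywheel moment of inertia
# and lead the screw lead (linear travel per revolution).
import math

def ballscrew_inertance(J, lead):
    """J: flywheel moment of inertia [kg*m^2]; lead: screw lead [m/rev]."""
    return (2 * math.pi / lead) ** 2 * J

J = 1e-4            # assumed small flywheel inertia
lead = 0.01         # assumed 10 mm lead
b = ballscrew_inertance(J, lead)

device_mass = 0.5   # assumed total device mass [kg]
print(b, b / device_mass)
```

    With these assumed numbers the device realizes roughly 39 kg of inertance from a half-kilogram package, i.e. an inertance-to-mass ratio far above 1, which is the property the comparison in the abstract is about.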

  7. Ototoxicity (cochleotoxicity) classifications: A review.

    Science.gov (United States)

    Crundwell, Gemma; Gomersall, Phil; Baguley, David M

    2016-01-01

    Drug-mediated ototoxicity, specifically cochleotoxicity, is a concern for patients receiving medications for the treatment of serious illness. A number of classification schemes exist, most of which are based on pure-tone audiometry, in order to assist non-audiological/non-otological specialists in the identification and monitoring of iatrogenic hearing loss. This review identifies the primary classification systems used in cochleotoxicity monitoring. By bringing together classifications published in discipline-specific literature, the paper aims to increase awareness of their relative strengths and limitations in the assessment and monitoring of ototoxic hearing loss and to indicate how future classification systems may improve upon the status quo. Literature review. PubMed identified 4878 articles containing the search term ototox*. A systematic search identified 13 key classification systems. Cochleotoxicity classification systems can be divided into those which focus on hearing change from a baseline audiogram and those which focus on the functional impact of the hearing loss. Common weaknesses of these grading scales include a lack of sensitivity to small adverse changes in hearing thresholds, a lack of high-frequency audiometry (>8 kHz), and a lack of indication of which changes are likely to be clinically significant for communication and quality of life.

  8. Artificial Mangrove Species Mapping Using Pléiades-1: An Evaluation of Pixel-Based and Object-Based Classifications with Selected Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Dezhi Wang

    2018-02-01

    Full Text Available In the dwindling natural mangrove today, mangrove reforestation projects are conducted worldwide to prevent further losses. Due to monoculture and the low survival rate of artificial mangroves, it is necessary to pay attention to mapping and monitoring them dynamically. Remote sensing techniques have been widely used to map mangrove forests due to their capacity for large-scale, accurate, efficient, and repetitive monitoring. This study evaluated the capability of a 0.5-m Pléiades-1 in classifying artificial mangrove species using both pixel-based and object-based classification schemes. For comparison, three machine learning algorithms—decision tree (DT), support vector machine (SVM), and random forest (RF)—were used as the classifiers in the pixel-based and object-based classification procedure. The results showed that both the pixel-based and object-based approaches could recognize the major discriminations between the four major artificial mangrove species. However, the object-based method had a better overall accuracy than the pixel-based method on average. For pixel-based image analysis, SVM produced the highest overall accuracy (79.63%); for object-based image analysis, RF could achieve the highest overall accuracy (82.40%), and it was also the best machine learning algorithm for classifying artificial mangroves. The patches produced by object-based image analysis approaches presented a more generalized appearance and could contiguously depict mangrove species communities. When the same machine learning algorithms were compared by McNemar’s test, a statistically significant difference in overall classification accuracy between the pixel-based and object-based classifications only existed in the RF algorithm. Regarding species, monoculture and dominant mangrove species Sonneratia apetala group 1 (SA1) as well as partly mixed and regular shape mangrove species Hibiscus tiliaceus (HT) could well be identified. However, for complex and easily
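    The McNemar's test used for the pixel-based versus object-based comparison reduces to the two discordant counts on the shared validation samples. A minimal sketch with invented counts (b = samples only classifier A got right, c = the reverse), using the continuity-corrected statistic:

```python
# McNemar's test for two classifiers evaluated on the same samples.
# chi2 = (|b - c| - 1)^2 / (b + c), compared against chi-square with 1 df;
# the survival function of chi2_1 at x is erfc(sqrt(x / 2)).
import math

def mcnemar(b, c):
    """Continuity-corrected statistic and two-sided p-value (1 df)."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

chi2, p = mcnemar(b=10, c=2)   # invented discordant counts
print(chi2, p)                 # here p < 0.05: significant disagreement
```

    Only the discordant pairs enter the statistic; samples both classifiers get right (or both get wrong) carry no information about which classifier is better.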

  9. A Multi-Classification Method of Improved SVM-based Information Fusion for Traffic Parameters Forecasting

    Directory of Open Access Journals (Sweden)

    Hongzhuan Zhao

    2016-04-01

    Full Text Available With the enrichment of perception methods, a modern transportation system has many physical objects whose states are influenced by many information factors, making it a typical Cyber-Physical System (CPS). Thus, traffic information is generally multi-sourced, heterogeneous and hierarchical. Existing research shows that accurately classifying multi-sourced traffic information during information fusion can achieve better parameter-forecasting performance. To solve the problem of accurate traffic-information classification, by analysing the characteristics of multi-sourced traffic information and using a redefined binary tree to overcome the shortcomings of the original Support Vector Machine (SVM) classification in information fusion, a multi-classification method using an improved SVM in information fusion for traffic parameter forecasting is proposed. An experiment was conducted to examine the performance of the proposed scheme, and the results reveal that the method yields more accurate and practical outcomes.

  10. A NEW CLASSIFICATION METHOD FOR GAMMA-RAY BURSTS

    International Nuclear Information System (INIS)

    Lue Houjun; Liang Enwei; Zhang Binbin; Zhang Bing

    2010-01-01

    Recent Swift observations suggest that the traditional long versus short gamma-ray burst (GRB) classification scheme does not always associate GRBs to the two physically motivated model types, i.e., Type II (massive star origin) versus Type I (compact star origin). We propose a new phenomenological classification method of GRBs by introducing a new parameter ε = E_{γ,iso,52}/E_{p,z,2}^{5/3}, where E_{γ,iso} is the isotropic gamma-ray energy (in units of 10^52 erg) and E_{p,z} is the cosmic rest-frame spectral peak energy (in units of 100 keV). For those short GRBs with 'extended emission', both quantities are defined for the short/hard spike only. With the current complete sample of GRBs with redshift and E_p measurements, the ε parameter shows a clear bimodal distribution with a separation at ε ∼ 0.03. The high-ε region encloses the typical long GRBs with high luminosity, some high-z 'rest-frame-short' GRBs (such as GRB 090423 and GRB 080913), as well as some high-z short GRBs (such as GRB 090426). All these GRBs have been claimed to be of Type II origin based on other observational properties in the literature. All the GRBs that are argued to be of Type I origin are found to be clustered in the low-ε region. They can be separated from some nearby low-luminosity long GRBs (at the 3σ level) by an additional T_90 criterion, i.e., T_{90,z} ≲ 5 s in the Swift/BAT band. We suggest that this new classification scheme can better match the physically motivated Type II/I classification scheme.
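    The proposed rule reduces to a one-line computation. A sketch with illustrative (not published) energies, using the paper's units of 10^52 erg for E_iso and 100 keV for E_p,z and the reported separation at ε ∼ 0.03:

```python
# eps = E_iso (in 1e52 erg) / E_p,z (in 100 keV) ** (5/3),
# with the bimodal split at eps ~ 0.03 separating the two populations.

def grb_epsilon(E_iso_52, E_pz_2):
    return E_iso_52 / E_pz_2 ** (5.0 / 3.0)

def grb_class(E_iso_52, E_pz_2, cut=0.03):
    """Phenomenological label from the eps ~ 0.03 separation."""
    return "high-eps" if grb_epsilon(E_iso_52, E_pz_2) > cut else "low-eps"

# a luminous long burst: E_iso ~ 1e53 erg, E_p,z ~ 300 keV -> high-eps
# a faint short burst:   E_iso ~ 1e50 erg, E_p,z ~ 500 keV -> low-eps
print(grb_class(10.0, 3.0), grb_class(0.01, 5.0))
```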

  11. A digital memories based user authentication scheme with privacy preservation.

    Directory of Open Access Journals (Sweden)

    JunLiang Liu

    Full Text Available The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attack, key-loggers, shoulder-surfing and social engineering. Based on this, a large number of new alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information, or on using extra hardware (such as a USB key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results.

  12. An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity

    Directory of Open Access Journals (Sweden)

    Muhammad Khurram Khan

    2013-01-01

    Full Text Available The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore, the authors propose an improved scheme eradicating the flaws of An’s scheme. Then a detailed security analysis of the proposed scheme is presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An’s scheme but also provides some extra features with the mere addition of two hash operations. The proposed scheme allows the user to freely change his password and also provides user anonymity with untraceability.

  13. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    Science.gov (United States)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model; thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while affecting quality of life dramatically. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.
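    As a hedged illustration of kernels restricted to non-negative features (not necessarily the kernel the authors propose), the classic histogram-intersection kernel is a well-known positive-definite kernel defined for non-negative inputs:

```python
# Histogram-intersection kernel: k(x, y) = sum_i min(x_i, y_i).
# Valid (positive-definite) for feature vectors with x_i, y_i >= 0,
# which is the same non-negativity setting the paper's kernels target.

def hist_intersection(x, y):
    return sum(min(a, b) for a, b in zip(x, y))

x = [0.2, 0.5, 0.3]    # invented non-negative feature vectors
y = [0.4, 0.1, 0.5]

k = hist_intersection(x, y)
assert abs(k - 0.6) < 1e-12          # min per coordinate: 0.2 + 0.1 + 0.3
assert hist_intersection(x, x) >= k  # self-similarity dominates
```

    Such a kernel can be plugged into any kernel classifier (e.g., an SVM) in place of the default RBF kernel when the features are non-negative by construction.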

  14. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    Science.gov (United States)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model; thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while affecting quality of life dramatically. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  15. Efficient Closed-Loop Schemes for MIMO-OFDM-Based WLANs

    Directory of Open Access Journals (Sweden)

    Jiang Yi

    2006-01-01

    Full Text Available The single-input single-output (SISO) orthogonal frequency-division multiplexing (OFDM) systems for wireless local area networks (WLAN) defined by the IEEE 802.11a standard can support data rates up to 54 Mbps. In this paper, we consider deploying two transmit and two receive antennas to increase the data rate up to 108 Mbps. Applying our recent multiple-input multiple-output (MIMO) transceiver designs, that is, the geometric mean decomposition (GMD) and the uniform channel decomposition (UCD) schemes, we propose simple and efficient closed-loop MIMO-OFDM designs for much improved performance, compared to the standard singular value decomposition (SVD) based schemes as well as the open-loop V-BLAST (vertical Bell Labs layered space-time) based counterparts. In the explicit feedback mode, precoder feedback is needed for the proposed schemes. We show that the overhead of feedback can be made very moderate by using a vector quantization method. In the time-division duplex (TDD) mode, where the channel reciprocity is exploited, our schemes turn out to be robust against the mismatch between the uplink and downlink channels. The advantages of our schemes are demonstrated via extensive numerical examples.

  16. Distance tracking scheme for seamless handover in IMS-based ...

    African Journals Online (AJOL)

    This paper proposes a fast and seamless handover scheme for systems based on IP Multimedia Subsystem (IMS) architectural framework with Universal Mobile Telecommunications System (UMTS) access network. In the scheme the location, direction and movement pattern of a Mobile Node (MN) in a network cell are ...

  17. An Improved Dynamic ID-Based Remote User Authentication with Key Agreement Scheme

    Directory of Open Access Journals (Sweden)

    Juan Qu

    2013-01-01

    Full Text Available In recent years, several dynamic ID-based remote user authentication schemes have been proposed. In 2012, Wen and Li proposed a dynamic ID-based remote user authentication with key agreement scheme. They claimed that their scheme can resist impersonation attack and insider attack and provide anonymity for the users. However, we show that Wen and Li's scheme cannot withstand insider attack, does not provide forward secrecy or anonymity for the users, and handles erroneous password login inefficiently. In this paper, we propose a novel ECC-based remote user authentication scheme which is immune to various known types of attack and is more secure and practical for mobile clients.

  18. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies on the learning ability of online SVM classification based on Markov sampling on benchmark repository datasets. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training set grows.
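    The flavor of Markov sampling for online learning can be sketched as follows: instead of drawing each training sample independently, the next sample is generated by a simple Markov chain over the training set (here, a toy acceptance rule that favors samples with larger hinge loss). This is an illustrative stand-in, not the paper's u.e.M.c. construction; the acceptance rule, step size, and regularization are assumptions.

```python
import numpy as np

def online_svm_markov(X, y, steps=2000, lr=0.01, lam=0.01, seed=0):
    """Online linear SVM (hinge-loss SGD). The next training index is a
    uniformly drawn candidate accepted with probability
    min(1, loss(candidate)/loss(current)), so harder samples recur."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    i = rng.integers(len(X))
    loss = lambda j: max(0.0, 1.0 - y[j] * X[j] @ w) + 1e-12
    for _ in range(steps):
        j = rng.integers(len(X))
        if rng.random() < min(1.0, loss(j) / loss(i)):
            i = j  # Markov transition to the candidate sample
        if y[i] * X[i] @ w < 1:          # subgradient step on the hinge loss
            w = (1 - lr * lam) * w + lr * y[i] * X[i]
        else:
            w = (1 - lr * lam) * w
    return w
```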

  19. Application of Object Based Classification and High Resolution Satellite Imagery for Savanna Ecosystem Analysis

    Directory of Open Access Journals (Sweden)

    Jane Southworth

    2010-12-01

    Full Text Available Savanna ecosystems are an important component of dryland regions and yet are exceedingly difficult to study using satellite imagery. Savannas are composed of varying amounts of trees, shrubs and grasses, and typically traditional classification schemes or vegetation indices cannot differentiate across class types. This research utilizes object-based classification (OBC) for a region in Namibia, using IKONOS imagery, to help differentiate tree canopies, and therefore woodland savanna, from shrub or grasslands. The methodology involved identifying and isolating tree canopies within the imagery; the resulting tree polygon layers had an overall accuracy of 84%. In addition, the results were scaled up to a corresponding Landsat image of the same region, and the OBC results were compared to corresponding pixel values of NDVI. The results were not compelling, indicating once more the problems of these traditional image analysis techniques for savanna ecosystems. Overall, the use of OBC holds great promise for this ecosystem and could be utilized more frequently in studies of vegetation structure.
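    For reference, the pixel-wise NDVI baseline mentioned above is computed directly from the red and near-infrared bands. A minimal sketch; the epsilon guard against division by zero is an assumption, and band calibration is omitted.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)
```

    Healthy vegetation reflects strongly in NIR and absorbs red, so NDVI approaches 1; the record's point is that trees, shrubs, and grasses can produce overlapping NDVI values, which is why object shapes (canopy polygons) carry extra information.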

  20. Distributed Classification of Localization Attacks in Sensor Networks Using Exchange-Based Feature Extraction and Classifier

    Directory of Open Access Journals (Sweden)

    Su-Zhe Wang

    2016-01-01

    Full Text Available Secure localization under different forms of attack has become an essential task in wireless sensor networks. Despite the significant research efforts in detecting the malicious nodes, the problem of localization attack type recognition has not yet been well addressed. Motivated by this concern, we propose a novel exchange-based attack classification algorithm. This is achieved by a distributed expectation maximization extractor integrated with the PECPR-MKSVM classifier. First, the mixed distribution features based on the probabilistic modeling are extracted using a distributed expectation maximization algorithm. After feature extraction, by introducing the theory from support vector machine, an extensive contractive Peaceman-Rachford splitting method is derived to build the distributed classifier that diffuses the iteration calculation among neighbor sensors. To verify the efficiency of the distributed recognition scheme, four groups of experiments were carried out under various conditions. The average success rate of the proposed classification algorithm for external attacks in the presented experiments reaches about 93.9% in some cases. These testing results demonstrate that the proposed algorithm produces a much greater recognition rate, and that it is more robust and efficient even in the presence of excessive malicious scenarios.

  1. Etiologic classification of TIA and minor stroke by A-S-C-O and causative classification system as compared to TOAST reduces the proportion of patients categorized as cause undetermined.

    Science.gov (United States)

    Desai, Jamsheed A; Abuzinadah, Ahmad R; Imoukhuede, Oje; Bernbaum, Manya L; Modi, Jayesh; Demchuk, Andrew M; Coutts, Shelagh B

    2014-01-01

    Sorting patients based on the underlying pathophysiology is central to preventing recurrent stroke after a transient ischemic attack and minor stroke (TIA-MS). The causative classification of stroke (CCS) and the A-S-C-O (A for atherosclerosis, S for small vessel disease, C for cardiac source, O for other cause) classification schemes have recently been developed. These systems have not been specifically applied to the TIA-MS population. We hypothesized that both CCS and A-S-C-O would increase the proportion of patients with a definitive etiologic mechanism for TIA-MS as compared with TOAST. Patients were analyzed from the CATCH study. A single stroke physician assigned all patients to an etiologic subtype using published algorithms for TOAST, CCS and ASCO. We compared the proportions in the various categories for each classification scheme, and then the association with stroke progression or recurrence was assessed. The TOAST, CCS and A-S-C-O classification schemes were applied in 469 TIA-MS patients. When compared to TOAST, both CCS (58.0 vs. 65.3%) and ASCO reduced the proportion of TIA and minor stroke patients classified as 'cause undetermined.' ASCO resulted in the fewest patients classified as cause undetermined. Stroke recurrence after TIA-MS is highest in patients with multiple high-risk etiologies or cryptogenic stroke classified by ASCO. © 2014 S. Karger AG, Basel.

  2. Interference mitigation enhancement of switched-based scheme in over-loaded femtocells

    KAUST Repository

    Gaaloul, Fakhreddine

    2012-06-01

    This paper proposes adequate methods to improve the interference mitigation capability of a recently investigated switched-based interference reduction scheme in short-range open-access and over-loaded femtocells. It is assumed that the available orthogonal channels for the femtocell network are distributed among operating access points in close vicinity, each of which knows its allocated channels a priori. For the case when the feedback links are capacity-limited and the available channels can be universally shared and simultaneously used, the paper presents enhanced schemes to identify a channel to serve the desired scheduled user by maintaining the interference power level within a tolerable range. They attempt either to complement the switched-based scheme with minimum-interference channel selection or to adopt different interference thresholds on the available channels, while aiming to reduce the channel examination load. The performance of the proposed schemes is quantified and then compared with that of the single-threshold switched-based scheme via numerical and simulation results. © 2012 IEEE.
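    The switch-and-examine logic described above can be sketched as a scan-and-fallback rule: examine channels against their (possibly distinct) interference thresholds and settle on the first acceptable one, falling back to the minimum-interference channel when none qualifies. This is a simplified sketch of the idea, not the paper's exact schemes; the scan order and the fallback rule are assumptions.

```python
def select_channel(interference, thresholds):
    """Return the first channel whose measured interference power is below
    its threshold; if none qualifies, fall back to the channel with the
    minimum interference."""
    for ch, (p, t) in enumerate(zip(interference, thresholds)):
        if p <= t:
            return ch  # early exit keeps the examination load low
    return min(range(len(interference)), key=lambda ch: interference[ch])
```

    Early termination is what reduces the examination load relative to always scanning every channel for the global minimum.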

  3. Multi-label literature classification based on the Gene Ontology graph

    Directory of Open Access Journals (Sweden)

    Lu Xinghua

    2008-12-01

    Full Text Available Abstract Background The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. Results In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Conclusion Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate the annotation of biomedical literature with Gene Ontology terms.
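    The top-down hierarchical strategy studied here can be sketched in a few lines: a local classifier is consulted at each node, and a term can only be predicted positive if it is reachable from the root through positively predicted ancestors, so predictions respect the ontology graph. A minimal sketch over a toy DAG; the `local_pred` callback stands in for any per-node classifier.

```python
def topdown_predict(dag_children, local_pred, root):
    """Top-down hierarchical prediction: descend from the root, expanding
    only the children of nodes the local classifier accepts."""
    positives, stack = set(), [root]
    while stack:
        node = stack.pop()
        if local_pred(node):
            positives.add(node)
            stack.extend(dag_children.get(node, []))
    return positives
```

    Note how a node whose local classifier fires can still be excluded when its parent was rejected; that pruning is what makes the output consistent with the ontology hierarchy.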

  4. A review of supervised object-based land-cover image classification

    Science.gov (United States)

    Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue

    2017-08-01

    Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial vehicle imagery).
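    The area- and point-based accuracy assessments discussed above both reduce to statistics of a confusion matrix; the two most commonly reported are overall accuracy and Cohen's kappa. A minimal NumPy sketch of both.

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy from a confusion matrix (rows: reference, cols: map)."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's kappa: agreement corrected for chance, commonly reported
    alongside overall accuracy in land-cover studies."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                           # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2     # chance agreement
    return (po - pe) / (1 - pe)
```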

  5. Using dual classifications in the development of avian wetland indices of biological integrity for wetlands in West Virginia, USA.

    Science.gov (United States)

    Veselka, Walter; Anderson, James T; Kordek, Walter S

    2010-05-01

    Considerable resources are being used to develop and implement bioassessment methods for wetlands to ensure that "biological integrity" is maintained under the United States Clean Water Act. Previous research has demonstrated that avian composition is susceptible to human impairments at multiple spatial scales. Using a site-specific disturbance gradient, we built avian wetland indices of biological integrity (AW-IBI) specific to two wetland classification schemes, one based on vegetative structure and the other based on the wetland's position in the landscape and sources of water. The resulting class-specific AW-IBI comprised one to four metrics that varied in their sensitivity to the disturbance gradient. Some of these metrics were specific to only one of the classification schemes, whereas others could discriminate varying levels of disturbance regardless of classification scheme. Overall, all of the derived biological indices specific to the vegetative structure-based classes of wetlands had a significant relation with the disturbance gradient; however, the biological index derived for floodplain wetlands exhibited a more consistent response to a local disturbance gradient. We suspect that the consistency of this response is due to the inherent connectivity of available habitat in floodplain wetlands.

  6. Micro-Doppler Based Classification of Human Aquatic Activities via Transfer Learning of Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Jinhee Park

    2016-11-01

    Full Text Available Accurate classification of human aquatic activities using radar has a variety of potential applications such as rescue operations and border patrols. Nevertheless, the classification of activities on water using radar has not been extensively studied, unlike the case on dry ground, due to its unique challenges. Namely, not only is the radar cross section of a human on water small, but the micro-Doppler signatures are much noisier due to water drops and waves. In this paper, we first investigate whether discriminative signatures could be obtained for activities on water through a simulation study. Then, we show how we can effectively achieve high classification accuracy by applying deep convolutional neural networks (DCNN) directly to the spectrogram of real measurement data. From the five-fold cross-validation on our dataset, which consists of five aquatic activities, we report that the conventional feature-based scheme only achieves an accuracy of 45.1%. In contrast, the DCNN trained using only the collected data attains 66.7%, and the transfer learned DCNN, which takes a DCNN pre-trained on an RGB image dataset and fine-tunes the parameters using the collected data, achieves a much higher 80.3%, which is a significant performance boost.
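    The spectrogram that the DCNN consumes is a short-time Fourier transform of the radar return, rendered as a time-frequency image. A minimal NumPy sketch; the Hann window, window length, and hop size are illustrative assumptions.

```python
import numpy as np

def spectrogram(x, win=128, hop=64):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier
    transform; rows are frequency bins, columns are time frames."""
    frames = np.stack([x[i:i + win] * np.hanning(win)
                       for i in range(0, len(x) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T
```

    Micro-Doppler signatures appear as time-varying sidebands around the body's bulk Doppler line in this image, which is what makes a 2D CNN a natural classifier for them.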

  7. Cost-based droop scheme with lower generation costs for microgrids

    DEFF Research Database (Denmark)

    Nutkani, Inam Ullah; Loh, Poh Chiang; Blaabjerg, Frede

    2014-01-01

    Traditional droop schemes share load demand among distributed generators (DGs) based on the DG kilovolt-ampere (kVA) ratings. Other factors like generation costs, efficiencies and emission penalties at different load demands have not been considered. This omission might not be appropriate if different types of DGs are present in the microgrids. As an alternative, this study proposes a cost-based droop scheme, whose objective is to reduce a generation cost function realised with various DG operating characteristics taken into consideration. Where desired, proportional power sharing based on the DG kVA ratings can also be included, whose disadvantage is a slightly higher generation cost, which is still lower than that produced by the traditional droop schemes. The proposed droop scheme therefore retains all advantages of the traditional droop schemes, whereas at the same time, keeps its generation cost low. These findings have been validated in experiments.
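    The trade-off between kVA-proportional sharing and cost-minimizing sharing can be illustrated with quadratic generation cost curves: for a given demand, dispatching the DGs at equal incremental cost minimizes total cost. This is a toy sketch of that comparison, ignoring droop dynamics, line losses, and DG limits; the cost coefficients are assumptions.

```python
import numpy as np

def proportional_share(demand, kva):
    """Traditional droop outcome: load shared in proportion to kVA ratings."""
    kva = np.asarray(kva, float)
    return demand * kva / kva.sum()

def cost_optimal_share(demand, a, b):
    """Dispatch minimizing sum_i (a_i P_i^2 + b_i P_i) subject to
    sum_i P_i = demand, via the equal-incremental-cost condition
    2 a_i P_i + b_i = lambda for all i."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    lam = (demand + (b / (2 * a)).sum()) / (1 / (2 * a)).sum()
    return (lam - b) / (2 * a)
```

    For any demand, the cost-optimal dispatch never costs more than the kVA-proportional one, which is the motivation for steering the droop characteristics by cost.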

  8. Fission--fusion systems: classification and critique

    International Nuclear Information System (INIS)

    Lidsky, L.M.

    1974-01-01

    A useful classification scheme for hybrid systems is described and some common features that the scheme makes apparent are pointed out. The early history of fusion-fission systems is reviewed. Some designs are described along with advantages and disadvantages of each. The extension to low and moderate Q devices is noted. (U.S.)

  9. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.

    Science.gov (United States)

    Wang, Shangping; Ye, Jian; Zhang, Yaling

    2018-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive, which is very suitable for cloud data storage because of its fine-grained access control. Keyword-based searchable encryption enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports user attribute update: when a user's attribute needs to be updated, only that user's secret key component related to the attribute needs to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen-plaintext attack under a chosen ciphertext policy in the generic bilinear group model, and semantically secure against chosen-keyword attack under the bilinear Diffie-Hellman (BDH) assumption.

  10. Structure-based classification and ontology in chemistry

    Directory of Open Access Journals (Sweden)

    Hastings Janna

    2012-04-01

    Full Text Available Abstract Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational capabilities.

  11. Contextual segment-based classification of airborne laser scanner data

    NARCIS (Netherlands)

    Vosselman, George; Coenen, Maximilian; Rottensteiner, Franz

    2017-01-01

    Classification of point clouds is needed as a first step in the extraction of various types of geo-information from point clouds. We present a new approach to contextual classification of segmented airborne laser scanning data. Potential advantages of segment-based classification are easily offset

  12. Efficient and Provable Secure Pairing-Free Security-Mediated Identity-Based Identification Schemes

    Directory of Open Access Journals (Sweden)

    Ji-Jian Chin

    2014-01-01

    Full Text Available Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user’s secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert an RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions.

  13. Efficient and provable secure pairing-free security-mediated identity-based identification schemes.

    Science.gov (United States)

    Chin, Ji-Jian; Tan, Syh-Yuan; Heng, Swee-Huay; Phan, Raphael C-W

    2014-01-01

    Security-mediated cryptography was first introduced by Boneh et al. in 2001. The main motivation behind security-mediated cryptography was the capability to allow instant revocation of a user's secret key by necessitating the cooperation of a security mediator in any given transaction. Subsequently in 2003, Boneh et al. showed how to convert an RSA-based security-mediated encryption scheme from a traditional public key setting to an identity-based one, where certificates would no longer be required. Following these two pioneering papers, other cryptographic primitives that utilize a security-mediated approach began to surface. However, the security-mediated identity-based identification scheme (SM-IBI) was not introduced until Chin et al. in 2013 with a scheme built on bilinear pairings. In this paper, we improve on the efficiency results for SM-IBI schemes by proposing two schemes that are pairing-free and are based on well-studied complexity assumptions: the RSA and discrete logarithm assumptions.
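    A classic pairing-free, discrete-logarithm-based identification protocol of the kind such schemes build on is Schnorr identification. The sketch below uses a deliberately tiny group (p = 23, q = 11) purely for illustration; it is not the paper's SM-IBI construction, which additionally splits the secret between the user and a security mediator, and real deployments use far larger groups or elliptic curves.

```python
import secrets

# Toy Schnorr identification parameters: p = 2q + 1 and g = 4 generates
# the order-q subgroup of Z_p^* (4 = 2^2 is a quadratic residue mod 23).
p, q, g = 23, 11, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1          # prover's secret key
    return x, pow(g, x, p)                    # (secret, public key y = g^x)

def prove_commit():
    r = secrets.randbelow(q - 1) + 1
    return r, pow(g, r, p)                    # commitment t = g^r

def prove_respond(r, x, c):
    return (r + c * x) % q                    # response s = r + c*x mod q

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

    Correctness follows from g^s = g^(r + cx) = t · y^c in the order-q subgroup; security rests on the hardness of the discrete logarithm, which is why no pairing is needed.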

  14. A scheme of hidden-structure attribute-based encryption with multiple authorities

    Science.gov (United States)

    Ling, J.; Weng, A. X.

    2018-05-01

    In most CP-ABE schemes with a hidden access structure, all user attributes and key generation are managed by a single authority. Key generation efficiency decreases as the number of users increases, and the data face security issues if the sole authority is attacked. We propose a scheme of hidden-structure attribute-based encryption with multiple authorities, which introduces multiple semi-trusted attribute authorities, avoiding this threat even if one or more authorities are attacked. We also realize user revocation by managing a revocation list. Based on the DBDH assumption, we prove that our scheme achieves IND-CMA security. The analysis shows that our scheme improves key generation efficiency.

  15. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available The representation based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit the training samples of each class and all the training samples, respectively, to represent the testing sample, and subsequently conduct classification on the basis of the representation residual. The LRC method can be viewed as a “locality representation” method because it uses only the training samples of each class to represent the testing sample, so it cannot embody the effectiveness of “globality representation.” Conversely, the CRC method cannot enjoy the locality benefit of the general RBCM. We therefore propose to integrate CRC and LRC to perform more robust representation based classification. The experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
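    The "locality representation" of LRC can be sketched directly: regress the test sample on each class's training samples separately and assign the class with the smallest representation residual. A minimal NumPy sketch; CRC differs in solving a single regularized regression over the pooled training samples of all classes.

```python
import numpy as np

def lrc_predict(train_by_class, x):
    """Linear Regression Classification: represent test sample x by the
    training samples of each class (least squares) and pick the class
    with the smallest residual."""
    residuals = []
    for X in train_by_class:              # X: (n_features, n_class_samples)
        beta, *_ = np.linalg.lstsq(X, x, rcond=None)
        residuals.append(np.linalg.norm(x - X @ beta))
    return int(np.argmin(residuals))
```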

  16. EMG finger movement classification based on ANFIS

    Science.gov (United States)

    Caesarendra, W.; Tjahjowidodo, T.; Nico, Y.; Wahyudati, S.; Nurhasanah, L.

    2018-04-01

    An increasing number of people suffering from stroke has driven the rapid development of finger hand exoskeletons that enable automatic physical therapy. Prior to the development of a finger exoskeleton, an important research topic, machine learning for finger gesture classification, is addressed. This paper presents a study on EMG signal classification of 5 finger gestures as a preliminary study toward finger exoskeleton design and development in Indonesia. The EMG signals of 5 finger gestures were acquired using a Myo EMG sensor. The EMG signal features were extracted and reduced using PCA. ANFIS-based learning is used to classify the reduced features of the 5 finger gestures. The results show that the classification accuracy for the 5 finger gestures is lower than that for 7 hand gestures.
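    The PCA step used for feature reduction can be sketched via the SVD of the centered feature matrix; the number of retained components is a free parameter. A minimal sketch, independent of the ANFIS classifier that follows it in the pipeline.

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature matrix X (samples x features) onto its top-k
    principal components."""
    Xc = X - X.mean(axis=0)                   # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                      # scores in the k-dim subspace
```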

  17. Chinese Sentence Classification Based on Convolutional Neural Network

    Science.gov (United States)

    Gu, Chengwei; Wu, Ming; Zhang, Chuang

    2017-10-01

    Sentence classification is one of the significant issues in Natural Language Processing (NLP). Feature extraction is often regarded as the key point for natural language processing. Traditional machine learning approaches, such as the Naive Bayes model, cannot take high-level features into consideration. Neural networks for sentence classification can make use of contextual information to achieve better results on sentence classification tasks. In this paper, we focus on classifying Chinese sentences. Most importantly, we propose a novel Convolutional Neural Network (CNN) architecture for Chinese sentence classification. In particular, whereas most previous methods use a softmax classifier for prediction, we embed a linear support vector machine in place of softmax in the deep neural network model, minimizing a margin-based loss to get a better result. We also use tanh as the activation function, instead of ReLU. The CNN model improves the results of Chinese sentence classification tasks. Experimental results on a Chinese news title database validate the effectiveness of our model.
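    Replacing softmax with a linear SVM on top of the network amounts to training against a multiclass margin loss on the output scores, summing one hinge term per incorrect class. A minimal NumPy sketch of that loss; the margin of 1 is the usual default, and the network producing the scores is out of scope here.

```python
import numpy as np

def multiclass_hinge_loss(scores, labels, margin=1.0):
    """Margin-based (linear SVM) loss on network output scores, used in
    place of softmax cross-entropy."""
    n = scores.shape[0]
    correct = scores[np.arange(n), labels][:, None]
    m = np.maximum(0.0, scores - correct + margin)
    m[np.arange(n), labels] = 0.0          # no penalty for the true class
    return m.sum() / n
```

    The loss is zero once every true-class score beats every other class by the margin, so gradients concentrate on borderline sentences rather than on already-confident ones.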

  18. BossPro: a biometrics-based obfuscation scheme for software protection

    Science.gov (United States)

    Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham

    2013-05-01

    This paper proposes to integrate biometric-based key generation into an obfuscated interpretation algorithm to protect authentication application software from illegitimate use or reverse-engineering. This is especially necessary for mCommerce because application programmes on mobile devices, such as Smartphones and Tablet-PCs, are typically open for misuse by hackers. Therefore, the scheme proposed in this paper ensures that a correct interpretation / execution of the obfuscated program code of the authentication application requires a valid biometric-generated key of the actual person to be authenticated, in real-time. Without this key, the real semantics of the program cannot be understood by an attacker even if he/she gains access to this application code. Furthermore, the security provided by this scheme can be a vital aspect in protecting any application running on mobile devices that are increasingly used to perform business/financial or other security-related applications, but are easily lost or stolen. The scheme starts by creating a personalised copy of any application based on the biometric key generated during an enrolment process with the authenticator, as well as a nonce created at the time of communication between the client and the authenticator. The obfuscated code is then shipped to the client's mobile device and integrated with real-time biometric extracted data of the client to form the unlocking key during execution. The novelty of this scheme is achieved by the close binding of the application program to the biometric key of the client, thus making the application unusable for others. Trials and experimental results on biometric key generation, based on clients' faces, and an implemented scheme prototype, based on the Android emulator, prove the concept and novelty of this proposed scheme.
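    The binding described above, code that only yields its real semantics under the right biometric-derived key, can be caricatured with a hash-derived keystream: the personalised copy is XOR-encrypted under a key derived from the enrolled template and the session nonce, and only the same template and nonce reproduce the executable bytes. A deliberately simplified sketch; a real scheme needs a fuzzy extractor to tolerate biometric noise (plain SHA-256 does not), and all names here are illustrative.

```python
import hashlib

def biometric_key(template: bytes, nonce: bytes) -> bytes:
    """Derive an unlocking key from a biometric template and a session
    nonce (toy stand-in for a fuzzy-extractor-based key generator)."""
    return hashlib.sha256(template + nonce).digest()

def xor_obfuscate(code: bytes, key: bytes) -> bytes:
    """XOR program bytes with a hash-derived keystream; applying the same
    operation with the same key recovers the original bytes."""
    stream = b""
    counter = 0
    while len(stream) < len(code):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(c ^ s for c, s in zip(code, stream))
```

    A wrong template or a replayed (stale) nonce produces a different keystream, so the recovered bytes are garbage, which is the intended failure mode.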

  19. Preliminary Research on Grassland Fine-classification Based on MODIS

    International Nuclear Information System (INIS)

    Hu, Z W; Zhang, S; Yu, X Y; Wang, X S

    2014-01-01

    Grassland ecosystems are important for climatic regulation and for maintaining soil and water. Research on grassland monitoring methods could provide an effective reference for grassland resource investigation. In this study, we used the vegetation index method for grassland classification. Because there are several types of climate in China, we used China's Main Climate Zone Maps to divide the study region into four climate zones. Based on the grassland classification system of the first nation-wide grass resource survey in China, we established a new grassland classification system suitable only for this research. We used MODIS images as the basic data resource and the expert classifier method to perform grassland classification. Based on the 1:1,000,000 Grassland Resource Map of China, we obtained the basic distribution of all the grassland types and selected 20 samples evenly distributed in each type, then used NDVI/EVI products to summarize the spectral features of the different grassland types. Finally, we introduced other classification auxiliary data, such as elevation, accumulated temperature (AT), humidity index (HI) and rainfall. China's nation-wide grassland classification map results from merging the grassland classifications of the different climate zones. The overall classification accuracy is 60.4%. The results indicate that the expert classifier is suitable for nation-wide grassland classification, but the classification accuracy needs to be improved.

  20. On the integrity of functional brain networks in schizophrenia, Parkinson's disease, and advanced age: Evidence from connectivity-based single-subject classification.

    Science.gov (United States)

    Pläschke, Rachel N; Cieslik, Edna C; Müller, Veronika I; Hoffstaedter, Felix; Plachti, Anna; Varikuti, Deepthi P; Goosses, Mareike; Latz, Anne; Caspers, Svenja; Jockwitz, Christiane; Moebus, Susanne; Gruber, Oliver; Eickhoff, Claudia R; Reetz, Kathrin; Heller, Julia; Südmeyer, Martin; Mathys, Christian; Caspers, Julian; Grefkes, Christian; Kalenscher, Tobias; Langner, Robert; Eickhoff, Simon B

    2017-12-01

    Previous whole-brain functional connectivity studies achieved successful classifications of patients and healthy controls but only offered limited specificity as to affected brain systems. Here, we examined whether the connectivity patterns of functional systems affected in schizophrenia (SCZ), Parkinson's disease (PD), or normal aging equally translate into high classification accuracies for these conditions. We compared classification performance between pre-defined networks for each group and, for any given network, between groups. Separate support vector machine classifications of 86 SCZ patients, 80 PD patients, and 95 older adults relative to their matched healthy/young controls, respectively, were performed on functional connectivity in 12 task-based, meta-analytically defined networks using 25 replications of a nested 10-fold cross-validation scheme. Classification performance of the various networks clearly differed between conditions, as those networks that best classified one disease were usually non-informative for the other. For SCZ, but not PD, emotion-processing, empathy, and cognitive action control networks distinguished patients most accurately from controls. For PD, but not SCZ, networks subserving autobiographical or semantic memory, motor execution, and theory-of-mind cognition yielded the best classifications. In contrast, young-old classification was excellent based on all networks and outperformed both clinical classifications. Our pattern-classification approach captured associations between clinical and developmental conditions and functional network integrity with a higher level of specificity than did previous whole-brain analyses. Taken together, our results support resting-state connectivity as a marker of functional dysregulation in specific networks known to be affected by SCZ and PD, while suggesting that aging affects network integrity in a more global way. Hum Brain Mapp 38:5845-5858, 2017. © 2017 Wiley Periodicals, Inc.
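
    The nested cross-validation scheme described above can be sketched as follows, with synthetic features standing in for the functional connectivity of one network; the dataset size, the linear kernel and the C grid are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for one network's connectivity features
# (e.g. 66 edge weights for ~180 subjects, patients vs. controls).
X, y = make_classification(n_samples=180, n_features=66, random_state=0)

inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)

# Inner loop tunes the SVM's C; the outer loop estimates
# generalisation accuracy on data unseen during tuning.
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="linear")),
    param_grid={"svc__C": [0.01, 0.1, 1, 10]},
    cv=inner,
)
scores = cross_val_score(model, X, y, cv=outer)
print(f"nested-CV accuracy: {scores.mean():.2f}")
```

    The study repeated such a nested scheme 25 times with different fold splits and averaged the results, which stabilises the accuracy estimate.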

  1. Fast Schemes for Computing Similarities between Gaussian HMMs and Their Applications in Texture Image Classification

    Directory of Open Access Journals (Sweden)

    Chen Ling

    2005-01-01

    Full Text Available An appropriate definition and efficient computation of similarity (or distance measures between two stochastic models are of theoretical and practical interest. In this work, a similarity measure, that is, a modified "generalized probability product kernel," of Gaussian hidden Markov models is introduced. Two efficient schemes for computing this similarity measure are presented. The first scheme adopts a forward procedure analogous to the approach commonly used in probability evaluation of observation sequences on HMMs. The second scheme is based on the specially defined similarity transition matrix of two Gaussian hidden Markov models. Two scaling procedures are also proposed to solve the out-of-precision problem in the implementation. The effectiveness of the proposed methods has been evaluated on simulated observations with predefined model parameters, and on natural texture images. Promising experimental results have been observed.
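
    A sketch of the first scheme under the simplifying assumption of one-dimensional Gaussian emissions: the probability product kernel between emission densities enters a forward-style recursion over the joint state space of the two models. All parameter values are illustrative, and the scaling procedures from the paper are omitted.

```python
import numpy as np
from scipy.stats import norm

def gauss_product_integral(m1, s1, m2, s2):
    """Probability product kernel (rho = 1) of two Gaussian emission
    densities: the integral of N(x; m1, s1^2) * N(x; m2, s2^2) dx."""
    return norm.pdf(m1 - m2, loc=0.0, scale=np.sqrt(s1**2 + s2**2))

def hmm_similarity(pi1, A1, mu1, sig1, pi2, A2, mu2, sig2, T=10):
    """Forward-style recursion over the joint state space of two
    Gaussian HMMs, analogous to probability evaluation on a single HMM."""
    # K[i, j]: emission-density kernel between state i of model 1
    # and state j of model 2.
    K = gauss_product_integral(mu1[:, None], sig1[:, None],
                               mu2[None, :], sig2[None, :])
    alpha = (pi1[:, None] * pi2[None, :]) * K
    for _ in range(T - 1):
        alpha = (A1.T @ alpha @ A2) * K
    return alpha.sum()

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
mu_a, mu_b = np.array([0.0, 3.0]), np.array([0.1, 2.9])
sig = np.array([1.0, 1.0])

# A model scores higher against a near-identical model than against
# one whose emission means are shifted far away.
s_close = hmm_similarity(pi, A, mu_a, sig, pi, A, mu_b, sig)
s_far = hmm_similarity(pi, A, mu_a, sig, pi, A, mu_a + 5.0, sig)
assert s_close > s_far
```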

  2. Dropping out of Ethiopia’s Community Based Health Insurance scheme

    NARCIS (Netherlands)

    A.D. Mebratie (Anagaw); R.A. Sparrow (Robert); Z.Y. Debebe (Zelalem); G. Alemu (Getnet ); A.S. Bedi (Arjun Singh)

    2014-01-01

    textabstractLow contract renewal rates have been identified as one of the challenges facing the development of community based health insurance schemes (CBHI). This paper uses longitudinal household survey data to examine dropout in the case of Ethiopia’s pilot CBHI scheme, which saw enrolment

  3. An Authentication Technique Based on Classification

    Institute of Scientific and Technical Information of China (English)

    李钢; 杨杰

    2004-01-01

    We present a novel watermarking approach based on classification for authentication, in which a watermark is embedded into the host image. When the marked image is modified, the extracted watermark differs from the original watermark, and different kinds of modification lead to different extracted watermarks. In this paper, the different kinds of modification are treated as classes, and a classification algorithm is used to recognize the modifications with high probability. Simulation results show that the proposed method is promising and effective.

  4. Integrated optical 3D digital imaging based on DSP scheme

    Science.gov (United States)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is built on a parallel hardware structure combining the DSP with a field-programmable gate array (FPGA) to realize 3-D imaging, and adopts phase measurement profilometry. To pipeline fringe projection, image acquisition and fringe-pattern analysis, we developed a multi-threaded application program under the DSP/BIOS RTOS (real-time operating system), whose preemptive kernel and powerful configuration tool allow real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are presented to show the validity of the proposed scheme.
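
    The phase measurement profilometry step can be illustrated with the standard four-step phase-shifting formula; the fringe parameters below are illustrative, and the paper's DSP/FPGA pipeline is not modelled.

```python
import numpy as np

# Four fringe patterns shifted by pi/2 each: I_k = a + b*cos(theta + k*pi/2),
# where theta carries the object-induced phase to be recovered.
x = np.linspace(0, 4 * np.pi, 256)
true_phase = 0.8 * np.sin(x / 3.0)          # stand-in for object-induced phase
theta = x + true_phase
I = [1.0 + 0.5 * np.cos(theta + k * np.pi / 2) for k in range(4)]

# Four-step formula: I3 - I1 = b*sin(theta), I0 - I2 = b*cos(theta),
# so the wrapped phase is atan2(I3 - I1, I0 - I2).
phi = np.arctan2(I[3] - I[1], I[0] - I[2])

# The wrapped phase agrees with the true phase modulo 2*pi
# (phase unwrapping would recover the continuous profile).
assert np.allclose(np.exp(1j * phi), np.exp(1j * theta))
```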

  5. Performance Analysis of Virtual MIMO Relaying Schemes Based on Detect–Split–Forward

    KAUST Repository

    Al-Basit, Suhaib M.

    2014-10-29

    © 2014, Springer Science+Business Media New York. Virtual multi-input multi-output (vMIMO) schemes in wireless communication systems improve coverage, throughput, capacity, and quality of service. In this paper, we propose three uplink vMIMO relaying schemes based on detect–split–forward (DSF). In addition, we investigate the effect of several physical parameters such as distance, modulation type and number of relays. Furthermore, an adaptive vMIMO DSF scheme based on VBLAST and STBC is proposed. To this end, we provide analytical tools to evaluate the performance of the proposed vMIMO relaying schemes.

  6. Performance Analysis of Virtual MIMO Relaying Schemes Based on Detect–Split–Forward

    KAUST Repository

    Al-Basit, Suhaib M.; Al-Ghadhban, Samir; Zummo, Salam A.

    2014-01-01

    © 2014, Springer Science+Business Media New York. Virtual multi-input multi-output (vMIMO) schemes in wireless communication systems improve coverage, throughput, capacity, and quality of service. In this paper, we propose three uplink vMIMO relaying schemes based on detect–split–forward (DSF). In addition, we investigate the effect of several physical parameters such as distance, modulation type and number of relays. Furthermore, an adaptive vMIMO DSF scheme based on VBLAST and STBC is proposed. To this end, we provide analytical tools to evaluate the performance of the proposed vMIMO relaying schemes.

  7. Cost-based droop scheme with lower generation costs for microgrids

    DEFF Research Database (Denmark)

    Nutkani, I. U.; Loh, Poh Chiang; Blaabjerg, Frede

    2013-01-01

    on the DG kVA ratings. Other operating characteristics like generation costs, efficiencies and emission penalties at different loadings have not been considered. This makes existing droop schemes not well suited for standalone microgrids without a central management system, where different types of DGs...... usually exist. As an alternative, this paper proposes a cost-based droop scheme, whose objective is to reduce the overall generation cost with various DG operating characteristics taken into consideration. The proposed droop scheme therefore retains all advantages of the traditional droop schemes, while...... at the same time keeping its generation cost low. These findings have been validated through simulation and a scaled-down lab experiment....
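
    The idea of shaping droop gains by generation cost can be sketched with a toy steady-state dispatch; the cost figures and the linear gain rule below are illustrative, not the paper's scheme.

```python
# Toy cost-based droop: each DG's droop gain is scaled by its marginal
# generation cost, so cheaper units droop less steeply and therefore
# pick up more of the load at the common steady-state frequency.
def droop_dispatch(costs, total_load, f0=50.0, k=0.01):
    gains = [k * c for c in costs]      # expensive DG -> steeper droop
    # At steady state all DGs share one frequency f, with
    # P_i = (f0 - f) / gain_i and sum(P_i) = total_load.
    inv = sum(1.0 / g for g in gains)
    f = f0 - total_load / inv
    return f, [(f0 - f) / g for g in gains]

f, powers = droop_dispatch(costs=[1.0, 2.0, 4.0], total_load=70.0)
# The cheapest DG supplies the most: shares follow 1 : 1/2 : 1/4.
assert powers[0] > powers[1] > powers[2]
assert abs(sum(powers) - 70.0) < 1e-9
```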

  8. Breaking a chaos-noise-based secure communication scheme

    Science.gov (United States)

    Li, Shujun; Álvarez, Gonzalo; Chen, Guanrong; Mou, Xuanqin

    2005-03-01

    This paper studies the security of a secure communication scheme based on two discrete-time intermittently chaotic systems synchronized via a common random driving signal. Some security defects of the scheme are revealed: 1) The key space can be remarkably reduced; 2) the decryption is insensitive to the mismatch of the secret key; 3) the key-generation process is insecure against known/chosen-plaintext attacks. The first two defects mean that the scheme is not secure enough against brute-force attacks, and the third one means that an attacker can easily break the cryptosystem by approximately estimating the secret key once he has a chance to access a fragment of the generated keystream. Yet it remains to be clarified if intermittent chaos could be used for designing secure chaotic cryptosystems.

  9. An Improved Timestamp-Based Password Authentication Scheme Using Smart Cards

    OpenAIRE

    Pathan, Al-Sakib Khan; Hong, Choong Seon

    2007-01-01

    With the recent proliferation of distributed systems and networking, remote authentication has become a crucial task in many networking applications. Various schemes have been proposed so far for the two-party remote authentication; however, some of them have been proved to be insecure. In this paper, we propose an efficient timestamp-based password authentication scheme using smart cards. We show various types of forgery attacks against a previously proposed timestamp-based password authenti...

  10. Novel neural networks-based fault tolerant control scheme with fault alarm.

    Science.gov (United States)

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis functions of the radial basis function neural network (NN), which is used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, this fault-tolerant control scheme can minimize the time delay between fault occurrence and accommodation (the time delay due to fault diagnosis) and reduce the adverse effect on system performance. In addition, the scheme combines the advantages of a passive fault-tolerant control scheme with the properties of a traditional active fault-tolerant control scheme. Furthermore, it requires no additional fault detection and isolation model, which is necessary in traditional active fault-tolerant control schemes. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.
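
    The role of the radial basis function NN, approximating an unknown nonlinear term, can be sketched as follows; here least-squares weights stand in for the scheme's adaptive update law, and the target function is illustrative.

```python
import numpy as np

# Minimal RBF network: Gaussian basis functions on a grid of centres.
def rbf_features(x, centres, width=1.0):
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width**2))

x = np.linspace(-3, 3, 200)
target = np.sin(x)                     # stand-in for the unknown fault term
centres = np.linspace(-3, 3, 15)

Phi = rbf_features(x, centres)
# Least-squares weights play the role of the adaptive NN weights that
# the controller would tune online.
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ w
assert np.max(np.abs(approx - target)) < 0.05
```

    The useful property exploited in such designs is that a sufficiently rich RBF basis can approximate any continuous function on a compact set to arbitrary accuracy.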

  11. Activity classification based on inertial and barometric pressure sensors at different anatomical locations.

    Science.gov (United States)

    Moncada-Torres, A; Leuenberger, K; Gonzenbach, R; Luft, A; Gassert, R

    2014-07-01

    Miniature, wearable sensor modules are a promising technology to monitor activities of daily living (ADL) over extended periods of time. To assure both user compliance and meaningful results, the selection and placement site of sensors requires careful consideration. We investigated these aspects for the classification of 16 ADL in 6 healthy subjects under laboratory conditions using ReSense, our custom-made inertial measurement unit enhanced with a barometric pressure sensor used to capture activity-related altitude changes. Subjects wore a module on each wrist and ankle, and one on the trunk. Activities comprised whole body movements as well as gross and dextrous upper-limb activities. Wrist-module data outperformed the other locations for the three activity groups. Specifically, overall classification accuracy rates of almost 93% and more than 95% were achieved for the repeated holdout and user-specific validation methods, respectively, for all 16 activities. Including the altitude profile resulted in a considerable improvement of up to 20% in the classification accuracy for stair ascent and descent. The gyroscopes provided no useful information for activity classification under this scheme. The proposed sensor setting could allow for robust long-term activity monitoring with high compliance in different patient populations.
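
    The altitude profile used above can be derived from barometric pressure with the international barometric formula, a common choice for such sensors; the one-storey figure below is illustrative.

```python
# Relative altitude from barometric pressure via the standard
# international barometric formula (pressures in hPa).
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

# Near sea level, one storey (~3 m) corresponds to roughly a
# 0.36 hPa pressure drop, well within a barometric sensor's resolution.
h0 = pressure_to_altitude(1013.25)
h1 = pressure_to_altitude(1012.89)
assert h0 == 0.0
assert 2.5 < (h1 - h0) < 3.5
```

    Such an altitude signal is what separates stair ascent and descent from otherwise similar inertial patterns.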

  12. Activity classification based on inertial and barometric pressure sensors at different anatomical locations

    International Nuclear Information System (INIS)

    Moncada-Torres, A; Leuenberger, K; Gassert, R; Gonzenbach, R; Luft, A

    2014-01-01

    Miniature, wearable sensor modules are a promising technology to monitor activities of daily living (ADL) over extended periods of time. To assure both user compliance and meaningful results, the selection and placement site of sensors requires careful consideration. We investigated these aspects for the classification of 16 ADL in 6 healthy subjects under laboratory conditions using ReSense, our custom-made inertial measurement unit enhanced with a barometric pressure sensor used to capture activity-related altitude changes. Subjects wore a module on each wrist and ankle, and one on the trunk. Activities comprised whole body movements as well as gross and dextrous upper-limb activities. Wrist-module data outperformed the other locations for the three activity groups. Specifically, overall classification accuracy rates of almost 93% and more than 95% were achieved for the repeated holdout and user-specific validation methods, respectively, for all 16 activities. Including the altitude profile resulted in a considerable improvement of up to 20% in the classification accuracy for stair ascent and descent. The gyroscopes provided no useful information for activity classification under this scheme. The proposed sensor setting could allow for robust long-term activity monitoring with high compliance in different patient populations. (paper)

  13. A novel image encryption scheme based on the ergodicity of baker map

    Science.gov (United States)

    Ye, Ruisong; Chen, Yonghong

    2012-01-01

    Thanks to the exceptionally good properties of chaotic systems, such as sensitivity to initial conditions and control parameters, pseudo-randomness and ergodicity, chaos-based image encryption algorithms have been widely studied and developed in recent years. A novel digital image encryption scheme based on the chaotic ergodicity of the Baker map is proposed in this paper. Unlike traditional encryption schemes based on the Baker map, we permute the pixel positions by the order numbers of the corresponding approximating points in one chaotic orbit. To enhance the resistance to statistical and differential attacks, a diffusion process is incorporated in the proposed scheme as well. The proposed scheme enlarges the key space significantly to resist brute-force attack. Additionally, the distribution of gray values in the cipher-image has random-like behavior, resisting statistical analysis. The proposed scheme is also robust against cropping, tampering and noising attacks. It therefore offers a highly secure and efficient way for real-time image encryption and transmission in practice.
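
    The permutation idea, ordering pixels by the order numbers of points in a chaotic orbit, can be sketched as follows; a logistic map stands in for the Baker map, and the diffusion phase is omitted.

```python
import numpy as np

# Derive a pixel permutation from the ordering of a chaotic orbit:
# the initial condition x0 and parameter r act as the secret key.
def chaotic_permutation(n, x0=0.37, r=3.99, burn=100):
    x = x0
    for _ in range(burn):                # discard transient
        x = r * x * (1 - x)
    orbit = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        orbit[i] = x
    return np.argsort(orbit)             # order numbers of the orbit points

img = np.arange(64, dtype=np.uint8)      # stand-in 8x8 image, flattened
perm = chaotic_permutation(img.size)
cipher = img[perm]                       # scramble pixel positions

inv = np.argsort(perm)                   # decryption inverts the permutation
assert np.array_equal(cipher[inv], img)
assert not np.array_equal(cipher, img)
```

    Sensitivity to x0 and r means a slightly wrong key yields an entirely different permutation, which is what makes the key space effectively large.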

  14. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    Science.gov (United States)

    Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem

    2018-01-01

    In this paper, a hybrid heuristic scheme based on two different basis functions, i.e. log-sigmoid and Bernstein polynomial, with unknown parameters is used to solve nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated as a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found to be in sharp agreement with, both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is conducted to investigate the stability and reliability of the presented scheme.
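
    The trial-solution formulation can be sketched as follows. A generic gradient-based optimiser stands in for the paper's GA-IPA hybrid, and the ODE y' = -y^2, y(0) = 1 is an illustrative stand-in for the heat transfer equations.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 1, 30)

def trial(params, x):
    # Trial solution y(x) = 1 + x * sum_i w_i * sigmoid(a_i*x + b_i),
    # with the initial condition y(0) = 1 built in by construction.
    w, a, b = params.reshape(3, -1)
    return 1.0 + x * (w @ (1.0 / (1.0 + np.exp(-(np.outer(a, x) + b[:, None])))))

def residual(params):
    # Global error: squared ODE residual of y' = -y^2 over the grid,
    # with y' estimated by central differences.
    h = 1e-4
    y = trial(params, x)
    dy = (trial(params, x + h) - trial(params, x - h)) / (2 * h)
    return np.sum((dy + y**2) ** 2)

# BFGS stands in for the GA + interior-point hybrid of the paper.
res = minimize(residual, np.zeros(9), method="BFGS")
assert res.fun < residual(np.zeros(9))   # global error was reduced
```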

  15. Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.

    Directory of Open Access Journals (Sweden)

    Azmat Ullah

    Full Text Available In this paper, a hybrid heuristic scheme based on two different basis functions, i.e. log-sigmoid and Bernstein polynomial, with unknown parameters is used to solve nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated as a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found to be in sharp agreement with, both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is conducted to investigate the stability and reliability of the presented scheme.

  16. An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model

    Directory of Open Access Journals (Sweden)

    Guomin Zhou

    2017-01-01

    Full Text Available Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to anonymously sign a message within a group of signers (also known as a ring). Most such schemes rest on number-theoretic hardness assumptions; while these number-theoretic problems are still secure at the time of this research, the situation could change with advances in quantum computing. There is therefore a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed, who chooses some shared parameters for the other signers to participate in the signing process. This leader-participant model enhances performance because every participant, including the leader, can execute the decoding algorithm (as part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme, which is often used as a basis to construct other types of code-based signature schemes. Moreover, as a threshold ring signature scheme, our scheme is as efficient as a normal code-based ring signature.

  17. A spatiotemporal-based scheme for efficient registration-based segmentation of thoracic 4-D MRI.

    Science.gov (United States)

    Yang, Y; Van Reeth, E; Poh, C L; Tan, C H; Tham, I W K

    2014-05-01

    Dynamic three-dimensional (3-D) (four-dimensional, 4-D) magnetic resonance (MR) imaging is gaining importance in the study of pulmonary motion for respiratory diseases and of pulmonary tumor motion for radiotherapy. To perform quantitative analysis using 4-D MR images, segmentation of anatomical structures such as the lung and pulmonary tumor is required. Manual segmentation of entire thoracic 4-D MRI data, which typically contains many 3-D volumes acquired over several breathing cycles, is extremely tedious and time consuming, and suffers from high user variability. This calls for new automated segmentation schemes for 4-D MRI data. Registration-based segmentation, which uses automatic registration methods for segmentation, has been shown to segment structures accurately in 4-D data series. However, directly applying registration-based segmentation to 4-D MRI series is inefficient. Here we propose an automated 4-D registration-based segmentation scheme based on spatiotemporal information for the segmentation of thoracic 4-D MR lung images. The proposed scheme reduces the amount of computation by up to 95% while achieving segmentations of comparable accuracy to directly applying registration-based segmentation to the 4-D dataset. The scheme facilitates rapid 3-D/4-D visualization of lung and tumor motion and, potentially, the tracking of the tumor during radiation delivery.

  18. [Part-time Work and Men's Health : Results based on Routine Data of a Statutory Health Insurance Scheme].

    Science.gov (United States)

    Grobe, Thomas G

    2016-08-01

    With the introduction of a new occupational classification at the end of 2011, employment characteristics are reported by employees to social insurance agencies in Germany in more detail than in previous years. In addition to other changes, the new classification allows a distinction between full- and part-time work to be made. This provided a reason to consider the health-related aspects of part-time work on the basis of data from a statutory health insurance scheme. Our analysis is based on the data of 3.8 million employees insured with the Techniker Krankenkasse (TK), a statutory health insurance scheme, in 2012. In addition to daily information on employment situations, details of periods and diagnoses of sick leave and the drugs prescribed were available. Although approximately 50 % of women of middle to higher working age worked part-time in 2012, the corresponding percentage of men employed in part-time work was less than 10 %. Overall, part-time employees were on sick leave for fewer days than full-time employees, but among men, sick leave due to mental disorders was longer for part-time employees than for full-time employees, whereas women working part time were affected to a lesser extent by corresponding periods of absence than those working full time. The results provide indications for the assertion that men in gender-specifically atypical employment situations are more frequently affected by mental disorders. Further evidence supports this assertion. With the long-term availability of these new employment characteristics, longitudinal analyses could help to clarify this cause-effect relationship.

  19. Cardiac arrhythmia beat classification using DOST and PSO tuned SVM.

    Science.gov (United States)

    Raj, Sandeep; Ray, Kailash Chandra; Shankar, Om

    2016-11-01

    The increase in the number of deaths due to cardiovascular diseases (CVDs) has drawn significant attention to the study of electrocardiogram (ECG) signals. ECG signals are studied by experienced cardiologists for accurate and proper diagnosis, but this becomes difficult and time-consuming for long-term recordings. Various signal processing techniques have been studied to analyze the ECG signal, but they bear limitations due to the non-stationary behavior of ECG signals. Hence, this study aims to improve the classification accuracy rate and provide an automated diagnostic solution for the detection of cardiac arrhythmias. The proposed methodology consists of four stages, i.e. filtering, R-peak detection, feature extraction and classification. A wavelet-based approach is used to filter the raw ECG signal, whereas the Pan-Tompkins algorithm is used to detect the R-peaks inside the ECG signal. In the feature extraction stage, the discrete orthogonal Stockwell transform (DOST) is presented for an efficient time-frequency representation (i.e. morphological descriptors) of a time-domain signal; it retains the absolute phase information needed to distinguish the various non-stationary ECG signals. These morphological descriptors are further reduced to a lower dimensional space using principal component analysis and combined with the dynamic features (i.e. based on the RR-intervals of the ECG signals) of the input signal. This combination of two different kinds of descriptors forms the feature set of an input signal, which is classified into the subsequent categories by employing PSO-tuned support vector machines (SVM). The proposed methodology is validated on the baseline MIT-BIH arrhythmia database and evaluated under two assessment schemes, yielding an improved overall accuracy of 99.18% for sixteen classes in the category-based and 89.10% for five classes (mapped according to the AAMI standard) in the patient-based
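
    The final feature-reduction and classification stage can be sketched on synthetic data as follows; a plain grid search stands in for the PSO tuning of the SVM, and all sizes and parameter grids are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins: "morphological" descriptors (DOST-like) and
# a few "dynamic" RR-interval-like features per heartbeat.
X_morph, y = make_classification(n_samples=400, n_features=60,
                                 n_informative=10, random_state=0)
rng = np.random.default_rng(0)
X_dyn = rng.normal(size=(400, 4))

# Reduce morphological descriptors with PCA, then concatenate the
# dynamic features, mirroring the combined feature set above.
X_red = PCA(n_components=12).fit_transform(X_morph)
X = np.hstack([X_red, X_dyn])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
# Grid search over (C, gamma) as a stand-in for PSO tuning.
clf = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                   {"svc__C": [1, 10], "svc__gamma": ["scale", 0.01]}, cv=5)
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
assert acc > 0.6
```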

  20. GIS coupled Multiple Criteria based Decision Support for Classification of Urban Coastal Areas in India

    Science.gov (United States)

    Dhiman, R.; Kalbar, P.; Inamdar, A. B.

    2017-12-01

    Coastal area classification in India is a challenge for federal and state government agencies due to a fragile institutional framework, unclear directions in the implementation of coastal regulations, and violations happening at the private and government level. This work is an attempt to improve the objectivity of existing classification methods so as to synergize ecological systems and socioeconomic development in coastal cities. We developed a Geographic Information System coupled Multi-Criteria Decision Making (GIS-MCDM) approach to classify urban coastal areas, in which utility functions transform the coastal features into quantitative membership values after assessing the sensitivity of the urban coastal ecosystem. Furthermore, these membership values for coastal features are applied in different weighting schemes to derive a Coastal Area Index (CAI), which classifies the coastal areas into four distinct categories, viz. 1) No Development Zone, 2) Highly Sensitive Zone, 3) Moderately Sensitive Zone and 4) Low Sensitive Zone, based on the sensitivity of the urban coastal ecosystem. Mumbai, a coastal megacity in India, is used as a case study to demonstrate the proposed method. Finally, an uncertainty analysis using a Monte Carlo approach is carried out to validate the sensitivity of the CAI under specific multiple scenarios. Results of the CAI method show a clear demarcation of coastal areas in the GIS environment based on ecological sensitivity. The CAI provides better decision support for federal and state level agencies to classify urban coastal areas according to the regional requirements of coastal resources, considering resilience and sustainable development. The CAI method will strengthen the existing institutional framework for decision making in the classification of urban coastal areas, where the most effective coastal management options can be proposed.
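
    The CAI computation, membership values combined in a weighting scheme and binned into the four zones, can be sketched as follows; the feature names, weights and thresholds are illustrative, not those of the study.

```python
# Toy Coastal Area Index: features already mapped to [0, 1] membership
# values by utility functions are combined as a normalised weighted sum
# and binned into the four sensitivity classes named above.
def cai(memberships, weights):
    score = sum(m * w for m, w in zip(memberships, weights)) / sum(weights)
    if score >= 0.75:
        return "No Development Zone"
    if score >= 0.5:
        return "Highly Sensitive Zone"
    if score >= 0.25:
        return "Moderately Sensitive Zone"
    return "Low Sensitive Zone"

# Hypothetical features: mangrove cover, shoreline proximity, built-up density.
weights = [0.5, 0.3, 0.2]
assert cai([0.9, 0.8, 0.9], weights) == "No Development Zone"
assert cai([0.1, 0.2, 0.0], weights) == "Low Sensitive Zone"
```

    A Monte Carlo uncertainty analysis, as in the study, would resample the weights and memberships and check how often each cell's zone assignment changes.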

  1. Design of a polynomial ring based symmetric homomorphic encryption scheme

    Directory of Open Access Journals (Sweden)

    Smaranika Dasgupta

    2016-09-01

    Full Text Available Security of data, especially in clouds, has become immensely essential for present-day applications. Fully homomorphic encryption (FHE) is a great way to secure data that is used and manipulated by untrusted applications or systems. In this paper, we propose a symmetric FHE scheme based on polynomials over a ring of integers. The scheme is somewhat homomorphic due to the accumulation of noise after a few operations, and is made fully homomorphic using a refresh procedure. After a certain amount of homomorphic computation, large ciphertexts are refreshed for proper decryption. The hardness of the scheme is based on the difficulty of factorizing large integers. Also, it requires only polynomial addition, which is computationally cost effective. Experimental results are shown to support our claim.
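
    The noise-accumulation behaviour of a symmetric somewhat-homomorphic scheme can be illustrated with a toy integer construction in the DGHV style (not the paper's polynomial-ring scheme): a bit m is encrypted as c = p*q + 2*r + m and decrypted as (c mod p) mod 2, which stays correct only while the noise term remains small relative to p, motivating the refresh procedure.

```python
import random

p = 10007                      # odd secret key

def enc(m, noise=5):
    """Encrypt bit m as p*q + 2*r + m, with small random noise r."""
    q = random.randrange(1, 1000)
    r = random.randrange(0, noise)
    return p * q + 2 * r + m

def dec(c):
    """Decrypt: (c mod p) mod 2 recovers m while noise < p/2."""
    return (c % p) % 2

a, b = 1, 0
ca, cb = enc(a), enc(b)
assert dec(ca) == a and dec(cb) == b

# Ciphertext addition/multiplication act as XOR/AND on the bits;
# each operation grows the noise, so only "somewhat" homomorphic.
assert dec(ca + cb) == (a ^ b)
assert dec(ca * cb) == (a & b)
```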

  2. A New Wavelet-Based Document Image Segmentation Scheme

    Institute of Scientific and Technical Information of China (English)

    赵健; 李道京; 俞卞章; 耿军平

    2002-01-01

    Document image segmentation is very useful for printing, faxing and data processing. An algorithm is developed for segmenting and classifying document images. The feature used for classification is based on the histogram distribution patterns of the different image classes. An important attribute of the algorithm is the use of a wavelet correlation image to enhance the raw image's pattern, so the classification accuracy is improved. In this paper the document image is divided into four types: background, photo, text and graph. Firstly, the document image background is distinguished easily by a conventional method; secondly, the three remaining image types are distinguished by their typical histograms; to make the histogram features clearer, each resolution's HH wavelet subimage is added to the raw image at that resolution. Finally, the photo, text and graph regions are separated according to how well the feature fits a Laplacian distribution, using χ² and L criteria. Simulations show that classification accuracy is significantly improved. Comparison with related work shows that our algorithm provides both lower classification error rates and better visual results.

  3. Computer-aided diagnosis scheme for histological classification of clustered microcalcifications on magnification mammograms

    International Nuclear Information System (INIS)

    Nakayama, Ryohei; Uchiyama, Yoshikazu; Watanabe, Ryoji; Katsuragawa, Shigehiko; Namba, Kiyoshi; Doi, Kunio

    2004-01-01

    The histological classification of clustered microcalcifications on mammograms can be difficult, and thus often requires biopsy or follow-up. Our purpose in this study was to develop a computer-aided diagnosis scheme for identifying the histological classification of clustered microcalcifications on magnification mammograms in order to assist the radiologists' interpretation as a 'second opinion'. Our database consisted of 58 magnification mammograms, which included 35 malignant clustered microcalcifications (9 invasive carcinomas, 12 noninvasive carcinomas of the comedo type, and 14 noninvasive carcinomas of the noncomedo type) and 23 benign clustered microcalcifications (17 mastopathies and 6 fibroadenomas). The histological classifications of all clustered microcalcifications were proved by pathologic diagnosis. The clustered microcalcifications were first segmented by use of a novel filter bank and a thresholding technique. Five objective features of clustered microcalcifications were determined by taking into account the subjective features that experienced radiologists commonly use to identify possible histological classifications. The Bayes decision rule with the five objective features was employed to distinguish between the five histological classifications. The classification accuracies for distinguishing between the three malignant histological classifications were 77.8% (7/9) for invasive carcinoma, 75.0% (9/12) for noninvasive carcinoma of the comedo type, and 92.9% (13/14) for noninvasive carcinoma of the noncomedo type. The classification accuracies for distinguishing between the two benign histological classifications were 94.1% (16/17) for mastopathy and 100.0% (6/6) for fibroadenoma. This computerized method would be useful in assisting radiologists in their assessments of clustered microcalcifications

  4. Chaos-based partial image encryption scheme based on linear fractional and lifting wavelet transforms

    Science.gov (United States)

    Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya

    2017-01-01

    In this paper, a new chaos-based partial image encryption scheme is proposed, based on substitution boxes (S-boxes) constructed from a chaotic system and a Linear Fractional Transform (LFT). It encrypts only the requisite parts of the sensitive information in the Lifting Wavelet Transform (LWT) frequency domain, based on a hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Dynamic keys, rather than the fixed keys used in other approaches, control the encryption process and harden it against attack. The new S-box was constructed by mixing a chaotic map and the LFT to ensure high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid compound of the S-box and chaotic systems strengthens the overall encryption performance and enlarges the key space required to resist brute-force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem shows high performance and great potential for cryptographic applications.
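    As a minimal illustration of chaos-driven keystream generation (this is not the paper's LWT/S-box construction, and it is not cryptographically secure), a logistic map can feed a simple XOR cipher:

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x) and quantise each state to a
    byte. Toy keystream generator: NOT the paper's scheme and not secure;
    it only illustrates how a chaotic map can drive diffusion."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

plain = b"sensitive block"
ks = logistic_keystream(x0=0.3141592, r=3.99, n=len(plain))
cipher = xor_bytes(plain, ks)
# XOR with the same keystream inverts the operation:
assert xor_bytes(cipher, ks) == plain
print(cipher.hex())
```

    The initial condition x0 and parameter r play the role of the key; the sensitivity of the map to both is what motivates chaos-based designs.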

  5. ICF-based classification and measurement of functioning.

    Science.gov (United States)

    Stucki, G; Kostanjsek, N; Ustün, B; Cieza, A

    2008-09-01

    If we aim towards a comprehensive understanding of human functioning and the development of comprehensive programs to optimize the functioning of individuals and populations, we need to develop suitable measures. The approval of the International Classification of Functioning, Disability and Health (ICF) in 2001 by the 54th World Health Assembly, as the first universally shared model and classification of functioning, disability and health, therefore marks an important step in the development of measurement instruments and, ultimately, in our understanding of functioning, disability and health. The acceptance and use of the ICF as a reference framework and classification has been facilitated by its development in a worldwide, comprehensive consensus process and by the increasing evidence regarding its validity. However, the broad acceptance and use of the ICF as a reference framework and classification will also depend on the resolution of conceptual and methodological challenges relevant for the classification and measurement of functioning. This paper therefore first describes how the ICF categories can serve as building blocks for the measurement of functioning, and then the current state of development of ICF-based practical tools and international standards such as the ICF Core Sets. Finally, it illustrates how to map the world of measures to the ICF and vice versa, the methodological principles relevant for transforming information obtained with a clinical test or a patient-oriented instrument to the ICF, and the development of ICF-based clinical and self-reported measurement instruments.

  6. Interference mitigation enhancement of switched-based scheme in over-loaded femtocells

    KAUST Repository

    Gaaloul, Fakhreddine; Radaydeh, Redha Mahmoud Mesleh; Alouini, Mohamed-Slim

    2012-01-01

    -based scheme by minimum interference channel selection or adopt different interference thresholds on available channels, while aiming to reduce the channels examination load. The performance of the proposed schemes is quantified and then compared with those

  7. The "chessboard" classification scheme of mineral deposits: Mineralogy and geology from aluminum to zirconium

    Science.gov (United States)

    Dill, Harald G.

    2010-06-01

    Economic geology is a mixtum compositum of all geoscientific disciplines focused on one goal: finding new mineral deposits and enhancing their exploitation. The keystones of this mixtum compositum are geology and mineralogy, whose studies are centered around the emplacement of the ore body and the development of its minerals and rocks. In the present study, mineralogy and geology act as the x- and y-coordinates of a classification chart of mineral resources called the "chessboard" (or "spreadsheet") classification scheme. Magmatic and sedimentary lithologies, together with tectonic structures (1-D/pipes, 2-D/veins), are plotted along the x-axis in the header of the spreadsheet diagram, representing the columns of the chart. 63 commodity groups, encompassing minerals and elements, are plotted along the y-axis, forming the rows of the spreadsheet. These commodities are subjected to a tripartite subdivision into ore minerals, industrial minerals/rocks, and gemstones/ornamental stones. Further information on the various types of mineral deposits (the major ore and gangue minerals, the current models, and the mode of formation, or when and in which geodynamic setting these deposits mainly formed throughout the geological past) may be obtained from the text by simply using the code of each deposit in the chart. This code is created by combining the commodity (rows), shown by numbers plus lower-case letters, with the host rock or structure (columns), given by capital letters. Each commodity section has a short preface on the mineralogy and chemistry and ends with an outlook on its final use and the global supply situation of the raw material, which may be updated by the user through a direct link to databases available on the internet; in this case the study has been linked to the commodity database of the US Geological Survey. The internal subdivision of each commodity section corresponds to the common host rock lithologies (magmatic, sedimentary, and

  8. Voice based gender classification using machine learning

    Science.gov (United States)

    Raahul, A.; Sapthagiri, R.; Pankaj, K.; Vijayarajan, V.

    2017-11-01

    Gender identification is one of the major problems in speech analysis today: tracing gender from acoustic data such as pitch, median, and frequency. Machine learning gives promising results for classification problems across research domains, and several performance metrics exist to evaluate algorithms in a given area. We present a comparative model for evaluating five different machine learning algorithms, based on eight different metrics, for gender classification from acoustic data. The goal is to identify gender with five different algorithms: Linear Discriminant Analysis (LDA), K-Nearest Neighbour (KNN), Classification and Regression Trees (CART), Random Forest (RF), and Support Vector Machine (SVM), on the basis of eight different metrics. The main criterion in evaluating any algorithm is its performance: in classification problems the misclassification rate must be low, which is to say the accuracy rate must be high. Location and gender of a person have become very crucial in economic markets in the form of AdSense. With this comparative model, we assess the different ML algorithms and find the best fit for gender classification of acoustic data.
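    The comparative evaluation described above can be sketched with two simple stand-in classifiers on invented pitch data (the actual study uses LDA, KNN, CART, RF and SVM across eight metrics; the numbers below are illustrative only):

```python
# Toy acoustic data: (mean pitch in Hz, label). Values are illustrative,
# not drawn from the paper's dataset.
data = [(210, "female"), (195, "female"), (225, "female"), (180, "female"),
        (120, "male"), (105, "male"), (135, "male"), (150, "male")]
test = [(200, "female"), (110, "male"), (170, "female"), (140, "male")]

def knn1(train, x):
    """1-nearest-neighbour on the single pitch feature."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def threshold(train, x):
    """Classify against the midpoint of the per-class mean pitches."""
    means = {}
    for v, lbl in train:
        means.setdefault(lbl, []).append(v)
    cut = sum(sum(vs) / len(vs) for vs in means.values()) / len(means)
    return "female" if x >= cut else "male"

def accuracy(clf):
    hits = sum(clf(data, x) == y for x, y in test)
    return hits / len(test)

for name, clf in [("1-NN", knn1), ("threshold", threshold)]:
    print(f"{name}: accuracy = {accuracy(clf):.2f}")
```

    A real comparison would add the remaining metrics (precision, recall, F1, AUC, and so on) and cross-validation rather than a single fixed test split.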

  9. Automatic liver volume segmentation and fibrosis classification

    Science.gov (United States)

    Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit

    2018-02-01

    In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed tomography (CT) portal-phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction, and SVM-based classification. The data contain portal-phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis; the second group contains moderate fibrosis, severe fibrosis, and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation, and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.
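    The Dice index used to score segmentation overlap has a compact definition; a minimal sketch over voxel-coordinate sets:

```python
def dice(mask_a, mask_b):
    """Dice similarity between two binary masks given as sets of voxel
    coordinates: 2|A∩B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}    # algorithm segmentation
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}  # reference segmentation
print(dice(auto, manual))  # 2*3 / (4+4) = 0.75
```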

  10. Cluster Validity Classification Approaches Based on Geometric Probability and Application in the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    LI Jian-Wei

    2014-08-01

    Building on the cluster validity function based on geometric probability in the literature [1, 2], we propose a cluster analysis method based on geometric probability to process large amounts of data in a rectangular area. The basic idea is top-down stepwise refinement: first categories, then subcategories. At each clustering level, the cluster validity function based on geometric probability is used first to determine the clusters and the gathering direction, and then the cluster centers and cluster borders are determined. Through TM remote sensing image classification examples, the method is compared with supervised and unsupervised classification in ERDAS and with the cluster analysis method based on geometric probability in a two-dimensional square proposed in [2]. Results show that the proposed method can significantly improve classification accuracy.

  11. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    Directory of Open Access Journals (Sweden)

    Chengqi Wang

    With the growing security requirements of networks, biometric-based authentication schemes applied in multi-server environments have become more crucial and are widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, based on our cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost. It is therefore more appropriate for practical applications in remote distributed networks.

  12. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme

    Science.gov (United States)

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric-based authentication schemes applied in multi-server environments have become more crucial and are widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, based on our cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost. It is therefore more appropriate for practical applications in remote distributed networks. PMID:26866606

  14. U.S. Geological Survey ArcMap Sediment Classification tool

    Science.gov (United States)

    O'Malley, John

    2007-01-01

    The U.S. Geological Survey (USGS) ArcMap Sediment Classification tool is a custom toolbar that extends the Environmental Systems Research Institute, Inc. (ESRI) ArcGIS 9.2 Desktop application to aid in the analysis of seabed sediment classification. The tool takes as input either a point data layer with field attributes containing the percentages of gravel, sand, silt, and clay, or four raster data layers each representing a sediment percentage (0-100%) for the grain-size classes sand, gravel, silt, and clay. The tool is designed to analyze the percentage of each sediment class at a given location and classify the sediments according to either the Folk (1954, 1974) or the Shepard (1954), as modified by Schlee (1973), classification schemes. The sediment analysis tool is based upon the USGS SEDCLASS program (Poppe et al., 2004).
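    A heavily simplified, Shepard-style dominance rule can illustrate how percentage inputs map to sediment classes. The real SEDCLASS/Folk/Shepard logic uses full ternary-diagram boundaries (and Folk additionally handles gravel), so the thresholds below are illustrative assumptions only:

```python
ADJ = {"sand": "sandy", "silt": "silty", "clay": "clayey"}

def shepard_simplified(sand, silt, clay):
    """Toy Shepard-style classification from sand/silt/clay percentages.
    NOT the USGS SEDCLASS logic: just dominance-based thresholds."""
    parts = {"sand": sand, "silt": silt, "clay": clay}
    if abs(sum(parts.values()) - 100) > 1e-6:
        raise ValueError("percentages must sum to 100")
    major = max(parts, key=parts.get)
    if parts[major] >= 75:
        return major                      # end-member class
    if all(v >= 20 for v in parts.values()):
        return "sand-silt-clay"           # central mixed class
    minor = sorted(parts, key=parts.get)[1]   # second-most-abundant class
    return f"{ADJ[minor]} {major}"        # e.g. "silty sand"

print(shepard_simplified(80, 15, 5))   # sand
print(shepard_simplified(60, 30, 10))  # silty sand
```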

  15. Clastic compaction unit classification based on clay content and integrated compaction recovery using well and seismic data

    Directory of Open Access Journals (Sweden)

    Zhong Hong

    2016-11-01

    Compaction correction is a key part of paleo-geomorphic recovery methods, yet the influence of lithology on porosity evolution is not usually taken into account. Present methods merely classify lithologies as sandstone or mudstone and undertake separate porosity-depth compaction modeling for each. However, using just two lithologies is an oversimplification that cannot represent the compaction history, and in such schemes the precision of the compaction recovery is inadequate. To improve this precision, a depth compaction model has been proposed that involves both porosity and clay content. A clastic lithological compaction unit classification method, based on clay content, has been designed to identify lithological boundaries and establish sets of compaction units. On the basis of this classification, two compaction recovery methods that integrate well and seismic data are employed to extrapolate well-based compaction information outward along seismic lines and recover the paleo-topography of the clastic strata in the region. The examples presented here show that a better understanding of paleo-geomorphology can be gained by applying the proposed compaction recovery technology.
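    As background to porosity-depth compaction modeling, the classical single-lithology approach (which the paper extends with clay content) follows an Athy-type exponential porosity curve, and decompaction relies on the grain (solid) thickness being conserved during burial. The parameter values below are illustrative, not from the paper:

```python
import math

def porosity(depth_m, phi0, c_per_km):
    """Athy-type exponential porosity-depth relation phi(z) = phi0*exp(-c*z).
    phi0 and c differ by lithology (the paper refines this with clay content)."""
    return phi0 * math.exp(-c_per_km * depth_m / 1000.0)

def solid_thickness(top, base, phi0, c, dz=1.0):
    """Integrate (1 - phi) over the interval (midpoint rule) to get the
    compaction-free grain thickness, which is conserved during burial."""
    z, solid = top, 0.0
    while z < base:
        solid += (1.0 - porosity(z + dz / 2, phi0, c)) * dz
        z += dz
    return solid

# Sandstone-like parameters (illustrative): phi0 = 0.45, c = 0.27 per km
grains = solid_thickness(2000, 2100, phi0=0.45, c=0.27)
print(f"solid thickness of a 100 m layer at 2 km depth: {grains:.1f} m")
```

    Restoring the layer to a shallower depth then amounts to finding the interval that contains the same grain thickness under the shallower porosity profile.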

  16. Strong Authentication Scheme Based on Hand Geometry and Smart Card Factors

    Directory of Open Access Journals (Sweden)

    Ali A. Yassin

    2016-07-01

    In 2009, Xu et al. presented a secure, dynamic, ID-based remote user authentication method with several advantages, such as freely chosen passwords and mutual authentication. In this paper, we review the Xu–Zhu–Feng scheme and point out several shortcomings: impersonation attacks and insider attacks can be effective against it. To overcome these drawbacks, we propose a secure biometric-based remote authentication scheme using the biometric characteristics of hand geometry, aimed at withstanding well-known attacks and achieving good performance. Furthermore, our work has many crucial merits, including mutual authentication, user anonymity, freely chosen passwords, secure password changes, session key agreement, and revocation via personal biometrics, and it needs no extra device or software for hand geometry in the login phase. Additionally, our scheme is highly efficient and withstands known attacks such as password guessing, server impersonation, insider attacks, denial-of-service (DoS) attacks, replay attacks, and parallel-session attacks. Compared with other related schemes, our work is efficient in both communication and computation costs.

  17. Classification of hand eczema: clinical and aetiological types. Based on the guideline of the Danish Contact Dermatitis Group

    DEFF Research Database (Denmark)

    Johansen, Jeanne Duus; Hald, Marianne; Andersen, Bo Lasthein

    2011-01-01

    Background. No generally accepted classification scheme for hand eczema exists. The Danish Contact Dermatitis Group recently developed a guideline defining common clinical types and providing criteria for aetiological types. Objectives. To test the concepts of this guideline in a group of hand...

  18. An artificial intelligence based improved classification of two-phase flow patterns with feature extracted from acquired images.

    Science.gov (United States)

    Shanthi, C; Pappa, N

    2017-05-01

    Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy for flow patterns of gas/liquid two-phase flow using fuzzy logic and a Support Vector Machine (SVM) with Principal Component Analysis (PCA). Videos of six different flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow, and stratified flow, were recorded and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to various classification schemes, namely fuzzy logic, SVM, and SVM with PCA, in order to identify the type of flow pattern. Comparing the results, SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the other two schemes. The results of this study cover industrial application needs, including oil and gas and other gas-liquid two-phase flows. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Vision-Based Perception and Classification of Mosquitoes Using Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Masataka Fuchida

    2017-01-01

    The need for a novel automated mosquito perception and classification method has become increasingly essential in recent years, with a steeply increasing number of mosquito-borne diseases and associated casualties. Remote sensing and GIS-based methods exist for mapping potential mosquito habitats and locations prone to mosquito-borne diseases, but these methods generally do not account for species-wise identification of mosquitoes in closed-perimeter regions. Traditional methods for mosquito classification involve highly manual processes requiring tedious sample collection and supervised laboratory analysis. In this research work, we present the design and experimental validation of an automated vision-based mosquito classification module that can be deployed in closed-perimeter mosquito habitats. The module is capable of distinguishing mosquitoes from other bugs such as bees and flies by extracting morphological features, followed by support vector machine-based classification. In addition, this paper presents the results of three variants of the support vector machine classifier in the context of the mosquito classification problem. This vision-based approach presents an efficient alternative to conventional methods for mosquito surveillance, mapping, and sample image collection. Experimental results involving classification between mosquitoes and a predefined set of other bugs using multiple classification strategies demonstrate the efficacy and validity of the proposed approach, with a maximum recall of 98%.

  20. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Science.gov (United States)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule, using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme, a (full) Gibbs-type sampler that involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented, which leads to an interesting segmentation of the Austrian labour market.
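    The first-order Markov chain models underlying the clustering can be sketched by estimating transition matrices from categorical series and comparing them with a simple distance. This is a stand-in for the paper's full Bayesian mixture estimation, and the wage-state labels are invented:

```python
from itertools import product

STATES = ["unemployed", "low", "high"]  # illustrative wage-mobility states

def transition_matrix(seq, states=STATES):
    """Maximum-likelihood estimate of a first-order transition matrix from
    one categorical time series (tiny smoothing keeps empty rows defined)."""
    idx = {s: i for i, s in enumerate(states)}
    counts = [[1e-9] * len(states) for _ in states]
    for a, b in zip(seq, seq[1:]):
        counts[idx[a]][idx[b]] += 1
    return [[c / sum(row) for c in row] for row in counts]

def distance(p, q):
    """Frobenius distance between transition matrices: a crude stand-in
    for model-based (mixture) clustering of the chains."""
    return sum((p[i][j] - q[i][j]) ** 2
               for i, j in product(range(len(p)), repeat=2)) ** 0.5

stable = ["high"] * 12
mobile = ["unemployed", "low", "high", "low", "unemployed", "low", "high",
          "unemployed", "low", "high", "low", "low"]
P, Q = transition_matrix(stable), transition_matrix(mobile)
print(distance(P, Q))  # large: the two careers follow different dynamics
```

    In the actual method, each cluster has its own transition matrix and series are assigned by posterior probability rather than a hard distance threshold.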

  1. Evaluation of the Melanocytic Pathology Assessment Tool and Hierarchy for Diagnosis (MPATH-Dx) classification scheme for diagnosis of cutaneous melanocytic neoplasms: Results from the International Melanoma Pathology Study Group.

    Science.gov (United States)

    Lott, Jason P; Elmore, Joann G; Zhao, Ge A; Knezevich, Stevan R; Frederick, Paul D; Reisch, Lisa M; Chu, Emily Y; Cook, Martin G; Duncan, Lyn M; Elenitsas, Rosalie; Gerami, Pedram; Landman, Gilles; Lowe, Lori; Messina, Jane L; Mihm, Martin C; van den Oord, Joost J; Rabkin, Michael S; Schmidt, Birgitta; Shea, Christopher R; Yun, Sook Jung; Xu, George X; Piepkorn, Michael W; Elder, David E; Barnhill, Raymond L

    2016-08-01

    Pathologists use diverse terminology when interpreting melanocytic neoplasms, potentially compromising quality of care. We sought to evaluate the Melanocytic Pathology Assessment Tool and Hierarchy for Diagnosis (MPATH-Dx) scheme, a 5-category classification system for melanocytic lesions. Participants (n = 16) of the 2013 International Melanoma Pathology Study Group Workshop provided independent case-level diagnoses and treatment suggestions for 48 melanocytic lesions. Individual diagnoses (including, when necessary, least and most severe diagnoses) were mapped to corresponding MPATH-Dx classes. Interrater agreement and correlation between MPATH-Dx categorization and treatment suggestions were evaluated. Most participants were board-certified dermatopathologists (n = 15), age 50 years or older (n = 12), male (n = 9), based in the United States (n = 11), and primary academic faculty (n = 14). Overall, participants generated 634 case-level diagnoses with treatment suggestions. Mean weighted kappa coefficients for diagnostic agreement after MPATH-Dx mapping (assuming least and most severe diagnoses, when necessary) were 0.70 (95% confidence interval 0.68-0.71) and 0.72 (95% confidence interval 0.71-0.73), respectively, whereas correlation between MPATH-Dx categorization and treatment suggestions was 0.91. This was a small sample size of experienced pathologists in a testing situation. Varying diagnostic nomenclature can be classified into a concise hierarchy using the MPATH-Dx scheme. Further research is needed to determine whether this classification system can facilitate diagnostic concordance in general pathology practice and improve patient care. Copyright © 2016 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  2. Encoding atlases by randomized classification forests for efficient multi-atlas label propagation.

    Science.gov (United States)

    Zikic, D; Glocker, B; Criminisi, A

    2014-12-01

    We propose a method for multi-atlas label propagation (MALP) based on encoding the individual atlases by randomized classification forests. Most current approaches perform a non-linear registration between all atlases and the target image, followed by a sophisticated fusion scheme. While these approaches can achieve high accuracy, in general they do so at high computational cost. This might negatively affect the scalability to large databases and experimentation. To tackle this issue, we propose to use a small and deep classification forest to encode each atlas individually in reference to an aligned probabilistic atlas, resulting in an Atlas Forest (AF). Our classifier-based encoding differs from current MALP approaches, which represent each point in the atlas either directly as a single image/label value pair, or by a set of corresponding patches. At test time, each AF produces one probabilistic label estimate, and their fusion is done by averaging. Our scheme performs only one registration per target image, achieves good results with a simple fusion scheme, and allows for efficient experimentation. In contrast to standard forest schemes, in which each tree would be trained on all atlases, our approach retains the advantages of the standard MALP framework. The target-specific selection of atlases remains possible, and incorporation of new scans is straightforward without retraining. The evaluation on four different databases shows accuracy within the range of the state of the art at a significantly lower running time. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Classification of research reactors and discussion of thinking of safety regulation based on the classification

    International Nuclear Information System (INIS)

    Song Chenxiu; Zhu Lixin

    2013-01-01

    Research reactors differ widely in reactor type, use, power level, design principle, operation mode and safety performance, and they also differ significantly with respect to nuclear safety regulation. This paper introduces a classification of research reactors and discusses approaches to safety regulation based on that classification. (authors)

  4. Fuzzy-Wavelet Based Double Line Transmission System Protection Scheme in the Presence of SVC

    Science.gov (United States)

    Goli, Ravikumar; Shaik, Abdul Gafoor; Tulasi Ram, Sankara S.

    2015-06-01

    Increasing power transfer capability, efficient utilization of available transmission lines, improved power system controllability and stability, power oscillation damping, and voltage compensation have driven the development of Flexible AC Transmission System (FACTS) devices in recent decades. Shunt FACTS devices can have adverse effects on distance protection in both steady-state and transient periods. Severe under-reaching, caused by current injection at the point of connection to the system, is the most important relay problem; current absorption by the compensator, conversely, leads to relay overreach. This work presents an efficient method for fault detection, classification and location, based on wavelet transforms and a fuzzy logic technique, that is almost independent of fault impedance, fault distance and fault inception angle. The proposed protection scheme is found to be fast, reliable and accurate for various types of faults on transmission lines, with and without a Static Var Compensator at different locations and with various inception angles.

  5. Strong decays of sc̄ mesons in the covariant oscillator quark model with the Ũ(4)_DS × O(3,1)_L classification scheme

    International Nuclear Information System (INIS)

    Maeda, Tomohito; Yamada, Kenji; Oda, Masuho; Ishida, Shin

    2010-01-01

    We investigate the strong decays, with one-pseudoscalar emission, of charmed strange mesons in the covariant oscillator quark model. The wave functions of composite sc̄ mesons are constructed as irreducible representations of Ũ(4)_DS × O(3,1)_L. Through the observed masses and the results of the decay study, we discuss a novel assignment of the observed charmed strange mesons from the viewpoint of the Ũ(4)_DS × O(3,1)_L classification scheme. It is shown that D_s0*(2317) and D_s1(2460) are consistently explained as ground-state chiralons appearing in the Ũ(4)_DS × O(3,1)_L scheme. Furthermore, it is also found that the recently observed D_s1*(2710) could be described as a first excited-state chiralon. (author)

  6. Do thoraco-lumbar spinal injuries classification systems exhibit lower inter- and intra-observer agreement than other fractures classifications?: A comparison using fractures of the trochanteric area of the proximal femur as contrast model.

    Science.gov (United States)

    Urrutia, Julio; Zamora, Tomas; Klaber, Ianiv; Carmona, Maximiliano; Palma, Joaquin; Campos, Mauricio; Yurac, Ratko

    2016-04-01

    It has been postulated that the complex patterns of spinal injuries have prevented adequate agreement using thoraco-lumbar spinal injuries (TLSI) classifications; however, limb fracture classifications have also shown variable agreements. This study compared agreement using two TLSI classifications with agreement using two classifications of fractures of the trochanteric area of the proximal femur (FTAPF). Six evaluators classified the radiographs and computed tomography scans of 70 patients with acute TLSI using the Denis and the new AO Spine thoraco-lumbar injury classifications. Additionally, six evaluators classified the radiographs of 70 patients with FTAPF using the Tronzo and the AO schemes. Six weeks later, all cases were presented in a random sequence for repeat assessment. The Kappa coefficient (κ) was used to determine agreement. Inter-observer agreement: For TLSI, using the AOSpine classification, the mean κ was 0.62 (0.57-0.66) considering fracture types, and 0.55 (0.52-0.57) considering sub-types; using the Denis classification, κ was 0.62 (0.59-0.65). For FTAPF, with the AO scheme, the mean κ was 0.58 (0.54-0.63) considering fracture types and 0.31 (0.28-0.33) considering sub-types; for the Tronzo classification, κ was 0.54 (0.50-0.57). Intra-observer agreement: For TLSI, using the AOSpine scheme, the mean κ was 0.77 (0.72-0.83) considering fracture types, and 0.71 (0.67-0.76) considering sub-types; for the Denis classification, κ was 0.76 (0.71-0.81). For FTAPF, with the AO scheme, the mean κ was 0.75 (0.69-0.81) considering fracture types and 0.45 (0.39-0.51) considering sub-types; for the Tronzo classification, κ was 0.64 (0.58-0.70). Using the main types of AO classifications, inter- and intra-observer agreement of TLSI were comparable to agreement evaluating FTAPF; including sub-types, inter- and intra-observer agreement evaluating TLSI were significantly better than assessing FTAPF. Inter- and intra-observer agreements using the Denis
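    The agreement statistics quoted above can be reproduced in form with a small linearly weighted Cohen's kappa implementation; the two rating sequences below are invented, not the study's data:

```python
def weighted_kappa(rater_a, rater_b, categories):
    """Linearly weighted Cohen's kappa between two raters over ordered
    categories, as used in classification-agreement studies."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    w = lambda i, j: abs(i - j) / (k - 1)  # linear disagreement weight
    # observed and chance-expected weighted disagreement
    obs = sum(w(idx[a], idx[b]) for a, b in zip(rater_a, rater_b)) / n
    pa = [sum(1 for a in rater_a if a == c) / n for c in categories]
    pb = [sum(1 for b in rater_b if b == c) / n for c in categories]
    exp = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - obs / exp

a = ["A", "A", "B", "B", "C", "C", "A", "B"]
b = ["A", "A", "B", "C", "C", "C", "A", "B"]
print(round(weighted_kappa(a, b, ["A", "B", "C"]), 3))  # 0.862
```

    With all weights equal to 1 off the diagonal this reduces to unweighted Cohen's kappa; ordered fracture sub-types are what motivate the weighted form.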

  7. A Muon Collider scheme based on Frictional Cooling

    Energy Technology Data Exchange (ETDEWEB)

    Abramowicz, H. [Tel Aviv University, Tel Aviv (Israel); Caldwell, A. [Max-Planck-Institut fuer Physik, Munich (Germany); Galea, R. [Nevis Laboratories, Columbia University, Irvington, NY (United States)]. E-mail: galea@nevis.columbia.edu; Schlenstedt, S. [DESY, Zeuthen (Germany)]


    2005-07-11

    Muon Colliders would usher in a new era of scientific investigation in the field of high-energy particle physics. The cooling of muon beams is proving to be the greatest obstacle to the realization of a Muon Collider. Monte Carlo simulations of a muon cooling scheme based on Frictional Cooling were performed. Critical issues relating to the technical feasibility of such a scheme, which require further study, are identified. Frictional Cooling, as outlined in this paper, provides a sufficiently small six-dimensional emittance to make luminous collisions possible. It holds exciting potential for solving the problem of Muon Cooling.

  8. A Muon Collider scheme based on Frictional Cooling

    International Nuclear Information System (INIS)

    Abramowicz, H.; Caldwell, A.; Galea, R.; Schlenstedt, S.

    2005-01-01

    Muon Colliders would usher in a new era of scientific investigation in the field of high-energy particle physics. The cooling of muon beams is proving to be the greatest obstacle to the realization of a Muon Collider. Monte Carlo simulations of a muon cooling scheme based on Frictional Cooling were performed. Critical issues relating to the technical feasibility of such a scheme, which require further study, are identified. Frictional Cooling, as outlined in this paper, provides a sufficiently small six-dimensional emittance to make luminous collisions possible. It holds exciting potential for solving the problem of Muon Cooling.

  9. Radar Target Classification using Recursive Knowledge-Based Methods

    DEFF Research Database (Denmark)

    Jochumsen, Lars Wurtz

    The topic of this thesis is target classification of radar tracks from a 2D mechanically scanning coastal surveillance radar. The measurements provided by the radar are position data, and therefore the classification is mainly based on kinematic data deduced from the position. The target...... been terminated. Therefore, an update of the classification results must be made for each measurement of the target. The data for this work were collected throughout the PhD, both from radars and from other sensors such as GPS....

  10. Accurate classification of brain gliomas by discriminate dictionary learning based on projective dictionary pair learning of proton magnetic resonance spectra.

    Science.gov (United States)

    Adebileje, Sikiru Afolabi; Ghasemi, Keyvan; Aiyelabegan, Hammed Tanimowo; Saligheh Rad, Hamidreza

    2017-04-01

    Proton magnetic resonance spectroscopy is a powerful noninvasive technique that complements the structural images of cMRI and aids biomedical and clinical research by identifying and visualizing the composition of various metabolites within the tissues of interest. However, accurate classification of proton magnetic resonance spectra is still a challenging issue in the clinic due to low signal-to-noise ratio, overlapping peaks of metabolites, and the presence of background macromolecules. This paper evaluates the performance of a discriminative dictionary learning classifier based on the projective dictionary pair learning method for the task of classifying brain glioma proton magnetic resonance spectroscopy spectra, and the results were compared with sub-dictionary learning methods. The proton magnetic resonance spectroscopy data comprise a total of 150 spectra (74 healthy, 23 grade II, 23 grade III, and 30 grade IV) from two databases. The datasets from both databases were first pooled together, followed by column normalization. The Kennard-Stone algorithm was used to split the datasets into training and test sets. Performance was compared in terms of overall accuracy, sensitivity, specificity, and precision. Based on the overall accuracy of our classification scheme, the dictionary pair learning method was found to outperform the sub-dictionary learning methods (97.78% versus 68.89%). Copyright © 2016 John Wiley & Sons, Ltd.

  11. Site classification of Indian strong motion network using response spectra ratios

    Science.gov (United States)

    Chopra, Sumer; Kumar, Vikas; Choudhury, Pallabee; Yadav, R. B. S.

    2018-03-01

    In the present study, we classified the Indian strong motion sites spread across the Himalaya and the adjoining region, located on varied geological formations, based on response spectral ratios. A total of 90 sites were classified based on 395 strong motion records from 94 earthquakes recorded at these sites. The magnitudes of these earthquakes range between 2.3 and 7.7, and the hypocentral distance is less than 50 km in most cases. The predominant period obtained from the response spectral ratios is used to classify these sites. The shape and predominant peaks of the spectra at these sites match those in Japan, Italy, Iran, and at some sites in Europe, so the same classification scheme can be applied to the Indian strong motion network. We found that earlier schemes based on descriptions of near-surface geology, geomorphology, and topography were not able to capture the effect of sediment thickness. The sites are classified into seven classes (CL-I to CL-VII) with varying predominant periods and ranges, as proposed by Alessandro et al. (Bull Seismol Soc Am 102:680-695, 2012). The effects of magnitude and hypocentral distance on the shape and predominant peaks were also studied and found to be very small. The classification scheme is robust and cost-effective and can be used in region-specific attenuation relationships to account for local site effects.
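Reading the predominant period off a response spectral ratio amounts to locating the period at which the ratio peaks. A sketch with a hypothetical ratio curve (the log-normal bump centered at 0.4 s is invented for illustration):

```python
import numpy as np

def predominant_period(periods, spectral_ratio):
    """Return the period (s) at which the response spectral ratio peaks."""
    return periods[int(np.argmax(spectral_ratio))]

# Hypothetical mean response spectral ratio sampled at 50 periods, 0.01-10 s.
periods = np.logspace(-2, 1, 50)
ratio = 1.0 + 3.0 * np.exp(-((np.log(periods) - np.log(0.4)) ** 2) / 0.1)

Tp = predominant_period(periods, ratio)   # peaks near 0.4 s
```

In a scheme of this kind, `Tp` would then be binned into one of the period-range classes (e.g. CL-I to CL-VII); the class boundaries themselves are defined in the cited reference and are not reproduced here.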

  12. Universal block diagram based modeling and simulation schemes for fractional-order control systems.

    Science.gov (United States)

    Bai, Lu; Xue, Dingyü

    2017-05-08

    Universal block diagram based schemes are proposed in this paper for modeling and simulating fractional-order control systems. A fractional operator block in Simulink is designed to evaluate the fractional-order derivative and integral. Based on this block, fractional-order control systems with zero initial conditions can be modeled conveniently. For modeling a system with nonzero initial conditions, an auxiliary signal is constructed in the compensation scheme. Since the compensation scheme is very complicated, the integrator chain scheme is further proposed to simplify the modeling procedure. The accuracy and effectiveness of the schemes are assessed in the examples; the computational results show that the block diagram scheme is efficient for all Caputo fractional-order ordinary differential equations (FODEs) of any complexity, including implicit Caputo FODEs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
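One common way to evaluate a fractional-order derivative numerically, which a fractional operator block of this kind could rely on (the paper's actual block implementation is not specified here, so this is an illustrative alternative), is the Grünwald-Letnikov approximation:

```python
import math

def gl_fracdiff(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f at t:
    D^a f(t) ~ h^(-a) * sum_k (-1)^k C(a, k) f(t - k h)."""
    n = round(t / h)
    w, acc = 1.0, f(t)                    # w_0 = 1
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k      # recursive binomial coefficients
        acc += w * f(t - k * h)
    return acc / h ** alpha

# Half-order derivative of f(t) = t at t = 1; the exact value is 2/sqrt(pi).
d_half = gl_fracdiff(lambda t: t, 0.5, 1.0)
```

For smooth functions with f(0) = 0 the approximation error shrinks linearly with the step size h, so `d_half` lands very close to 2/√π ≈ 1.1284.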

  13. The family and family structure classification redefined for the current times

    Directory of Open Access Journals (Sweden)

    Rahul Sharma

    2013-01-01

    Full Text Available The family is a basic unit of study in many medical and social science disciplines. Definitions of the family have varied from country to country, and also within countries. Because of this and the changing realities of the current times, there is a felt need to redefine the family and the common family structure types for the purpose of studying the family as a factor in health and other variables of interest. A redefinition of a "family" has been proposed, and various nuances of the definition are discussed in detail. A classification scheme for the various types of family has also been put forward. A few exceptional case scenarios have been envisaged and their classification as per the new scheme is discussed, in a bid to clarify the classification scheme further. The proposed scheme should prove to be of use across various countries and cultures for broadly classifying the family structure. The unique scenarios of particular cultures can be taken into account by defining region- or culture-specific subtypes of the overall types of family structure.

  14. NIM: A Node Influence Based Method for Cancer Classification

    Directory of Open Access Journals (Sweden)

    Yiwen Wang

    2014-01-01

    Full Text Available The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancer using DNA microarrays. Our main idea is to approach the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of the training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than support vector machines, K-nearest neighbor, C4.5, naive Bayes, and CART.
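The four steps above can be sketched in a few lines. Note that the paper's node influence model is not reproduced here: the similarity-weighted degree used below, the Gaussian similarity kernel, and the synthetic expression-like data are all assumptions for illustration.

```python
import numpy as np

def rbf_sim(A, B, gamma=0.1):
    """Gaussian similarity between the sample rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nim_classify(X_train, y_train, X_test):
    S_train = rbf_sim(X_train, X_train)        # step 1: similarity matrix
    influence = S_train.sum(axis=1)            # step 2: node influence (assumed: weighted degree)
    S_test = rbf_sim(X_test, X_train)
    classes = np.unique(y_train)
    # step 3: influence-weighted similarity between each test sample and each class
    scores = np.stack([(S_test[:, y_train == c] * influence[y_train == c]).mean(axis=1)
                       for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]  # step 4: assign the best-scoring class

# Two well-separated synthetic "expression profile" clusters.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(4, 1, (20, 5))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.vstack([rng.normal(0, 1, (5, 5)), rng.normal(4, 1, (5, 5))])
pred = nim_classify(X_train, y_train, X_test)
```

The influence term up-weights training samples that sit in dense regions of their class, which is the graph-based intuition the abstract describes.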

  15. Development of a Regional Habitat Classification Scheme for the ...

    African Journals Online (AJOL)

    development, image processing techniques and field survey methods are outlined. Habitat classification, and regional-scale comparisons of relative habitat composition are described. The study demonstrates the use of remote sensing data to construct digital habitat maps for the comparison of regional habitat coverage, ...

  16. Enhanced ID-Based Authentication Scheme Using OTP in Smart Grid AMI Environment

    Directory of Open Access Journals (Sweden)

    Sang-Soo Yeo

    2014-01-01

    Full Text Available This paper presents a vulnerability analysis of the KL scheme, an ID-based authentication scheme for AMI networks attached to SCADA in the smart grid, and proposes a security-enhanced authentication scheme that satisfies forward secrecy as well as the security requirements introduced in the KL scheme and other existing schemes. The proposed scheme uses the MDMS, the supervising system located in the electrical utility, as a time-synchronizing server to synchronize smart devices at home, and conducts authentication between the smart meter and smart devices using a new secret value generated by an OTP generator every session. The proposed scheme provides forward secrecy, so it increases overall security, but its communication and computation overhead reduce its performance slightly compared with existing schemes. Nonetheless, the hardware specifications and communication bandwidth of smart devices will continue to improve, so the proposed scheme would be a good choice for a secure AMI environment.
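A time-synchronized OTP of the kind described could, for illustration, be derived from an HMAC over the current time window. This is a TOTP-style sketch; the scheme's actual OTP generator, secret provisioning, and parameters are assumptions here.

```python
import hashlib
import hmac
import struct

def otp(secret: bytes, t: float, timestep: int = 30, digits: int = 6) -> str:
    """Time-windowed HMAC one-time password (TOTP-style sketch)."""
    counter = int(t // timestep)                       # shared time-window index
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Smart meter and smart device agree as long as MDMS keeps their clocks in sync.
shared = b"session-secret-issued-by-MDMS"
assert otp(shared, t=990.0) == otp(shared, t=1019.0)   # same 30 s window
```

Because both ends only need a shared secret and a synchronized clock, no per-session key transport is required, which matches the abstract's use of the MDMS as a time server.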

  17. TENSOR MODELING BASED FOR AIRBORNE LiDAR DATA CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    N. Li

    2016-06-01

    Full Text Available Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating the features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation preserves the initial spatial structure and ensures that the neighborhood is taken into account. Based on a small number of component features, a k-nearest-neighbor classification is applied.
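A minimal sketch of the pipeline, under the assumption that the decomposition behaves like an HOSVD-style truncated SVD of the pixel-by-feature unfolding (the paper's exact decomposition, features, and rasters are not given here, so everything below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, F = 8, 8, 6                       # raster height, width, feature count
tensor = rng.normal(size=(H, W, F))     # hypothetical stack of LiDAR feature rasters
tensor[:, W // 2:, :] += 3.0            # right half: a different land-cover class

# Unfold to a (pixels x features) matrix and keep the leading components.
X = tensor.reshape(-1, F)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
components = X @ Vt[:3].T               # 3 component features per pixel

labels = (np.arange(H * W) % W >= W // 2).astype(int)
train = np.arange(H * W) % 2 == 0       # label every second pixel, predict the rest

def knn(Xtr, ytr, Xte, k=3):
    """Plain k-nearest-neighbor majority vote on the component features."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(ytr[i]).argmax() for i in nn])

pred = knn(components[train], labels[train], components[~train])
acc = (pred == labels[~train]).mean()
```

The truncated components capture the dominant variation (here, the class shift), so the kNN step separates the two halves of the raster almost perfectly.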

  18. Judgement of Design Scheme Based on Flexible Constraint in ICAD

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The concept of the flexible constraint is proposed in this paper. The solution of a flexible constraint lies in a specified range and may differ between instances of the same design scheme. The paper emphasizes how to evaluate and optimize a design scheme with flexible constraints, based on a satisfaction degree function defined on the flexible constraints. The concept of the flexible constraint is used to resolve constraint conflicts and to optimize designs in complicated constraint-based assembly design with the PFM parametrization assembly design system. An instance of gear-box design is used to verify the optimization method.

  19. Using methods from the data mining and machine learning literature for disease classification and prediction: A case study examining classification of heart failure sub-types

    Science.gov (United States)

    Austin, Peter C.; Tu, Jack V.; Ho, Jennifer E.; Levy, Daniel; Lee, Douglas S.

    2014-01-01

    Objective Physicians classify patients into those with or without a specific disease. Furthermore, there is often interest in classifying patients according to disease etiology or subtype. Classification trees are frequently used to classify patients according to the presence or absence of a disease. However, classification trees can suffer from limited accuracy. In the data-mining and machine learning literature, alternate classification schemes have been developed. These include bootstrap aggregation (bagging), boosting, random forests, and support vector machines. Study design and Setting We compared the performance of these classification methods with those of conventional classification trees to classify patients with heart failure according to the following sub-types: heart failure with preserved ejection fraction (HFPEF) vs. heart failure with reduced ejection fraction (HFREF). We also compared the ability of these methods to predict the probability of the presence of HFPEF with that of conventional logistic regression. Results We found that modern, flexible tree-based methods from the data mining literature offer substantial improvement in prediction and classification of heart failure sub-type compared to conventional classification and regression trees. However, conventional logistic regression had superior performance for predicting the probability of the presence of HFPEF compared to the methods proposed in the data mining literature. Conclusion The use of tree-based methods offers superior performance over conventional classification and regression trees for predicting and classifying heart failure subtypes in a population-based sample of patients from Ontario. However, these methods do not offer substantial improvements over logistic regression for predicting the presence of HFPEF. PMID:23384592
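The bagging idea compared in the study can be illustrated with a toy stand-in: bootstrap-resampled decision stumps aggregated by majority vote. The data, the stump learner, and all parameters below are illustrative assumptions, not the study's clinical models.

```python
import numpy as np

def fit_stump(X, y):
    """Exhaustively pick the best single-feature threshold split."""
    best = (0, 0.0, 1, 1.0)                    # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = (pol * (X[:, j] - t) > 0).astype(int)
                err = np.mean(pred != y)
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def stump_predict(stump, X):
    j, t, pol, _ = stump
    return (pol * (X[:, j] - t) > 0).astype(int)

def bagged_predict(stumps, X):
    """Majority vote over the bootstrap-trained stumps (bagging)."""
    votes = np.stack([stump_predict(s, X) for s in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)

# Two overlapping synthetic "sub-type" populations with two features each.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(1.8, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

stumps = []
for _ in range(25):                            # fit each stump on a bootstrap resample
    idx = rng.integers(0, len(X), len(X))
    stumps.append(fit_stump(X[idx], y[idx]))

acc_single = (stump_predict(fit_stump(X, y), X) == y).mean()
acc_bag = (bagged_predict(stumps, X) == y).mean()
```

Averaging over resampled learners reduces the variance of a single unstable tree, which is the mechanism behind the improvement the abstract reports for the ensemble methods.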

  20. Research on Classification of Chinese Text Data Based on SVM

    Science.gov (United States)

    Lin, Yuan; Yu, Hongzhi; Wan, Fucheng; Xu, Tao

    2017-09-01

    Data mining has important application value in today's industry and academia. Text classification is a very important technique in data mining, and there are many mature algorithms for it: KNN, NB, AB, SVM, decision trees, and other classification methods all show good classification performance. The support vector machine (SVM) is a good classifier in machine learning research. This paper studies the classification effect of the SVM method on Chinese text data and uses the support vector machine method to classify Chinese text, with the aim of combining academic research with practical application.
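As a rough sketch of SVM text classification on bag-of-words vectors, a minimal primal sub-gradient (Pegasos-style) linear SVM might look like the following; the vocabulary, term counts, and labels are hypothetical, and the paper's actual features and solver are not specified here.

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=300, seed=0):
    """Primal sub-gradient training of a linear SVM (Pegasos); labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:            # hinge-loss violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                                # only the L2 shrinkage step
                w = (1 - eta * lam) * w
    return w

# Hypothetical term-count vectors for two classes of documents
# (4-word vocabulary invented for the sketch).
docs = np.array([[3, 0, 1, 0], [2, 1, 0, 0], [4, 0, 2, 1],
                 [0, 3, 0, 2], [1, 2, 0, 3], [0, 4, 1, 2]], dtype=float)
labels = np.array([1, 1, 1, -1, -1, -1])
w = pegasos_train(docs, labels)
pred = np.where(docs @ w > 0, 1, -1)
```

In practice Chinese text also needs a word-segmentation step before vectorization, since tokens are not whitespace-delimited; that step is omitted here.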

  1. Exploiting unsupervised and supervised classification for segmentation of the pathological lung in CT

    International Nuclear Information System (INIS)

    Korfiatis, P; Costaridou, L; Kalogeropoulou, C; Petsas, T; Daoussis, D; Adonopoulos, A

    2009-01-01

    Delineation of lung fields in the presence of diffuse lung parenchymal diseases (DLPDs), such as interstitial pneumonias (IP), challenges segmentation algorithms. To deal with IP patterns affecting the lung border, an automated image texture classification scheme is proposed. The proposed segmentation scheme is based on supervised texture classification between lung tissue (normal and abnormal) and surrounding tissue (pleura and thoracic wall) in the lung border region. This region is coarsely defined around an initial estimate of the lung border, provided by means of Markov Random Field modeling and morphological operations. Subsequently, a support vector machine classifier was trained to distinguish between the above two classes of tissue, using textural features of the gray-scale and wavelet domains. Seventeen patients diagnosed with IP secondary to connective tissue diseases were examined. Segmentation performance in terms of overlap was 0.924±0.021, and for shape differentiation the mean, rms, and maximum distances were 1.663±0.816, 2.334±1.574 and 8.0515±6.549 mm, respectively. An accurate, automated scheme is proposed for segmenting abnormal lung fields in HRCT affected by IP

  2. Exploiting unsupervised and supervised classification for segmentation of the pathological lung in CT

    Science.gov (United States)

    Korfiatis, P.; Kalogeropoulou, C.; Daoussis, D.; Petsas, T.; Adonopoulos, A.; Costaridou, L.

    2009-07-01

    Delineation of lung fields in the presence of diffuse lung parenchymal diseases (DLPDs), such as interstitial pneumonias (IP), challenges segmentation algorithms. To deal with IP patterns affecting the lung border, an automated image texture classification scheme is proposed. The proposed segmentation scheme is based on supervised texture classification between lung tissue (normal and abnormal) and surrounding tissue (pleura and thoracic wall) in the lung border region. This region is coarsely defined around an initial estimate of the lung border, provided by means of Markov Random Field modeling and morphological operations. Subsequently, a support vector machine classifier was trained to distinguish between the above two classes of tissue, using textural features of the gray-scale and wavelet domains. Seventeen patients diagnosed with IP secondary to connective tissue diseases were examined. Segmentation performance in terms of overlap was 0.924±0.021, and for shape differentiation the mean, rms, and maximum distances were 1.663±0.816, 2.334±1.574 and 8.0515±6.549 mm, respectively. An accurate, automated scheme is proposed for segmenting abnormal lung fields in HRCT affected by IP

  3. On argumentation schemes and the natural classification of arguments

    NARCIS (Netherlands)

    Katzav, J.K.; Reed, C.

    2004-01-01

    We develop conceptions of arguments and of argument types that will, by serving as the basis for developing a natural classification of arguments, benefit work in artificial intelligence. Focusing only on arguments construed as the semantic entities that are the outcome of processes of reasoning, we

  4. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks has been developed as a benchmark for research on iris liveness detection.

  5. Improving the Computational Performance of Ontology-Based Classification Using Graph Databases

    Directory of Open Access Journals (Sweden)

    Thomas J. Lampoltshammer

    2015-07-01

    Full Text Available The increasing availability of very high-resolution remote sensing imagery (i.e., from satellites, airborne laser scanning, or aerial photography) represents both a blessing and a curse for researchers. The manual classification of these images, or of other similar geo-sensor data, is time-consuming and leads to subjective and non-deterministic results. Due to this fact, (semi-)automated classification approaches are in high demand in the affected research areas. Ontologies provide a proper way of automated classification for various kinds of sensor data, including remotely sensed data. However, the processing of data entities, so-called individuals, is one of the most cost-intensive computational operations within ontology reasoning. Therefore, an approach based on graph databases is proposed to overcome the issue of high time consumption in the classification task. The introduced approach shifts the classification task from the classical Protégé environment and its common reasoners to the proposed graph-based approaches. For validation, the authors tested the approach on a simulation scenario based on a real-world example. The results demonstrate a quite promising improvement in classification speed, up to 80,000 times faster than the Protégé-based approach.

  6. Tissue classifications in Monte Carlo simulations of patient dose for photon beam tumor treatments

    Science.gov (United States)

    Lin, Mu-Han; Chao, Tsi-Chian; Lee, Chung-Chi; Tung-Chieh Chang, Joseph; Tung, Chuan-Jong

    2010-07-01

    The purpose of this work was to study the calculated dose uncertainties induced by the material classification that determined the interaction cross-sections and the water-to-material stopping-power ratios. Calculations were made for a head- and neck-cancer patient treated with five intensity-modulated radiotherapy fields using 6 MV photon beams. The patient's CT images were reconstructed into two voxelized patient phantoms based on different CT-to-material classification schemes. Comparisons of the depth-dose curve of the anterior-to-posterior field and the dose-volume-histogram of the treatment plan were used to evaluate the dose uncertainties from such schemes. The results indicated that any misassignment of tissue materials could lead to a substantial dose difference, which would affect the treatment outcome. To assure an appropriate material assignment, it is desirable to have different conversion tables for various parts of the body. The assignment of stopping-power ratio should be based on the chemical composition and the density of the material.
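A CT-to-material conversion of the kind compared here can be sketched as a lookup from Hounsfield-unit ranges to a material and density assignment. The HU thresholds and mass densities below are illustrative assumptions for the sketch, not the conversion tables evaluated in the paper.

```python
# Illustrative CT-number-to-material conversion table (lo <= HU < hi).
MATERIALS = [
    (-1000, -950, "air", 0.0012),
    (-950, -120, "lung", 0.26),
    (-120, 20, "soft tissue", 1.00),
    (20, 120, "muscle/blood", 1.06),
    (120, 3000, "bone", 1.60),
]

def assign_material(hu):
    """Map a voxel's Hounsfield value to a (material, density) pair."""
    for lo, hi, name, density in MATERIALS:
        if lo <= hu < hi:
            return name, density
    raise ValueError(f"HU value {hu} outside the conversion table")

print(assign_material(-500))   # -> ('lung', 0.26)
print(assign_material(300))    # -> ('bone', 1.6)
```

The paper's point is that a misassignment at a range boundary (e.g. dense soft tissue labeled as bone) changes both the interaction cross-sections and the stopping-power ratio used for that voxel, hence the dose difference; using body-region-specific tables narrows the ambiguous ranges.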

  7. Tissue classifications in Monte Carlo simulations of patient dose for photon beam tumor treatments

    International Nuclear Information System (INIS)

    Lin, Mu-Han; Chao, Tsi-Chian; Lee, Chung-Chi; Tung-Chieh Chang, Joseph; Tung, Chuan-Jong

    2010-01-01

    The purpose of this work was to study the calculated dose uncertainties induced by the material classification that determined the interaction cross-sections and the water-to-material stopping-power ratios. Calculations were made for a head- and neck-cancer patient treated with five intensity-modulated radiotherapy fields using 6 MV photon beams. The patient's CT images were reconstructed into two voxelized patient phantoms based on different CT-to-material classification schemes. Comparisons of the depth-dose curve of the anterior-to-posterior field and the dose-volume-histogram of the treatment plan were used to evaluate the dose uncertainties from such schemes. The results indicated that any misassignment of tissue materials could lead to a substantial dose difference, which would affect the treatment outcome. To assure an appropriate material assignment, it is desirable to have different conversion tables for various parts of the body. The assignment of stopping-power ratio should be based on the chemical composition and the density of the material.

  8. A novel lost packets recovery scheme based on visual secret sharing

    Science.gov (United States)

    Lu, Kun; Shan, Hong; Li, Zhi; Niu, Zhao

    2017-08-01

    In this paper, a novel lost packet recovery scheme is proposed which encrypts the effective parts of an original packet into two shadow packets based on (2, 2)-threshold XOR-based Visual Secret Sharing (VSS). The two shadow packets, used as watermarks, are embedded into two normal data packets with digital watermark embedding technology and then sent from one sensor node to another. Each shadow packet reveals no information about the original packet, which greatly improves the security of original packet delivery. The two shadow packets, extracted from the two received normal data packets delivered by a sensor node, can recover the original packet losslessly based on XOR-based VSS. Performance analysis shows that the proposed scheme provides essential services for as long as possible in the presence of a selective forwarding attack. The proposed scheme does not increase the amount of traffic, and thus has lower energy consumption, which makes it suitable for Wireless Sensor Networks (WSNs).
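The (2, 2)-threshold XOR sharing at the heart of such a scheme can be sketched directly: one shadow is uniformly random and the other is the payload XORed with it, so either shadow alone reveals nothing while the XOR of both recovers the payload losslessly.

```python
import os

def vss_split(payload: bytes):
    """(2, 2)-threshold XOR sharing: shadow1 is uniformly random, so each
    shadow on its own is statistically independent of the payload."""
    shadow1 = os.urandom(len(payload))
    shadow2 = bytes(a ^ b for a, b in zip(payload, shadow1))
    return shadow1, shadow2

def vss_recover(shadow1: bytes, shadow2: bytes) -> bytes:
    """XOR the two shadows to recover the payload losslessly."""
    return bytes(a ^ b for a, b in zip(shadow1, shadow2))

packet = b"effective parts of the original packet"
s1, s2 = vss_split(packet)
assert vss_recover(s1, s2) == packet
```

Embedding `s1` and `s2` as watermarks in two separate carrier packets, as the abstract describes, means an attacker who selectively drops or captures one of them learns nothing and the receiver can still detect the loss.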

  9. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

    Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning, and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new, and desirable error pattern when a trajectory command is forced. A simulation study was performed on the AFC scheme with KBTEPM applied to a two-link planar manipulator, in which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Key Words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  10. A secure smart-card based authentication and key agreement scheme for telecare medicine information systems.

    Science.gov (United States)

    Lee, Tian-Fu; Liu, Chuan-Ming

    2013-06-01

    A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme to address a weakness in the authentication scheme of Wei et al., which cannot resist off-line password guessing attacks. This investigation indicates that Zhu's improved scheme has faults of its own: the authentication scheme cannot execute correctly and is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weaknesses in the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.

  11. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    Science.gov (United States)

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that matched the query. No matching case was left out of the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be derived from the postoperative images of the existing cases without adhering to any classification scheme.

  12. Hot complaint intelligent classification based on text mining

    Directory of Open Access Journals (Sweden)

    XIA Haifeng

    2013-10-01

    Full Text Available The complaint recognizer system plays an important role in ensuring the correct classification of hot complaints and improving the service quality of the telecommunications industry. Customer complaints in the telecommunications industry have the particularity that they must be handled within a limited time, which causes errors in the classification of hot complaints. This paper presents a model for intelligent classification of hot complaints based on text mining, which can classify a hot complaint into the correct level of the complaint navigation. Examples show that the model can classify complaint texts efficiently.

  13. Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data

    Directory of Open Access Journals (Sweden)

    Qingqing Xie

    2016-11-01

    Full Text Available With the wide use of mobile sensing applications, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung Cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based services (LBS) for users. However, the mobile cloud is untrustworthy. The privacy concerns force the sensitive locations to be stored on the mobile cloud in an encrypted form. However, this brings a great challenge to utilizing these data to provide efficient LBS. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (Rivest-Shamir-Adleman) algorithm and the ciphertext-policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computing and comparison efficiently for authorized users, without location privacy leakage. In the end, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against the chosen plaintext attack (CPA) and efficient enough for practical applications in terms of user-side computation overhead.

  14. Designing Structure-Dependent MPC-Based AGC Schemes Considering Network Topology

    Directory of Open Access Journals (Sweden)

    Young-Sik Jang

    2015-04-01

    Full Text Available This paper presents the important features of structure-dependent model predictive control (MPC)-based approaches for automatic generation control (AGC) considering network topology. Since power systems have various generators under different topologies, it is necessary to reflect the characteristics of generators in power networks and the control system structures in order to improve the dynamic performance of AGC. Considering control system structures is particularly important because not only can topological problems be reduced, but a computing system for AGC in a bulk-power system can also be realized. Based on these considerations, we propose new schemes in the controller for minimizing inadvertent line flows and computational burden, which strengthen the advantages of the MPC-based approach to AGC. Analysis and simulation results for the IEEE 39-bus model system show different dynamic behaviors among structure-dependent control schemes and possible improvements in computational burden via the proposed control scheme, while system operators in each balancing area consider physical load reference ramp constraints among generators.

  15. Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data.

    Science.gov (United States)

    Xie, Qingqing; Wang, Liangmin

    2016-11-25

    With the wide use of mobile sensing applications, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung Cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based services (LBS) for users. However, the mobile cloud is untrustworthy. The privacy concerns force the sensitive locations to be stored on the mobile cloud in an encrypted form. However, this brings a great challenge to utilizing these data to provide efficient LBS. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (Rivest-Shamir-Adleman) algorithm and the ciphertext-policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computing and comparison efficiently for authorized users, without location privacy leakage. In the end, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against the chosen plaintext attack (CPA) and efficient enough for practical applications in terms of user-side computation overhead.

  16. Equivalence classification by California sea lions using class-specific reinforcers.

    OpenAIRE

    Kastak, C R; Schusterman, R J; Kastak, D

    2001-01-01

    The ability to group dissimilar stimuli into categories on the basis of common stimulus relations (stimulus equivalence) or common functional relations (functional equivalence) has been convincingly demonstrated in verbally competent subjects. However, there are investigations with verbally limited humans and with nonhuman animals suggesting that the formation and use of classification schemes based on equivalence do not depend on linguistic skills. The present investigation documented th...

  17. Carnegie's New Community Engagement Classification: Affirming Higher Education's Role in Community

    Science.gov (United States)

    Driscoll, Amy

    2009-01-01

    In 2005, the Carnegie Foundation for the Advancement of Teaching (CFAT) stirred the higher education world with the announcement of a new classification for institutions that engage with community. The classification, community engagement, is the first in a set of planned classification schemes resulting from the foundation's reexamination of the…

  18. A proposed data base system for detection, classification and ...

    African Journals Online (AJOL)

    A proposed data base system for detection, classification and location of fault on electricity company of Ghana electrical distribution system. Isaac Owusu-Nyarko, Mensah-Ananoo Eugine. Abstract. No Abstract. Keywords: database, classification of fault, power, distribution system, SCADA, ECG. Full Text: EMAIL FULL TEXT ...

  19. Hydrologic-Process-Based Soil Texture Classifications for Improved Visualization of Landscape Function

    Science.gov (United States)

    Groenendyk, Derek G.; Ferré, Ty P.A.; Thorp, Kelly R.; Rice, Amy K.

    2015-01-01

    Soils lie at the interface between the atmosphere and the subsurface and are a key component controlling ecosystem services, food production, and many other processes at the Earth’s surface. There is a long-established convention for identifying and mapping soils by texture. These readily available, georeferenced soil maps and databases are used widely in environmental sciences. Here, we show that these traditional soil classifications can be inappropriate, contributing to bias and uncertainty in applications from slope stability to water resource management. We suggest a new approach to soil classification, with a detailed example from the science of hydrology. Hydrologic simulations based on common meteorological conditions were performed using HYDRUS-1D, spanning textures identified by the United States Department of Agriculture soil texture triangle. We consider these common conditions to be: drainage from saturation, infiltration onto a drained soil, and combined infiltration and drainage events. Using a k-means clustering algorithm, we created soil classifications based on the modeled hydrologic responses of these soils. The hydrologic-process-based classifications were compared to those based on soil texture and a single hydraulic property, Ks. Differences in classifications based on hydrologic response versus soil texture demonstrate that traditional soil texture classification is a poor predictor of hydrologic response. We then developed a QGIS plugin to construct soil maps combining a classification with georeferenced soil data from the Natural Resource Conservation Service. The spatial patterns of hydrologic response were more immediately informative, much simpler, and less ambiguous, for use in applications ranging from trafficability to irrigation management to flood control. The ease with which hydrologic-process-based classifications can be made, along with the improved quantitative predictions of soil responses and visualization of landscape
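The clustering step described in this record can be sketched as follows; the response vectors are synthetic stand-ins for the HYDRUS-1D drainage/infiltration metrics, and the two-cluster setup is illustrative only:

```python
# Sketch of hydrologic-process-based soil classification: cluster soils by
# simulated hydrologic response rather than by texture. The vectors below
# are synthetic stand-ins for HYDRUS-1D outputs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 12 "soils", each described by 3 response metrics, forming two loose groups.
fast = rng.normal(loc=[0.9, 0.8, 0.7], scale=0.05, size=(6, 3))  # sandy-like
slow = rng.normal(loc=[0.2, 0.3, 0.1], scale=0.05, size=(6, 3))  # clayey-like
responses = np.vstack([fast, slow])

# k-means groups soils with similar simulated responses into one class.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(responses)
print(km.labels_)
```

In the paper, each cluster label would then be mapped back onto georeferenced soil polygons to draw the process-based soil map.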

  20. A classification scheme for alternative oxidases reveals the taxonomic distribution and evolutionary history of the enzyme in angiosperms.

    Science.gov (United States)

    Costa, José Hélio; McDonald, Allison E; Arnholdt-Schmitt, Birgit; Fernandes de Melo, Dirce

    2014-11-01

    A classification scheme based on protein phylogenies and sequence harmony method was used to clarify the taxonomic distribution and evolutionary history of the alternative oxidase (AOX) in angiosperms. A large data set analyses showed that AOX1 and AOX2 subfamilies were distributed into 4 phylogenetic clades: AOX1a-c/1e, AOX1d, AOX2a-c and AOX2d. High diversity in AOX family compositions was found. While the AOX2 subfamily was not detected in monocots, the AOX1 subfamily has expanded (AOX1a-e) in the large majority of these plants. In addition, Poales AOX1b and 1d were orthologous to eudicots AOX1d and then renamed as AOX1d1 and 1d2. AOX1 or AOX2 losses were detected in some eudicot plants. Several AOX2 duplications (AOX2a-c) were identified in eudicot species, mainly in the asterids. The AOX2b originally identified in eudicots in the Fabales order (soybean, cowpea) was divergent from AOX2a-c showing some specific amino acids with AOX1d and then it was renamed as AOX2d. AOX1d and AOX2d seem to be stress-responsive, facultative and mutually exclusive among species suggesting a complementary role with an AOX1(a) in stress conditions. Based on the data collected, we present a model for the evolutionary history of AOX in angiosperms and highlight specific areas where further research would be most beneficial. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  2. Time-and-ID-Based Proxy Reencryption Scheme

    OpenAIRE

    Mtonga, Kambombo; Paul, Anand; Rho, Seungmin

    2014-01-01

    Time- and ID-based proxy reencryption scheme is proposed in this paper in which a type-based proxy reencryption enables the delegator to implement fine-grained policies with one key pair without any additional trust on the proxy. However, in some applications, the time within which the data was sampled or collected is very critical. In such applications, for example, healthcare and criminal investigations, the delegatee may be interested in only some of the messages with some types sampled wi...

  3. A Credit-Based Congestion-Aware Incentive Scheme for DTNs

    Directory of Open Access Journals (Sweden)

    Qingfeng Jiang

    2016-12-01

    Full Text Available In Delay-Tolerant Networks (DTNs), nodes may be selfish and reluctant to expend their precious resources on forwarding messages for others. Therefore, an incentive scheme is necessary to motivate selfish nodes to cooperatively forward messages. However, the current incentive schemes mainly focus on encouraging nodes to participate in message forwarding, without considering the node congestion problem. When many messages are forwarded to nodes with a high connection degree, these nodes become congested and deliberately discard messages, which seriously degrades the routing performance and reduces the benefits of other nodes. To address this problem, we propose a credit-based congestion-aware incentive scheme (CBCAIS) for DTNs. In CBCAIS, a check and punishment mechanism is proposed to prevent forwarding nodes from deliberately discarding messages. In addition, a message acceptance selection mechanism is proposed to allow a node to decide whether to accept other messages, according to its own congestion degree. The experimental results show that CBCAIS can effectively stimulate selfish nodes to cooperatively forward messages, and achieves a higher message delivery ratio with a lower overhead ratio, compared with other schemes.
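A minimal sketch of the message acceptance selection idea; the function name, buffer model, and threshold are invented for illustration, since the abstract does not specify the actual mechanism:

```python
# Illustrative CBCAIS-style acceptance rule: a node accepts a relayed
# message only if its buffer congestion after storing it stays below a
# threshold, so high-degree nodes are not overloaded. All names and the
# 0.8 threshold are hypothetical.
def accept_message(buffer_used, buffer_size, msg_size, threshold=0.8):
    """Accept iff the congestion degree after storing the message <= threshold."""
    congestion_after = (buffer_used + msg_size) / buffer_size
    return congestion_after <= threshold

print(accept_message(buffer_used=60, buffer_size=100, msg_size=10))  # accepts
print(accept_message(buffer_used=75, buffer_size=100, msg_size=10))  # rejects
```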

  4. Granular loess classification based

    International Nuclear Information System (INIS)

    Browzin, B.S.

    1985-01-01

    This paper discusses how loess might be identified by two index properties: the granulometric composition and the dry unit weight. These two indices are necessary but not always sufficient for identification of loess. On the basis of analyses of samples from three continents, it was concluded that the 0.01-0.5-mm fraction deserves the name loessial fraction. Based on the loessial fraction concept, a granulometric classification of loess is proposed. A triangular chart is used to classify loess.
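Since the abstract names only the two index properties, the following sketch shows how such a two-index screen might look; the thresholds are illustrative, not the chart boundaries of the paper:

```python
# Hypothetical two-index screen for loess candidates: a dominant
# 0.01-0.5 mm ("loessial") fraction plus a low dry unit weight.
# Both cut-off values are invented for illustration.
def is_loess_candidate(loessial_fraction, dry_unit_weight_kn_m3):
    """Flag a sample by the two index properties discussed in the paper."""
    return loessial_fraction >= 0.5 and dry_unit_weight_kn_m3 <= 14.0

print(is_loess_candidate(0.72, 12.5))  # likely loess
print(is_loess_candidate(0.30, 16.0))  # not loess
```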

  5. Etiological classification of ischemic stroke in young patients: a comparative study of TOAST, CCS, and ASCO.

    Science.gov (United States)

    Gökçal, Elif; Niftaliyev, Elvin; Asil, Talip

    2017-09-01

    Analysis of stroke subtypes is important for making treatment decisions and prognostic evaluations. The TOAST classification system is most commonly used, but the CCS and ASCO classification systems might be more useful for identifying stroke etiologies in young patients, whose strokes have a wide range of causes. In this manuscript, we aim to compare the differences in subtype classification between TOAST, CCS, and ASCO in young stroke patients. The TOAST, CCS, and ASCO classification schemes were applied to 151 patients with ischemic stroke aged 18-49 years, and the proportion of subtypes classified by each scheme was compared. For comparison, determined etiologies were defined as cases with evident and probable subtypes under the CCS scheme, and as cases with grade 1 and 2 subtypes but no other grade 1 subtype under the ASCO scheme. The McNemar test with Bonferroni correction was used to assess significance. By TOAST, 41.1% of patients' stroke etiology was classified as undetermined etiology, 19.2% as cardioembolic, 13.2% as large artery atherosclerosis, 11.3% as small vessel occlusion, and 15.2% as other causes. Compared with TOAST, both CCS and ASCO assigned fewer patients to the undetermined etiology group (30.5%). Applying the CCS and ASCO classification schemes in young stroke patients seems feasible, and using both schemes may result in fewer patients being classified as undetermined etiology. New studies with more patients and a prospective design are needed to explore this topic further.
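The significance test named above can be reproduced in a few lines: McNemar's chi-square with continuity correction on paired classifications (the paper additionally applies a Bonferroni correction across comparisons). The discordant-pair counts below are invented for illustration:

```python
# McNemar's test on whether two classification schemes disagree on
# "undetermined etiology" assignments for the same patients.
# b and c are invented discordant-pair counts, not the paper's data.
from scipy.stats import chi2

b, c = 20, 4  # b: undetermined by TOAST only; c: undetermined by CCS only

# Chi-square statistic with continuity correction, 1 degree of freedom.
stat = (abs(b - c) - 1) ** 2 / (b + c)
p_value = chi2.sf(stat, df=1)
print(stat, p_value)
```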

  6. Linking project-based mechanisms with domestic greenhouse gas emissions trading schemes

    International Nuclear Information System (INIS)

    Bygrave, S.; Bosi, M.

    2004-01-01

    Although there are a number of possible links between emission trading and project-based mechanisms, the focus of this paper is on linking domestic GHG emission trading schemes with: (1) domestic; and, (2) international (JI and CDM) GHG reduction project activities. The objective is to examine some of the challenges in linking DETs and project-based mechanisms, as well as some possible solutions to address these challenges. The link between JI / CDM and intergovernmental international emissions trading (i.e. Article 17 of the Kyoto Protocol) is defined by the Kyoto Protocol, and therefore is not covered in this paper. The paper is written in the context of: (a) countries adhering to the Kyoto Protocol and elaborating their strategies to meet their GHG emission commitments, including through the use of the emissions trading and project-based mechanisms. For example, the European Union (EU) will be commencing a GHG Emissions Trading Scheme in January 2005, and recently, the Council of ministers and the European Parliament agreed on a text for an EU Linking Directive allowing the use of JI and CDM emission units in the EU Emission Trading Scheme (EU-ETS); and (b) all countries (and/or regions within countries) with GHG emission obligations that may choose to use domestic emissions trading and project-based mechanisms to meet their GHG commitments. The paper includes the following elements: (1) an overview of the different flexibility mechanisms (i.e. GHG emissions trading and PBMs), including a brief description and comparisons between the mechanisms (Section 3); (2) an exploration of the issues that emerge when project-based mechanisms link with domestic emissions trading schemes, as well as possible solutions to address some of the challenges raised (Section 4); (3) a case study examining the EU-ETS and the EU Linking Directive on project-based mechanisms, in particular on how the EU is addressing in a practical context relevant linking issues (Section 5); (4) a

  7. Failure diagnosis using deep belief learning based health state classification

    International Nuclear Information System (INIS)

    Tamilselvan, Prasanna; Wang, Pingfeng

    2013-01-01

    Effective health diagnosis provides multifarious benefits such as improved safety, improved reliability, and reduced costs for the operation and maintenance of complex engineered systems. This paper presents a novel multi-sensor health diagnosis method using a deep belief network (DBN). The DBN has recently become a popular approach in machine learning for its promised advantages such as fast inference and the ability to encode richer and higher-order network structures. The DBN employs a hierarchical structure with multiple stacked restricted Boltzmann machines and works through a layer-by-layer successive learning process. The proposed multi-sensor health diagnosis methodology using DBN-based state classification is structured in three consecutive stages: first, defining health states and preprocessing sensory data for DBN training and testing; second, developing DBN-based classification models for diagnosis of predefined health states; third, validating DBN classification models with a testing sensory dataset. Health diagnosis using the DBN-based health state classification technique is compared with four existing diagnosis techniques. Benchmark classification problems and two engineering health diagnosis applications, aircraft engine health diagnosis and electric power transformer health diagnosis, are employed to demonstrate the efficacy of the proposed approach.
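As a rough stand-in for the stacked-RBM structure described above (a simplification, not the authors' model), scikit-learn's BernoulliRBM can be chained ahead of a simple classifier; the two "health states" below are synthetic:

```python
# Simplified DBN-flavored pipeline: two stacked restricted Boltzmann
# machines feeding a logistic-regression classifier. The "sensor" data
# and health states are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(1)
# Two health states with clearly shifted sensor ranges in [0, 1].
X = np.vstack([rng.uniform(0.0, 0.4, (50, 8)),   # state 0: healthy-like
               rng.uniform(0.6, 1.0, (50, 8))])  # state 1: faulty-like
y = np.array([0] * 50 + [1] * 50)

model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=16, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=8, random_state=0)),
    ("clf", LogisticRegression()),
]).fit(X, y)

print(model.score(X, y))
```

Note that scikit-learn's RBMs are trained greedily as unsupervised feature extractors, so this omits the fine-tuning a full DBN would perform.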

  8. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.

    Science.gov (United States)

    Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. 
Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are
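The predictor-set comparison described in this record can be sketched with boosted regression trees on synthetic data, where the "observer" predictors are constructed to carry more signal than the "remote" ones (all names and effect sizes are invented):

```python
# Sketch of comparing boosted regression trees (BRT) fitted on alternative
# predictor sets. Synthetic data: "observer" predictors are built to explain
# more of the richness signal than "remote" ones, mimicking the finding above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 400
remote = rng.normal(size=(n, 3))    # stand-in for multibeam-derived layers
observer = rng.normal(size=(n, 2))  # stand-in for direct observer habitat data
richness = 2.0 * observer[:, 0] + 0.5 * remote[:, 0] + rng.normal(scale=0.2, size=n)

scores = {}
for name, X in [("remote", remote),
                ("remote+observer", np.hstack([remote, observer]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, richness, random_state=0)
    scores[name] = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)

print(scores)  # held-out R^2 per predictor set
```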

  9. Identity based Encryption and Biometric Authentication Scheme for Secure Data Access in Cloud Computing

    DEFF Research Database (Denmark)

    Cheng, Hongbing; Rong, Chunming; Tan, Zheng-Hua

    2012-01-01

    Cloud computing will be a main information infrastructure in the future; it consists of many large datacenters which are usually geographically distributed and heterogeneous. How to design secure data access for a cloud computing platform is a big challenge. In this paper, we propose a secure data access scheme based on identity-based encryption and biometric authentication for cloud computing. Firstly, we describe the security concerns of cloud computing and then propose an integrated data access scheme for cloud computing; the procedure of the proposed scheme includes parameter setup, key distribution, feature template creation, cloud data processing, and secure data access control. Finally, we compare the proposed scheme with other schemes through comprehensive analysis and simulation. The results show that the proposed data access scheme is feasible and secure for cloud computing.

  10. Estimation of seasonal atmospheric stability and mixing height by using different schemes

    International Nuclear Information System (INIS)

    Essa, K.S.M.; Embaby, M.; Mubarak, F.; Kamel, I.

    2007-01-01

    Different atmospheric stability schemes were used to characterize plume growth (dispersion coefficients σ) in the lateral and vertical directions and to determine the concentration distribution of pollutants through the planetary boundary layer (PBL). The PBL is the region in which surface friction has a large effect on the mixing of pollutants; it also undergoes large fluctuations in temperature and wind, and its depth (the mixing depth) changes over a diurnal cycle. In this study, four months of surface meteorological parameters (representing the different seasons) were used to determine a seasonal stability classification. Five different stability schemes were evaluated, based on the temperature gradient, the standard deviation of the horizontal wind direction fluctuation, the gradient and bulk Richardson numbers, and the Monin-Obukhov length. The friction velocity (u*) for each stability scheme was estimated to characterize the hourly mixing height for each stability class. Plume rise was also estimated for each stability class, depending on the availability of meteorological parameters.
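One of the five schemes, classification by the bulk Richardson number, can be sketched as a simple threshold rule; the class boundaries below are typical textbook values, not necessarily those used in the study:

```python
# Illustrative stability classification from the bulk Richardson number
# Ri_B. The thresholds are common textbook values (critical Richardson
# number near 0.25), used here only as an example.
def stability_from_richardson(ri_b):
    if ri_b < -0.5:
        return "strongly unstable"
    if ri_b < -0.05:
        return "unstable"
    if ri_b < 0.05:
        return "neutral"
    if ri_b < 0.25:
        return "stable"
    return "strongly stable"  # beyond the critical Richardson number

print(stability_from_richardson(-1.0))
print(stability_from_richardson(0.0))
print(stability_from_richardson(0.4))
```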

  11. Block-classified bidirectional motion compensation scheme for wavelet-decomposed digital video

    Energy Technology Data Exchange (ETDEWEB)

    Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.; Zhang, Y.Q. [David Sarnoff Research Center, Princeton, NJ (United States); Jabbari, B. [George Mason Univ., Fairfax, VA (United States)

    1997-08-01

    In this paper the authors introduce a block-classified bidirectional motion compensation scheme for the previously developed wavelet-based video codec, where multiresolution motion estimation is performed in the wavelet domain. The frame classification structure described in this paper is similar to that used in the MPEG standard. Specifically, the I-frames are intraframe coded, the P-frames are interpolated from a previous I- or a P-frame, and the B-frames are bidirectional interpolated frames. They apply this frame classification structure to the wavelet domain with variable block sizes and multiresolution representation. They use a symmetric bidirectional scheme for the B-frames and classify the motion blocks as intraframe, compensated either from the preceding or the following frame, or bidirectional (i.e., compensated based on which type yields the minimum energy). They also introduce the concept of F-frames, which are analogous to P-frames but are predicted from the following frame only. This improves the overall quality of the reconstruction in a group of pictures (GOP) but at the expense of extra buffering. They also study the effect of quantization of the I-frames on the reconstruction of a GOP, and they provide intuitive explanation for the results. In addition, the authors study a variety of wavelet filter-banks to be used in a multiresolution motion-compensated hierarchical video codec.
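The block-classification rule described above (code each block with whichever prediction leaves the minimum residual energy) can be sketched as follows; the arrays are toy stand-ins for wavelet-domain blocks:

```python
# Toy block classifier: pick the coding mode (intra, forward, backward,
# bidirectional) that yields the minimum residual energy for a block.
import numpy as np

def classify_block(block, pred_prev, pred_next):
    candidates = {
        "intra": block,                         # code the block itself
        "forward": block - pred_prev,           # compensated from preceding frame
        "backward": block - pred_next,          # compensated from following frame
        "bidirectional": block - (pred_prev + pred_next) / 2.0,
    }
    energies = {mode: float(np.sum(res ** 2)) for mode, res in candidates.items()}
    return min(energies, key=energies.get)

block = np.array([[4.0, 4.0], [4.0, 4.0]])
# Predictions bracketing the block symmetrically: the bidirectional average
# cancels the error, so that mode wins.
print(classify_block(block, pred_prev=block + 0.1, pred_next=block - 0.1))
```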

  12. Classification of Noisy Data: An Approach Based on Genetic Algorithms and Voronoi Tessellation

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schiøler, Henrik; Knudsen, Torben

    Classification is one of the major constituents of the data-mining toolkit. The well-known methods for classification are built on either the principle of logic or statistical/mathematical reasoning. In this article we propose: (1) a different strategy, which is based on the po...

  13. Radar-Derived Quantitative Precipitation Estimation Based on Precipitation Classification

    Directory of Open Access Journals (Sweden)

    Lili Yang

    2016-01-01

    Full Text Available A method for improving radar-derived quantitative precipitation estimation is proposed. Tropical vertical profiles of reflectivity (VPRs) are first determined from multiple VPRs. Upon identifying a tropical VPR, the event can be further classified as either tropical-stratiform or tropical-convective rainfall by a fuzzy logic (FL) algorithm. Based on the precipitation-type fields, the reflectivity values are converted into rainfall rate using a Z-R relationship. In order to evaluate the performance of this rainfall classification scheme, three experiments were conducted using three months of data and two study cases. In Experiment I, the Weather Surveillance Radar-1988 Doppler (WSR-88D) default Z-R relationship was applied. In Experiment II, the precipitation regime was separated into convective and stratiform rainfall using the FL algorithm, and the corresponding Z-R relationships were used. In Experiment III, the precipitation regime was separated into convective, stratiform, and tropical rainfall, and the corresponding Z-R relationships were applied. The results show that the rainfall rates obtained from all three experiments match closely with the gauge observations, although Experiment II only partly resolved the underestimation observed in Experiment I. Experiment III significantly reduced this underestimation and generated the most accurate radar estimates of rain rate among the three experiments.
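The Z-R conversion step can be illustrated with commonly used relationships of the form Z = aR^b (WSR-88D default a = 300, b = 1.4; Rosenfeld tropical a = 250, b = 1.2; Marshall-Palmer stratiform a = 200, b = 1.6); whether these are the exact pairs used in the paper is not stated in the abstract:

```python
# Converting radar reflectivity to rain rate via Z = a * R^b, with a
# different (a, b) pair per precipitation type.
def rain_rate(dbz, a, b):
    """Rain rate R (mm/h) from reflectivity in dBZ via Z = a * R^b."""
    z_linear = 10.0 ** (dbz / 10.0)  # dBZ -> linear reflectivity Z
    return (z_linear / a) ** (1.0 / b)

relations = {"convective": (300, 1.4), "stratiform": (200, 1.6), "tropical": (250, 1.2)}
for ptype, (a, b) in relations.items():
    print(f"{ptype}: R = {rain_rate(40.0, a, b):.1f} mm/h at 40 dBZ")
```

The same 40 dBZ echo yields roughly twice the rain rate under the tropical relationship, which is why separating tropical rainfall reduces the underestimation reported above.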

  14. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    Science.gov (United States)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple-image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, the original images are fed through a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. The scrambled images are encrypted into phase information using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
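One possible reading of the pixel exchange operation is sketched below (the paper's exact operator is not specified in the abstract): randomly selected pixel positions are swapped between two images, and the position matrix acts as a private key:

```python
# Illustrative pixel exchange between two images: where the random mask is
# True, pixels are swapped. The mask plays the role of the pixel position
# matrix (a private key); applying the same exchange again un-scrambles.
import numpy as np

rng = np.random.default_rng(3)
img1 = rng.integers(0, 256, size=(4, 4))
img2 = rng.integers(0, 256, size=(4, 4))

mask = rng.random((4, 4)) < 0.5      # pixel position matrix (the key)
scr1 = np.where(mask, img2, img1)    # scrambled image 1
scr2 = np.where(mask, img1, img2)    # scrambled image 2

# Decryption: exchanging with the same mask restores the originals.
rec1 = np.where(mask, scr2, scr1)
print(np.array_equal(rec1, img1))
```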

  15. Parallelised photoacoustic signal acquisition using a Fabry-Perot sensor and a camera-based interrogation scheme

    Science.gov (United States)

    Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.

    2018-02-01

    Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can entail long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using an sCMOS camera and FP interferometer (FPI) sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. A comparison is made between the SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.

  16. The Ecohydrological Context of Drought and Classification of Plant Responses

    Science.gov (United States)

    Feng, X.; Ackerly, D.; Dawson, T. E.; Manzoni, S.; Skelton, R. P.; Vico, G.; Thompson, S. E.

    2017-12-01

    Many recent studies on drought-induced vegetation mortality have explored how plant functional traits, and classifications of such traits along axes of, e.g., isohydry-anisohydry, might contribute to predicting drought survival and recovery. As these studies proliferate, concerns are growing about the consistency and predictive value of such classifications. Here, we outline the basis for a systematic classification of drought strategies that accounts for both environmental conditions and functional traits. We (1) identify drawbacks of existing isohydricity and trait-based metrics, (2) identify major axes of trait and environmental variation that determine drought mortality pathways (hydraulic failure and carbon starvation) using non-dimensional trait groups, and (3) demonstrate that these trait groupings predict physiological drought outcomes using both measured and synthetic data. In doing so we untangle some confounding effects of environment and trait variations that undermine current classification schemes, outline a pathway to progress towards a general classification of drought vulnerability, and advocate for more careful treatment of the environmental conditions within which plant drought responses occur.

  17. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    Science.gov (United States)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.
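The ICA color-transformation step can be sketched with scikit-learn's FastICA; the random pixels stand in for an image's RGB values, and the SIFT extraction itself is omitted:

```python
# Sketch of learning a 3x3 unmixing matrix from RGB pixels with FastICA
# and projecting pixels onto the color independent components. The random
# pixels are a stand-in for a real image; SIFT extraction is omitted.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
pixels = rng.random((1000, 3))  # stand-in for an image's RGB pixels

ica = FastICA(n_components=3, random_state=0)
cic = ica.fit_transform(pixels)  # color independent components, one row per pixel

print(cic.shape)
```

In the paper, one such transformation is learned per category, and SIFT descriptors are then computed over all three transformed channels.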

  18. Object-Based Classification as an Alternative Approach to the Traditional Pixel-Based Classification to Identify Potential Habitat of the Grasshopper Sparrow

    Science.gov (United States)

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image, creating polygons that are homogeneous with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points, and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  19. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification.

    Science.gov (United States)

    Younghak Shin; Balasingham, Ilangko

    2017-07-01

    Colonoscopy is a standard method for screening polyps by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework. We aim to compare two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and a support vector machine (SVM) is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling layers is used for classification. The proposed framework is evaluated using three public polyp databases. The experimental results show that the CNN-based deep learning framework achieves better classification performance than the hand-crafted feature based method, with over 90% classification accuracy, sensitivity, specificity and precision.

  20. Classification of instability after reverse shoulder arthroplasty guides surgical management and outcomes.

    Science.gov (United States)

    Abdelfattah, Adham; Otto, Randall J; Simon, Peter; Christmas, Kaitlyn N; Tanner, Gregory; LaMartina, Joey; Levy, Jonathan C; Cuff, Derek J; Mighell, Mark A; Frankle, Mark A

    2018-04-01

    Revision of unstable reverse shoulder arthroplasty (RSA) remains a significant challenge. The purpose of this study was to determine the reliability of a new treatment-guiding classification for instability after RSA, to describe the clinical outcomes of patients stabilized operatively, and to identify those with higher risk of recurrence. All patients undergoing revision for instability after RSA were identified at our institution. Demographic, clinical, radiographic, and intraoperative data were collected. A classification was developed using all identified causes of instability after RSA and allocating them to 1 of 3 defined treatment-guiding categories. Eight surgeons reviewed all data and applied the classification scheme to each case. Interobserver and intraobserver reliability was used to evaluate the classification scheme. Preoperative clinical outcomes were compared with final follow-up in stabilized shoulders. Forty-three revision cases in 34 patients met the inclusion criteria for the study. Five patients remained unstable after revision. Persistent instability occurred most commonly with persistent deltoid dysfunction and postoperative acromial fractures, but also in 1 case of soft tissue impingement. Twenty-one patients remained stable at minimum 2 years of follow-up and had significant improvement of clinical outcome scores and range of motion. Reliability of the classification scheme showed substantial and almost perfect interobserver and intraobserver agreement among all the participants (κ = 0.699 and κ = 0.851, respectively). Instability after RSA can be successfully treated with revision surgery using the reliable treatment-guiding classification scheme presented herein. However, more understanding is needed for patients with greater risk of recurrent instability after revision surgery. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
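
The κ values reported above are chance-corrected agreement statistics. As a reminder of the underlying computation, here is a minimal sketch of Cohen's kappa for two raters; the study's eight-rater design may use a multi-rater variant (e.g. Fleiss' kappa), and the toy labels below are invented:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical labels over the same cases."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2.get(k, 0) for k in c1) / n ** 2   # expected by chance
    return (po - pe) / (1 - pe)
```

For example, two raters assigning four cases to categories 0/1 with one disagreement, `cohens_kappa([1, 1, 0, 1], [1, 0, 0, 1])`, gives 0.5: 75% raw agreement, corrected for the 50% expected by chance.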

  1. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    Science.gov (United States)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC has higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  2. Solar wind and geomagnetism: toward a standard classification of geomagnetic activity from 1868 to 2009

    Directory of Open Access Journals (Sweden)

    J. L. Zerbo

    2012-02-01

    Full Text Available We examined solar activity with a large series of geomagnetic data from 1868 to 2009. We have revisited the geomagnetic activity classification scheme of Legrand and Simon (1989) and improved their scheme by lowering the minimum Aa index value for shock and recurrent activity from 40 to 20 nT. This improved scheme allows us to clearly classify about 80% of the geomagnetic activity in this time period, instead of only 60% for the previous Legrand and Simon classification.
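
The revision described above amounts to lowering the Aa threshold for candidate shock/recurrent activity from 40 to 20 nT, so that more days are classified. A toy sketch of the effect; the labels are simplified assumptions, and the real scheme also distinguishes shock from recurrent activity using solar wind structure:

```python
def classify(aa, threshold=20):
    """Flag daily Aa values (nT) at or above the threshold as candidate
    shock/recurrent activity, else quiet-time activity. 20 nT is the
    revised cut; 40 nT was the original Legrand-Simon one."""
    return ["active" if a >= threshold else "quiet" for a in aa]

days = [8, 15, 22, 35, 60, 18]     # invented daily Aa values, in nT
revised = classify(days)           # 20 nT cut flags three days
original = classify(days, threshold=40)   # 40 nT cut flags only one
```

With the lower threshold, days with Aa of 22 and 35 nT are now classified as active instead of being left unclassified by the 40 nT cut.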

  3. Hierarchical structure for audio-video based semantic classification of sports video sequences

    Science.gov (United States)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
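
The video levels above rely on an HMM evaluated by dynamic programming. A generic Viterbi decoder is sketched below in numpy; the transition and emission numbers are a textbook toy example (Healthy/Fever), not the paper's color or motion likelihood functions:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely HMM state path via dynamic programming (log domain).

    obs: observation indices; pi: initial probs (S,);
    A: transitions (S, S); B: emissions (S, O)."""
    S, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)        # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy HMM: states Healthy/Fever, observations normal/cold/dizzy.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
path = viterbi([0, 1, 2], pi, A, B)    # -> [0, 0, 1]
```

The backtracking table makes the dynamic-programming character explicit: each step keeps, per state, only the best predecessor.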

  4. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on a tightly coupled model are used to extract features. Three algorithms are proposed for classification, namely Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy, 96.59%, was achieved by NBCC, which is better than earlier methods.
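
As a reference point for the NBCD variant described above, here is a plain in-memory Naïve Bayes classifier for discrete variables with Laplace smoothing. The tight coupling with OLAP-SQL and the continuous-variable (NBCC) case are not shown, and the toy data are invented:

```python
from collections import defaultdict
import math

class DiscreteNB:
    """Naive Bayes for discrete features with Laplace smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        self.counts = defaultdict(lambda: defaultdict(int))  # (class, feat) -> value counts
        self.totals = defaultdict(int)                       # (class, feat) -> count
        self.values = defaultdict(set)                       # feat -> distinct values
        for row, c in zip(X, y):
            for j, v in enumerate(row):
                self.counts[(c, j)][v] += 1
                self.totals[(c, j)] += 1
                self.values[j].add(v)
        return self

    def predict(self, row):
        def log_posterior(c):
            lp = math.log(self.prior[c])
            for j, v in enumerate(row):
                num = self.counts[(c, j)][v] + 1                 # Laplace smoothing
                den = self.totals[(c, j)] + len(self.values[j])
                lp += math.log(num / den)
            return lp
        return max(self.classes, key=log_posterior)

# Invented toy data with two discrete features per example.
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "hot")]
y = ["no", "no", "yes", "yes"]
clf = DiscreteNB().fit(X, y)
```

The same per-class counting could be pushed into SQL `GROUP BY` aggregates, which is the spirit of the tightly coupled approach in the abstract.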

  5. Design and implementation based on the classification protection vulnerability scanning system

    International Nuclear Information System (INIS)

    Wang Chao; Lu Zhigang; Liu Baoxu

    2010-01-01

    With the application and spread of classification protection, network security vulnerability scanning should consider both efficiency and function expansion. This paper proposes a system vulnerability classification oriented to classification protection, and elaborates the design and implementation of a vulnerability scanning system based on vulnerability-classification plug-in technology and oriented to classification protection. Experiments show that the system has good adaptability and scalability with the application of classification protection, and also confirm its scanning efficiency. (authors)

  6. SDN-Based Mobile Data Offloading Scheme Using a Femtocell and WiFi Networks

    Directory of Open Access Journals (Sweden)

    Chang-Woo Ahn

    2017-01-01

    Full Text Available Because of the many applications running on smartphones, the load of mobile data traffic on cellular networks is increasing rapidly. A femtocell is a solution to increase the cellular network capacity and coverage. However, because it uses the same frequency bands as a macrocell, interference problems have prevented its widespread adoption. In this paper, we propose a scheme for traffic offloading between femtocells and WiFi networks utilizing software-defined networking (SDN technology. In the proposed offloading scheme, the SDN technology allows a terminal to maintain existing sessions after offloading through a centralized control of the SDN-based equipment. We also propose an offloading target selection scheme based on available bandwidth estimation and an association control mechanism to reduce the femtocell load while ensuring quality of service (QoS in terms of throughput. Experimental results on an actual testbed showed that the proposed offloading scheme provides seamless connectivity and reduces the femtocell load by up to 46% with the aid of the proposed target selection scheme, while ensuring QoS after offloading. We also observed that the proposed target selection scheme offloads 28% more traffic to WiFi networks compared to received signal strength indicator-based target selection in a low background traffic environment.
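
The key idea of the target selection scheme above is to choose an offload target by estimated available bandwidth rather than by signal strength alone. A minimal sketch of that selection step; the field names and numbers are invented, and the paper's actual bandwidth estimation and association control mechanisms are not reproduced:

```python
def select_target(candidates):
    """Pick the offload target with the largest estimated available
    bandwidth (a sketch of bandwidth-based target selection)."""
    return max(candidates, key=lambda ap: ap["avail_bw_mbps"])

# Invented candidate WiFi access points seen by a terminal.
aps = [
    {"ssid": "wifi-A", "rssi_dbm": -45, "avail_bw_mbps": 3.0},   # strong but loaded
    {"ssid": "wifi-B", "rssi_dbm": -70, "avail_bw_mbps": 18.5},  # weaker but idle
]
best = select_target(aps)
```

An RSSI-based policy would pick `wifi-A` here; the bandwidth-based policy picks `wifi-B`, which mirrors the abstract's observation that it offloads more traffic in low-background-traffic environments.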

  7. Secure biometric image sensor and authentication scheme based on compressed sensing.

    Science.gov (United States)

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.
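
Restoring the biometric image from compressed sensing measurements requires a sparse reconstruction step. As a generic stand-in for whatever solver the authors use, here is orthogonal matching pursuit in numpy on synthetic data; the optical encryption, the two-factor key derivation, and finger vein imagery are not modeled:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    m, n = Phi.shape
    residual, support = y.copy(), []
    for _ in range(k):
        # Greedily pick the column most correlated with the residual,
        # then re-fit the coefficients on the current support.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Synthetic demo: a 4-sparse signal observed through 60 random
# Gaussian measurements (the "sensing matrix" of the scheme).
rng = np.random.default_rng(7)
m, n, k = 60, 100, 4
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[[5, 17, 42, 73]] = [1.5, -1.0, 2.0, 0.8]
y = Phi @ x
x_rec = omp(Phi, y, k)
```

With this many measurements relative to the sparsity level, the sparse signal is recovered essentially exactly, which is the property the authors' restoration step depends on.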

  8. Voting-based Classification for E-mail Spam Detection

    Directory of Open Access Journals (Sweden)

    Bashar Awad Al-Shboul

    2016-06-01

    Full Text Available The problem of spam e-mail has gained a tremendous amount of attention. Although entities tend to use e-mail spam filter applications to filter out received spam e-mails, marketing companies still tend to send unsolicited e-mails in bulk and users still receive a reasonable amount of spam e-mail despite those filtering applications. This work proposes a new method for classifying e-mails into spam and non-spam. First, several e-mail content features are extracted and then those features are used for classifying each e-mail individually. The classification results of three different classifiers (i.e. Decision Trees, Random Forests and k-Nearest Neighbor) are combined in various voting schemes (i.e. majority vote, average probability, product of probabilities, minimum probability and maximum probability) for making the final decision. To validate our method, two different spam e-mail collections were used.
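
The voting schemes listed above can be expressed directly over the per-classifier spam probabilities. A numpy sketch; the 0.5 decision threshold and the lack of normalization in the product rule are assumptions, and the probabilities below are made up:

```python
import numpy as np

def combine(probs, scheme="majority"):
    """Fuse per-classifier spam probabilities (rows: classifiers,
    columns: e-mails); a fused value above 0.5 would flag spam."""
    if scheme == "majority":
        return (probs > 0.5).mean(axis=0)     # fraction of spam votes
    if scheme == "average":
        return probs.mean(axis=0)
    if scheme == "product":
        return probs.prod(axis=0)
    if scheme == "minimum":
        return probs.min(axis=0)
    if scheme == "maximum":
        return probs.max(axis=0)
    raise ValueError(f"unknown scheme: {scheme}")

# Invented outputs of three classifiers (e.g. DT, RF, k-NN) on two e-mails.
p = np.array([[0.9, 0.4],
              [0.8, 0.6],
              [0.3, 0.2]])
```

Note how the schemes can disagree: for the first e-mail, majority and average both exceed 0.5, while the conservative minimum rule (0.3) would let it through.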

  9. Improving Biometric-Based Authentication Schemes with Smart Card Revocation/Reissue for Wireless Sensor Networks.

    Science.gov (United States)

    Moon, Jongho; Lee, Donghoon; Lee, Youngsook; Won, Dongho

    2017-04-25

    User authentication in wireless sensor networks is more difficult than in traditional networks owing to sensor network characteristics such as unreliable communication, limited resources, and unattended operation. For these reasons, various authentication schemes have been proposed to provide secure and efficient communication. In 2016, Park et al. proposed a secure biometric-based authentication scheme with smart card revocation/reissue for wireless sensor networks. However, we found that their scheme was still insecure against impersonation attack, and had a problem in the smart card revocation/reissue phase. In this paper, we show how an adversary can impersonate a legitimate user or sensor node, illegal smart card revocation/reissue and prove that Park et al.'s scheme fails to provide revocation/reissue. In addition, we propose an enhanced scheme that provides efficiency, as well as anonymity and security. Finally, we provide security and performance analysis between previous schemes and the proposed scheme, and provide formal analysis based on the random oracle model. The results prove that the proposed scheme can solve the weaknesses of impersonation attack and other security flaws in the security analysis section. Furthermore, performance analysis shows that the computational cost is lower than the previous scheme.

  10. The World Health Organization Classification of Odontogenic Lesions: A Summary of the Changes of the 2017 (4th Edition)

    Directory of Open Access Journals (Sweden)

    Merva SOLUK-TEKKEŞİN

    2018-01-01

    Full Text Available The 4th edition of the World Health Organization (WHO) Classification of Head and Neck Tumors was published in January 2017. The edition serves to provide an updated classification scheme, and extended genetic and molecular data that are useful as diagnostic tools for the lesions of the head and neck region. This review focuses on the most current update of odontogenic cysts and tumors based on the 2017 WHO edition. The updated classification has some important differences from the 3rd edition (2005), including a new classification of odontogenic cysts, 'reclassified' odontogenic tumors, and some new entities.

  11. Classification of diffuse lung diseases: why and how.

    Science.gov (United States)

    Hansell, David M

    2013-09-01

    The understanding of complex lung diseases, notably the idiopathic interstitial pneumonias and small airways diseases, owes as much to repeated attempts over the years to classify them as to any single conceptual breakthrough. One of the many benefits of a successful classification scheme is that it allows workers, within and between disciplines, to be clear that they are discussing the same disease. This may be of particular importance in the recruitment of individuals for a clinical trial that requires a standardized and homogeneous study population. Different specialties require fundamentally different things from a classification: for epidemiologic studies, a classification that requires categorization of individuals according to histopathologic pattern is not usually practicable. Conversely, a scheme that simply divides diffuse parenchymal disease into inflammatory and noninflammatory categories is unlikely to further the understanding about the pathogenesis of disease. Thus, for some disease groupings, for example, pulmonary vasculopathies, there may be several appropriate classifications, each with its merits and demerits. There has been an interesting shift in the past few years, from the accepted primacy of histopathology as the sole basis on which the classification of parenchymal lung disease has rested, to new ways of considering how these entities relate to each other. Some inventive thinking has resulted in new classifications that undoubtedly benefit patients and clinicians in their endeavor to improve management and outcome. The challenge of understanding the logic behind current classifications and their shortcomings are explored in various examples of lung diseases.

  12. A privacy authentication scheme based on cloud for medical environment.

    Science.gov (United States)

    Chen, Chin-Ling; Yang, Tsai-Tung; Chiang, Mao-Lun; Shih, Tzay-Farn

    2014-11-01

    With the rapid development of information technology, health care technologies have matured; electronic medical records, for example, can now be stored easily. However, convenient access to medical resources remains a concern. Although many studies have discussed medical systems, these systems still face many security challenges, the most important of which is patients' privacy. Therefore, we propose a privacy authentication scheme based on a cloud environment. In our scheme, we exploit the characteristics of mobile devices, allowing people to use medical resources in the cloud environment to obtain medical advice conveniently. A digital signature is used to ensure the security of the medical information, which is certified by the medical department in our proposed scheme.

  13. Two-out-of-two color matching based visual cryptography schemes.

    Science.gov (United States)

    Machizaud, Jacques; Fournel, Thierry

    2012-09-24

    Visual cryptography, which consists in sharing a secret message between transparencies, has been extended to color prints. In this paper, we propose a new visual cryptography scheme based on color matching. The stacked printed media reveal a uniformly colored message decoded by the human visual system. In contrast with previous color visual cryptography schemes, the proposed one makes it possible to share images without pixel expansion and to detect a forgery, as the color of the message is kept secret. In order to correctly print the colors on the media and to increase the security of the scheme, we use spectral models developed for color reproduction, describing printed colors from an optical point of view.

  14. Analysis of composition-based metagenomic classification.

    Science.gov (United States)

    Higashi, Susan; Barreto, André da Motta Salles; Cantão, Maurício Egidio; de Vasconcelos, Ana Tereza Ribeiro

    2012-01-01

    An essential step of a metagenomic study is the taxonomic classification, that is, the identification of the taxonomic lineage of the organisms in a given sample. The taxonomic classification process involves a series of decisions. Currently, in the context of metagenomics, such decisions are usually based on empirical studies that consider one specific type of classifier. In this study we propose a general framework for analyzing the impact that several decisions can have on the classification problem. Instead of focusing on any specific classifier, we define a generic score function that provides a measure of the difficulty of the classification task. Using this framework, we analyze the impact of the following parameters on the taxonomic classification problem: (i) the length of n-mers used to encode the metagenomic sequences, (ii) the similarity measure used to compare sequences, and (iii) the type of taxonomic classification, which can be conventional or hierarchical, depending on whether the classification process occurs in a single shot or in several steps according to the taxonomic tree. We defined a score function that measures the degree of separability of the taxonomic classes under a given configuration induced by the parameters above. We conducted an extensive computational experiment and found out that reasonable values for the parameters of interest could be (i) intermediate values of n, the length of the n-mers; (ii) any similarity measure, because all of them resulted in similar scores; and (iii) the hierarchical strategy, which performed better in all of the cases. As expected, short n-mers generate lower configuration scores because they give rise to frequency vectors that represent distinct sequences in a similar way. On the other hand, large values for n result in sparse frequency vectors that represent differently metagenomic fragments that are in fact similar, also leading to low configuration scores. Regarding the similarity measure, in
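
The abstract's parameters (i) and (ii) concern n-mer encodings and similarity measures. A self-contained sketch of building n-mer frequency vectors and comparing them with cosine similarity; the DNA strings are toy examples, and this is not the authors' score function:

```python
from itertools import product
import math

def nmer_vector(seq, n=3):
    """Frequency vector of overlapping n-mers over the alphabet ACGT."""
    mers = ["".join(t) for t in product("ACGT", repeat=n)]
    counts = {m: 0 for m in mers}
    for i in range(len(seq) - n + 1):
        counts[seq[i:i + n]] += 1
    total = max(1, len(seq) - n + 1)
    return [counts[m] / total for m in mers]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```

The trade-off discussed in the abstract is visible in the vector length: for n = 3 the vector has 4^3 = 64 entries, while large n yields sparse vectors that make genuinely similar fragments look different.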

  15. Group-Based Active Learning of Classification Models.

    Science.gov (United States)

    Luo, Zhipeng; Hauskrecht, Milos

    2017-05-01

    Learning of classification models from real-world data often requires additional human expert effort to annotate the data. However, this process can be rather costly and finding ways of reducing the human annotation effort is critical for this task. The objective of this paper is to develop and study new ways of providing human feedback for efficient learning of classification models by labeling groups of examples. Briefly, unlike traditional active learning methods that seek feedback on individual examples, we develop a new group-based active learning framework that solicits label information on groups of multiple examples. In order to describe groups in a user-friendly way, conjunctive patterns are used to compactly represent groups. Our empirical study on 12 UCI data sets demonstrates the advantages and superiority of our approach over both classic instance-based active learning work, as well as existing group-based active-learning methods.

  16. A scheme for the classification of explosions in the chemical process industry.

    Science.gov (United States)

    Abbasi, Tasneem; Pasman, H J; Abbasi, S A

    2010-02-15

    All process industry accidents fall under three broad categories: fire, explosion, and toxic release. Of these, fire is the most common, followed by explosions. Within these broad categories occur a large number of sub-categories, each depicting a specific sub-type of a fire/explosion/toxic release. But whereas clear and self-consistent sub-classifications exist for fires and toxic releases, the situation is not as clear vis-a-vis explosions. In this paper the inconsistencies and/or shortcomings associated with the classification of different types of explosions, which are seen even in otherwise highly authentic and useful reference books on process safety, are reviewed. In this context a new classification is attempted which may, hopefully, provide a frame-of-reference for the future.

  17. A modified chaos-based communication scheme using Hamiltonian forms and observer

    International Nuclear Information System (INIS)

    Lopez-Mancilla, D; Cruz-Hernandez, C; Posadas-Castillo, C

    2005-01-01

    In this work, a modified chaos-based communication scheme is presented. In particular, we use the modified scheme proposed by Lopez-Mancilla and Cruz-Hernandez (2005), which improves the basic scheme for chaotic masking using a single transmission channel proposed by Cuomo and coworkers (1993). It is extended to a special class of Generalized Hamiltonian systems. Substantial differences that significantly affect the reception quality of the sent message, with or without considering noise effects in the transmission channel, are given. We use two Hamiltonian Lorenz systems unidirectionally coupled, the first as a master/transmitter system and the other as a slave/receiver system, in order to illustrate with numerical simulations the effectiveness of the modified scheme, using chaos synchronization with Hamiltonian forms and observer

  18. A modified chaos-based communication scheme using Hamiltonian forms and observer

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Mancilla, D [Engineering Faculty, Baja California Autonomous University (UABC), Km. 103, Carretera Tijuana-Ensenada, 22860, Ensenada, B.C. (Mexico); Cruz-Hernandez, C [Telematics Direction, Scientific Research and Advanced Studies of Ensenada (CICESE), Km. 107 Carretera Tijuana-Ensenada, 22860 Ensenada, B.C. (Mexico); Posadas-Castillo, C [Engineering Faculty, Baja California Autonomous University (UABC), Km. 103, Carretera Tijuana-Ensenada, 22860, Ensenada, B.C. (Mexico); Faculty of Engineering Mechanic and Electrical (FIME), Nuevo Leon Autonomous University (UANL), Pedro de alba s/n Cd. Universitaria San Nicolas de los Garza N.L. (Mexico)

    2005-01-01

    In this work, a modified chaos-based communication scheme is presented. In particular, we use the modified scheme proposed by Lopez-Mancilla and Cruz-Hernandez (2005), which improves the basic scheme for chaotic masking using a single transmission channel proposed by Cuomo and coworkers (1993). It is extended to a special class of Generalized Hamiltonian systems. Substantial differences that significantly affect the reception quality of the sent message, with or without considering noise effects in the transmission channel, are given. We use two Hamiltonian Lorenz systems unidirectionally coupled, the first as a master/transmitter system and the other as a slave/receiver system, in order to illustrate with numerical simulations the effectiveness of the modified scheme, using chaos synchronization with Hamiltonian forms and observer.

  19. Toward an enhanced Arabic text classification using cosine similarity and Latent Semantic

    Directory of Open Access Journals (Sweden)

    Fawaz S. Al-Anzi

    2017-04-01

    Full Text Available Cosine similarity is one of the most popular distance measures in text classification problems. In this paper, we used this important measure to investigate the performance of Arabic language text classification. For textual features, the vector space model (VSM) is generally used as a model to represent textual information as numerical vectors. However, Latent Semantic Indexing (LSI) is a better textual representation technique, as it maintains semantic information between the words. Hence, we used the singular value decomposition (SVD) method to extract textual features based on LSI. In our experiments, we conducted a comparison between some of the well-known classification methods such as Naïve Bayes, k-Nearest Neighbors, Neural Network, Random Forest, Support Vector Machine, and classification tree. We used a corpus that contains 4,000 documents of ten topics (400 documents per topic). The corpus contains 2,127,197 words with about 139,168 unique words. The testing set contains 400 documents, 40 documents per topic. As a weighting scheme, we used Term Frequency-Inverse Document Frequency (TF-IDF). This study reveals that the classification methods that use LSI features significantly outperform the TF-IDF-based methods. It also reveals that k-Nearest Neighbors (based on the cosine measure) and support vector machines are the best performing classifiers.
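
The pipeline described above (TF-IDF weighting, LSI via SVD, cosine similarity) can be sketched compactly in numpy. The four tiny documents are invented; a real corpus would also need tokenization, stop-word handling, and a proper train/test split:

```python
import numpy as np

def tfidf(docs):
    """Term-frequency / inverse-document-frequency matrix (docs x terms)."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        words = d.split()
        for w in words:
            M[r, idx[w]] += 1
        M[r] /= len(words)                 # term frequency
    df = (M > 0).sum(axis=0)               # document frequency
    return M * np.log(len(docs) / df), vocab

def lsi(M, k):
    """Project documents onto the top-k latent semantic dimensions."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * s[:k]

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

docs = ["football match goal", "football match",
        "money loan credit", "money bank loan"]
Z = lsi(tfidf(docs)[0], k=2)
```

In the 2-dimensional LSI space, the two sports documents end up close together (cosine near 1) while sports and finance documents are nearly orthogonal, which is the separability that the LSI-based classifiers in the study exploit.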

  20. Enhancing Community Detection By Affinity-based Edge Weighting Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Andy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sanders, Geoffrey [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Henson, Van [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, Panayot [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-05

    Community detection refers to an important graph analytics problem of finding a set of densely-connected subgraphs in a graph and has gained a great deal of interest recently. The performance of current community detection algorithms is limited by an inherent constraint of unweighted graphs that offer very little information on their internal community structures. In this paper, we propose a new scheme to address this issue that weights the edges in a given graph based on recently proposed vertex affinity. The vertex affinity quantifies the proximity between two vertices in terms of their clustering strength, and therefore, it is ideal for graph analytics applications such as community detection. We also demonstrate that the affinity-based edge weighting scheme can improve the performance of community detection algorithms significantly.
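
The paper defines its own vertex affinity; as a simple stand-in with the same intent (weighting edges by how strongly their endpoints cluster together), here is a Jaccard common-neighbor weighting in plain Python. Intra-community edges receive higher weights than bridges between communities:

```python
def weight_edges(edges):
    """Weight each edge by the Jaccard similarity of its endpoints'
    closed neighborhoods (a stand-in for the paper's vertex affinity,
    which is defined differently)."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    weights = {}
    for u, v in edges:
        a, b = nbrs[u] | {u}, nbrs[v] | {v}
        weights[(u, v)] = len(a & b) / len(a | b)
    return weights

# Two triangles joined by a bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
w = weight_edges(edges)
```

The triangle edge (0, 1) gets weight 1.0 while the bridge (2, 3) gets 1/3, so a weighted community detection algorithm sees the two triangles much more clearly than in the unweighted graph.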

  1. Data Clustering and Evolving Fuzzy Decision Tree for Data Base Classification Problems

    Science.gov (United States)

    Chang, Pei-Chann; Fan, Chin-Yuan; Wang, Yen-Wen

    Database classification suffers from two well-known difficulties, i.e., high dimensionality and non-stationary variations within large historic data. This paper presents a hybrid classification model that integrates a case-based reasoning technique, a Fuzzy Decision Tree (FDT), and Genetic Algorithms (GA) to construct a decision-making system for data classification in various database applications. The model is mainly based on the idea that the historic database can be transformed into a smaller case base together with a group of fuzzy decision rules. As a result, the model can respond more accurately to the data currently being classified, through inductions from these smaller case-based fuzzy decision trees. Hit rate is applied as a performance measure, and the effectiveness of our proposed model is demonstrated by experimental comparison with other approaches on different database classification applications. The average hit rate of our proposed model is the highest among those compared.

  2. An encryption scheme based on phase-shifting digital holography and amplitude-phase disturbance

    International Nuclear Information System (INIS)

    Hua Li-Li; Xu Ning; Yang Geng

    2014-01-01

    In this paper, we propose an encryption scheme based on phase-shifting digital interferometry. Starting from the original system framework, we add a random amplitude mask and replace the Fourier transform with the Fresnel transform. We develop a mathematical model and give a discrete formula based on the scheme, which makes the scheme easy to implement in computer programming. The experimental results show that the improved system offers better security than the original encryption method. Moreover, it demonstrates good anti-noise and anti-shear robustness.

  3. A Secure and Privacy-Preserving Navigation Scheme Using Spatial Crowdsourcing in Fog-Based VANETs

    Science.gov (United States)

    Wang, Lingling; Liu, Guozhu; Sun, Lijun

    2017-01-01

    Fog-based VANETs (Vehicular ad hoc networks) is a new paradigm of vehicular ad hoc networks with the advantages of both vehicular cloud and fog computing. Real-time navigation schemes based on fog-based VANETs can promote the scheme performance efficiently. In this paper, we propose a secure and privacy-preserving navigation scheme by using vehicular spatial crowdsourcing based on fog-based VANETs. Fog nodes are used to generate and release the crowdsourcing tasks, and cooperatively find the optimal route according to the real-time traffic information collected by vehicles in their coverage areas. Meanwhile, the vehicle performing the crowdsourcing task can get a reasonable reward. The querying vehicle can retrieve the navigation results from each fog node successively when entering its coverage area, and follow the optimal route to the next fog node until it reaches the desired destination. Our scheme fulfills the security and privacy requirements of authentication, confidentiality and conditional privacy preservation. Some cryptographic primitives, including the Elgamal encryption algorithm, AES, randomized anonymous credentials and group signatures, are adopted to achieve this goal. Finally, we analyze the security and the efficiency of the proposed scheme. PMID:28338620

  5. Image Encryption Scheme Based on Balanced Two-Dimensional Cellular Automata

    Directory of Open Access Journals (Sweden)

    Xiaoyan Zhang

    2013-01-01

    Full Text Available Cellular automata (CA) are simple models of computation which exhibit fascinatingly complex behavior. Due to the universality of the CA model, it has been widely applied in traditional cryptography and image processing. The aim of this paper is to present a new image encryption scheme based on balanced two-dimensional cellular automata. In this scheme, a random image with the same size as the plain image to be encrypted is first generated by a pseudo-random number generator with a seed. Then, the random image is evolved alternately with two balanced two-dimensional CA rules. Finally, the cipher image is obtained by a bitwise XOR of the final evolution image and the plain image. The proposed scheme possesses several advantages, such as a very large key space, high randomness, a complex cryptographic structure, and fast encryption/decryption speed. Simulation results obtained from some classical images in the USC-SIPI database demonstrate the strong performance of the proposed image encryption scheme.
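
The outer loop of such a scheme, keystream generation from a seeded random image evolved by a balanced CA followed by a bitwise XOR, can be sketched as follows. This is a minimal illustration on binary images with a single linear CA rule, not the paper's pair of alternating rules:

```python
import random

def evolve(grid):
    """One step of a balanced 2D CA: each cell becomes the XOR of itself
    and its four von Neumann neighbours (toroidal boundary)."""
    n, m = len(grid), len(grid[0])
    return [[grid[i][j] ^ grid[(i - 1) % n][j] ^ grid[(i + 1) % n][j]
             ^ grid[i][(j - 1) % m] ^ grid[i][(j + 1) % m]
             for j in range(m)] for i in range(n)]

def keystream(seed, shape, steps=8):
    """Seeded random bit image, evolved `steps` CA generations."""
    rng = random.Random(seed)
    grid = [[rng.randint(0, 1) for _ in range(shape[1])]
            for _ in range(shape[0])]
    for _ in range(steps):
        grid = evolve(grid)
    return grid

def xor_image(img, key):
    """Bitwise XOR of two equally sized binary images."""
    return [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(img, key)]

plain = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0], [0, 1, 0, 1]]
cipher = xor_image(plain, keystream(42, (4, 4)))
decrypted = xor_image(cipher, keystream(42, (4, 4)))  # same seed -> same keystream
```

Because XOR is its own inverse, decryption simply regenerates the keystream from the shared seed and XORs again.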

  6. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adapting artificial neural networks to robust fault diagnosis schemes. It presents neural network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. Part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, and the importance of robustness. The book has tutorial value and can serve as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach to system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic illustrative examples and practica...

  7. The Importance of Temporal and Spatial Vegetation Structure Information in Biotope Mapping Schemes: A Case Study in Helsingborg, Sweden

    Science.gov (United States)

    Gao, Tian; Qiu, Ling; Hammer, Mårten; Gunnarsson, Allan

    2012-02-01

    Temporal and spatial vegetation structure has an impact on biodiversity qualities. Yet current biotope mapping schemes incorporate these factors only to a limited extent. The purpose of this study is to evaluate the application of a modified biotope mapping scheme that includes temporal and spatial vegetation structure. A refined scheme was developed based on a biotope classification and applied to a green structure system in Helsingborg city in southern Sweden. It includes four parameters of vegetation structure: continuity of forest cover, age of dominant trees, horizontal structure, and vertical structure. The major green structure sites were determined by interpretation of panchromatic aerial photographs assisted by a field survey. A set of biotope maps was constructed on the basis of each level of the modified classification. The evaluation of the scheme addressed two aspects in particular: comparison of species richness between long-continuity and short-continuity forests, based on identification of woodland continuity using ancient woodland indicator (AWI) species and related historical documents, and the spatial distribution of animals in the green space in relation to vegetation structure. The results indicate that (1) regarding forest continuity, according to verification against historical documents, the richness of AWI species was higher in long-continuity forests; Simpson's diversity differed significantly between long- and short-continuity forests; and total species richness and Shannon's diversity were much higher in long-continuity forests, showing a very significant difference. (2) The spatial vegetation structure and age of stands influence the richness and abundance of the avian fauna and rabbits, and distance to the nearest tree and shrub was a strong determinant of presence for these animal groups. It is concluded that continuity of forest cover, age of dominant trees, horizontal and vertical structures of vegetation

  8. A Novel Quantum Image Steganography Scheme Based on LSB

    Science.gov (United States)

    Zhou, Ri-Gui; Luo, Jia; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen

    2018-06-01

    Based on the NEQR representation of quantum images and the least significant bit (LSB) scheme, a novel quantum image steganography scheme is proposed. The sizes of the cover image and the original information image are assumed to be 4n × 4n and n × n, respectively. Firstly, the bit-plane scrambling method is used to scramble the original information image. Then the scrambled information image is expanded to the same size as the cover image by using a key known only to the operator. The expanded image is scrambled into a meaningless image with Arnold scrambling. The embedding and extracting procedures are carried out with keys K1 and K2, which are under the control of the operator. For validation of the presented scheme, the peak signal-to-noise ratio (PSNR), the capacity, the security of the images, and the circuit complexity are analyzed.
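
The embedding step has a direct classical analogue; the sketch below embeds message bits into the least significant bits of classical cover pixels (the scrambling and expansion steps, and the quantum NEQR circuits, are omitted):

```python
def embed_lsb(cover, bits):
    """Replace the least significant bit of each cover pixel with a message bit."""
    assert len(bits) <= len(cover)
    stego = cover[:]
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b   # clear the LSB, then set it to b
    return stego

def extract_lsb(stego, n):
    """Read back the first n least significant bits."""
    return [p & 1 for p in stego[:n]]

cover = [200, 13, 255, 96, 77, 42, 128, 9]   # 8-bit grayscale pixels
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, message)
recovered = extract_lsb(stego, len(message))
```

Since only the LSB changes, each stego pixel differs from its cover pixel by at most 1, which is what keeps the PSNR high.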

  9. A new gammagraphic and functional-based classification for hyperthyroidism

    International Nuclear Information System (INIS)

    Sanchez, J.; Lamata, F.; Cerdan, R.; Agilella, V.; Gastaminza, R.; Abusada, R.; Gonzales, M.; Martinez, M.

    2000-01-01

    The absence of a universal classification for hyperthyroidism (HT) gives rise to inadequate interpretation of series and trials and hampers decision making. We offer a tentative classification based on gammagraphic and functional findings. Clinical records of patients who underwent thyroidectomy in our Department from 1967 to 1997 were reviewed. Those with functional measurements of hyperthyroidism were considered. All were managed according to the same preestablished guidelines. HT was the surgical indication in 694 (27.1%) of the 2559 thyroidectomies. Based on gammagraphic studies, we classified HT into: parenchymatous increased uptake, which could be diffuse, diffuse with cold nodules, or diffuse with at least one nodule; and nodular increased uptake (Autonomous Functioning Thyroid Nodules, AFTN), divided into solitary AFTN, or toxic adenoma, and multiple AFTN, or toxic multinodular goiter. This gammagraphic-based classification is useful and has high sensitivity to detect these nodules while assessing their activity, supporting therapeutic decision making and, in some cases, the choice of surgical technique. (authors)

  10. Threshold secret sharing scheme based on phase-shifting interferometry.

    Science.gov (United States)

    Deng, Xiaopeng; Shi, Zhengang; Wen, Wei

    2016-11-01

    We propose a new method for secret image sharing with the (3,N) threshold scheme based on phase-shifting interferometry. The secret image, which is multiplied with an encryption key in advance, is first encrypted by using Fourier transformation. Then, the encoded image is shared into N shadow images based on the recording principle of phase-shifting interferometry. Based on the reconstruction principle of phase-shifting interferometry, any three or more shadow images can retrieve the secret image, while any two or fewer shadow images cannot obtain any information about the secret image. Thus, a (3,N) threshold secret sharing scheme can be implemented. Compared with our previously reported method, the algorithm of this paper is suited not only for a binary image but also for a gray-scale image. Moreover, the proposed algorithm can obtain a larger threshold value t. Simulation results are presented to demonstrate the feasibility of the proposed method.
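
The (3,N) access structure itself, any three shares reconstruct the secret while fewer reveal nothing, is the same guarantee a degree-2 Shamir scheme provides; a finite-field sketch of that structure (not the interferometric construction of the paper) looks like:

```python
import random

P = 2_147_483_647  # prime modulus: all arithmetic is in the field GF(P)

def make_shares(secret, n, rng=random.Random(7)):
    """Split `secret` into n shares; any 3 recover it (degree-2 polynomial)."""
    a1, a2 = rng.randrange(P), rng.randrange(P)
    poly = lambda x: (secret + a1 * x + a2 * x * x) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any 3 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P  # modular inverse
    return secret

shares = make_shares(123456789, 5)
# Any subset of 3 of the 5 shares reconstructs the secret.
```

(`pow(den, -1, P)` requires Python 3.8+.) Two shares constrain a degree-2 polynomial to a line through every possible secret, hence reveal nothing.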

  11. Intelligent Aggregation Based on Content Routing Scheme for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jiachen Xu

    2017-10-01

    Full Text Available Cloud computing has emerged as today's most exciting computing paradigm for providing services using a shared framework, which opens a new door to solving the problems of the explosive growth of digital resource demands and their corresponding convenience. With the exponential growth of the number of data types and of data sizes in so-called big data work, the backbone network is under great pressure because its transmission capacity grows more slowly than the data size, which would seriously hinder the development of the network without an effective solution. In this paper, an Intelligent Aggregation based on Content Routing (IACR) scheme for cloud computing, which can reduce the amount of data in the network effectively and play a basic supporting role in the development of cloud computing, is first put forward. The main innovations in this paper are: (1) a framework for intelligent aggregation based on content routing is proposed, which can support aggregation-based content routing; (2) the proposed IACR scheme can effectively route high-aggregation-ratio data to the data center through the same routing path so as to effectively reduce the amount of data that the network transmits. Theoretical analyses and experimental results show that, compared with the previous original routing scheme, the IACR scheme can balance the load of the whole network, reduce the amount of data transmitted in the network by 41.8%, and reduce the transmission time by 31.6% in the same network with a more balanced network load.

  12. An Interference Cancellation Scheme for High Reliability Based on MIMO Systems

    Directory of Open Access Journals (Sweden)

    Jae-Hyun Ro

    2018-03-01

    Full Text Available This article proposes a new interference cancellation scheme in a half-duplex based two-path relay system. In the conventional two-path relay system, inter-relay interference (IRI), which severely degrades the error performance at the destination, occurs because a source and a relay transmit signals simultaneously at a specific time. Unlike the conventional relay system, which removes the IRI at the destination, the proposed scheme removes the IRI at the relay, yielding a higher signal-to-interference-plus-noise ratio (SINR) so that the destination receives an interference-free signal. To handle the IRI, the proposed scheme uses multiple-input multiple-output (MIMO) signal detection at the relays, which allows low-complexity signal processing at the destination, usually a mobile user. At the relays, the proposed scheme uses the low-complexity QR decomposition-M algorithm (QRD-M) to optimally remove the IRI. Also, to obtain diversity gain, the proposed scheme uses cyclic delay diversity (CDD) when transmitting the signals at the source and the relays. In simulation results, the error performance of the proposed scheme is better than that of the conventional scheme when the distance between one relay and another is small, because QRD-M detects the received signals in order of higher post signal-to-noise ratio (SNR).

  13. Classification of debris flow phenomena in the Faroe Islands

    DEFF Research Database (Denmark)

    Dahl, Mads-Peter Jakob; E. Mortensen, Lis; Jensen, Niels H.

    2012-01-01

    Landslides and debris flow phenomena in particular constitute a threat to human activities in the Faroe Islands. As a contribution to ongoing landslide risk management research, this paper proposes a classification scheme for debris flow phenomena in the Faroe Islands. The scheme, produced through...... a multidisciplinary study involving geomorphological fieldwork and qualitative collection of indigenous landslide knowledge, presents physical characteristics to classify debris flow phenomena into groups named with Faroese terms. The following landslide definitions are proposed. Brekku-skriðulop (English translation...... with international landslide classification systems, significantly increases the knowledge of debris flow phenomena and promotes a consistent terminology of these within the Faroe Islands....

  14. Decimal Classification Editions

    OpenAIRE

    Zenovia Niculescu

    2009-01-01

    The study approaches the evolution of Dewey Decimal Classification editions from the perspective of updating the terminology and reallocating and expanding the main and auxiliary structure of the Dewey indexing language. The comparative analysis of DDC editions emphasizes the efficiency of the Dewey scheme from the point of view of improving the informational offer, through basic index terms, revised and developed, as well as valuing the auxiliary notations.

  15. Self-match based on polling scheme for passive optical network monitoring

    Science.gov (United States)

    Zhang, Xuan; Guo, Hao; Jia, Xinhong; Liao, Qinghua

    2018-06-01

    We propose a self-match based on polling scheme for passive optical network monitoring. Each end-user is equipped with an optical matcher that uses only a patchcord of specific length and two different fiber Bragg gratings with 100% reflectivity. This simple and low-cost scheme can greatly simplify the final recognition processing of the network link status and reduce the sensitivity requirement on the photodetector. We analyze the time-domain relation between reflected pulses and establish a calculation model to evaluate the false alarm rate. The feasibility of the proposed scheme and the validity of the time-domain relation analysis are experimentally demonstrated.

  16. Computerized classification of mass lesions in digital mammograms

    International Nuclear Information System (INIS)

    Giger, M.L.; Doi, K.; Yin, F.F.; Schmidt, R.A.; Vyborny, C.J.

    1989-01-01

    Subjective classification of masses on mammograms is a difficult task. On average, about 25% of masses referred for surgical biopsy are actually malignant. The authors are developing, as an aid to radiologists, a computerized scheme for the classification of lesions in mammograms to reduce the false-negative and false-positive diagnoses of malignancies. The classification scheme involves the extraction of border information from the mammographic lesion in order to quantify the degree of spiculation, which is related to the possibility of malignancy. Clinical film mammograms are digitized with an optical drum scanner (0.1-mm pixel size) for analysis on a MicroVAX 3500 computer. Border information (fluctuations) is obtained from the difference between the lesion border and its smoothed border. Using the rms variation of the frequency content of these fluctuations, approximately 85% of the cancerous lesions were correctly classified as malignant, while 15% of benign lesions were misclassified, in a preliminary study.
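
The border-fluctuation feature, subtracting a smoothed border from the actual border and taking the rms of the residual, can be sketched on a synthetic 1-D radial border signal (the actual scheme operates on digitized mammograms, and the test signals here are hypothetical):

```python
import math

def smooth(signal, k=5):
    """Circular moving average with odd window length k."""
    n, h = len(signal), k // 2
    return [sum(signal[(i + d) % n] for d in range(-h, h + 1)) / k
            for i in range(n)]

def rms_fluctuation(border):
    """RMS of (border - smoothed border): a simple spiculation measure."""
    sm = smooth(border)
    return math.sqrt(sum((b - s) ** 2 for b, s in zip(border, sm)) / len(border))

n = 64
# Round lesion: slowly varying radius survives smoothing almost unchanged.
round_border = [100 + 3 * math.cos(2 * math.pi * i / n) for i in range(n)]
# Spiculated lesion: rapid radius oscillations leave a large residual.
spiculated = [100 + 8 * math.cos(2 * math.pi * 10 * i / n) for i in range(n)]
```

A high rms residual indicates a jagged, spiculated border and hence a higher suspicion of malignancy.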

  17. Biometrics based authentication scheme for session initiation protocol

    OpenAIRE

    Xie, Qi; Tang, Zhixiong

    2016-01-01

    Many two-factor challenge-response based session initiation protocol (SIP) authentication schemes have been proposed, but most of them are vulnerable to smart card stolen attacks and password guessing attacks. In this paper, we propose a novel three-factor SIP authentication scheme using biometrics, password and smart card, and utilize the pi calculus-based formal verification tool ProVerif to prove that the proposed protocol achieves security and authentication. Furthermore, our protocol is highly efficient when co...

  18. Dewey Decimal Classification for U. S. Conn: An Advantage?

    Science.gov (United States)

    Marek, Kate

    This paper examines the use of the Dewey Decimal Classification (DDC) system at the U. S. Conn Library at Wayne State College (WSC) in Nebraska. Several developments in the last 20 years which have eliminated the trend toward reclassification of academic library collections from DDC to the Library of Congress (LC) classification scheme are…

  19. Analytical evaluation of computer-based decision aids

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper introduces a method for evaluating decision aids for nuclear power plant operators. The method involves a two-stage process of classification and analytical evaluation of display form and content. The classification scheme relates each specific aid to one or more general decision-making tasks. Evaluation then proceeds using a normative top-down design process based on the classification scheme, by determining or deducing how various design issues associated with this process were resolved by the designer. The result is an assessment of the "understandability" of the aid as well as identification of the training and display features necessary to ensure understandability. 7 refs., 4 figs., 1 tab

  20. Renewing membership in three community-based health insurance schemes in rural India

    NARCIS (Netherlands)

    P. Panda (Pradeep); A. Chakraborty (Arpita); W.A. Raza (Wameq); A.S. Bedi (Arjun Singh)

    2015-01-01

    Low renewal rate is a key challenge facing the sustainability of Community-based Health Insurance (CBHI) schemes. While there is a large literature on initial enrolment into such schemes, there is limited evidence on the factors that impede renewal. This paper uses longitudinal data to

  1. XMSS : a practical forward secure signature scheme based on minimal security assumptions

    NARCIS (Netherlands)

    Buchmann, Johannes; Dahmen, Erik; Hülsing, Andreas; Yang, B.-Y.

    2011-01-01

    We present the hash-based signature scheme XMSS. It is the first provably (forward) secure and practical signature scheme with minimal security requirements: a pseudorandom and a second preimage resistant (hash) function family. Its signature size is reduced to less than 25% compared to the best
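
XMSS belongs to the family of hash-based signatures; the simplest member of that family, a Lamport one-time signature, illustrates why nothing beyond a secure hash function is needed. This sketch is not XMSS itself, which adds Winternitz one-time signatures and a Merkle tree on top:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    """256 pairs of random secrets; the public key is their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg):
    """The 256 bits of the message digest, least significant bit first per byte."""
    digest = H(msg)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    """Reveal one secret of each pair, chosen by the digest bit."""
    return [pair[bit] for pair, bit in zip(sk, bits(msg))]

def verify(pk, msg, sig):
    """Check each revealed secret against the published hash."""
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
```

Each key pair must be used only once; signing a second message would reveal both halves of some pairs, which is exactly the limitation Merkle-tree schemes like XMSS lift.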

  2. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  3. Multi-biometrics based cryptographic key regeneration scheme

    OpenAIRE

    Kanade , Sanjay Ganesh; Petrovska-Delacrétaz , Dijana; Dorizzi , Bernadette

    2009-01-01

    Biometrics lack revocability and privacy while cryptography cannot detect the user's identity. By obtaining cryptographic keys using biometrics, one can achieve properties such as revocability, assurance about the user's identity, and privacy. In this paper, we propose a multi-biometric based cryptographic key regeneration scheme. Since the left and right irises of a person are uncorrelated, we treat them as two independent biometrics and combine them in our system. We propose ...

  4. Secure and Efficient User Authentication Scheme Based on Password and Smart Card for Multiserver Environment

    Directory of Open Access Journals (Sweden)

    Yan Zhao

    2018-01-01

    Full Text Available The rapid development of information and network technologies motivates the emergence of various new computing paradigms, such as distributed computing, cloud computing, and edge computing. This also enables more and more network enterprises to provide multiple different services simultaneously. To ensure these services can be accessed conveniently only by authorized users, many password and smart card based authentication schemes for multiserver architecture have been proposed. Recently, Truong et al. introduced an identity based user authentication scheme built on elliptic curve cryptography in a multiserver environment and claimed that their scheme is secure against popular attacks. However, in this paper, we point out that their scheme suffers from offline password guessing and impersonation attacks and fails to achieve the security requirements of this kind of authentication scheme. Moreover, we put forward a new scheme to overcome the security pitfalls of the above scheme. Security analysis indicates that the proposed scheme is free from well-known attacks. Performance discussion demonstrates that our scheme has advantages in terms of both security properties and computation efficiency and is thus more desirable for practical applications in a multiserver environment.

  5. Performance Evaluation of Frequency Transform Based Block Classification of Compound Image Segmentation Techniques

    Science.gov (United States)

    Selwyn, Ebenezer Juliet; Florinabel, D. Jemi

    2018-04-01

    Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial content. In this paper, we present a comparison of two transform-based block classification approaches for compound images, based on metrics like speed of classification, precision, and recall rate. Block-based classification approaches normally divide the compound image into fixed-size, non-overlapping blocks. A frequency transform such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) is then applied over each block. The mean and standard deviation are computed for each 8 × 8 block and used as a feature set to classify the compound image into text/graphics and picture/background blocks. The classification accuracy of block-classification-based segmentation techniques is measured by evaluation metrics like precision and recall rate. Compound images with smooth and complex backgrounds containing text of varying size, colour, and orientation are considered for testing. Experimental evidence shows that DWT-based segmentation improves recall and precision rates by approximately 2.3% over DCT-based segmentation, at the cost of increased block classification time for both smooth and complex background images.
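
The mean/standard-deviation block feature can be sketched as follows; a fixed threshold on the block standard deviation stands in for the paper's classifier, and the toy image and threshold are illustrative only:

```python
import statistics

def block_features(img, bs=8):
    """Yield (row, col, mean, std) for each bs x bs block of a 2-D image."""
    for r in range(0, len(img), bs):
        for c in range(0, len(img[0]), bs):
            pixels = [img[r + i][c + j] for i in range(bs) for j in range(bs)]
            yield r, c, statistics.mean(pixels), statistics.pstdev(pixels)

def classify_blocks(img, std_threshold=40):
    """Label high-contrast blocks 'text', smooth blocks 'background'."""
    return {(r, c): ('text' if std > std_threshold else 'background')
            for r, c, _, std in block_features(img)}

# An 8 x 16 toy image: the left block is bilevel "text", the right block is flat.
text_block = [[0 if (i + j) % 2 else 255 for j in range(8)] for i in range(8)]
flat_block = [[128] * 8 for _ in range(8)]
img = [text_block[i] + flat_block[i] for i in range(8)]
labels = classify_blocks(img)
```

In the paper's setup the same per-block features would be computed from DCT or DWT coefficients rather than raw pixels.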

  6. Classification of types of stuttering symptoms based on brain activity.

    Directory of Open Access Journals (Sweden)

    Jing Jiang

    Full Text Available Among the non-fluencies seen in speech, some are more typical (MT) of stuttering speakers, whereas others are less typical (LT) and are common to both stuttering and fluent speakers. No neuroimaging work has evaluated the neural basis for grouping these symptom types. Another long-debated issue is which type (LT, MT) whole-word repetitions (WWR) should be placed in. In this study, a sentence completion task was performed by twenty stuttering patients who were scanned using an event-related design. This task elicited stuttering in these patients. Each stuttered trial from each patient was sorted into the MT or LT types with WWR put aside. Pattern classification was employed to train a patient-specific single trial model to automatically classify each trial as MT or LT using the corresponding fMRI data. This model was then validated by using test data that were independent of the training data. In a subsequent analysis, the classification model, just established, was used to determine which type the WWR should be placed in. The results showed that the LT and the MT could be separated with high accuracy based on their brain activity. The brain regions that made most contribution to the separation of the types were: the left inferior frontal cortex and bilateral precuneus, both of which showed higher activity in the MT than in the LT; and the left putamen and right cerebellum which showed the opposite activity pattern. The results also showed that the brain activity for WWR was more similar to that of the LT and fluent speech than to that of the MT. These findings provide a neurological basis for separating the MT and the LT types, and support the widely-used MT/LT symptom grouping scheme. In addition, WWR play a similar role as the LT, and thus should be placed in the LT type.

  8. Construction of an Yucatec Maya soil classification and comparison with the WRB framework.

    Science.gov (United States)

    Bautista, Francisco; Zinck, J Alfred

    2010-02-13

    Mayas living in southeast Mexico have used soils for millennia and thus provide a good example for understanding soil-culture relationships and for exploring the ways indigenous people name and classify the soils of their territory. This paper presents an attempt to organize Maya soil knowledge into a soil classification scheme and compares the latter with the World Reference Base for Soil Resources (WRB). Several participative soil surveys were carried out in the period 2000-2009 with the help of bilingual Maya-Spanish-speaking farmers. A multilingual soil database was built with 315 soil profile descriptions. On the basis of the diagnostic soil properties and the soil nomenclature used by Maya farmers, a soil classification scheme with a hierarchic, dichotomous and open structure was constructed, organized in groups and qualifiers in a fashion similar to that of the WRB system. Maya soil properties were used at the same categorical levels as similar diagnostic properties are used in the WRB system. The Maya soil classification (MSC) is a natural system based on key properties, such as relief position, rock types, size and quantity of stones, color of topsoil and subsoil, depth, water dynamics, and plant-supporting processes. The MSC addresses the soil properties of surficial and subsurficial horizons, and uses plant communities as a qualifier in some cases. The MSC is more accurate than the WRB for classifying Leptosols.
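
A hierarchic, dichotomous and open classification of this kind can be represented as a cascade of yes/no questions over diagnostic properties. The group names, properties, and thresholds below are purely illustrative, not the actual MSC classes:

```python
# Toy dichotomous key over diagnostic soil properties (depth, stoniness,
# topsoil color). Group names and thresholds are hypothetical stand-ins,
# not the real Maya soil classification.

def classify_soil(profile):
    """Walk the dichotomous key and return a group label."""
    if profile['depth_cm'] < 25:
        if profile['stone_fraction'] > 0.4:
            return 'shallow-stony'
        return 'shallow'
    if profile['topsoil_color'] == 'red':
        return 'deep-red'
    return 'deep-dark'

profiles = [
    {'depth_cm': 10, 'stone_fraction': 0.6, 'topsoil_color': 'black'},
    {'depth_cm': 80, 'stone_fraction': 0.1, 'topsoil_color': 'red'},
]
groups = [classify_soil(p) for p in profiles]
```

Because each branch tests one diagnostic property, new groups can be appended without reorganizing the existing key, which is what makes such a scheme "open".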

  9. Management initiatives in a community-based health insurance scheme.

    Science.gov (United States)

    Sinha, Tara; Ranson, M Kent; Chatterjee, Mirai; Mills, Anne

    2007-01-01

    Community-based health insurance (CBHI) schemes have developed in response to inadequacies of alternate systems for protecting the poor against health care expenditures. Some of these schemes have arisen within community-based organizations (CBOs), which have strong links with poor communities, and are therefore well situated to offer CBHI. However, the managerial capacities of many such CBOs are limited. This paper describes management initiatives undertaken in a CBHI scheme in India, in the course of an action-research project. The existing structures and systems at the CBHI had several strengths, but fell short on some counts, which became apparent in the course of planning for two interventions under the research project. Management initiatives were introduced that addressed four features of the CBHI, viz. human resources, organizational structure, implementation systems, and data management. Trained personnel were hired and given clear roles and responsibilities. Lines of reporting and accountability were spelt out, and supportive supervision was provided to team members. The data resources of the organization were strengthened for greater utilization of this information. While the changes that were introduced took some time to be accepted by team members, the commitment of the CBHI's leadership to these initiatives was critical to their success. Copyright (c) 2007 John Wiley & Sons, Ltd.

  10. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    Directory of Open Access Journals (Sweden)

    Jongbin Ko

    2014-01-01

    Full Text Available A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Owing to this heterogeneity, a smart grid system faces potential security threats through its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.
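
    The abstract describes combining per-device scores with a network-level score along an attack route. A minimal sketch of that idea follows; the combination formula, the weights, and the score names are illustrative assumptions, not the authors' actual AVQS definitions.

```python
# Hypothetical sketch of an attack-route-based vulnerability score: each hop
# on an attack route contributes a device vulnerability score, and an
# end-to-end network factor weights the total. Weights and formulas here are
# invented for illustration.

def route_vulnerability(route, device_scores, link_scores):
    """Score one attack route (list of node ids).

    device_scores: node id -> CVSS-like base score in [0, 10]
    link_scores:   (u, v)  -> end-to-end security score in [0, 1],
                   higher meaning a weaker (easier to traverse) link.
    """
    node_part = sum(device_scores[n] for n in route) / len(route)
    hops = list(zip(route, route[1:]))
    if hops:
        net_part = sum(link_scores[h] for h in hops) / len(hops)
    else:
        net_part = 0.0
    # Weighted combination of device and network contributions (assumed).
    return 0.6 * node_part + 0.4 * 10 * net_part

# Toy AMI-like topology: smart meter -> data collector -> head-end system.
devices = {"meter": 6.5, "collector": 5.0, "headend": 8.1}
links = {("meter", "collector"): 0.7, ("collector", "headend"): 0.4}
score = route_vulnerability(["meter", "collector", "headend"], devices, links)
```

    Routes traversing weakly protected links then rank higher than routes over hardened links even when the end devices are identical, which is the point the abstract makes against purely host-based scoring.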

  11. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    Science.gov (United States)

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Owing to this heterogeneity, a smart grid system faces potential security threats through its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.

  12. Cellular automata rule characterization and classification using texture descriptors

    Science.gov (United States)

    Machicao, Jeaneth; Ribas, Lucas C.; Scabini, Leonardo F. S.; Bruno, Odemir M.

    2018-05-01

    The spatio-temporal patterns of cellular automata (CA) have attracted the attention of many researchers, since they can exhibit emergent behavior resulting from the dynamics of each individual cell. In this manuscript, we propose a texture image analysis approach to characterize and classify CA rules. The proposed method converts the CA spatio-temporal patterns into a gray-scale image. The gray scale is obtained by creating a binary number based on the 8-connected neighborhood of each cell of the CA spatio-temporal pattern. We demonstrate that this technique enhances the CA rule characterization and allows the use of different texture image analysis algorithms. Thus, various texture descriptors were evaluated in a supervised training approach aiming to characterize the CA's global evolution. Our results show the efficiency of the proposed method for the classification of the elementary CA (ECAs), reaching a maximum accuracy rate of 99.57% according to the Li-Packard scheme (6 classes) and 94.36% for the classification of the 88-rule scheme. Moreover, within the image analysis context, we found a better performance of the method by means of a transformation of the binary states to a gray scale.
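
    The gray-scale conversion step can be sketched directly: evolve an elementary CA, then read each cell's 8-connected neighborhood as an 8-bit number. The neighbor ordering below is an assumption; the paper does not fix it in this abstract.

```python
# Sketch of the paper's texture conversion: each cell of the binary
# spatio-temporal pattern is mapped to a gray level (0..255) built from its
# eight neighbours.

def evolve(rule, init, steps):
    """Elementary CA with periodic boundaries; returns list of 0/1 rows."""
    table = [(rule >> i) & 1 for i in range(8)]
    rows, row = [list(init)], list(init)
    for _ in range(steps):
        n = len(row)
        row = [table[(row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]]
               for i in range(n)]
        rows.append(row)
    return rows

def to_gray(pattern):
    """8-connected neighbourhood -> gray level per interior cell."""
    h, w = len(pattern), len(pattern[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]   # assumed bit order
    img = []
    for r in range(1, h - 1):
        img.append([sum(pattern[r + dr][c + dc] << b
                        for b, (dr, dc) in enumerate(offsets))
                    for c in range(1, w - 1)])
    return img

# Rule 30 from a single seed cell, 20 time steps.
pattern = evolve(30, [0] * 10 + [1] + [0] * 10, 20)
gray = to_gray(pattern)
```

    Any texture descriptor that expects a gray-scale image can then be applied to `gray`, which is the enhancement the abstract claims over working on the raw binary pattern.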

  13. Classifying Normal and Abnormal Status Based on Video Recordings of Epileptic Patients

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-01-01

    Full Text Available Based on video recordings of the movement of patients with epilepsy, this paper proposes a human action recognition scheme to detect distinct motion patterns and to distinguish the normal status from the abnormal status of epileptic patients. The scheme first extracts local features and holistic features, which are complementary to each other. Afterwards, a support vector machine is applied for classification. Based on the experimental results, this scheme obtains a satisfactory classification result and provides a fundamental analysis towards human-robot interaction with socially assistive robots in caring for patients with epilepsy (or other patients with brain disorders) in order to protect them from injury.

  14. A Quantum Proxy Weak Blind Signature Scheme Based on Controlled Quantum Teleportation

    Science.gov (United States)

    Cao, Hai-Jing; Yu, Yao-Feng; Song, Qin; Gao, Lan-Xiang

    2015-04-01

    Proxy blind signatures are applied to electronic payment systems, electronic voting systems, mobile agent systems, internet security, etc. A quantum proxy weak blind signature scheme is proposed in this paper. It is based on controlled quantum teleportation. A five-qubit entangled state functions as the quantum channel. The scheme uses the physical characteristics of quantum mechanics to implement message blinding, so it can guarantee not only the unconditional security of the scheme but also the anonymity of the message owner.

  15. Connection Setup Signaling Scheme with Flooding-Based Path Searching for Diverse-Metric Network

    Science.gov (United States)

    Kikuta, Ko; Ishii, Daisuke; Okamoto, Satoru; Oki, Eiji; Yamanaka, Naoaki

    Connection setup on various computer networks is now achieved by GMPLS. This technology is based on the source-routing approach, which requires the source node to store metric information of the entire network prior to computing a route. Thus all metric information must be distributed to all network nodes and kept up-to-date. However, as metric information becomes more diverse and generalized, it is hard to update all information due to the huge update overhead. Emerging network services and applications require the network to support diverse metrics for achieving various communication qualities. Increasing the number of metrics supported by the network causes excessive processing of metric update messages. To reduce the number of metric update messages, another scheme is required. This paper proposes a connection setup scheme that uses flooding-based signaling rather than the distribution of metric information. The proposed scheme requires only the flooding of signaling messages carrying the requested metric information; no routing protocol is required. Evaluations confirm that the proposed scheme achieves connection establishment without excessive overhead. Our analysis shows that the proposed scheme greatly reduces the number of control messages compared to the conventional scheme, while their blocking probabilities are comparable.
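
    The core idea, flooding a setup message that carries the requested metric instead of distributing metrics beforehand, can be sketched as follows. The topology, the delay metric, and the duplicate-suppression rule are invented for the example; real GMPLS-style signaling is considerably more involved.

```python
# Illustrative flooding-based path search: a setup message floods outward
# from the source, and each node forwards it only over links that keep the
# accumulated metric within the request. Duplicate floods are suppressed
# when a node has already been reached with a better accumulated metric.

from collections import deque

def flood_search(links, src, dst, max_delay):
    """links: (u, v) -> per-link delay (treated as bidirectional).
    Returns (path, messages_sent); path is None if no feasible route."""
    adj = {}
    for (u, v), d in links.items():
        adj.setdefault(u, []).append((v, d))
        adj.setdefault(v, []).append((u, d))
    queue = deque([(src, 0, [src])])
    best_delay = {src: 0}
    messages = 0
    while queue:
        node, delay, path = queue.popleft()
        if node == dst:
            return path, messages
        for nxt, d in adj.get(node, []):
            nd = delay + d
            if nd <= max_delay and nd < best_delay.get(nxt, float("inf")):
                best_delay[nxt] = nd     # suppress duplicate floods
                messages += 1            # one signaling message per forward
                queue.append((nxt, nd, path + [nxt]))
    return None, messages

links = {("a", "b"): 2, ("b", "c"): 2, ("a", "c"): 10, ("c", "d"): 1}
path, msgs = flood_search(links, "a", "d", max_delay=6)
```

    Note that the direct a-c link is never signaled because its delay already violates the request, which is how flooding prunes itself without any global metric database.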

  16. Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information

    Science.gov (United States)

    Jamshidpour, N.; Homayouni, S.; Safari, A.

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are the two most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method, which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationship among pixels in spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pines and Pavia University data sets, respectively.
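
    The merging step can be shown on a toy problem: build a spectral graph and a spatial graph over the same pixels, mix their Laplacians with a weight, and propagate the few labels harmonically. The data, the two weight matrices, and the mixing weight are invented for illustration; the paper's graph construction on real hyperspectral cubes is more elaborate.

```python
# Toy numpy sketch: merge the Laplacians of a spectral and a spatial graph,
# then solve the harmonic (label-propagation) system for unlabeled pixels.

import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def merged_ssl(W_spec, W_spat, labels, alpha=0.5):
    """labels: +1/-1 on labeled pixels, 0 on unlabeled ones.
    Returns predicted labels via the harmonic solution on the joint graph."""
    L = alpha * laplacian(W_spec) + (1 - alpha) * laplacian(W_spat)
    l = np.flatnonzero(labels != 0)
    u = np.flatnonzero(labels == 0)
    # Harmonic function: L_uu f_u = -L_ul f_l
    f_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, l)] @ labels[l])
    out = labels.astype(float)
    out[u] = f_u
    return np.sign(out)

# Four pixels forming two clusters; only pixels 0 and 3 are labeled.
W_spec = np.array([[0.0, 1.0, 0.1, 0.0],
                   [1.0, 0.0, 0.0, 0.1],
                   [0.1, 0.0, 0.0, 1.0],
                   [0.0, 0.1, 1.0, 0.0]])
W_spat = np.array([[0.0, 0.9, 0.0, 0.0],
                   [0.9, 0.0, 0.2, 0.0],
                   [0.0, 0.2, 0.0, 0.9],
                   [0.0, 0.0, 0.9, 0.0]])
labels = np.array([1, 0, 0, -1])
pred = merged_ssl(W_spec, W_spat, labels)
```

    Because the Laplacian is linear in the edge weights, merging Laplacians is equivalent to propagating labels on a graph whose edges are the weighted average of the spectral and spatial edges.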

  17. GRAPH-BASED SEMI-SUPERVISED HYPERSPECTRAL IMAGE CLASSIFICATION USING SPATIAL INFORMATION

    Directory of Open Access Journals (Sweden)

    N. Jamshidpour

    2017-09-01

    Full Text Available Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are the two most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method, which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationship among pixels in spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pines and Pavia University data sets, respectively.

  18. Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data †

    Science.gov (United States)

    Xie, Qingqing; Wang, Liangmin

    2016-01-01

    With the wide use of mobile sensing applications, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based services (LBS) for users. However, the mobile cloud is untrustworthy. Privacy concerns force sensitive locations to be stored on the mobile cloud in encrypted form. However, this brings a great challenge to utilizing these data to provide efficient LBS. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (for Rivest, Shamir and Adleman) algorithm and the ciphertext policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computation and comparison efficiently for authorized users, without location privacy leakage. In the end, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against the chosen plaintext attack (CPA) and efficient enough for practical applications in terms of user-side computation overhead. PMID:27897984

  19. Demand response scheme based on lottery-like rebates

    KAUST Repository

    Schwartz, Galina A.; Tembine, Hamidou; Amin, Saurabh; Sastry, S. Shankar

    2014-01-01

    In this paper, we develop a novel mechanism for reducing volatility of residential demand for electricity. We construct a reward-based (rebate) mechanism that provides consumers with incentives to shift their demand to off-peak time. In contrast to most other mechanisms proposed in the literature, the key feature of our mechanism is its modest requirements on user preferences, i.e., it does not require exact knowledge of user responsiveness to rewards for shifting their demand from the peak to the off-peak time. Specifically, our mechanism utilizes a probabilistic reward structure for users who shift their demand to the off-peak time, and is robust to incomplete information about user demand and/or risk preferences. We approach the problem from the public good perspective, and demonstrate that the mechanism can be implemented via lottery-like schemes. Our mechanism makes it possible to reduce distribution losses, and thus to improve the efficiency of electricity distribution. Finally, the mechanism can be readily incorporated into the emerging demand response schemes (e.g., the time-of-day pricing, and critical peak pricing schemes), and has security and privacy-preserving properties.
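
    The probabilistic reward structure can be sketched in a few lines: the utility pools the rebate budget and awards it by lottery, with winning odds proportional to the demand each user moved off-peak. The budget, user names, and proportional-ticket rule are illustrative assumptions, not the paper's calibrated mechanism.

```python
# Hypothetical lottery-like rebate: expected rebates are proportional to
# shifted demand, and one winner is drawn per period.

import random

def expected_rebates(shifted_kwh, prize):
    """Each user's expected reward under a proportional-odds lottery."""
    total = sum(shifted_kwh.values())
    return {u: prize * kwh / total for u, kwh in shifted_kwh.items()}

def draw_winner(shifted_kwh, rng):
    """One lottery draw, odds proportional to shifted demand."""
    users = list(shifted_kwh)
    weights = [shifted_kwh[u] for u in users]
    return rng.choices(users, weights=weights, k=1)[0]

shifted = {"alice": 3.0, "bob": 1.0, "carol": 1.0}   # kWh moved off-peak
exp = expected_rebates(shifted, prize=50.0)
winner = draw_winner(shifted, random.Random(7))
```

    The appeal noted in the abstract is that the utility only needs the pooled prize and the observed shifts; it never needs each user's private responsiveness to rewards.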

  20. Demand response scheme based on lottery-like rebates

    KAUST Repository

    Schwartz, Galina A.

    2014-08-24

    In this paper, we develop a novel mechanism for reducing volatility of residential demand for electricity. We construct a reward-based (rebate) mechanism that provides consumers with incentives to shift their demand to off-peak time. In contrast to most other mechanisms proposed in the literature, the key feature of our mechanism is its modest requirements on user preferences, i.e., it does not require exact knowledge of user responsiveness to rewards for shifting their demand from the peak to the off-peak time. Specifically, our mechanism utilizes a probabilistic reward structure for users who shift their demand to the off-peak time, and is robust to incomplete information about user demand and/or risk preferences. We approach the problem from the public good perspective, and demonstrate that the mechanism can be implemented via lottery-like schemes. Our mechanism makes it possible to reduce distribution losses, and thus to improve the efficiency of electricity distribution. Finally, the mechanism can be readily incorporated into the emerging demand response schemes (e.g., the time-of-day pricing, and critical peak pricing schemes), and has security and privacy-preserving properties.

  1. Breaking a chaos-based secure communication scheme designed by an improved modulation method

    Energy Technology Data Exchange (ETDEWEB)

    Li Shujun [Department of Electronic Engineering, City University of Hong Kong, Kowloon, Hong Kong (China)]. E-mail: hooklee@mail.com; Alvarez, Gonzalo [Instituto de Fisica Aplicada, Consejo Superior de Investigaciones Cientificas, Serrano 144-28006 Madrid (Spain); Chen Guanrong [Department of Electronic Engineering, City University of Hong Kong, Kowloon, Hong Kong (China)

    2005-07-01

    Recently Bu and Wang [Bu S, Wang B-H. Chaos, Solitons and Fractals 2004;19(4):919-24] proposed a simple modulation method aiming to improve the security of chaos-based secure communications against return-map-based attacks. Soon this modulation method was independently cryptanalyzed by Chee et al. [Chee CY, Xu D, Bishop SR. Chaos, Solitons and Fractals 2004;21(5):1129-34], Wu et al. [Wu X, Hu H, Zhang B. Chaos, Solitons and Fractals 2004;22(2):367-73], and Alvarez et al. [Alvarez G, Montoya F, Romera M, Pastor G. Chaos, Solitons and Fractals, in press, arXiv:nlin/0406065] via different attacks. As an enhancement to the Bu-Wang method, an improving scheme was suggested by Wu et al. by removing the relationship between the modulating function and the zero-points. The present paper points out that the improved scheme proposed by Wu et al. is still insecure against a new attack. Compared with the existing attacks, the proposed attack is more powerful and can also break the original Bu-Wang scheme. Furthermore, it is pointed out that the security of the modulation-based schemes proposed by Wu et al. is not so satisfactory from a pure cryptographical point of view. The synchronization performance of this class of modulation-based schemes is also discussed.

  2. Breaking a chaos-based secure communication scheme designed by an improved modulation method

    International Nuclear Information System (INIS)

    Li Shujun; Alvarez, Gonzalo; Chen Guanrong

    2005-01-01

    Recently Bu and Wang [Bu S, Wang B-H. Chaos, Solitons and Fractals 2004;19(4):919-24] proposed a simple modulation method aiming to improve the security of chaos-based secure communications against return-map-based attacks. Soon this modulation method was independently cryptanalyzed by Chee et al. [Chee CY, Xu D, Bishop SR. Chaos, Solitons and Fractals 2004;21(5):1129-34], Wu et al. [Wu X, Hu H, Zhang B. Chaos, Solitons and Fractals 2004;22(2):367-73], and Alvarez et al. [Alvarez G, Montoya F, Romera M, Pastor G. Chaos, Solitons and Fractals, in press, arXiv:nlin/0406065] via different attacks. As an enhancement to the Bu-Wang method, an improving scheme was suggested by Wu et al. by removing the relationship between the modulating function and the zero-points. The present paper points out that the improved scheme proposed by Wu et al. is still insecure against a new attack. Compared with the existing attacks, the proposed attack is more powerful and can also break the original Bu-Wang scheme. Furthermore, it is pointed out that the security of the modulation-based schemes proposed by Wu et al. is not so satisfactory from a pure cryptographical point of view. The synchronization performance of this class of modulation-based schemes is also discussed

  3. Reversible Dual-Image-Based Hiding Scheme Using Block Folding Technique

    Directory of Open Access Journals (Sweden)

    Tzu-Chuen Lu

    2017-10-01

    Full Text Available The concept of a dual-image based scheme in information sharing consists of concealing secret messages in two cover images; only someone who has both stego-images can extract the secret messages. In 2015, Lu et al. proposed a center-folding strategy where each secret symbol is folded into the reduced digit to reduce the distortion of the stego-image. Then, in 2016, Lu et al. used a frequency-based encoding strategy to reduce the distortion of the frequency of occurrence of the maximum absolute value. Because the folding strategy can obviously reduce the value, the proposed scheme includes the folding operation twice to further decrease the reduced digit. We use a frequency-based encoding strategy to encode a secret message and then use the block folding technique by performing the center-folding operation twice to embed secret messages. An indicator is needed to identify the sequence number of the folding operation. The proposed scheme collects several indicators to produce a combined code and hides the code in a pixel to reduce the size of the indicators. The experimental results show that the proposed method can achieve higher image quality under the same embedding rate or higher payload, which is better than other methods.
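
    The center-folding operation that this scheme builds on can be demonstrated compactly. The sketch below shows a single fold of a k-bit secret digit split across two stego pixels, following the strategy attributed to Lu et al. (2015); the paper's block-folding variant folds twice and adds indicator bits, which are omitted here.

```python
# Minimal sketch of center folding for dual-image hiding: a k-bit digit d is
# folded to d' = d - 2**(k-1), shrinking its magnitude, and split across one
# pixel of each stego-image. Extraction recovers both pixel and digit.

import math

def embed(pixel, digit, k):
    folded = digit - 2 ** (k - 1)          # center-fold the secret digit
    p1 = pixel + math.ceil(folded / 2)     # pixel for stego-image 1
    p2 = pixel - math.floor(folded / 2)    # pixel for stego-image 2
    return p1, p2

def extract(p1, p2, k):
    folded = p1 - p2                       # difference reveals folded digit
    digit = folded + 2 ** (k - 1)
    pixel = p2 + math.floor(folded / 2)    # undo the split
    return pixel, digit

p1, p2 = embed(100, 5, 3)                  # hide digit 5 in pixel value 100
```

    Because |d'| is at most 2**(k-1), each stego pixel moves by at most about 2**(k-2) from the cover value, which is the distortion reduction the folding strategy is designed to achieve.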

  4. Key-phrase based classification of public health web pages.

    Science.gov (United States)

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key phrase extraction and matching. Easily extensible both in terms of new classes and new languages, this method proves to be a good solution for text classification faced with a total lack of training data. To evaluate the proposed solution we have used a small collection of public health related web pages created by a double-blind manual classification. Our experiments have shown that by choosing an adequate threshold value, the desired value for either precision or recall can be achieved.
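
    A rough sketch of key-phrase matching classification with a threshold follows. The phrase lists and the match-count scoring rule are invented for illustration; the paper's extraction and matching pipeline is not specified in this abstract.

```python
# Hypothetical key-phrase classifier: each class is defined by a hand-built
# phrase list, a page is scored by how many phrases it matches, and a
# threshold trades precision against recall (below it, no label is given).

def classify(text, class_phrases, threshold):
    text = text.lower()
    scores = {c: sum(p in text for p in phrases)
              for c, phrases in class_phrases.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

classes = {
    "vaccination": ["vaccine", "immunization", "booster dose"],
    "nutrition": ["balanced diet", "vitamin", "calorie intake"],
}
label = classify("A booster dose completes the vaccine schedule.", classes, 2)
```

    Raising the threshold abstains on weakly matching pages (higher precision); lowering it labels more pages (higher recall), which is the trade-off the evaluation measures.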

  5. Biometrics based authentication scheme for session initiation protocol.

    Science.gov (United States)

    Xie, Qi; Tang, Zhixiong

    2016-01-01

    Many two-factor challenge-response based session initiation protocol (SIP) authentication schemes have been proposed, but most of them are vulnerable to smart card stolen attacks and password guessing attacks. In this paper, we propose a novel three-factor SIP authentication scheme using biometrics, password and smart card, and utilize the pi calculus-based formal verification tool ProVerif to prove that the proposed protocol achieves security and authentication. Furthermore, our protocol is highly efficient when compared to other related protocols.

  6. A novel, fast and efficient single-sensor automatic sleep-stage classification based on complementary cross-frequency coupling estimates.

    Science.gov (United States)

    Dimitriadis, Stavros I; Salis, Christos; Linden, David

    2018-04-01

    Limitations of the manual scoring of polysomnograms, which include data from electroencephalogram (EEG), electro-oculogram (EOG), electrocardiogram (ECG) and electromyogram (EMG) channels, have long been recognized. Manual staging is resource intensive and time consuming, and thus considerable effort must be spent to ensure inter-rater reliability. As a result, there is great interest in techniques based on signal processing and machine learning for completely Automatic Sleep Stage Classification (ASSC). In this paper, we present a single-EEG-sensor ASSC technique based on the dynamic reconfiguration of different aspects of cross-frequency coupling (CFC) estimated between predefined frequency pairs over 5 s epoch lengths. The proposed analytic scheme is demonstrated using the PhysioNet Sleep European Data Format (EDF) Database with repeat recordings from 20 healthy young adults. We validate our methodology on a second sleep dataset. We achieved very high classification sensitivity, specificity and accuracy of 96.2 ± 2.2%, 94.2 ± 2.3%, and 94.4 ± 2.2% across 20 folds, respectively, and also a high mean F1 score (92%, range 90-94%) when a multi-class Naive Bayes classifier was applied. High classification performance was also achieved on the second sleep dataset. Our method outperformed the accuracy of previous studies not only on different datasets but also on the same database. Single-sensor ASSC makes the entire methodology appropriate for longitudinal monitoring using wearable EEG in real-world and laboratory-oriented environments. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
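
    One common CFC estimate of the kind such a pipeline computes per 5 s epoch is phase-amplitude coupling by mean vector length. The sketch below uses a synthetic signal and arbitrary band edges; the paper evaluates several CFC descriptors and frequency pairs, not just this one.

```python
# Hedged numpy sketch of phase-amplitude coupling (mean vector length): the
# phase of a slow band is paired with the amplitude envelope of a fast band.

import numpy as np

def analytic(x):
    """FFT-based analytic signal (same construction as scipy's hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def bandpass(x, fs, lo, hi):
    """Ideal (brick-wall) band-pass via the real FFT."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, len(x))

def pac_mvl(x, fs, phase_band, amp_band):
    phase = np.angle(analytic(bandpass(x, fs, *phase_band)))
    amp = np.abs(analytic(bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

fs = 200
t = np.arange(0, 5, 1 / fs)                        # one 5 s epoch
slow = np.sin(2 * np.pi * 6 * t)                   # 6 Hz "theta"
coupled = (1 + slow) * np.sin(2 * np.pi * 40 * t)  # 40 Hz, theta-modulated
pac = pac_mvl(slow + coupled, fs, (4, 8), (30, 50))
```

    A theta-modulated gamma burst yields a large mean vector length, while an unmodulated 40 Hz tone yields a value near zero; descriptors like this, computed per epoch, become the features fed to the classifier.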

  7. Novel Schemes for Measurement-Based Quantum Computation

    International Nuclear Information System (INIS)

    Gross, D.; Eisert, J.

    2007-01-01

    We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.

  8. Novel schemes for measurement-based quantum computation.

    Science.gov (United States)

    Gross, D; Eisert, J

    2007-06-01

    We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.

  9. The Study of Land Use Classification Based on SPOT6 High Resolution Data

    OpenAIRE

    Wu Song; Jiang Qigang

    2016-01-01

    A method for the rapid classification and extraction of land-use types in agricultural areas is presented, based on SPOT6 high-resolution remote sensing data and the good nonlinear classification ability of support vector machines. The results show that the SPOT6 high-resolution remote sensing data can realize land classification efficiently; the overall classification accuracy reached 88.79% and the Kappa coefficient is 0.8632, which means that the classif...

  10. Rough set classification based on quantum logic

    Science.gov (United States)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation which is presented here. Theoretical analyses demonstrate that the new model for quantum rough sets has a new type of decision rule with less redundancy, which can be used to give accurate classification using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than in logic or set theory. The experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.
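
    For contrast with the quantum model, the classical rough-set approximations the paper generalizes can be stated in a few lines: the lower approximation collects equivalence classes fully inside the target set, the upper one those that merely touch it. The universe and equivalence relation below are toy examples.

```python
# Classical (non-quantum) rough-set lower and upper approximations. The
# paper's model replaces the crisp equivalence relation used here with a
# quantum similarity relation.

def approximations(universe, equiv, target):
    """equiv: element -> equivalence-class key; target: set of elements."""
    classes = {}
    for x in universe:
        classes.setdefault(equiv(x), set()).add(x)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:        # class certainly inside the target
            lower |= cls
        if cls & target:         # class possibly inside the target
            upper |= cls
    return lower, upper

U = set(range(10))
lower, upper = approximations(U, lambda x: x // 3, {0, 1, 2, 4})
```

    The gap between upper and lower (the boundary region) is what a decision rule must hedge over; the paper's claim is that quantum similarity relations shrink this redundancy.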

  11. A Fingerprint Encryption Scheme Based on Irreversible Function and Secure Authentication

    Directory of Open Access Journals (Sweden)

    Yijun Yang

    2015-01-01

    Full Text Available A fingerprint encryption scheme based on an irreversible function has been designed in this paper. Since the fingerprint template includes almost the entire information of users’ fingerprints, personal authentication can be determined only by the fingerprint features. This paper proposes an irreversible transforming function (using the improved SHA1 algorithm) to transform the original minutiae which are extracted from the thinned fingerprint image. Then, the Chinese remainder theorem is used to obtain the biokey from the integration of the transformed minutiae and the private key. The result shows that the scheme has better performance in security and efficiency compared with other irreversible function schemes.
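
    The two ingredients the abstract names, an SHA-1-based irreversible transform of minutiae and a CRT combination with a private key, can be sketched as follows. The moduli, the minutia encoding, and the way the pieces are wired together are illustrative assumptions, not the paper's construction.

```python
# Hedged sketch: hash each (x, y, angle) minutia irreversibly with SHA-1,
# reduce to residues, and combine residues with a private key using the
# Chinese remainder theorem to form a single "biokey" integer.

import hashlib
from functools import reduce

def transform(minutia):
    """Irreversible transform: SHA-1 digest of a minutia, as an integer."""
    data = ",".join(map(str, minutia)).encode()
    return int.from_bytes(hashlib.sha1(data).digest(), "big")

def crt_combine(residues, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python >= 3.8)
    return x % M

minutiae = [(12, 40, 90), (55, 23, 135)]
key = 12345                                 # stand-in private key
moduli = [257, 263, 269]                    # pairwise-coprime primes
residues = [transform(m) % n for m, n in zip(minutiae, moduli[:2])]
biokey = crt_combine(residues + [key % moduli[2]], moduli)
```

    Recovering the original minutiae from the biokey would require inverting SHA-1, which is the irreversibility property the scheme relies on.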

  12. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    Science.gov (United States)

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
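
    A small experiment in the spirit of this study: compute the optimal two-class Fisher-Jenks break (minimum summed within-class squared deviation) on a full attribute vector and on a random sample, and compare the breaks. For k = 2 classes an exhaustive scan over split points is exact; the data below are synthetic.

```python
# Sampling vs full enumeration for a two-class Fisher-Jenks break.

import random

def ssd(vals):
    """Sum of squared deviations from the mean."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def jenks_break_2(values):
    """Break value minimizing within-class SSD over two classes (exact)."""
    v = sorted(values)
    best = min(range(1, len(v)), key=lambda i: ssd(v[:i]) + ssd(v[i:]))
    return v[best - 1]          # upper bound of the lower class

rng = random.Random(0)
full = ([rng.gauss(0, 1) for _ in range(400)]
        + [rng.gauss(8, 1) for _ in range(400)])
sample = rng.sample(full, 120)
b_full = jenks_break_2(full)
b_samp = jenks_break_2(sample)
```

    On well-separated data the sampled break lands close to the full-enumeration break at a fraction of the cost, which is the speed/accuracy trade-off the Monte Carlo simulations quantify.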

  13. A group signature scheme based on quantum teleportation

    International Nuclear Information System (INIS)

    Wen Xiaojun; Tian Yuan; Ji Liping; Niu Xiamu

    2010-01-01

    In this paper, we present a group signature scheme using quantum teleportation. Different from classical group signature and current quantum signature schemes, which could only deliver either group signature or unconditional security, our scheme guarantees both by adopting quantum key preparation, quantum encryption algorithm and quantum teleportation. Security analysis proved that our scheme has the characteristics of group signature, non-counterfeit, non-disavowal, blindness and traceability. Our quantum group signature scheme has a foreseeable application in the e-payment system, e-government, e-business, etc.

  14. A group signature scheme based on quantum teleportation

    Energy Technology Data Exchange (ETDEWEB)

    Wen Xiaojun; Tian Yuan; Ji Liping; Niu Xiamu, E-mail: wxjun36@gmail.co [Information Countermeasure Technique Research Institute, Harbin Institute of Technology, Harbin 150001 (China)

    2010-05-01

    In this paper, we present a group signature scheme using quantum teleportation. Different from classical group signature and current quantum signature schemes, which could only deliver either group signature or unconditional security, our scheme guarantees both by adopting quantum key preparation, quantum encryption algorithm and quantum teleportation. Security analysis proved that our scheme has the characteristics of group signature, non-counterfeit, non-disavowal, blindness and traceability. Our quantum group signature scheme has a foreseeable application in the e-payment system, e-government, e-business, etc.

  15. A light weight secure image encryption scheme based on chaos & DNA computing

    Directory of Open Access Journals (Sweden)

    Bhaskar Mondal

    2017-10-01

    Full Text Available This paper proposes a new lightweight secure cryptographic scheme for secure image communication. In this scheme the plain image is first permuted using a sequence of pseudo-random numbers (PRN) and then encrypted by DeoxyriboNucleic Acid (DNA) computation. Two PRN sequences are generated by a Pseudo Random Number Generator (PRNG) based on a cross-coupled chaotic logistic map using two sets of keys. The first PRN sequence is used for permuting the plain image, whereas the second PRN sequence is used for generating a random DNA sequence. The number of rounds of permutation and encryption may be varied to increase security. The scheme is proposed for gray-level images, but it may be extended to color images and text data. Simulation results exhibit that the proposed scheme can resist various kinds of attacks.
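
    The permutation stage can be sketched as follows: two cross-coupled logistic maps generate a keystream whose sort order shuffles the pixels. The coupling form and parameters are assumptions, and the DNA encoding stage is omitted.

```python
# Sketch of chaotic permutation: a cross-coupled logistic-map PRNG drives a
# sort-order pixel shuffle; the inverse permutation restores the image.

def cross_coupled_prng(x, y, n, r1=3.99, r2=3.98):
    """n keystream values from two logistic maps that perturb each other."""
    out = []
    for _ in range(n):
        x, y = r1 * x * (1 - x), r2 * y * (1 - y)
        x, y = (x + 0.1 * y) % 1, (y + 0.1 * x) % 1   # assumed coupling
        out.append(x)
    return out

def permute(pixels, keystream):
    """Shuffle pixels by the ranking of the chaotic keystream."""
    order = sorted(range(len(pixels)), key=lambda i: keystream[i])
    return [pixels[i] for i in order], order

def unpermute(shuffled, order):
    plain = [0] * len(shuffled)
    for dst, src in enumerate(order):
        plain[src] = shuffled[dst]
    return plain

pixels = list(range(16))                    # stand-in for image bytes
ks = cross_coupled_prng(0.345, 0.678, len(pixels))
cipher, order = permute(pixels, ks)
recovered = unpermute(cipher, order)
```

    The receiver regenerates the same keystream from the shared keys (the two seeds and map parameters) and inverts the permutation; no keystream values ever travel with the image.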

  16. Novel UEP LT Coding Scheme with Feedback Based on Different Degree Distributions

    Directory of Open Access Journals (Sweden)

    Li Ya-Fang

    2016-01-01

    Full Text Available Traditional unequal error protection (UEP) schemes have some limitations and problems, such as poor UEP performance for high-priority data and the serious sacrifice of low-priority data in decoding performance. Based on reasonable applications of different degree distributions in LT codes, this paper puts forward a novel UEP LT coding scheme with a simple feedback to encode these data packets separately. Simulation results show that the proposed scheme can effectively protect high-priority data and improve the transmission efficiency of low-priority data from 2.9% to 22.3%. Furthermore, it is fairly suitable to apply this novel scheme to multicast and broadcast environments, since only a simple feedback is introduced.
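
    The use of different degree distributions per priority class can be sketched as below: high-priority packets are encoded with a distribution biased toward low degrees (decodable earlier), low-priority packets with a heavier one. The two distributions are toy stand-ins for the paper's, and the feedback mechanism is omitted.

```python
# Sketch of LT encoding with priority-dependent degree distributions: each
# output symbol is the XOR of a random degree-d subset of input packets.

import random

def sample_degree(dist, rng):
    """dist: list of (degree, probability) pairs summing to 1."""
    u, acc = rng.random(), 0.0
    for d, p in dist:
        acc += p
        if u <= acc:
            return d
    return dist[-1][0]

def lt_encode(packets, dist, rng):
    """One LT output symbol: XOR of a random degree-d subset of packets."""
    d = min(sample_degree(dist, rng), len(packets))
    idx = rng.sample(range(len(packets)), d)
    sym = 0
    for i in idx:
        sym ^= packets[i]
    return idx, sym

hp_dist = [(1, 0.4), (2, 0.4), (3, 0.2)]      # favors low degrees (toy)
lp_dist = [(2, 0.3), (4, 0.4), (8, 0.3)]      # heavier degrees (toy)
rng = random.Random(1)
hp_packets = [0x1A, 0x2B, 0x3C]
idx, sym = lt_encode(hp_packets, hp_dist, rng)
```

    Low-degree symbols seed the belief-propagation ripple early, so the high-priority stream tends to finish decoding first; this is the lever the paper tunes with its two distributions.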

  17. A Quantum Multi-Proxy Weak Blind Signature Scheme Based on Entanglement Swapping

    Science.gov (United States)

    Yan, LiLi; Chang, Yan; Zhang, ShiBin; Han, GuiHua; Sheng, ZhiWei

    2017-02-01

    In this paper, we present a multi-proxy weak blind signature scheme based on quantum entanglement swapping of Bell states. In the scheme, proxy signers can finish the signature instead of original singer with his/her authority. It can be applied to the electronic voting system, electronic paying system, etc. The scheme uses the physical characteristics of quantum mechanics to implement delegation, signature and verification. It could guarantee not only the unconditionally security but also the anonymity of the message owner. The security analysis shows the scheme satisfies the security features of multi-proxy weak signature, singers cannot disavowal his/her signature while the signature cannot be forged by others, and the message owner can be traced.

  18. Robust and efficient biometrics based password authentication scheme for telecare medicine information systems using extended chaotic maps.

    Science.gov (United States)

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian

    2015-06-01

    The Telecare Medicine Information Systems (TMISs) provide an efficient communicating platform supporting the patients access health-care delivery services via internet or mobile networks. Authentication becomes an essential need when a remote patient logins into the telecare server. Recently, many extended chaotic maps based authentication schemes using smart cards for TMISs have been proposed. Li et al. proposed a secure smart cards based authentication scheme for TMISs using extended chaotic maps based on Lee's and Jiang et al.'s scheme. In this study, we show that Li et al.'s scheme has still some weaknesses such as violation the session key security, vulnerability to user impersonation attack and lack of local verification. To conquer these flaws, we propose a chaotic maps and smart cards based password authentication scheme by applying biometrics technique and hash function operations. Through the informal and formal security analyses, we demonstrate that our scheme is resilient possible known attacks including the attacks found in Li et al.'s scheme. As compared with the previous authentication schemes, the proposed scheme is more secure and efficient and hence more practical for telemedical environments.

  19. Organizational Data Classification Based on the Importance Concept of Complex Networks.

    Science.gov (United States)

    Carneiro, Murillo Guimaraes; Zhao, Liang

    2017-08-01

    Data classification is a common task, which can be performed by both computers and human beings. However, a fundamental difference between them can be observed: computer-based classification considers only physical features (e.g., similarity, distance, or distribution) of input data; by contrast, brain-based classification takes into account not only physical features, but also the organizational structure of data. In this paper, we figure out the data organizational structure for classification using complex networks constructed from training data. Specifically, an unlabeled instance is classified by the importance concept characterized by Google's PageRank measure of the underlying data networks. Before a test data instance is classified, a network is constructed from vector-based data set and the test instance is inserted into the network in a proper manner. To this end, we also propose a measure, called spatio-structural differential efficiency, to combine the physical and topological features of the input data. Such a method allows for the classification technique to capture a variety of data patterns using the unique importance measure. Extensive experiments demonstrate that the proposed technique has promising predictive performance on the detection of heart abnormalities.

  20. Combined Kernel-Based BDT-SMO Classification of Hyperspectral Fused Images

    Directory of Open Access Journals (Sweden)

    Fenghua Huang

    2014-01-01

    Full Text Available To solve the poor generalization and flexibility problems that single kernel SVM classifiers have while classifying combined spectral and spatial features, this paper proposed a solution to improve the classification accuracy and efficiency of hyperspectral fused images: (1 different radial basis kernel functions (RBFs are employed for spectral and textural features, and a new combined radial basis kernel function (CRBF is proposed by combining them in a weighted manner; (2 the binary decision tree-based multiclass SMO (BDT-SMO is used in the classification of hyperspectral fused images; (3 experiments are carried out, where the single radial basis function- (SRBF- based BDT-SMO classifier and the CRBF-based BDT-SMO classifier are used, respectively, to classify the land usages of hyperspectral fused images, and genetic algorithms (GA are used to optimize the kernel parameters of the classifiers. The results show that, compared with SRBF, CRBF-based BDT-SMO classifiers display greater classification accuracy and efficiency.

  1. Decimal Classification Editions

    Directory of Open Access Journals (Sweden)

    Zenovia Niculescu

    2009-01-01

    Full Text Available The study approaches the evolution of Dewey Decimal Classification editions from the perspective of updating the terminology, reallocating and expanding the main and auxilary structure of Dewey indexing language. The comparative analysis of DDC editions emphasizes the efficiency of Dewey scheme from the point of view of improving the informational offer, through basic index terms, revised and developed, as well as valuing the auxilary notations.

  2. An Efficient Diffusion Scheme for Chaos-Based Digital Image Encryption

    Directory of Open Access Journals (Sweden)

    Jun-xin Chen

    2014-01-01

    Full Text Available In recent years, amounts of permutation-diffusion architecture-based image cryptosystems have been proposed. However, the key stream elements in the diffusion procedure are merely depending on the secret key that is usually fixed during the whole encryption process. Cryptosystems of this type suffer from unsatisfactory encryption speed and are considered insecure upon known/chosen plaintext attacks. In this paper, an efficient diffusion scheme is proposed. This scheme consists of two diffusion procedures, with a supplementary diffusion procedure padded after the normal diffusion. In the supplementary diffusion module, the control parameter of the selected chaotic map is altered by the resultant image produced after the normal diffusion operation. As a result, a slight difference in the plain image can be transferred to the chaotic iteration and bring about distinct key streams, and hence totally different cipher images will be produced. Therefore, the scheme can remarkably accelerate the diffusion effect of the cryptosystem and will effectively resist known/chosen plaintext attacks. Theoretical analyses and experimental results prove the high security performance and satisfactory operation efficiency of the proposed scheme.

  3. A semi-supervised classification algorithm using the TAD-derived background as training data

    Science.gov (United States)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROI's),or training data for a supervised classification scheme. By combining those ROI's with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and University of Pavia scene.

  4. Asynchronous error-correcting secure communication scheme based on fractional-order shifting chaotic system

    Science.gov (United States)

    Chao, Luo

    2015-11-01

    In this paper, a novel digital secure communication scheme is firstly proposed. Different from the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication which avoids the weakness of synchronous systems and is susceptible to environmental interference. Moreover, as to the transmission errors and data loss in the process of communication, the proposed scheme has the ability to be error-checking and error-correcting in real time. In order to guarantee security, the fractional-order complex chaotic system with the shifting of order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.

  5. Pathological Bases for a Robust Application of Cancer Molecular Classification

    Directory of Open Access Journals (Sweden)

    Salvador J. Diaz-Cano

    2015-04-01

    Full Text Available Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification and try to improve it with key prognosticators for metastatic potential, staging and grading. Although organ-specific examples have been published based on proteomics, transcriptomics and genomics evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according with the preservation protocol, its transcription reflect the adaptation of the tumor cells to the microenvironment, it can be passed through mechanisms of intercellular transference of genetic information (exosomes, and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, at the genetic level represented by DNA to improve reliability, and its analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next generation sequencing offer the best practical approach for an analytical genomic classification of tumors.

  6. An Ultra-Low-Latency Geo-Routing Scheme for Team-Based Unmanned Vehicular Applications

    KAUST Repository

    Bader, Ahmed; Alouini, Mohamed-Slim

    2016-01-01

    Results and lessons learned from the implementation of a novel ultra low-latency geo-routing scheme are presented in this paper. The geo-routing scheme is intended for team-based mobile systems whereby a cluster of unmanned autonomous vehicles

  7. Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2013-01-01

    Full Text Available Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains as one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second based on an Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper including an Application Specific Instruction Set Processor (ASIP implementation and two pure Register-Transfer Level (RTL implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on an Xeon processor.

  8. Modulation classification for MIMO systems: State of the art and research directions

    International Nuclear Information System (INIS)

    Bahloul, Mohammad Rida; Yusoff, Mohd Zuki; Abdel-Aty, Abdel-Haleem; Saad, M. Naufal M.; Al-Jemeli, Marwan

    2016-01-01

    Blind techniques and algorithms for Multiple-Input Multiple-Output (MIMO) signals interception have recently attracted a great deal of research efforts. This is due to their important applications in the military and civil telecommunications domains. One essential step in the signal interception process is to blindly recognize the modulation scheme of the MIMO signals. This process is formally called Modulation Classification (MC). This paper discusses the modulation classification for MIMO systems and presents a comprehensive and critical literature review of the existing MC algorithms for MIMO systems; where possible, gaps in the knowledge base are identified and future directions for the research work are suggested.

  9. Performance Comparison of Grid-Faulty Control Schemes for Inverter-Based Industrial Microgrids

    Directory of Open Access Journals (Sweden)

    Antonio Camacho

    2017-12-01

    Full Text Available Several control schemes specifically designed to operate inverter-based industrial microgrids during voltage sags have been recently proposed. This paper first classifies these control schemes in three categories and then performs a comparative analysis of them. Representative control schemes of each category are selected, described and used to identify the main features and performance of the considered category. The comparison is based on the evaluation of several indexes, which measure the power quality of the installation and utility grid during voltage sags, including voltage regulation, reactive current injection and transient response. The paper includes selected simulation results from a 500 kVA industrial microgrid to validate the expected features of the considered control schemes. Finally, in view of the obtained results, the paper proposes an alternative solution to cope with voltage sags, which includes the use of a static compensator in parallel with the microgrid. The novelty of this proposal is the suitable selection of the control schemes for both the microgrid and the static compensator. The superior performance of the proposal is confirmed by the analysis of the quality indexes. Its practical limitations are also revealed, showing that the topic studied in this paper is still open for further research.

  10. Classification of high resolution imagery based on fusion of multiscale texture features

    International Nuclear Information System (INIS)

    Liu, Jinxiu; Liu, Huiping; Lv, Ying; Xue, Xiaojuan

    2014-01-01

    In high resolution data classification process, combining texture features with spectral bands can effectively improve the classification accuracy. However, the window size which is difficult to choose is regarded as an important factor influencing overall classification accuracy in textural classification and current approaches to image texture analysis only depend on a single moving window which ignores different scale features of various land cover types. In this paper, we propose a new method based on the fusion of multiscale texture features to overcome these problems. The main steps in new method include the classification of fixed window size spectral/textural images from 3×3 to 15×15 and comparison of all the posterior possibility values for every pixel, as a result the biggest probability value is given to the pixel and the pixel belongs to a certain land cover type automatically. The proposed approach is tested on University of Pavia ROSIS data. The results indicate that the new method improve the classification accuracy compared to results of methods based on fixed window size textural classification

  11. Empirical Studies On Machine Learning Based Text Classification Algorithms

    OpenAIRE

    Shweta C. Dharmadhikari; Maya Ingle; Parag Kulkarni

    2011-01-01

    Automatic classification of text documents has become an important research issue now days. Properclassification of text documents requires information retrieval, machine learning and Natural languageprocessing (NLP) techniques. Our aim is to focus on important approaches to automatic textclassification based on machine learning techniques viz. supervised, unsupervised and semi supervised.In this paper we present a review of various text classification approaches under machine learningparadig...

  12. Locality-preserving sparse representation-based classification in hyperspectral imagery

    Science.gov (United States)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.

  13. An image encryption scheme based on the MLNCML system using DNA sequences

    Science.gov (United States)

    Zhang, Ying-Qian; Wang, Xing-Yuan; Liu, Jia; Chi, Ze-Lin

    2016-07-01

    We propose a new image scheme based on the spatiotemporal chaos of the Mixed Linear-Nonlinear Coupled Map Lattices (MLNCML). This spatiotemporal chaotic system has more cryptographic features in dynamics than the system of Coupled Map Lattices (CML). In the proposed scheme, we employ the strategy of DNA computing and one time pad encryption policy, which can enhance the sensitivity to the plaintext and resist differential attack, brute-force attack, statistical attack and plaintext attack. Simulation results and theoretical analysis indicate that the proposed scheme has superior high security.

  14. AN ADABOOST OPTIMIZED CCFIS BASED CLASSIFICATION MODEL FOR BREAST CANCER DETECTION

    Directory of Open Access Journals (Sweden)

    CHANDRASEKAR RAVI

    2017-06-01

    Full Text Available Classification is a Data Mining technique used for building a prototype of the data behaviour, using which an unseen data can be classified into one of the defined classes. Several researchers have proposed classification techniques but most of them did not emphasis much on the misclassified instances and storage space. In this paper, a classification model is proposed that takes into account the misclassified instances and storage space. The classification model is efficiently developed using a tree structure for reducing the storage complexity and uses single scan of the dataset. During the training phase, Class-based Closed Frequent ItemSets (CCFIS were mined from the training dataset in the form of a tree structure. The classification model has been developed using the CCFIS and a similarity measure based on Longest Common Subsequence (LCS. Further, the Particle Swarm Optimization algorithm is applied on the generated CCFIS, which assigns weights to the itemsets and their associated classes. Most of the classifiers are correctly classifying the common instances but they misclassify the rare instances. In view of that, AdaBoost algorithm has been used to boost the weights of the misclassified instances in the previous round so as to include them in the training phase to classify the rare instances. This improves the accuracy of the classification model. During the testing phase, the classification model is used to classify the instances of the test dataset. Breast Cancer dataset from UCI repository is used for experiment. Experimental analysis shows that the accuracy of the proposed classification model outperforms the PSOAdaBoost-Sequence classifier by 7% superior to other approaches like Naïve Bayes Classifier, Support Vector Machine Classifier, Instance Based Classifier, ID3 Classifier, J48 Classifier, etc.

  15. Text Categorization Using Weight Adjusted k-Nearest Neighbor Classification

    National Research Council Canada - National Science Library

    Han, Euihong; Karypis, George; Kumar, Vipin

    1999-01-01

    .... The authors present a nearest neighbor classification scheme for text categorization in which the importance of discriminating words is learned using mutual information and weight adjustment techniques...

  16. Reinforcement Learning Based Data Self-Destruction Scheme for Secured Data Management

    Directory of Open Access Journals (Sweden)

    Young Ki Kim

    2018-04-01

    Full Text Available As technologies and services that leverage cloud computing have evolved, the number of businesses and individuals who use them are increasing rapidly. In the course of using cloud services, as users store and use data that include personal information, research on privacy protection models to protect sensitive information in the cloud environment is becoming more important. As a solution to this problem, a self-destructing scheme has been proposed that prevents the decryption of encrypted user data after a certain period of time using a Distributed Hash Table (DHT network. However, the existing self-destructing scheme does not mention how to set the number of key shares and the threshold value considering the environment of the dynamic DHT network. This paper proposes a method to set the parameters to generate the key shares needed for the self-destructing scheme considering the availability and security of data. The proposed method defines state, action, and reward of the reinforcement learning model based on the similarity of the graph, and applies the self-destructing scheme process by updating the parameter based on the reinforcement learning model. Through the proposed technique, key sharing parameters can be set in consideration of data availability and security in dynamic DHT network environments.

  17. Trends and concepts in fern classification

    Science.gov (United States)

    Christenhusz, Maarten J. M.; Chase, Mark W.

    2014-01-01

    Background and Aims Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. Scope An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Key Results Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called ‘fern allies’ (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is

  18. Trends and concepts in fern classification.

    Science.gov (United States)

    Christenhusz, Maarten J M; Chase, Mark W

    2014-03-01

    Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called 'fern allies' (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is sister to all other vascular plants, whereas

  19. Scalable cavity-QED-based scheme of generating entanglement of atoms and of cavity fields

    OpenAIRE

    Lee, Jaehak; Park, Jiyong; Lee, Sang Min; Lee, Hai-Woong; Khosa, Ashfaq H.

    2008-01-01

    We propose a cavity-QED-based scheme of generating entanglement between atoms. The scheme is scalable to an arbitrary number of atoms, and can be used to generate a variety of multipartite entangled states such as the Greenberger-Horne-Zeilinger, W, and cluster states. Furthermore, with a role switching of atoms with photons, the scheme can be used to generate entanglement between cavity fields. We also introduce a scheme that can generate an arbitrary multipartite field graph state.

  20. OO (12) limit and complete classification of symmetry schemes in ...

    Indian Academy of Sciences (India)

    The generators of (12) are derived and the quantum number of (12) for a given boson number is determined by identifying the corresponding quasi-spin algebra. The (12) algebra generates two symmetry schemes and for both of them, complete classification of the basis states and typical spectra are given. With the ...

  1. Classification in Astronomy: Past and Present

    Science.gov (United States)

    Feigelson, Eric

    2012-03-01

    used today with many refinements by Gerard de Vaucouleurs and others. Supernovae, nearly all of which are found in external galaxies, have a complicated classification scheme:Type I with subtypes Ia, Ib, Ic, Ib/c pec and Type II with subtypes IIb, IIL, IIP, and IIn (Turatto 2003). The classification is based on elemental abundances in optical spectra and on optical light curve shapes. Tadhunter (2009) presents a three-dimensional classification of active galactic nuclei involving radio power, emission line width, and nuclear luminosity. These taxonomies have played enormously important roles in the development of astronomy, yet all were developed using heuristic methods. Many are based on qualitative and subjective assessments of spatial, temporal, or spectral properties. A qualitative, morphological approach to astronomical studies was explicitly promoted by Zwicky (1957). Other classifications are based on quantitative criteria, but these criteria were developed by subjective examination of training datasets. For example, starburst galaxies are discriminated from narrow-line Seyfert galaxies by a curved line in a diagramof the ratios of four emission lines (Veilleux and Osterbrock 1987). Class II young stellar objects have been defined by a rectangular region in a mid-infrared color-color diagram (Allen et al. 2004). Short and hard gamma-ray bursts are discriminated by a dip in the distribution of burst durations (Kouveliotou et al. 2000). In no case was a statistical or algorithmic procedure used to define the classes.

  2. Ligand and structure-based classification models for Prediction of P-glycoprotein inhibitors

    DEFF Research Database (Denmark)

    Klepsch, Freya; Poongavanam, Vasanthanathan; Ecker, Gerhard Franz

    2014-01-01

    an algorithm based on Euclidean distance. Results show that random forest and SVM performed best for classification of P-gp inhibitors and non-inhibitors, correctly predicting 73/75 % of the external test set compounds. Classification based on the docking experiments using the scoring function Chem...

  3. A multihop key agreement scheme for wireless ad hoc networks based on channel characteristics.

    Science.gov (United States)

    Hao, Zhuo; Zhong, Sheng; Yu, Nenghai

    2013-01-01

    A number of key agreement schemes based on wireless channel characteristics have been proposed recently. However, previous key agreement schemes require that two nodes which need to agree on a key are within the communication range of each other. Hence, they are not suitable for multihop wireless networks, in which nodes do not always have direct connections with each other. In this paper, we first propose a basic multihop key agreement scheme for wireless ad hoc networks. The proposed basic scheme is resistant to external eavesdroppers. Nevertheless, this basic scheme is not secure when there exist internal eavesdroppers or Man-in-the-Middle (MITM) adversaries. In order to cope with these adversaries, we propose an improved multihop key agreement scheme. We show that the improved scheme is secure against internal eavesdroppers and MITM adversaries in a single path. Both performance analysis and simulation results demonstrate that the improved scheme is efficient. Consequently, the improved key agreement scheme is suitable for multihop wireless ad hoc networks.

  4. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    Science.gov (United States)

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between context information that takes word co-occurrences and phrase chunks around the features into account. We then introduce this context similarity into the importance measure of the features, in place of document and term frequency, yielding new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
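As a toy sketch of the intuition (not the authors' exact measure), one can score a feature by how dissimilar its bag-of-words contexts are across classes; the corpus, window size, and scoring below are illustrative assumptions.

```python
import math
from collections import Counter

def context_vector(docs, feature, window=2):
    """Bag of words co-occurring within +/- window tokens of the feature term."""
    ctx = Counter()
    for doc in docs:
        toks = doc.split()
        for i, t in enumerate(toks):
            if t == feature:
                ctx.update(toks[max(0, i - window):i] + toks[i + 1:i + 1 + window])
    return ctx

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Hypothetical two-class toy corpus: a feature whose contexts differ sharply
# across classes is distinctive and worth keeping.
interacting = ["protein binds protein complex", "protein binds receptor site"]
other = ["protein sequence database search", "protein sequence alignment tool"]
score = 1.0 - cosine(context_vector(interacting, "protein"),
                     context_vector(other, "protein"))
# high score here: the contexts of "protein" share no words across the two classes
```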

  5. Polarimetric SAR image classification based on discriminative dictionary learning model

    Science.gov (United States)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving such problems. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and thus degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and well suited for PolSAR classification.

  6. A Novel Basis Splitting Eavesdropping Scheme in Quantum Cryptography Based on the BB84 Protocol

    International Nuclear Information System (INIS)

    Zhao Nan; Zhu Chang-Hua; Quan Dong-Xiao

    2015-01-01

    We propose a novel strategy named the basis-splitting scheme, which splits the intercepted quanta into several portions based on different bases, for eavesdropping in the process of quantum cryptography. Compared with the intercept-resend strategy, our simulation results for the basis-splitting scheme under non-ideal conditions show better performance, especially as the length of the sifted bits increases. Consequently, our scheme can help an eavesdropper gather much more useful information.
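For reference, the intercept-resend baseline that the basis-splitting scheme is compared against is easy to simulate (the basis-splitting strategy itself is not reproduced here). Under intercept-resend, Eve measures every qubit in a random basis and resends her result, which drives the sifted-key error rate toward 25%:

```python
import random

def bb84_intercept_resend(n=20000, seed=7):
    """Error rate Eve induces in the sifted key by measuring every qubit in a
    random basis and resending her result."""
    random.seed(seed)
    errors = sifted = 0
    for _ in range(n):
        bit = random.getrandbits(1)          # Alice's bit
        basis_a = random.getrandbits(1)      # Alice's basis
        basis_e = random.getrandbits(1)      # Eve's measurement basis
        bit_e = bit if basis_e == basis_a else random.getrandbits(1)
        basis_b = random.getrandbits(1)      # Bob's basis
        bit_b = bit_e if basis_b == basis_e else random.getrandbits(1)
        if basis_b == basis_a:               # sifting keeps matching-basis rounds
            sifted += 1
            errors += bit_b != bit
    return errors / sifted

rate = bb84_intercept_resend()  # approaches 0.25 for large n
```

The 25% disturbance is what makes intercept-resend easy to detect, and it is the benchmark a subtler eavesdropping strategy has to beat.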

  7. Security analysis and enhancements of an effective biometric-based remote user authentication scheme using smart cards.

    Science.gov (United States)

    An, Younghwa

    2012-01-01

    Recently, many biometrics-based user authentication schemes using smart cards have been proposed to address security weaknesses in user authentication systems. In 2011, Das proposed an efficient biometric-based remote user authentication scheme using smart cards that can provide strong authentication and mutual authentication. In this paper, we analyze the security of Das's authentication scheme and show that it is still vulnerable to various attacks. We also propose an enhanced scheme that removes these security problems, even if the secret information stored in the smart card is revealed to an attacker. Our security analysis shows that the enhanced scheme resists the user impersonation attack, the server masquerading attack, the password guessing attack, and the insider attack, and provides mutual authentication between the user and the server.

  8. Prediction-based association control scheme in dense femtocell networks

    Science.gov (United States)

    Pham, Ngoc-Thai; Huynh, Thong; Hwang, Won-Joo; You, Ilsun; Choo, Kim-Kwang Raymond

    2017-01-01

    The deployment of a large number of femtocell base stations allows us to extend coverage and efficiently utilize resources in a low-cost manner. However, the small cell size of femtocell networks can result in frequent handovers for mobile users and, consequently, throughput degradation. Thus, in this paper, we propose predictive association control schemes to improve the system's effective throughput. Our design focuses on reducing handover frequency without sacrificing throughput. The proposed schemes make the handover decisions that contribute most to network throughput and are suitable for distributed implementation. The simulation results show significant gains over existing methods in terms of handover frequency and network throughput. PMID:28328992

  9. An object-oriented classification method of high resolution imagery based on improved AdaTree

    International Nuclear Information System (INIS)

    Xiaohe, Zhang; Liang, Zhai; Jixian, Zhang; Huiyong, Sang

    2014-01-01

    With the growing use of high spatial resolution remote sensing imagery, more and more studies have addressed object-oriented classification, covering both image segmentation and automatic classification of the resulting segments. This paper proposes a fast method of object-oriented automatic classification. First, edge-based or FNEA-based segmentation is used to identify image objects, and the attribute values most suitable for classification are calculated for each object. Then a certain number of image objects are selected as training data for an improved AdaTree algorithm that derives classification rules. Finally, the image objects can be classified easily using these rules. In the AdaTree, we mainly modified the final hypothesis to obtain the classification rules. In an experiment with a WorldView2 image, the AdaTree-based method showed a clear improvement in accuracy and efficiency over an SVM-based method, with the kappa coefficient reaching 0.9242.
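The improved AdaTree is not spelled out in the record, but the boosting machinery such rule learners build on can be sketched with plain AdaBoost over decision stumps; this is a simplified stand-in, not the authors' modified final hypothesis.

```python
import math

def train_stump(X, y, w):
    """Pick the weighted-error-minimizing threshold stump (feature, threshold, polarity)."""
    best = (0, 0.0, 1, float("inf"))
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[f] <= t else -pol) != yi)
                if err < best[3]:
                    best = (f, t, pol, err)
    return best

def adaboost(X, y, rounds=3):
    """Vanilla AdaBoost with stumps; labels must be -1/+1."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        f, t, pol, err = train_stump(X, y, w)
        err = max(err, 1e-9)                     # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)  # stump weight
        ensemble.append((alpha, f, t, pol))
        w = [wi * math.exp(-alpha * yi * (pol if xi[f] <= t else -pol))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]                 # re-normalize sample weights
    return ensemble

def predict(ensemble, x):
    vote = sum(a * (pol if x[f] <= t else -pol) for a, f, t, pol in ensemble)
    return 1 if vote > 0 else -1

# Toy 1-D problem: a single threshold separates the classes.
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost(X, y)
```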

  10. A dynamic identity based authentication scheme using chaotic maps for telecare medicine information systems.

    Science.gov (United States)

    Wang, Zhiheng; Huo, Zhanqiang; Shi, Wenbo

    2015-01-01

    With the rapid development of computer technology and the wide use of mobile devices, the telecare medicine information system has become universal in the field of medical care. To protect patients' privacy and medical data security, many authentication schemes for the telecare medicine information system have been proposed. Due to their better performance, chaotic maps have been used in the design of authentication schemes for the telecare medicine information system. However, most of them cannot provide user anonymity. Recently, Lin proposed a dynamic identity based authentication scheme using chaotic maps for the telecare medicine information system and claimed that the scheme was secure against existential active attacks. In this paper, we demonstrate that their scheme cannot provide user anonymity and is vulnerable to the impersonation attack. Further, we propose an improved scheme that fixes the security flaws in Lin's scheme and demonstrate that the proposed scheme withstands various attacks.

  11. An enhanced biometric-based authentication scheme for telecare medicine information systems using elliptic curve cryptosystem.

    Science.gov (United States)

    Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2015-03-01

    The telecare medical information systems (TMISs) enable patients to conveniently enjoy telecare services at home. The protection of patients' privacy is a key issue due to the openness of the communication environment. Authentication is a typical approach adopted to guarantee confidential and authorized interaction between the patient and the remote server. To achieve these goals, numerous remote authentication schemes based on cryptography have been presented. Recently, Arshad et al. (J Med Syst 38(12): 2014) presented a secure and efficient three-factor authenticated key exchange scheme to remedy the weaknesses of Tan et al.'s scheme (J Med Syst 38(3): 2014). In this paper, we found that a successful off-line password attack on Arshad et al.'s scheme allows an adversary to impersonate any user of the system. To thwart these security attacks, an enhanced biometric and smart card based remote authentication scheme for TMISs is proposed. In addition, the BAN logic is applied to demonstrate the completeness of the enhanced scheme. Security and performance analyses show that our enhanced scheme satisfies more security properties at a lower computational cost than previously proposed schemes.

  12. Building an asynchronous web-based tool for machine learning classification.

    Science.gov (United States)

    Weber, Griffin; Vinterbo, Staal; Ohno-Machado, Lucila

    2002-01-01

    Various unsupervised and supervised learning methods including support vector machines, classification trees, linear discriminant analysis and nearest neighbor classifiers have been used to classify high-throughput gene expression data. Simpler and more widely accepted statistical tools have not yet been used for this purpose, hence proper comparisons between classification methods have not been conducted. We developed free software that implements logistic regression with stepwise variable selection as a quick and simple method for initial exploration of important genetic markers in disease classification. To implement the algorithm and allow our collaborators in remote locations to evaluate and compare its results against those of other methods, we developed a user-friendly asynchronous web-based application with a minimal amount of programming using free, downloadable software tools. With this program, we show that classification using logistic regression can perform as well as other more sophisticated algorithms, and it has the advantages of being easy to interpret and reproduce. By making the tool freely and easily available, we hope to promote the comparison of classification methods. In addition, we believe our web application can be used as a model for other bioinformatics laboratories that need to develop web-based analysis tools in a short amount of time and on a limited budget.
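A dependency-free sketch of the core idea follows; it is not the tool's implementation. Greedy forward selection by training accuracy stands in for stepwise likelihood-based selection, and the two-feature toy dataset is an assumption for illustration.

```python
import math, random

def fit_logreg(X, y, lr=0.5, iters=300):
    """Plain per-sample gradient descent for logistic regression (w[0] is the bias)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        for xi, yi in zip(X, y):
            z = w[0] + sum(a * b for a, b in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            g = p - yi
            w[0] -= lr * g
            for j, a in enumerate(xi):
                w[j + 1] -= lr * g * a
    return w

def accuracy(w, X, y):
    correct = 0
    for xi, yi in zip(X, y):
        z = w[0] + sum(a * b for a, b in zip(w[1:], xi))
        correct += (z > 0) == (yi == 1)
    return correct / len(y)

def forward_select(X, y, k):
    """Greedy forward selection: repeatedly add the feature whose inclusion
    gives the refitted model the best training accuracy."""
    chosen = []
    while len(chosen) < k:
        scores = {}
        for j in range(len(X[0])):
            if j in chosen:
                continue
            cols = chosen + [j]
            Xs = [[x[i] for i in cols] for x in X]
            scores[j] = accuracy(fit_logreg(Xs, y), Xs, y)
        chosen.append(max(scores, key=scores.get))
    return chosen

# Toy data: feature 0 is informative, feature 1 is pure noise.
random.seed(0)
X = [[float(label), random.random()] for label in (0, 0, 0, 0, 1, 1, 1, 1)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
selected = forward_select(X, y, 1)  # the informative feature is found first
```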

  13. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, the classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the statistics and probability value equations of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers: k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
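A minimal sketch of the multiple-sampling idea, using only the KS statistic (the paper also uses Student's t-test, and its actual similarity and probability computations are not reproduced): subsample the segment repeatedly, accumulate distances to per-class reference samples, and pick the closest class.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for v in sorted(set(a) | set(b)):
        ca = sum(x <= v for x in a) / len(a)
        cb = sum(x <= v for x in b) / len(b)
        d = max(d, abs(ca - cb))
    return d

def classify_segment(segment, class_samples, draws=20, n=30):
    """Draw several random subsamples of the segment and accumulate each class's
    KS distance; the class with the smallest total distance wins."""
    totals = {c: 0.0 for c in class_samples}
    for _ in range(draws):
        sub = random.sample(segment, min(n, len(segment)))
        for c, ref in class_samples.items():
            totals[c] += ks_statistic(sub, ref)
    return min(totals, key=totals.get)

# Hypothetical per-class reference pixel values and one segment to classify.
random.seed(1)
refs = {"grass": [random.gauss(0.2, 0.05) for _ in range(50)],
        "road": [random.gauss(0.8, 0.05) for _ in range(50)]}
segment = [random.gauss(0.2, 0.05) for _ in range(100)]
label = classify_segment(segment, refs)
```

Averaging the statistic over many subsamples is what damps the in-segment inhomogeneity that a single sample would suffer from.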

  14. Lidar-based individual tree species classification using convolutional neural network

    Science.gov (United States)

    Mizoguchi, Tomohiro; Ishii, Akira; Nakamura, Hiroyuki; Inoue, Tsuyoshi; Takamatsu, Hisashi

    2017-06-01

    Terrestrial lidar is commonly used for detailed documentation in the field of forest inventory investigation. Recent improvements in point cloud processing techniques have enabled efficient and precise computation of individual tree shape parameters, such as breast-height diameter, height, and volume. To date, however, tree species are still identified manually by skilled workers. Previous works on automatic tree species classification mainly focused on aerial or satellite images, and few classification techniques using ground-based sensor data have been reported. Several candidate sensors can be considered for classification, such as RGB or multi/hyperspectral cameras. Among these candidates, we use terrestrial lidar because it can obtain a high resolution point cloud even in a dark forest. We selected bark texture as the classification criterion, since it clearly represents the unique characteristics of each tree and does not change in appearance under seasonal variation and aging. In this paper, we propose a new method for automatic individual tree species classification from terrestrial lidar data using a Convolutional Neural Network (CNN). The key component is the creation of a depth image that captures the characteristics of each species well from a point cloud. We focus on Japanese cedar and cypress, which cover a large part of the domestic forest. Our experimental results demonstrate the effectiveness of the proposed method.

  15. Colour schemes

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This chapter presents a framework for analysing colour schemes based on a parametric approach that includes not only hue, value and saturation, but also purity, transparency, luminosity, luminescence, lustre, modulation and differentiation.

  16. On argumentation schemes and the natural classification of arguments

    OpenAIRE

    Katzav, K.; Reed, C.

    2004-01-01

    We develop conceptions of arguments and of argument types that will, by serving as the basis for developing a natural classification of arguments, benefit work in artificial intelligence. Focusing only on arguments construed as the semantic entities that are the outcome of processes of reasoning, we outline and clarify our view that an argument is a proposition that represents a fact as both conveying some other fact and as doing so wholly. Further, we outline our view that, with respect to a...

  17. Changing Histopathological Diagnostics by Genome-Based Tumor Classification

    Directory of Open Access Journals (Sweden)

    Michael Kloth

    2014-05-01

    Traditionally, tumors are classified by histopathological criteria, i.e., based on their specific morphological appearances. Consequently, current therapeutic decisions in oncology are strongly influenced by histology rather than underlying molecular or genomic aberrations. However, the growing amount of information on molecular changes, enabled by the Human Genome Project and the International Cancer Genome Consortium as well as manifold advances in molecular biology and high-throughput sequencing techniques, has inaugurated the integration of genomic information into disease classification. Furthermore, in some cases it became evident that former classifications needed major revision and adaptation. Such adaptations are often driven by understanding the pathogenesis of a disease from a specific molecular alteration and using this molecular driver for targeted and highly effective therapies. Altogether, reclassifications should lead to a higher information content of the underlying diagnoses, reflecting their molecular pathogenesis and resulting in optimized and individualized therapeutic decisions. The objective of this article is to summarize some particularly important examples of genome-based classification approaches and associated therapeutic concepts. In addition to reviewing disease-specific markers, we focus on potentially therapeutic or predictive markers and the relevance of molecular diagnostics in disease monitoring.

  18. Development of a classification scheme for disease-related enzyme information

    Directory of Open Access Journals (Sweden)

    Söhngen Carola

    2011-08-01

    Background BRENDA (BRaunschweig ENzyme DAtabase, http://www.brenda-enzymes.org) is a major resource for enzyme related information. First and foremost, it provides data which are manually curated from the primary literature. DRENDA (Disease RElated ENzyme information DAtabase) complements BRENDA with a focus on the automatic search and categorization of enzyme and disease related information from titles and abstracts of primary publications. In a two-step procedure DRENDA makes use of text mining and machine learning methods. Results Currently enzyme and disease related references are biannually updated as part of the standard BRENDA update. 910,897 relations of EC numbers and diseases were extracted from titles or abstracts and are included in the second release in 2010. The enzyme and disease entity recognition has been successfully enhanced by a further relation classification via machine learning. The classification step has been evaluated by 5-fold cross validation and achieves an F1 score between 0.738 ± 0.033 and 0.802 ± 0.032, depending on the categories and pre-processing procedures. In the eventual DRENDA content, every category reaches a classification specificity of at least 96.7% and a precision that ranges from 86-98% at the highest confidence level, and 64-83% at the lowest confidence level associated with higher recall. Conclusions The DRENDA processing chain analyses PubMed, locates references with disease-related information on enzymes and categorises their focus according to the categories causal interaction, therapeutic application, diagnostic usage and ongoing research. The categorisation gives an impression of the focus of the located references. Thus, the relation categorisation can facilitate orientation within the rapidly growing number of references with impact on diseases and enzymes. The DRENDA information is available as additional information in BRENDA.

  19. Studies on the Roles of PDGFRA and EGFR in the Classification and Identification of Therapeutic Targets for Human Gliomas

    OpenAIRE

    Chen, Dongfeng

    2013-01-01

    Glioma is the most common type of primary tumor in the adult central nervous system (CNS). However, the current classification of gliomas is highly subjective and even inaccurate in some cases, which leads to clinical confusion and hinders the development of targeted therapies. EGFR and PDGFRA play crucial roles in glia development and glioma pathogenesis. In this thesis we aim to establish a glial genesis-guided molecular classification scheme for gliomas based on the genes co-expressed with...

  20. A Deployment Scheme Based Upon Virtual Force for Directional Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chiu-Kuo Liang

    2015-11-01

    A directional sensor network is composed of many directional sensor nodes. Unlike conventional omni-directional sensors, which always sense over an omni-directional range, directional sensors may have a limited sensing angle due to technical constraints or cost considerations. Area coverage is still an essential issue in a directional sensor network. In this paper, we study the area coverage problem in directional sensor networks with mobile sensors, which can move to the correct places to achieve high coverage. We present distributed self-deployment schemes for mobile sensors. After sensors are randomly deployed, each sensor calculates its next location to move to in order to obtain better coverage than before. The locations of sensors are adjusted round by round so that the coverage is gradually improved. Based on the virtual force of the directional sensors, we design a scheme, namely the virtual force scheme. Simulation results show the effectiveness of our scheme in terms of coverage improvement.
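A minimal synchronous sketch of a virtual-force update (the force constants, distance threshold, and update rule are illustrative assumptions, and sensor directionality is ignored): sensors closer than a threshold repel each other, farther ones attract weakly, so a random deployment spreads out round by round.

```python
import math

def virtual_force_step(positions, d_th=1.0, k_rep=0.1, k_att=0.1):
    """One synchronous round: each sensor moves along the net virtual force.
    Neighbors closer than d_th repel (spread out); farther ones attract weakly."""
    moves = []
    for i, (xi, yi) in enumerate(positions):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy) or 1e-9
            mag = k_rep * (d_th - d) / d if d < d_th else -k_att * (d - d_th) / d
            fx += mag * dx
            fy += mag * dy
        moves.append((xi + fx, yi + fy))
    return moves

# Two sensors dropped too close together drift apart until they sit roughly
# at the threshold distance from each other.
p = [(0.0, 0.0), (0.5, 0.0)]
for _ in range(50):
    p = virtual_force_step(p)
```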