WorldWideScience

Sample records for machine readable features

  1. Machine Readable Passports & The Visa Waiver Programme

    CERN Document Server

    2003-01-01

    From 1 October 2003, all passengers intending to enter the USA on the Visa Waiver Programme (VWP) will be required to present a machine-readable passport (MRP). Passengers travelling to the USA with a non-machine readable passport will require a valid US entry visa. Applying for a US visa is a lengthy process, which can take several weeks or even months. Therefore it is strongly recommended that: • All Visa Waiver nationals who hold a non-machine readable passport should obtain an MRP before their next visit to the USA. • Children travelling on a parent's passport (be it machine readable or non-machine readable) cannot benefit from the Visa Waiver Programme and should obtain their own MRP prior to travelling to the USA or request a visa. What is a Machine Readable Passport (MRP)? An MRP has the holder's personal details, e.g. name, date of birth, nationality and passport number, contained in two lines of text at the base of the photo page. This text may be read by machine. These 2 lines ...
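The two machine-readable lines mentioned above follow the ICAO Doc 9303 layout, in which several fields carry a check digit computed with a repeating 7-3-1 weighting. The sketch below is a minimal illustration of that rule, not a full MRZ parser; the sample value is the fictitious specimen document number used in ICAO's public documentation.

```python
# Hedged sketch: the ICAO 9303 check digit used in machine-readable zones.
# Digits keep their value, letters map to A=10..Z=35, '<' fillers count as 0,
# and the weights 7, 3, 1 repeat across the field.

def mrz_check_digit(field: str) -> int:
    """Return the check digit for a single machine-readable-zone field."""
    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch.isalpha():
            return ord(ch.upper()) - ord("A") + 10
        return 0  # '<' filler characters count as zero
    weights = (7, 3, 1)
    return sum(value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

print(mrz_check_digit("L898902C3"))  # -> 6 (fictitious ICAO specimen number)
```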

  2. A survey of machine readable data bases

    Science.gov (United States)

    Matlock, P.

    1981-01-01

    Forty-two of the machine readable data bases available to the technologist and researcher in the natural sciences and engineering are described and compared with the data bases and data base services offered by NASA.

  3. A Cataloguing System for Machine Readable Data Bases.

    Science.gov (United States)

    Lauterbach, Guy

    With the fantastic growth in computerized data processing and management, there arises a great need for improved techniques in cataloging of machine readable data bases. The purpose of this report is to define a system by which computerized data bases may be cataloged for easy reference and availability. Developed from a computer scientist's…

  4. Statistical Augmentation of a Chinese Machine-Readable Dictionary

    CERN Document Server

    Fung, P; Fung, Pascale; Wu, Dekai

    1994-01-01

    We describe a method of using statistically-collected Chinese character groups from a corpus to augment a Chinese dictionary. The method is particularly useful for extracting domain-specific and regional words not readily available in machine-readable dictionaries. Output was evaluated both using human evaluators and against a previously available dictionary. We also evaluated performance improvement in automatic Chinese tokenization. Results show that our method outputs legitimate words, acronymic constructions, idioms, names and titles, as well as technical compounds, many of which were lacking from the original dictionary.
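As a rough illustration of the idea of collecting character groups from a corpus to extend a dictionary, the sketch below counts character bigrams and keeps frequent ones that are missing from an existing word list. The simple frequency threshold stands in for the statistical criteria actually used in the paper, and the corpus and dictionary inputs are assumed to be supplied by the caller.

```python
# Illustrative sketch only: frequent character bigrams absent from an existing
# dictionary are returned as candidate new words.
from collections import Counter

def candidate_words(corpus_text: str, dictionary: set, min_count: int = 20):
    bigrams = Counter(
        corpus_text[i:i + 2]
        for i in range(len(corpus_text) - 1)
        if corpus_text[i].strip() and corpus_text[i + 1].strip()  # skip whitespace
    )
    return [(w, n) for w, n in bigrams.most_common()
            if n >= min_count and w not in dictionary]
```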

  5. A Style Manual for Machine-Readable Data Files and Their Documentation.

    Science.gov (United States)

    Roistacher, Richard C.; And Others

    This manual presents detailed descriptions and examples of standards and techniques for formatting and documenting machine-readable data files. The descriptions of syntax and stylistic elements are independent of whether documentation is produced manually or as a machine-readable text file. The manual also discusses rules of good practice for…

  6. USA - Postponement of Deadline for Machine-Readable Passports

    CERN Multimedia

    2003-01-01

    U.S. Secretary of State Colin Powell has granted a postponement until October 26, 2004, as the date by which travellers holding Swiss passports must present a machine-readable passport at a U.S. port of entry to be admitted to the country without a visa. As in the past, citizens of Switzerland and other visa waiver program countries are permitted to enter the United States for general business or tourist purposes for a maximum of 90 days without needing a visa. Other categories of travellers such as students, journalists, individuals employed in the U.S., and all individuals staying for more than 90 days still require a visa. The postponement granted by the Secretary of State applies to the following countries: Australia - Austria - Denmark - Finland - France - Germany - Iceland - Ireland - Italy - Japan - Monaco - Netherlands - New Zealand - Norway - Portugal - San Marino - Singapore - Spain - Sweden - Switzerland - United Kingdom More information available: http://www.us-embassy.ch Your Carlson...

  7. The comparison of Wiktionary thesauri transformed into the machine-readable format

    CERN Document Server

    Krizhanovsky, A A

    2010-01-01

    Wiktionary is a unique, peculiar, valuable and original resource for natural language processing (NLP). The paper describes an open-source Wiktionary parser: its architecture and requirements, followed by a description of the Wiktionary features to be taken into account and some open problems of Wiktionary and the parser. The current implementation of the parser extracts the definitions, semantic relations, and translations from the English and Russian Wiktionaries. The paper's goal is to interest researchers (1) in using the constructed machine-readable dictionary for different NLP tasks, and (2) in extending the software to parse the 170 still unused Wiktionaries. A comparison of the number and types of semantic relations, the number of definitions, and the number of translations in the English Wiktionary and the Russian Wiktionary has been carried out. It was found that the English Wiktionary contains 1.57 times as many semantic relations as the Russian one (157 thousand versus 100 thousand). But the Russian Wiktionary has more "r...

  8. 6 CFR 37.19 - Machine readable technology on the driver's license or identification card.

    Science.gov (United States)

    2010-01-01

    ... license or identification card. 37.19 Section 37.19 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY REAL ID DRIVER'S LICENSES AND IDENTIFICATION CARDS Minimum Documentation... identification card. For the machine readable portion of the REAL ID driver's license or identification...

  9. A catalog of stellar spectrophotometry (Adelman, et al. 1989): Documentation for the machine-readable version

    Science.gov (United States)

    Warren, Wayne H., Jr.; Adelman, Saul J.

    1990-01-01

    The machine-readable version of the catalog, as it is currently being distributed from the astronomical data centers, is described. The catalog is a collection of spectrophotometric observations made using rotating grating scanners and calibrated with the fluxes of Vega. The observations cover various wavelength regions between about 330 and 1080 nm.

  10. Evaluation of the Mechanical Durability of the Egyptian Machine Readable Booklet Passport

    Directory of Open Access Journals (Sweden)

    Ahmed Mahmoud Yosri

    2013-12-01

    Full Text Available In 2008 the first Egyptian booklet Machine Readable Passport (MRP) was issued, and its security and informative quality standards were demonstrated in research published in 2011. Here the durability profiles of the Egyptian MRP have been evaluated. Seven mechanical durability tests, as specified in the International Civil Aviation Organization (ICAO) standard requirements documents, were applied to the Egyptian MRP. In these seven very severe durability tests the Egyptian MRP achieved better results than the values given in ICAO-Doc N0232: Durability of Machine Readable Passports - Version 3.2. Hence, this research has demonstrated the full conformance of the Egyptian MRP's mechanical durability profiles to the international requirements. The Egyptian booklet MRP does not need any obligatory modification concerning its mechanical durability profiles.

  11. CONTU revisited: the case against copyright protection for computer programs in machine-readable form.

    Science.gov (United States)

    Samuelson, P

    1984-09-01

    Professor Samuelson casts a critical eye on the Final Report of the National Commission on New Technological Uses of Copyrighted Works (CONTU) which recommended that copyright protection be extended to machine-readable versions of computer programs. CONTU appears to have misunderstood computer technology and misinterpreted copyright tradition in two significant respects. The Commission failed to take into account the historical importance of disclosure of the contents of protected works as a fundamental goal of both the copyright and patent laws. It also erroneously opined that the utilitarian character of a work was no bar to its copyrightability when both the statute and the case law make clear that utilitarian works are not copyrightable. Since computer programs in machine-readable forms do not disclose their contents and are inherently utilitarian, copyright protection for them is inappropriate. Congress acted on CONTU's recommendation without understanding the significance of these conceptual flaws. Professor Samuelson recommends the creation of a new form of intellectual property law specifically designed for machine-readable programs.

  12. Feature Recognition for Virtual Machining

    OpenAIRE

    Xú, Shixin; Anwer, Nabil; Qiao, Lihong

    2014-01-01

    International audience; Virtual machining uses software tools to simulate machining processes in virtual environments ahead of actual production. This paper proposes that feature recognition techniques can be applied in the course of virtual machining, for example to identify process problems and present corresponding corrective advice. By comparing against the original CAD model, form errors of the machining features can be found, and corrections are then suggested to process designers. T...

  13. Documentation for the machine-readable version of a table of Redshifts for Abell clusters (Sarazin, Rood and Struble 1982)

    Science.gov (United States)

    Warren, W. H., Jr.

    1983-01-01

    The machine readable catalog is described. The machine version contains the same data as the published table, which includes a second file with the notes. The computerized data files are prepared at the Astronomical Data Center. Detected discrepancies and cluster identifications based on photometric estimators are included.

  14. Automated Generation of Machine Verifiable and Readable Proofs: A Case Study of Tarski's Geometry

    OpenAIRE

    Stojanovic Durdevic, Sana; Narboux, Julien; Janicic, Predrag

    2015-01-01

    International audience; The power of state-of-the-art automated and interactive theorem provers has reached the level at which a significant portion of non-trivial mathematical contents can be formalized almost fully automatically. In this paper we present our framework for the formalization of mathematical knowledge that can produce machine verifiable proofs (for different proof assistants) but also human-readable (nearly textbook-like) proofs. As a case study, we focus on one of the twent...

  15. Documentation for the machine-readable version of the Absolute Calibration of Stellar Spectrophotometry

    Science.gov (United States)

    Warren, W. H., Jr.

    1982-01-01

    The machine-readable data file of The Absolute Calibration of Stellar Spectrophotometry as distributed by the Astronomical Data Center is described. The data file contains the absolute fluxes for 16 stars published in Tables 1 and 2 of Johnson (1980). The absolute calibrations were accomplished by combining the 13-color photometry calibrations of Johnson and Mitchell (1975) with spectra obtained with a Michelson spectrophotometer and covering the wavelength range 4000 to 10300 Å (Johnson 1977). The agreement between this absolute calibration and another recent one based upon data for α Lyr and 109 Vir by Tug, White and Lockwood (1977) is shown by Johnson (1980) to be quite good.

  16. Documentation for the machine-readable version of the revised Catalogue of Stellar Rotational Velocities of Uesugi and Fukuda (1982)

    Science.gov (United States)

    Warren, W. H., Jr.

    1983-01-01

    The machine-readable catalog provides mean data on the old Slettebak system for 6472 stars. The catalog results from the review, analysis and transformation of 11,460 data values from 102 sources. A star identification (major catalog number, name if the star has one, or cluster identification, etc.), a mean projected rotational velocity, and a list of source references are included. The references are given in a second file included with the catalog when it is distributed on magnetic tape. The contents and formats of the data and reference files of the machine-readable catalog are described to enable users to read and process the data.

  17. Sunspot latitudes during the Maunder Minimum: a machine-readable catalogue from previous studies

    CERN Document Server

    Vaquero, J M; Sánchez-Bajo, F

    2015-01-01

    The Maunder Minimum (1645-1715 approximately) was a period of very low solar activity and a strong hemispheric asymmetry, with most of the sunspots in the southern hemisphere. In this paper, two data sets of sunspot latitudes during the Maunder Minimum have been recovered for the international scientific community. The first data set is constituted by latitudes of sunspots appearing in the catalogue published by Gustav Spörer nearly 130 years ago. The second data set is based on the sunspot latitudes displayed in the butterfly diagram for the Maunder Minimum which was published by Ribes and Nesme-Ribes almost 20 years ago. We have calculated the asymmetry index using these data sets, confirming a strong hemispherical asymmetry in this period. A machine-readable version of this catalogue with both data sets is available in the Historical Archive of Sunspot Observations (http://haso.unex.es) and in the appendix of this article.

  18. A Study of Readability of Texts in Bangla through Machine Learning Approaches

    Science.gov (United States)

    Sinha, Manjira; Basu, Anupam

    2016-01-01

    In this work, we have investigated text readability in Bangla language. Text readability is an indicator of the suitability of a given document with respect to a target reader group. Therefore, text readability has huge impact on educational content preparation. The advances in the field of natural language processing have enabled the automatic…

  19. Ownership of Machine-Readable Bibliographic Data. Canadian Network Papers Number 5 = Propriete des Donnees Bibliographique Lisibles par Machine. Documents sur les Resaux Canadiens Numero 5.

    Science.gov (United States)

    Duchesne, R. M.; And Others

    Because of data ownership questions raised by the interchange and sharing of machine readable bibliographic data, this paper was prepared for the Bibliographic and Communications Network Committee of the National Library Advisory Board. Background information and definitions are followed by a review of the legal aspects relating to property and…

  20. Three editions of the star catalogue of Tycho Brahe : machine-readable versions and comparison with the modern Hipparcos Catalogue

    OpenAIRE

    Verbunt, F.W.M.; Gent, R.H. van

    2010-01-01

    Tycho Brahe completed his catalogue with the positions and magnitudes of 1004 fixed stars in 1598. This catalogue circulated in manuscript form. Brahe edited a shorter version with 777 stars, printed in 1602, and Kepler edited the full catalogue of 1004 stars, printed in 1627. We provide machine-readable versions of the three versions of the catalogue, describe the differences between them and briefly discuss their accuracy on the basis of comparison with modern data from the Hipparcos Catalo...

  1. A Web Tool for Generating High Quality Machine-readable Biological Pathways.

    Science.gov (United States)

    Ramirez-Gaona, Miguel; Marcu, Ana; Pon, Allison; Grant, Jason; Wu, Anthony; Wishart, David S

    2017-02-08

    PathWhiz is a web server built to facilitate the creation of colorful, interactive, visually pleasing pathway diagrams that are rich in biological information. The pathways generated by this online application are machine-readable and fully compatible with essentially all web-browsers and computer operating systems. It uses a specially developed, web-enabled pathway drawing interface that permits the selection and placement of different combinations of pre-drawn biological or biochemical entities to depict reactions, interactions, transport processes and binding events. This palette of entities consists of chemical compounds, proteins, nucleic acids, cellular membranes, subcellular structures, tissues, and organs. All of the visual elements in it can be interactively adjusted and customized. Furthermore, because this tool is a web server, all pathways and pathway elements are publicly accessible. This kind of pathway "crowd sourcing" means that PathWhiz already contains a large and rapidly growing collection of previously drawn pathways and pathway elements. Here we describe a protocol for the quick and easy creation of new pathways and the alteration of existing pathways. To further facilitate pathway editing and creation, the tool contains replication and propagation functions. The replication function allows existing pathways to be used as templates to create or edit new pathways. The propagation function allows one to take an existing pathway and automatically propagate it across different species. Pathways created with this tool can be "re-styled" into different formats (KEGG-like or text-book like), colored with different backgrounds, exported to BioPAX, SBGN-ML, SBML, or PWML data exchange formats, and downloaded as PNG or SVG images. The pathways can easily be incorporated into online databases, integrated into presentations, posters or publications, or used exclusively for online visualization and exploration. This protocol has been successfully applied to

  2. Improved AAG based recognition of machining features

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The lost information caused by feature interaction is restored by using auxiliary faces (AF) and virtual links (VL). The delta volume of the interacting features, represented by a concave attachable connected graph (CACG), can be decomposed into several isolated features represented by complete concave adjacency graphs (CCAG). The rough type of a feature can be recognized by using the CCAG as a hint; the exact type can be obtained by deleting the auxiliary faces from the isolated feature. A united machining feature (UMF) is used to represent features that can be machined in the same machining process, which is important for rationalizing process plans and reducing machining time. An example is given to demonstrate the effectiveness of this method.

  3. The Sunspot Catalogues of Carrington, Peters, and de la Rue: Quality Control and Machine-readable Versions

    CERN Document Server

    Casas, R

    2013-01-01

    In the 19th century, several astronomers made observations of sunspots, recording their positions and sometimes their areas. These observations were published in the form of extensive tables, but have gone largely unexploited until now. Three of these observers were Richard C. Carrington, Christian H. F. Peters, and Warren de la Rue (and their respective collaborators). They published, in various articles, data corresponding to 26 641 sunspot positions (Carrington, Peters, and de la Rue registered 4 900, 14 040, and 7 701 sunspot positions, respectively). In this paper we present a translation of more than 400 pages of their printed numerical tables into a machine-readable format, including an initial analysis targeted at detecting possible mistakes in the reading or in the original transcription. The observations carried out by these three astronomers have been made available at the Centre de Données Astronomiques de Strasbourg (http://cdsarc.u-strasbg.fr/cgi-bin/VizieR?-source=VI/138).

  4. A compilation of redshifts and velocity dispersions for Abell clusters (Struble and Rood 1987): Documentation for the machine-readable version

    Science.gov (United States)

    Warren, Wayne H., Jr.

    1989-01-01

    The machine readable version of the compilation, as it is currently being distributed from the Astronomical Data Center, is described. The catalog contains redshifts and velocity dispersions for all Abell clusters for which these data had been published up to 1986 July. Also included are 1950 equatorial coordinates for the centers of the listed clusters, numbers of observations used to determine the redshifts, and bibliographical references citing the data sources.

  5. Three editions of the star catalogue of Tycho Brahe. Machine-readable versions and comparison with the modern Hipparcos Catalogue

    Science.gov (United States)

    Verbunt, F.; van Gent, R. H.

    2010-06-01

    Tycho Brahe completed his catalogue with the positions and magnitudes of 1004 fixed stars in 1598. This catalogue circulated in manuscript form. Brahe edited a shorter version with 777 stars, printed in 1602, and Kepler edited the full catalogue of 1004 stars, printed in 1627. We provide machine-readable versions of the three versions of the catalogue, describe the differences between them and briefly discuss their accuracy on the basis of comparison with modern data from the Hipparcos Catalogue. We also compare our results with earlier analyses by Dreyer (1916, Tychonis Brahe Dani Scripta Astronomica, Vol. II) and Rawlins (1993, DIO, 3, 1), finding good overall agreement. The magnitudes given by Brahe correlate well with modern values, his longitudes and latitudes have error distributions with widths of 2´, with excess numbers of stars with larger errors (as compared to Gaussian distributions), in particular for the faintest stars. Errors in positions larger than ≃10´, which comprise about 15% of the entries, are likely due to computing or copying errors. The full tables KeplerE and Variants (see Table 4) and the table with the latin descriptions of the stars are available in electronic form only at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/516/A28

  6. Elementary epistemological features of machine intelligence

    CERN Document Server

    Horvat, Marko

    2008-01-01

    Theoretical analysis of machine intelligence (MI) is useful for defining a common platform in both theoretical and applied artificial intelligence (AI). The goal of this paper is to set canonical definitions that can assist pragmatic research in both strong and weak AI. Described epistemological features of machine intelligence include relationship between intelligent behavior, intelligent and unintelligent machine characteristics, observable and unobservable entities and classification of intelligence. The paper also establishes algebraic definitions of efficiency and accuracy of MI tests as their quality measure. The last part of the paper addresses the learning process with respect to the traditional epistemology and the epistemology of MI described here. The proposed views on MI positively correlate to the Hegelian monistic epistemology and contribute towards amalgamating idealistic deliberations with the AI theory, particularly in a local frame of reference.

  7. VizieR Online Data Catalog: Three editions of the star catalogue of Tycho Brahe. Machine-readable versions and comparison with the modern Hipparcos Catalogue.

    Science.gov (United States)

    Verbunt, F.; van Gent, R. H.

    2010-03-01

    We present in three data files the machine-readable versions of three editions of Tycho Brahe's star catalogue. The main file KeplerE.dat contains the catalogue as published by Kepler in 1627 as part of the Tabulae Rudolphinae, with some emendations. The second file Variants.dat contains the data from other editions if different from the data in KeplerE.dat. These other editions are i) the manuscript version of Brahe from 1598, as published by Dreyer in the Tychonis Brahe Dani Opera Omnia Vol. III (Copenhagen, 1916), ii) the shorter version published by Brahe in the Astronomiae Instauratae Progymnasmata (1602), iii) the original (i.e. not emended) edition by Kepler (1627), iv) variants given by Kepler (1627) as `meus catalogus' or as `Piserus'. In addition to the data from the historical catalogue, the machine-readable version contains the modern identification with a Hipparcos star and the latter's magnitude, and, based on this identification, the positional accuracy. The third file Names.dat contains the Latin descriptions of the stars as given in Brahe's manuscript version (1598). (3 data files).

  8. Finite State Machine based Vending Machine Controller with Auto-Billing Features

    OpenAIRE

    2012-01-01

    Nowadays, Vending Machines are well known among Japan, Malaysia and Singapore. The quantity of machines in these countries is on the top worldwide. This is due to the modern lifestyles which require fast food processing with high quality. This paper describes the designing of multi select machine using Finite State Machine Model with Auto-Billing Features. Finite State Machine (FSM) modelling is the most crucial part in developing proposed model as this reduces the hardware. In this paper th...

  9. Improving readability through extractive summarization for learners with reading difficulties

    Directory of Open Access Journals (Sweden)

    K. Nandhini

    2013-11-01

    Full Text Available In this paper, we describe the design and evaluation of an extractive summarization approach to assist learners with reading difficulties. Whereas existing summarization approaches inherently assign more weight to the important sentences, our approach predicts summary sentences that are both important and readable to the target audience, with good accuracy. We used a supervised machine learning technique for summary extraction from science and social-studies subjects in educational texts. Various independent features from the existing literature for predicting important sentences, together with proposed learner-dependent features for predicting readable sentences, are extracted from the texts and used for automatic classification. We performed both extrinsic and intrinsic evaluation of this approach; the intrinsic evaluation was carried out using F-measure and readability analysis, while the extrinsic evaluation comprised learner feedback on a Likert scale and an ANOVA analysis of the effect of the assistive summary on improving readability for learners with reading difficulties. The results show a significant improvement in readability for the target audience using the assistive summary.
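A minimal sketch of this kind of supervised sentence selection is given below: each sentence is described by a few importance cues and a readability cue, a classifier is trained on sentences annotated as summary-worthy, and the top-scoring sentences are returned. The specific features and the logistic-regression classifier are illustrative assumptions, not the feature set or model used in the paper.

```python
# Illustrative sketch of feature-based extractive summarization.
import re
from sklearn.linear_model import LogisticRegression

def split_sentences(text: str):
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def sentence_features(sentence: str, position: int, total: int):
    words = sentence.split()
    avg_word_len = sum(len(w) for w in words) / max(1, len(words))
    return [
        float(len(words)),             # importance cue: sentence length
        position / max(1, total - 1),  # importance cue: position in the text
        avg_word_len,                  # readability cue: proxy for lexical difficulty
    ]

def train(texts, labels):
    # labels[i][j] == 1 if sentence j of text i was annotated as a summary sentence
    X, y = [], []
    for text, text_labels in zip(texts, labels):
        sents = split_sentences(text)
        X += [sentence_features(s, i, len(sents)) for i, s in enumerate(sents)]
        y += list(text_labels)
    return LogisticRegression().fit(X, y)

def summarize(text: str, model, k: int = 3):
    sents = split_sentences(text)
    X = [sentence_features(s, i, len(sents)) for i, s in enumerate(sents)]
    scores = model.predict_proba(X)[:, 1]          # P(sentence belongs in summary)
    top = sorted(range(len(sents)), key=lambda i: -scores[i])[:k]
    return [sents[i] for i in sorted(top)]
```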

  10. Finite State Machine based Vending Machine Controller with Auto-Billing Features

    Directory of Open Access Journals (Sweden)

    Ana Monga

    2012-04-01

    Full Text Available Nowadays, Vending Machines are well known among Japan, Malaysia and Singapore. The quantity of machines in these countries is on the top worldwide. This is due to the modern lifestyles which require fast food processing with high quality. This paper describes the designing of multi select machine using Finite State Machine Model with Auto-Billing Features. Finite State Machine (FSM) modelling is the most crucial part in developing proposed model as this reduces the hardware. In this paper the process of four state (user Selection, Waiting for money insertion, product delivery and servicing) has been modelled using MEALY Machine Model. The proposed model is tested using Spartan 3 development board and its performance is compared with CMOS based machine.

  11. Finite State Machine based Vending Machine Controller with Auto-Billing Features

    Directory of Open Access Journals (Sweden)

    Balwinder Singh

    2012-05-01

    Full Text Available Nowadays, Vending Machines are well known among Japan, Malaysia and Singapore. The quantity of machines in these countries is on the top worldwide. This is due to the modern lifestyles which require fast food processing with high quality. This paper describes the designing of multi select machine using Finite State Machine Model with Auto-Billing Features. Finite State Machine (FSM) modelling is the most crucial part in developing proposed model as this reduces the hardware. In this paper the process of four state (user Selection, Waiting for money insertion, product delivery and servicing) has been modelled using MEALY Machine Model. The proposed model is tested using Spartan 3 development board and its performance is compared with CMOS based machine.

  12. Finite State Machine based Vending Machine Controller with Auto-Billing Features

    CERN Document Server

    Monga, Ana; 10.5121/vlsic.2012.3202

    2012-01-01

    Nowadays, Vending Machines are well known among Japan, Malaysia and Singapore. The quantity of machines in these countries is on the top worldwide. This is due to the modern lifestyles which require fast food processing with high quality. This paper describes the designing of multi select machine using Finite State Machine Model with Auto-Billing Features. Finite State Machine (FSM) modelling is the most crucial part in developing proposed model as this reduces the hardware. In this paper the process of four state (user Selection, Waiting for money insertion, product delivery and servicing) has been modelled using MEALY Machine Model. The proposed model is tested using Spartan 3 development board and its performance is compared with CMOS based machine.
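The four-state controller described in these records can be sketched in software as a Mealy-style state machine, where the output depends on both the current state and the input event. The sketch below only illustrates the modelling idea; the papers implement the design in hardware on a Spartan 3 development board, and the state names, prices and events here are invented.

```python
# Illustrative Mealy-style vending machine controller (selection -> wait for
# money -> deliver -> service). Not the authors' hardware design.
from enum import Enum, auto

class State(Enum):
    SELECT = auto()
    WAIT_MONEY = auto()
    DELIVER = auto()
    SERVICE = auto()

class VendingMachine:
    def __init__(self, prices: dict):
        self.prices = prices
        self.state = State.SELECT
        self.choice = None
        self.paid = 0

    def step(self, event: str, value=None) -> str:
        # In a Mealy machine the output depends on the current state *and* the input.
        if self.state is State.SELECT and event == "select" and value in self.prices:
            self.choice, self.paid, self.state = value, 0, State.WAIT_MONEY
            return f"insert {self.prices[value]}"
        if self.state is State.WAIT_MONEY and event == "coin":
            self.paid += value
            if self.paid >= self.prices[self.choice]:
                self.state = State.DELIVER
                return f"deliver {self.choice}, change {self.paid - self.prices[self.choice]}"
            return f"insert {self.prices[self.choice] - self.paid} more"
        if self.state is State.DELIVER and event == "taken":
            self.state = State.SELECT
            return "ready"
        if event == "fault":
            self.state = State.SERVICE
            return "service required"
        return "ignored"

vm = VendingMachine({"cola": 120})
print(vm.step("select", "cola"))  # insert 120
print(vm.step("coin", 50))        # insert 70 more
print(vm.step("coin", 100))       # deliver cola, change 30
```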

  13. Three editions of the star catalogue of Tycho Brahe : machine-readable versions and comparison with the modern Hipparcos Catalogue

    NARCIS (Netherlands)

    Verbunt, F.W.M.; Gent, R.H.

    2010-01-01

    Tycho Brahe completed his catalogue with the positions and magnitudes of 1004 fixed stars in 1598. This catalogue circulated in manuscript form. Brahe edited a shorter version with 777 stars, printed in 1602, and Kepler edited the full catalogue of 1004 stars, printed in 1627. We provide machine-rea

  14. A Survey on Readability

    Institute of Scientific and Technical Information of China (English)

    贾韶霞

    2015-01-01

    Readability means the extent to which readers can read and understand a text without obstacles. Reviewing research on readability both at home and abroad, the thesis finds that research on readability mainly covers three aspects: the definition, the readability formula, and the application of readability formulas to test written texts. The author then points out the deficiencies of the present studies, such as the limited research on the readability of non-profit organizations' documents and the neglect of the effect of the reader and the environment on readability.
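As a concrete example of the readability-formula strand of this research, the sketch below computes the classic Flesch Reading Ease score (206.835 − 1.015 × words per sentence − 84.6 × syllables per word). The syllable counter is a rough heuristic, so the scores are only approximate.

```python
# Worked example of one classic readability formula (Flesch Reading Ease).
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of vowel letters, at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))  # ~108: very easy text
```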

  15. Blue gum gaming machine: an evaluation of responsible gambling features.

    Science.gov (United States)

    Blaszczynski, Alexander; Gainsbury, Sally; Karlov, Lisa

    2014-09-01

    Structural characteristics of gaming machines contribute to persistence in play and excessive losses. The purpose of this study was to evaluate the effectiveness of five proposed responsible gaming features: responsible gaming messages; a bank meter quarantining winnings until termination of play; alarm clock facilitating setting time-reminders; demo mode allowing play without money; and a charity donation feature where residual amounts can be donated rather than played to zero credits. A series of ten modified gaming machines were located in five Australian gambling venues. The sample comprised 300 patrons attending the venue and who played the gaming machines. Participants completed a structured interview eliciting gambling and socio-demographic data and information on their perceptions and experience of play on the index machines. Results showed that one-quarter of participants considered that these features would contribute to preventing recreational gamblers from developing problems. Just under half of the participants rated these effects to be at least moderate or significant. The promising results suggest that further refinements to several of these features could represent a modest but effective approach to minimising excessive gambling on gaming machines.

  16. Feature importance for machine learning redshifts applied to SDSS galaxies

    CERN Document Server

    Hoyle, Ben; Zitlau, Roman; Seitz, Stella; Weller, Jochen

    2014-01-01

    We present an analysis of importance feature selection applied to photometric redshift estimation using the machine learning architecture Random Decision Forests (RDF) with the ensemble learning routine Adaboost. We select a list of 85 easily measured (or derived) photometric quantities (or 'features') and spectroscopic redshifts for almost two million galaxies from the Sloan Digital Sky Survey Data Release 10. After identifying which features have the most predictive power, we use standard artificial Neural Networks (aNN) to show that the addition of these features, in combination with the standard magnitudes and colours, improves the machine learning redshift estimate by 18% and decreases the catastrophic outlier rate by 32%. We further compare the redshift estimate from RDF using the ensemble learning routine Adaboost with those from two different aNNs, and with photometric redshifts available from the SDSS. We find that the RDF requires orders of magnitude less computation time than the aNNs to obtain a m...
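A hedged sketch of the feature-importance step is shown below using scikit-learn's random forest regressor on synthetic data; it is not the authors' RDF+Adaboost pipeline, and the feature names are placeholders for SDSS magnitudes and colours rather than real survey columns.

```python
# Illustrative feature-importance ranking for a regression target ("redshift").
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                                     # stand-in photometric features
z = 0.5 * X[:, 0] - 0.3 * X[:, 2] + 0.05 * rng.normal(size=1000)   # toy "redshift"

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, z)
names = ["u-g", "g-r", "r-i", "i-z", "r_mag", "petroRad"]          # placeholder labels
for name, imp in sorted(zip(names, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:10s} {imp:.3f}")
```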

  17. Statistical Machine Translation Features with Multitask Tensor Networks

    OpenAIRE

    Setiawan, Hendra; Huang, Zhongqiang; Devlin, Jacob; Lamar, Thomas; Zbib, Rabih; Schwartz, Richard; Makhoul, John

    2015-01-01

    We present a three-pronged approach to improving Statistical Machine Translation (SMT), building on recent success in the application of neural networks to SMT. First, we propose new features based on neural networks to model various non-local translation phenomena. Second, we augment the architecture of the neural network with tensor layers that capture important higher-order interaction among the network units. Third, we apply multitask learning to estimate the neural network parameters joi...

  18. Sensitivity of Support Vector Machine Classification to Various Training Features

    Directory of Open Access Journals (Sweden)

    Fuling Bian

    2013-07-01

    Full Text Available Remote sensing image classification is one of the most important techniques in image interpretation, and can be used for environmental monitoring, evaluation and prediction. Many algorithms have been developed for image classification in the literature. The support vector machine (SVM) is a kind of supervised classifier that has been widely used recently. The classification accuracy produced by an SVM may vary depending on the choice of training features. In this paper, SVM was used for land cover classification using Quickbird images. Spectral and textural features were extracted for the classification and the results were analyzed thoroughly. Results showed that using more features in the SVM was not necessarily better: different features are suitable for extracting different types of land cover. This study verifies the effectiveness and robustness of SVM in the classification of high spatial resolution remote sensing images.

  19. ACQUISITION OF ATTRIBUTE INFORMATION OF MACHINE-READABLE DICTIONARY IN GENERIC TYPE

    Institute of Scientific and Technical Information of China (English)

    王随涛; 陆汝占

    2011-01-01

    This paper presents a method for acquiring the attribute-value information of conceptual instances from a machine-readable dictionary for generic attribute types, in order to build a network of entity relationships and to improve concept-based information retrieval. First, the method generates a preliminary set of entity-attribute-value pairs by manual marking and selection, and extracts a rough set of pattern instances. Second, it obtains several groups of pattern instances by clustering, merging and expanding the pattern-instance set, with each group representing one attribute type. Finally, it acquires the attribute-value information of new entity vocabulary from the dictionary. When processing the pattern-instance set, lexical semantic similarity and synonym expansion are introduced to improve the coverage of the pattern instances. In the experiment, extraction was carried out for vocabulary in the electronics domain from the Standard Dictionary of Modern Chinese, with good results.

  20. Image Retrieval via Relevance Vector Machine with Multiple Features

    Directory of Open Access Journals (Sweden)

    Zemin Liu

    2014-05-01

    Full Text Available With the fast development of computer network technology, a large amount of image information is produced every day, and researchers have paid more and more attention to the problem of how users can quickly retrieve and identify the images they are interested in. Meanwhile, the rapid development of artificial intelligence and pattern recognition techniques provides new approaches to complex image retrieval, where traditional machine learning methods find it very difficult to obtain ideal retrieval results. For this reason, we propose in this paper a new approach for image retrieval based on multiple types of image features and the relevance vector machine (RVM). The proposed method, termed MF-RVM, integrates the informative cues of the features and the discrimination ability of the RVM. The retrieval experiment is conducted on the COREL image library, which was collected from the Internet. The experimental results show that the proposed method can significantly improve image retrieval performance, so the MF-RVM presented in this paper has very high practicability in image retrieval.

  1. Novel Automatic Filter-Class Feature Selection for Machine Learning Regression

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Hallam, John; Jørgensen, Bo Nørregaard

    2017-01-01

    With the increased focus on application of Big Data in all sectors of society, the performance of machine learning becomes essential. Efficient machine learning depends on efficient feature selection algorithms. Filter feature selection algorithms are model-free and therefore very fast, but require...... model in the feature selection process. PCA is often used in machine learning literature and can be considered the default feature selection method. RDESF outperformed PCA in both experiments in both prediction error and computational speed. RDESF is a new step into filter-based automatic feature...

  2. Novel Automatic Filter-Class Feature Selection for Machine Learning Regression

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Hallam, John; Jørgensen, Bo Nørregaard

    2016-01-01

    With the increased focus on application of Big Data in all sectors of society, the performance of machine learning becomes essential. Efficient machine learning depends on efficient feature selection algorithms. Filter feature selection algorithms are model-free and therefore very fast, but require...... model in the feature selection process. PCA is often used in machine learning literature and can be considered the default feature selection method. RDESF outperformed PCA in both experiments in both prediction error and computational speed. RDESF is a new step into filter-based automatic feature...
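The contrast drawn in these records between filter-type selection and PCA can be illustrated with the sketch below, which ranks features by their correlation with the regression target and compares that with a PCA projection on synthetic data. The correlation filter here is a generic stand-in, not the authors' RDESF algorithm, whose exact criteria are not described in the abstract.

```python
# Illustrative comparison: a simple correlation filter vs. PCA.
import numpy as np
from sklearn.decomposition import PCA

def filter_select(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    # Rank features by absolute Pearson correlation with the target.
    corrs = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(corrs)[::-1][:k]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = 2 * X[:, 3] - X[:, 7] + 0.1 * rng.normal(size=500)

print("filter keeps columns:", filter_select(X, y, 2))  # expects the informative columns 3 and 7
print("PCA variance ratios :", PCA(n_components=2).fit(X).explained_variance_ratio_)
```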

  3. A multi-perspective dynamic feature concept in adaptive NC machining of complex freeform surfaces

    OpenAIRE

    Liu, Xu; Li, Yingguang; Gao, James

    2016-01-01

    This paper presents a new concept of feature for freeform surface machining that defines the changes in feature status during real manufacturing situations which have not been sufficiently addressed by current international standards and previous research in feature technology. These changes are multi-perspective, including (i) changes in depth-of-cut: the geometry of a feature in the depth-of-cut direction changes during different machining operations such as roughing, semi-finishing and fin...

  4. LSTM Neural Reordering Feature for Statistical Machine Translation

    OpenAIRE

    Cui, Yiming; Wang, Shijin; Li, Jianfeng

    2015-01-01

    Artificial neural networks are powerful models, which have been widely applied into many aspects of machine translation, such as language modeling and translation modeling. Though notable improvements have been made in these areas, the reordering problem still remains a challenge in statistical machine translations. In this paper, we present a novel neural reordering model that directly models word pairs and alignment. By utilizing LSTM recurrent neural networks, much longer context could be ...

  5. Readability Formulas: Pluses and Minuses.

    Science.gov (United States)

    Rygiel, Mary Ann

    1982-01-01

    Examines readability formulas and examples of their misuse. Analyzes an essay by George Orwell which was given a grade 10 readability level by one formula and discusses characteristics of Orwell's style that refute the accuracy of formula rating. (HTH)

  6. NEW FEATURES OF EXPERIMENTAL MACHINE HYDRAULIC DRIVE FINE TUNING

    Directory of Open Access Journals (Sweden)

    M. I. Zhylevich

    2011-01-01

    Full Text Available The paper considers new methods for  honing and functional testing of machine hydraulic drives: a method for evaluation of friction surface running-in ability and a functional test method for an unsteady temperature regime. Possibilities of their experimental realization are described in the paper

  7. Readability of Wikipedia

    NARCIS (Netherlands)

    Lucassen, Teun; Dijkstra, Roald; Schraagen, Jan Maarten

    2012-01-01

    Wikipedia is becoming widely acknowledged as a reliable source of encyclopedic information. However, concerns have been expressed about its readability. Wikipedia articles might be written in a language too difficult to be understood by most of its visitors. In this study, we apply the Flesch readin

  8. Improvement of Machine Translation Evaluation by Simple Linguistically Motivated Features

    Institute of Scientific and Technical Information of China (English)

    Mu-Yun Yang; Shu-Qi Sun; Jun-Guo Zhu; Sheng Li; Tie-Jun Zhao; Xiao-Ning Zhu

    2011-01-01

    Adopting the regression SVM framework, this paper proposes a linguistically motivated feature engineering strategy to develop an MT evaluation metric with a better correlation with human assessments. In contrast to current practices of "greedy" combination of all available features, six features are suggested according to the human intuition for translation quality. Then the contribution of linguistic features is examined and analyzed via a hill-climbing strategy. Experiments indicate that, compared to either the SVM-ranking model or the previous attempts on exhaustive linguistic features, the regression SVM model with six linguistic information based features generalizes across different datasets better, and augmenting these linguistic features with proper non-linguistic metrics can achieve additional improvements.

  9. NEW FEATURE SELECTION METHOD IN MACHINE FAULT DIAGNOSIS

    Institute of Scientific and Technical Information of China (English)

    Wang Xinfeng; Qiu Jing; Liu Guanjun

    2005-01-01

    Aiming at the deficiencies of filter and wrapper feature selection methods, a new method combining the filter and wrapper approaches is proposed. First the method filters the original features to form a feature subset that meets the required classification correctness rate, then it applies a wrapper feature selection method to select the optimal feature subset. The genetic algorithm (GA) is a successful technique for solving optimization problems, and it is applied here to the problem of optimal feature selection. The composite method requires several times less computing time than the wrapper method while maintaining the classification accuracy, in both data simulation and an experiment on bearing fault feature selection. This method therefore possesses excellent optimization properties, saves selection time, and has the characteristics of high accuracy and high efficiency.

  10. Method of determining the process applied for feature machining : experimental validation of a slot

    OpenAIRE

    Martin, Patrick; D'ACUNTO, Alain

    2007-01-01

    International audience; In this paper, we will be evaluating the "manufacturability" levels for several machining processes of "slot" feature. Using the STEP standard, we will identify the slot feature characteristics. Then, using the ascendant generation of process method, we will define the associated milling process. The expertise is based on a methodology relative to the experience plans carried out during the formalization and systematic evaluation of the machining process associated wit...

  11. Learning features for tissue classification with the classification restricted Boltzmann machine

    NARCIS (Netherlands)

    G. van Tulder (Gijs); M. de Bruijne (Marleen)

    2014-01-01

    Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue

  12. An Approach with Support Vector Machine using Variable Features Selection on Breast Cancer Prognosis

    Directory of Open Access Journals (Sweden)

    Sandeep Chaurasia

    2013-09-01

    Full Text Available Cancer diagnosis and clinical outcome prediction are among the most important emerging applications of machine learning. In this paper we have used a support vector machine classifier to construct a model that is useful for breast cancer survivability prediction. We used both 5-fold and 10-fold cross validation with variable selection on the input feature vectors, and measured class performance in terms of AUC, specificity and sensitivity. The performance of the SVM is much better than that of the other machine learning classifiers.

  13. EMPOWERING THE READING READABILITY

    Directory of Open Access Journals (Sweden)

    Handoko Handoko

    2014-06-01

    Full Text Available A general assumption about reading is that students improve their reading ability by reading a lot. This research was conducted to explain the use of extensive reading and aimed to figure out its implementation in improving students' reading readability using the class action research technique. The data of this research relate to the students' reading progress shown in their reading reports (spoken and written summaries, reading comprehension and vocabulary mastery) and their participation. The strategy evolved around the continuity of reading: students were encouraged to read extensively in and outside class. The findings indicated that the implementation could improve students' reading readability. This attainment demonstrated that students' reading readability is fostered through the continuity of reading. Other facts showed that students enjoyed reading. Students' curiosity was also a significant factor; their high curiosity explained why students continued reading even though they realized that the materials they read were quite difficult. Students' self-confidence was also built as they were required to write a retelling of the story and to share their previous reading. In the course of retelling and summarizing, students felt appreciated as readers, and this appreciation indirectly helped students to improve their fondness for reading.

  14. Operator functional state classification using least-square support vector machine based recursive feature elimination technique.

    Science.gov (United States)

    Yin, Zhong; Zhang, Jianhua

    2014-01-01

    This paper proposed two psychophysiological-data-driven classification frameworks for operator functional states (OFS) assessment in safety-critical human-machine systems with stable generalization ability. The recursive feature elimination (RFE) and least square support vector machine (LSSVM) are combined and used for binary and multiclass feature selection. Besides typical binary LSSVM classifiers for two-class OFS assessment, two multiclass classifiers based on multiclass LSSVM-RFE and decision directed acyclic graph (DDAG) scheme are developed, one used for recognizing the high mental workload and fatigued state while the other for differentiating overloaded and baseline states from the normal states. Feature selection results have revealed that different dimensions of OFS can be characterized by specific sets of psychophysiological features. Performance comparison studies show that reasonably high and stable classification accuracy of both classification frameworks can be achieved if the RFE procedure is properly implemented and utilized.
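A simplified sketch of SVM-based recursive feature elimination is given below, using scikit-learn's RFE with a linear SVC on synthetic data. The paper's least-squares SVM, the DDAG multiclass scheme and the real psychophysiological features are not reproduced here; the data and labels are stand-ins.

```python
# Illustrative recursive feature elimination wrapped around a linear SVM.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))                 # stand-in psychophysiological features
y = (X[:, 4] + X[:, 11] > 0).astype(int)       # binary operator-state label (synthetic)

selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1).fit(X, y)
print("selected feature indices:", np.where(selector.support_)[0])  # typically includes 4 and 11
```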

  15. Extracting invariable fault features of rotating machines with multi-ICA networks

    Institute of Scientific and Technical Information of China (English)

    焦卫东; 杨世锡; 吴昭同

    2003-01-01

    This paper proposes novel multi-layer neural networks based on Independent Component Analysis for feature extraction of fault modes. By the use of ICA, invariable features embedded in multi-channel vibration measurements under different operating conditions (rotating speed and/or load) can be captured together.Thus, stable MLP classifiers insensitive to the variation of operation conditions are constructed. The successful results achieved by selected experiments indicate great potential of ICA in health condition monitoring of rotating machines.

  16. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  17. A novel method for machine performance degradation assessment based on fixed cycle features test

    Science.gov (United States)

    Liao, Linxia; Lee, Jay

    2009-10-01

    This paper presents a novel machine performance degradation scheme based on fixed cycle features test (FCFT). Instead of monitoring the machine under constant working load, FCFT introduces a new testing method which obtains data during the transient periods of different working loads. A novel performance assessment method based on those transient data without failure history is proposed. Wavelet packet analysis (WPA) is applied to extract features which capture the dynamic characteristics from the non-stationary vibration data. Principal component analysis (PCA) is used to reduce the dimension of the feature space. Gaussian mixture model (GMM) is utilized to approximate the density distribution of the lower-dimensional feature space which consists of the major principal components. The performance index of the machine is calculated based on the overlap between the distribution of the baseline feature space and that of the testing feature space. Bayesian information criterion (BIC) is used to determine the number of mixtures for the GMM and a density boosting method is applied to achieve better accuracy of the distribution estimation. A case study for a chiller system performance assessment is used as an example to validate the effectiveness of the proposed method.
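The baseline-versus-test overlap idea in this record can be sketched as follows: PCA reduces the feature space, a Gaussian mixture models the distribution of healthy baseline features, and test data are scored by their likelihood under that baseline. The normalisation into a 0-1 index below is one simple choice, not necessarily the authors'; the BIC model selection and density boosting steps are omitted.

```python
# Illustrative degradation-assessment sketch: PCA + Gaussian mixture baseline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_baseline(features: np.ndarray, n_components: int = 2, n_mixtures: int = 3):
    pca = PCA(n_components=n_components).fit(features)
    gmm = GaussianMixture(n_components=n_mixtures, random_state=0).fit(pca.transform(features))
    ref = gmm.score_samples(pca.transform(features)).mean()   # baseline log-likelihood level
    return pca, gmm, ref

def health_index(pca, gmm, ref, features: np.ndarray) -> float:
    ll = gmm.score_samples(pca.transform(features)).mean()
    # ~1 when the test data resemble the baseline, dropping toward 0 as they drift away.
    return float(np.clip(np.exp(ll - ref), 0.0, 1.0))
```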

  18. About machine-readable travel documents

    Energy Technology Data Exchange (ETDEWEB)

    Vaudenay, S; Vuagnoux, M [EPFL, Lausanne (Switzerland)]

    2007-07-15

    Passports are documents that help immigration officers to identify people. In order to strongly authenticate their data and to automatically identify people, they are now equipped with RFID chips. These contain private information, biometrics, and a digital signature by issuing authorities. Although they substantially increase security at the border controls, they also come with new security and privacy issues. In this paper, we survey existing protocols and their weaknesses.

  19. Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy.

    Science.gov (United States)

    Ranjith, G; Parvathy, R; Vikas, V; Chandrasekharan, Kesavadas; Nair, Suresh

    2015-04-01

    With the advent of new imaging modalities, radiologists are faced with handling increasing volumes of data for diagnosis and treatment planning. The use of automated and intelligent systems is becoming essential in such a scenario. Machine learning, a branch of artificial intelligence, is increasingly being used in medical image analysis applications such as image segmentation, registration and computer-aided diagnosis and detection. Histopathological analysis is currently the gold standard for classification of brain tumors. The use of machine learning algorithms along with extraction of relevant features from magnetic resonance imaging (MRI) holds promise of replacing conventional invasive methods of tumor classification. The aim of the study is to classify gliomas into benign and malignant types using MRI data. Retrospective data from 28 patients who were diagnosed with glioma were used for the analysis. WHO Grade II (low-grade astrocytoma) was classified as benign while Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) were classified as malignant. Features were extracted from MR spectroscopy. The classification was done using four machine learning algorithms: multilayer perceptrons, support vector machine, random forest and locally weighted learning. Three of the four machine learning algorithms gave an area under ROC curve in excess of 0.80. Random forest gave the best performance in terms of AUC (0.911) while sensitivity was best for locally weighted learning (86.1%). The performance of different machine learning algorithms in the classification of gliomas is promising. An even better performance may be expected by integrating features extracted from other MR sequences.

  20. Automatic Extraction of Three Dimensional Prismatic Machining Features from CAD Model

    Directory of Open Access Journals (Sweden)

    B.V. Sudheer Kumar

    2011-12-01

    Full Text Available Machining features recognition provides the necessary platform for computer aided process planning (CAPP) and plays a key role in the integration of computer aided design (CAD) and computer aided manufacturing (CAM). This paper presents a new methodology for extracting features from the geometrical data of a CAD model present in the form of Virtual Reality Modeling Language (VRML) files. First, the point cloud is separated into the available number of horizontal cross sections. Each cross section consists of a 2D point cloud. Then, a collection of points represented by a set of feature points is derived for each slice, describing the cross section accurately and providing the basis for feature extraction. These extracted manufacturing features give the necessary information regarding the manufacturing activities needed to manufacture the part. Software in the Microsoft Visual C++ environment is developed to recognize the features, where geometric information of the part is extracted from the CAD model. By using this data, an output file, i.e. a text file, is generated, which gives all the machinable features present in the part. This process has been tested on various parts and successfully extracted all the features

  1. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture patterns (or repetitive patterns), and extracts these texture features by generating the dominant neighborhood structure (DNS) map. The principal component analysis (PCA) is then used for the purpose of dimensionality reduction of the high-dimensional feature vector including the extracted texture features due to the fact that the high-dimensional feature vector can degrade classification performance, and this paper configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes the one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
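A minimal sketch of the classification stage described in this record is shown below: PCA-reduced feature vectors fed to one-against-all SVMs with an RBF kernel, using scikit-learn and random stand-in data. The DNS texture-feature extraction itself is not reproduced, and the labels are synthetic, so the printed accuracy is only a smoke test.

```python
# Illustrative one-against-all SVM classification of PCA-reduced feature vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 64))                   # stand-in texture feature vectors
y = rng.integers(0, 4, size=400)                 # four fault classes (made up)

X_red = PCA(n_components=10).fit_transform(X)    # dimensionality reduction
clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X_red, y)
print("training accuracy:", clf.score(X_red, y))  # random labels, so purely illustrative
```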

  2. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release. We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  3. An Investigation into Error Source Identification of Machine Tools Based on Time-Frequency Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dongju Chen

    2016-01-01

    Full Text Available This paper presents a new method to identify the main errors of a machine tool in the time-frequency domain. The low- and high-frequency signals of the workpiece surface are decomposed based on the Daubechies wavelet transform. With power spectral density analysis, the main features of the high-frequency signal corresponding to the imbalance of the spindle system are extracted from the surface topography of the workpiece in the frequency domain. With the cross-correlation analysis method, the relationship between the guideway error of the machine tool and the low-frequency signal of the surface topography is calculated in the time domain.
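
    The decomposition, spectral and correlation steps mentioned above can be illustrated with standard signal-processing tools. The sketch below uses a Daubechies wavelet to split a synthetic surface profile into low- and high-frequency parts, estimates the power spectral density of the high-frequency part, and cross-correlates the low-frequency part with an assumed guideway error; all signal parameters are illustrative assumptions, not values from the paper.

```python
# Illustrative wavelet split, PSD and cross-correlation on a synthetic profile.
import numpy as np
import pywt
from scipy.signal import correlate, welch

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
profile = 0.5 * np.sin(2 * np.pi * 3 * t) + 0.1 * np.sin(2 * np.pi * 120 * t)
guideway_error = 0.5 * np.sin(2 * np.pi * 3 * t)   # assumed low-frequency error source

# Daubechies decomposition: approximation = low-frequency, details = high-frequency.
coeffs = pywt.wavedec(profile, "db4", level=4)
low = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")[: len(profile)]
high = profile - low

# Power spectral density of the high-frequency part (spindle-imbalance band).
f, pxx = welch(high, fs=fs, nperseg=512)
print("dominant high-frequency component: %.1f Hz" % f[np.argmax(pxx)])

# Cross-correlation between the guideway error and the low-frequency part.
xcorr = correlate(low - low.mean(), guideway_error - guideway_error.mean(), mode="full")
print("peak normalized correlation: %.2f"
      % (xcorr.max() / (np.std(low) * np.std(guideway_error) * len(low))))
```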

  4. Feature Subset Selection for Hot Method Prediction using Genetic Algorithm wrapped with Support Vector Machines

    Directory of Open Access Journals (Sweden)

    S. Johnson

    2011-01-01

    Full Text Available Problem statement: All compilers have simple profiling-based heuristics to identify and predict program hot methods and also to make optimization decisions. The major challenge in profile-based optimization is addressing the problem of overhead. The aim of this work is to perform feature subset selection using Genetic Algorithms (GA) to improve and refine the machine-learnt static hot method predictive technique and to compare the performance of the new models against the simple heuristics. Approach: The relevant features for training the predictive models are extracted from an initial set of ninety randomly selected static program features, with the help of the GA wrapped with the predictive model using the Support Vector Machine (SVM), a Machine Learning (ML) algorithm. Results: The GA-generated feature subsets, containing thirty and twenty-nine features respectively for the two predictive models, when tested on MiBench predict Long Running Hot Methods (LRHM) and Frequently Called Hot Methods (FCHM) with respective accuracies of 71% and 80%, an increase of 19% and 22%. Further, inlining of the predicted LRHM and FCHM improves program performance by 3% and 5% as against 4% and 6% with the Low Level Virtual Machine (LLVM) default heuristics. When intra-procedural optimizations (IPO) are performed on the predicted hot methods, this system offers a performance improvement of 5% and 4% as against 0% and 3% by the LLVM default heuristics on LRHM and FCHM respectively. However, we observe an improvement of 36% in certain individual programs. Conclusion: Overall, the results indicate that GA-wrapped, SVM-derived feature reduction improves hot method prediction accuracy and that the technique of hot method prediction based optimization is potentially useful in selective optimization.
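
    A toy version of the wrapper idea, not the authors' implementation: individuals are binary feature masks, fitness is the cross-validated accuracy of an SVM restricted to the selected features, and selection, crossover and mutation are kept deliberately simple. The ninety synthetic program features and labels below are placeholders.

```python
# Toy GA wrapper around an SVM for feature subset selection (synthetic data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 90))               # 90 static program features (synthetic)
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # labels depend on the first 5 features

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="linear"), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))          # initial population of masks
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]         # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        flip = rng.random(X.shape[1]) < 0.02              # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", int(best.sum()), "accuracy: %.3f" % fitness(best))
```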

  5. An attempt of CNC machining cycle’s application as a tool of the design feature library elaboration

    Directory of Open Access Journals (Sweden)

    Grabowik Cezary

    2017-01-01

    Full Text Available This paper presents a novel approach to the problem of design feature library elaboration, in which CNC machining cycles are proposed as the development tool. Because of the great number of commercially available CNC machine controllers, each with its own CNC machining cycle definitions, it was necessary to settle on a methodological research framework, that is, a selected CNC machine controller. Taking the criterion of popularity into account, the Sinumerik group of CNC machine controllers was chosen as this framework. The idea of feature library development presented in the paper is based on the assumption that a relationship can be found between a particular CNC machining cycle and a simple design feature, or even compound design features. The set of design features identified thanks to this assumption could be the base for elaboration of the design feature library. This solution, that is, the feature library, in turn gave the opportunity to elaborate a feature based design modelling module (FBDMM) working in the SIEMENS NX system environment. Hence, the FBDMM module can support both the designer and the CNC machine programmer, which is possible due to the modelling paradigm adopted in the module. In the FBDMM module, a removal-feature-based modelling technique is adopted.

  6. Reducing Sweeping Frequencies in Microwave NDT Employing Machine Learning Feature Selection

    Directory of Open Access Journals (Sweden)

    Abdelniser Moomen

    2016-04-01

    Full Text Available Nondestructive Testing (NDT) assessment of materials’ health condition is useful for classifying healthy from unhealthy structures or detecting flaws in metallic or dielectric structures. Performing structural health testing for coated/uncoated metallic or dielectric materials with the same testing equipment requires a testing method that can work on metallics and dielectrics, such as microwave testing. Reducing the complexity and expenses associated with current diagnostic practices of microwave NDT of structural health requires an effective and intelligent approach based on the feature selection and classification techniques of machine learning. Current microwave NDT methods are in general based on measuring variation in the S-matrix over the entire operating frequency ranges of the sensors. For instance, assessing the health of metallic structures using a microwave sensor depends on the reflection and/or transmission coefficient measurements as a function of the sweeping frequencies of the operating band. The aim of this work is reducing sweeping frequencies using machine learning feature selection techniques. By treating sweeping frequencies as features, the number of top important features can be identified, and then only the most influential features (frequencies) are considered when building the microwave NDT equipment. The proposed method of reducing sweeping frequencies was validated experimentally using a waveguide sensor and a metallic plate with different cracks. Among the investigated feature selection techniques are information gain, gain ratio, relief, and chi-squared. The effectiveness of the selected features was validated through performance evaluations of various classification models, namely Nearest Neighbor, Neural Networks, Random Forest, and Support Vector Machine. Results showed good crack classification accuracy rates after employing the feature selection algorithms.
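
    The central idea, treating each sweeping frequency as a candidate feature and keeping only the top-ranked ones, can be sketched with off-the-shelf filter criteria. The frequency grid, reflection-magnitude measurements and crack labels below are invented for illustration; mutual information and chi-squared stand in for the information gain, gain ratio and relief criteria named above.

```python
# Filter-style ranking of sweeping frequencies followed by SVM classification.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
freqs = np.linspace(8.2e9, 12.4e9, 201)      # X-band sweep (Hz), 201 points (assumed)
X = rng.random(size=(120, freqs.size))       # |S11|-style magnitudes (synthetic)
y = rng.integers(0, 3, size=120)             # crack classes (placeholder)
X[y == 1, 60:70] += 0.3                      # pretend some frequencies are informative
X[y == 2, 150:160] += 0.3

selector = SelectKBest(mutual_info_classif, k=15).fit(X, y)
top = freqs[selector.get_support()]
print("kept frequencies (GHz):", np.round(top / 1e9, 2))
print("accuracy on reduced sweep: %.3f"
      % cross_val_score(SVC(), selector.transform(X), y, cv=5).mean())

# chi-squared ranking requires non-negative features, which holds for magnitudes.
chi_scores, _ = chi2(X, y)
```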

  7. Reducing Sweeping Frequencies in Microwave NDT Employing Machine Learning Feature Selection.

    Science.gov (United States)

    Moomen, Abdelniser; Ali, Abdulbaset; Ramahi, Omar M

    2016-04-19

    Nondestructive Testing (NDT) assessment of materials' health condition is useful for classifying healthy from unhealthy structures or detecting flaws in metallic or dielectric structures. Performing structural health testing for coated/uncoated metallic or dielectric materials with the same testing equipment requires a testing method that can work on metallics and dielectrics such as microwave testing. Reducing complexity and expenses associated with current diagnostic practices of microwave NDT of structural health requires an effective and intelligent approach based on feature selection and classification techniques of machine learning. Current microwave NDT methods are in general based on measuring variation in the S-matrix over the entire operating frequency ranges of the sensors. For instance, assessing the health of metallic structures using a microwave sensor depends on the reflection and/or transmission coefficient measurements as a function of the sweeping frequencies of the operating band. The aim of this work is reducing sweeping frequencies using machine learning feature selection techniques. By treating sweeping frequencies as features, the number of top important features can be identified, then only the most influential features (frequencies) are considered when building the microwave NDT equipment. The proposed method of reducing sweeping frequencies was validated experimentally using a waveguide sensor and a metallic plate with different cracks. Among the investigated feature selection techniques are information gain, gain ratio, relief, and chi-squared. The effectiveness of the selected features was validated through performance evaluations of various classification models, namely Nearest Neighbor, Neural Networks, Random Forest, and Support Vector Machine. Results showed good crack classification accuracy rates after employing feature selection algorithms.

  8. Diagnosis of Chronic Kidney Disease Based on Support Vector Machine by Feature Selection Methods.

    Science.gov (United States)

    Polat, Huseyin; Danaei Mehr, Homay; Cetin, Aydin

    2017-04-01

    As Chronic Kidney Disease progresses slowly, early detection and effective treatment are the only cure to reduce the mortality rate. Machine learning techniques are gaining significance in medical diagnosis because of their classification ability with high accuracy rates. The accuracy of classification algorithms depends on the use of correct feature selection algorithms to reduce the dimension of datasets. In this study, the Support Vector Machine classification algorithm was used to diagnose Chronic Kidney Disease. To diagnose the disease, two essential types of feature selection methods, namely wrapper and filter approaches, were chosen to reduce the dimension of the Chronic Kidney Disease dataset. In the wrapper approach, the classifier subset evaluator with the greedy stepwise search engine and the wrapper subset evaluator with the Best First search engine were used. In the filter approach, the correlation feature selection subset evaluator with the greedy stepwise search engine and the filtered subset evaluator with the Best First search engine were used. The results showed that the Support Vector Machine classifier using the filtered subset evaluator with the Best First search engine feature selection method has a higher accuracy rate (98.5%) in the diagnosis of Chronic Kidney Disease compared to the other selected methods.
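
    In the same spirit as the wrapper and filter evaluators named above, the sketch below contrasts greedy stepwise (sequential forward) selection wrapped around an SVM with a simple univariate filter in front of the same SVM, using scikit-learn. The 24 synthetic attributes stand in for the CKD dataset, which is not bundled here, and the selected-feature count is an arbitrary assumption.

```python
# Wrapper vs. filter feature selection before an SVM (synthetic placeholder data).
import numpy as np
from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 24))                       # 24 clinical attributes (synthetic)
y = (X[:, [1, 5, 7]].sum(axis=1) > 0).astype(int)

svm = SVC(kernel="rbf", gamma="scale")
wrapper = make_pipeline(
    StandardScaler(),
    SequentialFeatureSelector(svm, n_features_to_select=8, direction="forward"),
    svm,
)
filt = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8), svm)

for name, model in [("wrapper", wrapper), ("filter", filt)]:
    print(name, "accuracy: %.3f" % cross_val_score(model, X, y, cv=5).mean())
```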

  9. Statistical mechanics of unsupervised feature learning in a restricted Boltzmann machine with binary synapses

    Science.gov (United States)

    Huang, Haiping

    2017-05-01

    Revealing hidden features in unlabeled data is called unsupervised feature learning, which plays an important role in pretraining a deep neural network. Here we provide a statistical mechanics analysis of the unsupervised learning in a restricted Boltzmann machine with binary synapses. A message passing equation to infer the hidden feature is derived, and furthermore, variants of this equation are analyzed. A statistical analysis by replica theory describes the thermodynamic properties of the model. Our analysis confirms an entropy crisis preceding the non-convergence of the message passing equation, suggesting a discontinuous phase transition as a key characteristic of the restricted Boltzmann machine. Continuous phase transition is also confirmed depending on the embedded feature strength in the data. The mean-field result under the replica symmetric assumption agrees with that obtained by running message passing algorithms on single instances of finite sizes. Interestingly, in an approximate Hopfield model, the entropy crisis is absent, and a continuous phase transition is observed instead. We also develop an iterative equation to infer the hyper-parameter (temperature) hidden in the data, which in physics corresponds to iteratively imposing Nishimori condition. Our study provides insights towards understanding the thermodynamic properties of the restricted Boltzmann machine learning, and moreover important theoretical basis to build simplified deep networks.

  10. Learning features for tissue classification with the classification restricted Boltzmann machine

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2014-01-01

    Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue classification. We introduce...... the convolutional classification RBM, a combination of the existing convolutional RBM and classification RBM, and use it for discriminative feature learning. We evaluate the classification accuracy of convolutional and non-convolutional classification RBMs on two lung CT problems. We find that RBM-learned features...... outperform conventional RBM-based feature learning, which is unsupervised and uses only a generative learning objective, as well as often-used filter banks. We show that a mixture of generative and discriminative learning can produce filters that give a higher classification accuracy....

  11. Application of higher order spectral features and support vector machines for bearing faults classification.

    Science.gov (United States)

    Saidi, Lotfi; Ben Ali, Jaouher; Fnaiech, Farhat

    2015-01-01

    Timely and accurate condition monitoring and fault diagnosis of rolling element bearings are very important to ensure the reliability of rotating machinery. This paper presents a novel pattern classification approach for bearing diagnostics, which combines higher order spectra analysis features and a support vector machine classifier. The use of non-linear features motivated by the higher order spectra has been reported to be a promising approach to analyze the non-linear and non-Gaussian characteristics of mechanical vibration signals. The vibration bi-spectrum (third order spectrum) patterns are extracted as the feature vectors representing different bearing faults. The extracted bi-spectrum features are subjected to principal component analysis for dimensionality reduction. These principal components were fed to a support vector machine to distinguish four kinds of bearing faults covering different levels of severity for each fault type, which were measured in the experimental test bench running under different working conditions. In order to find the optimal parameters for the multi-class support vector machine model, a grid-search method in combination with 10-fold cross-validation has been used. Based on the correct classification of bearing patterns in the test set, the performance measures are computed in each fold. The average of these performance measures is computed to report the overall performance of the support vector machine classifier. In addition, in fault detection problems, the performance of a detection algorithm usually depends on the trade-off between robustness and sensitivity. The sensitivity and robustness of the proposed method are explored by running a series of experiments. A receiver operating characteristic (ROC) curve made the results more convincing. The results indicated that the proposed method can reliably identify different fault patterns of rolling element bearings based on vibration signals.
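
    The hyperparameter tuning step reads directly as a grid search with 10-fold cross-validation; a minimal sketch follows, with the PCA-reduced bi-spectrum features replaced by random placeholders and the C/gamma grid chosen arbitrarily.

```python
# Grid search over RBF-SVM hyperparameters with 10-fold cross-validation.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(240, 30))        # PCA-reduced bi-spectrum features (synthetic)
y = rng.integers(0, 4, size=240)      # four bearing-fault classes (placeholder)

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]},
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
)
grid.fit(X, y)
print("best parameters:", grid.best_params_, "CV accuracy: %.3f" % grid.best_score_)
```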

  12. Complex extreme learning machine applications in terahertz pulsed signals feature sets.

    Science.gov (United States)

    Yin, X-X; Hadjiloucas, S; Zhang, Y

    2014-11-01

    This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples which, although they have fairly indistinguishable features in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude as well as the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object as would be required within a tomographic setting and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinic diagnosis. Furthermore, the proposed

  13. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    Science.gov (United States)

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.

  14. Unsupervised Feature Learning Classification With Radial Basis Function Extreme Learning Machine Using Graphic Processors.

    Science.gov (United States)

    Lam, Dao; Wunsch, Donald

    2017-01-01

    Ever-increasing size and complexity of data sets create challenges and potential tradeoffs of accuracy and speed in learning algorithms. This paper offers progress on both fronts. It presents a mechanism to train the unsupervised learning features learned from only one layer to improve performance in both speed and accuracy. The features are learned by an unsupervised feature learning (UFL) algorithm. Then, those features are trained by a fast radial basis function (RBF) extreme learning machine (ELM). By exploiting the massive parallel computing attribute of modern graphics processing unit, a customized compute unified device architecture (CUDA) kernel is developed to further speed up the computing of the RBF kernel in the ELM. Results tested on Canadian Institute for Advanced Research and Mixed National Institute of Standards and Technology data sets confirm the UFL RBF ELM achieves high accuracy, and the CUDA implementation is up to 20 times faster than CPU and the naive parallel approach.
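
    Stripped of the CUDA acceleration, an RBF extreme learning machine amounts to random hidden-layer centers followed by a closed-form least-squares solve for the output weights. The NumPy sketch below illustrates that computation on synthetic features; the hidden-layer size and kernel width are arbitrary assumptions, and no claim is made about matching the paper's implementation.

```python
# Minimal RBF extreme learning machine in NumPy (synthetic data).
import numpy as np

def rbf_elm_fit(X, Y, n_hidden=200, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_hidden, replace=False)]   # random RBF centers
    H = np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)                    # closed-form output weights
    return centers, gamma, beta

def rbf_elm_predict(model, X):
    centers, gamma, beta = model
    H = np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    return H @ beta

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 20))                       # e.g. UFL-learned features (placeholder)
Y = np.eye(10)[rng.integers(0, 10, size=500)]        # one-hot labels, 10 classes
model = rbf_elm_fit(X, Y)
pred = rbf_elm_predict(model, X).argmax(axis=1)
print("training accuracy: %.3f" % (pred == Y.argmax(axis=1)).mean())
```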

  15. Device-Free Localization via an Extreme Learning Machine with Parameterized Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2017-04-01

    Full Text Available Device-free localization (DFL) is becoming one of the new technologies in wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we will propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce the parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by ELM in the online phase can be different from those used for training in the offline phase, and can be more robust to deal with the uncertain combination of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of the existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.

  16. Device-Free Localization via an Extreme Learning Machine with Parameterized Geometrical Feature Extraction.

    Science.gov (United States)

    Zhang, Jie; Xiao, Wendong; Zhang, Sen; Huang, Shoudong

    2017-04-17

    Device-free localization (DFL) is becoming one of the new technologies in wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we will propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce the parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by ELM in the online phase can be different from those used for training in the offline phase, and can be more robust to deal with the uncertain combination of the detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of the existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.

  17. Readability of standard appointment letters.

    Science.gov (United States)

    Bennett, Daniel M; Gilchrist, Anne

    2010-06-01

    Introduction and aims: The first contact a clinical service has with a patient is often an appointment letter, and thus it is important that this letter is written in a way which is accessible. One concern is to write in language which is easily readable by the majority of recipients. A simple initial way to assess this is by using measures of readability of text. Methods: We applied measures to examine the readability of appointment and administrative letters sent to young people by clinicians in the Young People's Department at the Royal Cornhill Hospital in Aberdeen. Results: Many letters were unlikely to be understood by our youngest patients. We revised the letters to meet an agreed standard of readability, and agreed their routine use within the team. All letters were significantly improved on standard measures of readability and were preferred by patients. Conclusions: The methods used are feasible, easily available and may be helpful to clinicians working in other specialties to improve the level of readability of written communication. This will help patients and families in their first contact with any clinical service.
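
    The readability measures referred to above are simple functions of sentence, word and syllable counts. As one concrete (and only approximate) example, the sketch below computes the Flesch Reading Ease score with a crude vowel-group syllable counter; the study does not specify which formula or tool was used, so this is an illustration rather than their method.

```python
# Flesch Reading Ease with a rough heuristic syllable counter.
import re

def count_syllables(word):
    # Count groups of consecutive vowels as syllables (rough approximation).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

letter = "We would like to see you at the clinic. Please bring this letter with you."
print("Flesch Reading Ease: %.1f" % flesch_reading_ease(letter))
```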

  18. Epileptic EEG classification based on extreme learning machine and nonlinear features.

    Science.gov (United States)

    Yuan, Qi; Zhou, Weidong; Li, Shufang; Cai, Dongmei

    2011-09-01

    The automatic detection and classification of epileptic EEG are significant in the evaluation of patients with epilepsy. This paper presents a new EEG classification approach based on the extreme learning machine (ELM) and nonlinear dynamical features. The theory of nonlinear dynamics has been a powerful tool for understanding brain electrical activities. Nonlinear features extracted from EEG signals such as approximate entropy (ApEn), Hurst exponent and scaling exponent obtained with detrended fluctuation analysis (DFA) are employed to characterize interictal and ictal EEGs. The statistics indicate that the differences of those nonlinear features between interictal and ictal EEGs are statistically significant. The ELM algorithm is employed to train a single hidden layer feedforward neural network (SLFN) with EEG nonlinear features. The experiments demonstrate that compared with the backpropagation (BP) algorithm and support vector machine (SVM), the performance of the ELM is better in terms of training time and classification accuracy which achieves a satisfying recognition accuracy of 96.5% for interictal and ictal EEG signals.
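
    Approximate entropy, one of the nonlinear features listed above, has a short direct implementation. The sketch below follows the standard definition with commonly used defaults (m = 2, r = 0.2 times the standard deviation), which are assumptions rather than the paper's settings, and uses toy signals instead of EEG.

```python
# Approximate entropy (ApEn) in plain NumPy.
import numpy as np

def approximate_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                         # common default tolerance
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])           # embedded vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)                             # match fractions
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(7)
noise = rng.normal(size=500)                      # irregular signal -> higher ApEn
sine = np.sin(np.linspace(0, 20 * np.pi, 500))    # regular signal -> lower ApEn
print("ApEn(noise) = %.3f, ApEn(sine) = %.3f"
      % (approximate_entropy(noise), approximate_entropy(sine)))
```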

  19. Predicting domain-domain interaction based on domain profiles with feature selection and support vector machines

    Directory of Open Access Journals (Sweden)

    Liao Li

    2010-10-01

    Full Text Available Abstract Background Protein-protein interaction (PPI) plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains instead of the whole molecules, predicting domain-domain interaction (DDI) is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. Results In this paper, we propose a computational method to predict DDI using support vector machines (SVMs), based on domains represented as interaction profile hidden Markov models (ipHMM) where interacting residues in domains are explicitly modeled according to the three dimensional structural information available at the Protein Data Bank (PDB). Features about the domains are extracted first as the Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD). Domain pairs are represented by concatenating their selected feature vectors, and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy has shown significant improvement as compared to InterPreTS (Interaction Prediction through Tertiary Structure), an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. Conclusions We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition and supervised learning based on support vector machines. Datasets and source code are freely available on

  20. AFREET: HUMAN-INSPIRED SPATIO-SPECTRAL FEATURE CONSTRUCTION FOR IMAGE CLASSIFICATION WITH SUPPORT VECTOR MACHINES

    Energy Technology Data Exchange (ETDEWEB)

    S. PERKINS; N. HARVEY

    2001-02-01

    The authors examine the task of pixel-by-pixel classification of the multispectral and grayscale images typically found in remote-sensing and medical applications. Simple machine learning techniques have long been applied to remote-sensed image classification, but almost always using purely spectral information about each pixel. Humans can often outperform these systems, and make extensive use of spatial context to make classification decisions. They present AFREET: an SVM-based learning system which attempts to automatically construct and refine spatio-spectral features in a somewhat human-inspired fashion. Comparisons with traditionally used machine learning techniques show that AFREET achieves significantly higher performance. The use of spatial context is particularly useful for medical imagery, where multispectral images are still rare.

  1. Application of the Disruption Predictor Feature Developer to developing a machine-portable disruption predictor

    Science.gov (United States)

    Parsons, Matthew; Tang, William; Feibush, Eliot

    2016-10-01

    Plasma disruptions pose a major threat to the operation of tokamaks which confine a large amount of stored energy. In order to effectively mitigate this damage it is necessary to predict an oncoming disruption with sufficient warning time to take mitigative action. Machine learning approaches to this problem have shown promise but require further developments to address (1) the need for machine-portable predictors and (2) the availability of multi-dimensional signal inputs. Here we demonstrate progress in these two areas by applying the Disruption Predictor Feature Developer to data from JET and NSTX, and discuss topics of focus for ongoing work in support of ITER. The author is also supported under the Fulbright U.S. Student Program as a graduate student in the department of Nuclear, Plasma and Radiological Engineering at the University of Illinois at Urbana-Champaign.

  2. Pipeline leakage recognition based on the projection singular value features and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Wei; Zhang, Laibin; Mingda, Wang; Jinqiu, Hu [College of Mechanical and Transportation Engineering, China University of Petroleum, Beijing, (China)

    2010-07-01

    The negative wave pressure method is one of the processes used to detect leaks on oil pipelines. The development of new leakage recognition processes is difficult because it is practically impossible to collect leakage pressure samples. The method of leakage feature extraction and the selection of the recognition model are also important in pipeline leakage detection. This study investigated a new feature extraction approach, Singular Value Projection (SVP), which projects the singular values onto a standard basis. A new pipeline recognition model based on multi-class Support Vector Machines was also developed. It was found that SVP is a clear and concise recognition feature of the negative pressure wave. Field experiments proved that the model provides a high recognition accuracy rate. This approach to pipeline leakage detection based on SVP and SVM has a high application value.
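
    The paper's exact construction of the projected singular values is not reproduced here; the hedged sketch below only illustrates the general pattern of turning a 1-D pressure segment into singular-value features (via a Hankel trajectory matrix and a unit-norm projection) and feeding them to a multi-class SVM. Segment shapes, class definitions and all signal parameters are invented.

```python
# Singular-value features of signal segments fed to a multi-class SVM (illustrative only).
import numpy as np
from scipy.linalg import hankel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def singular_value_features(signal, window=32, k=10):
    H = hankel(signal[:window], signal[window - 1:])     # trajectory (Hankel) matrix
    s = np.linalg.svd(H, compute_uv=False)[:k]
    return s / np.linalg.norm(s)                         # unit-norm projection of singular values

rng = np.random.default_rng(8)
segments, labels = [], []
for cls in range(3):                                     # e.g. leak, pump adjustment, normal (assumed)
    for _ in range(40):
        t = np.linspace(0, 1, 256)
        seg = rng.normal(scale=0.1, size=256) + np.exp(-((t - 0.5) ** 2) / (0.01 * (cls + 1)))
        segments.append(singular_value_features(seg))
        labels.append(cls)

X, y = np.array(segments), np.array(labels)
print("CV accuracy: %.3f" % cross_val_score(SVC(), X, y, cv=5).mean())
```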

  3. Feature Selection by Merging Sequential Bidirectional Search into Relevance Vector Machine in Condition Monitoring

    Institute of Scientific and Technical Information of China (English)

    ZHANG Kui; DONG Yu; BALL Andrew

    2015-01-01

    For more accurate fault detection and diagnosis, there is an increasing trend to use a large number of sensors and to collect data at high frequency. This inevitably produces large-scale data and causes difficulties in fault classification. Actually, the classification methods are simply intractable when applied to high-dimensional condition monitoring data. In order to solve the problem, engineers have to resort to complicated feature extraction methods to reduce the dimensionality of data. However, the features transformed by the methods cannot be understood by the engineers due to a loss of the original engineering meaning. In this paper, other forms of dimensionality reduction techniques (feature selection methods) are employed to identify machinery condition, based only on frequency spectrum data. Feature selection methods are usually divided into three main types: filter, wrapper and embedded methods. Most studies are mainly focused on the first two types, whilst the development and application of the embedded feature selection methods are very limited. This paper attempts to explore a novel embedded method. The method is formed by merging a sequential bidirectional search algorithm into scale parameters tuning within a kernel function in the relevance vector machine. To demonstrate the potential for applying the method to machinery fault diagnosis, the method is applied to rolling bearing experimental data. The results obtained by using the method are consistent with the theoretical interpretation, proving that this algorithm has important engineering significance in revealing the correlation between the faults and relevant frequency features. The proposed method is a theoretical extension of relevance vector machine, and provides an effective solution to detect the fault-related frequency components with high efficiency.

  4. Feature selection by merging sequential bidirectional search into relevance vector machine in condition monitoring

    Science.gov (United States)

    Zhang, Kui; Dong, Yu; Ball, Andrew

    2015-11-01

    For more accurate fault detection and diagnosis, there is an increasing trend to use a large number of sensors and to collect data at high frequency. This inevitably produces large-scale data and causes difficulties in fault classification. Actually, the classification methods are simply intractable when applied to high-dimensional condition monitoring data. In order to solve the problem, engineers have to resort to complicated feature extraction methods to reduce the dimensionality of data. However, the features transformed by the methods cannot be understood by the engineers due to a loss of the original engineering meaning. In this paper, other forms of dimensionality reduction techniques (feature selection methods) are employed to identify machinery condition, based only on frequency spectrum data. Feature selection methods are usually divided into three main types: filter, wrapper and embedded methods. Most studies are mainly focused on the first two types, whilst the development and application of the embedded feature selection methods are very limited. This paper attempts to explore a novel embedded method. The method is formed by merging a sequential bidirectional search algorithm into scale parameters tuning within a kernel function in the relevance vector machine. To demonstrate the potential for applying the method to machinery fault diagnosis, the method is applied to rolling bearing experimental data. The results obtained by using the method are consistent with the theoretical interpretation, proving that this algorithm has important engineering significance in revealing the correlation between the faults and relevant frequency features. The proposed method is a theoretical extension of relevance vector machine, and provides an effective solution to detect the fault-related frequency components with high efficiency.

  5. The Readability of Principles of Macroeconomics Textbooks

    Science.gov (United States)

    Tinkler, Sarah; Woods, James

    2013-01-01

    The authors evaluated principles of macroeconomics textbooks for readability using Coh-Metrix, a computational linguistics tool. Additionally, they conducted an experiment on Amazon's Mechanical Turk Web site in which participants ranked the readability of text samples. There was a wide range of scores on readability indexes both among…

  6. The Readability of Principles of Macroeconomics Textbooks

    Science.gov (United States)

    Tinkler, Sarah; Woods, James

    2013-01-01

    The authors evaluated principles of macroeconomics textbooks for readability using Coh-Metrix, a computational linguistics tool. Additionally, they conducted an experiment on Amazon's Mechanical Turk Web site in which participants ranked the readability of text samples. There was a wide range of scores on readability indexes both among…

  7. Readability Revisited: Antiquated Albatross or Trusty Steed?

    Science.gov (United States)

    Schatzberg-Smith, Kathleen

    1989-01-01

    Reviews the literature and research on readability. Examines early formulas that relied on sentence length and difficulty of vocabulary, the subsequent use of cloze procedure to measure text readability, and other factors now recognized as influences on the comprehensibility of a text. Concludes that readability formulas have not served education…

  8. Soft Computing, Machine Intelligence and Data Mining: Features, Applications and Prospects

    Institute of Scientific and Technical Information of China (English)

    Sankar K. Pal

    2006-01-01

    Different components of soft computing (e.g., fuzzy logic, artificial neural networks, rough sets and genetic algorithms) and machine intelligence, and their relevance to data mining and knowledge discovery from a pattern recognition point of view, are explained. Different features of these tools are explained conceptually. Ways of integrating different tools for application-specific merits are described. Problems like case (prototype) generation, rule generation, and classification are considered in general. Some of such integrations are explained along with their merits and suitability for data mining with real life applications. The significance of granular computing through rough sets is emphasized. Finally, applications in bioinformatics and web mining are discussed.

  9. Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains.

    Directory of Open Access Journals (Sweden)

    Ariadne Barbosa Gonçalves

    Full Text Available The classification of pollen species and types is an important task in many areas like forensic palynology, archaeological palynology and melissopalynology. This paper presents the first annotated image dataset for Brazilian Savannah pollen types that can be used to train and test computer vision based automatic pollen classifiers. A first baseline of human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. In order to assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine-tuned and tested. The results of these tests are also presented in this paper.

  10. Discriminative feature-rich models for syntax-based machine translation.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.

    2012-12-01

    This report describes the campus executive LDRD “Discriminative Feature-Rich Models for Syntax-Based Machine Translation,” which was an effort to foster a better relationship between Sandia and Carnegie Mellon University (CMU). The primary purpose of the LDRD was to fund the research of a promising graduate student at CMU; in this case, Kevin Gimpel was selected from the pool of candidates. This report gives a brief overview of Kevin Gimpel's research.

  11. BLProt: Prediction of bioluminescent proteins based on support vector machine and relieff feature selection

    KAUST Repository

    Kandaswamy, Krishna Kumar

    2011-08-17

    Background: Bioluminescence is a process in which light is emitted by a living organism. Most creatures that emit light are sea creatures, but some insects, plants, fungi, etc., also emit light. The biotechnological application of bioluminescence has become routine and is considered essential for many medical and general technological advances. Identification of bioluminescent proteins is more challenging due to their poor similarity in sequence. So far, no specific method has been reported to identify bioluminescent proteins from primary sequence. Results: In this paper, we propose a novel predictive method that uses a Support Vector Machine (SVM) and physicochemical properties to predict bioluminescent proteins. BLProt was trained using a dataset consisting of 300 bioluminescent proteins and 300 non-bioluminescent proteins, and evaluated by an independent set of 141 bioluminescent proteins and 18202 non-bioluminescent proteins. To identify the most prominent features, we carried out feature selection with three different filter approaches: ReliefF, infogain, and mRMR. We selected five different feature subsets by decreasing the number of features, and the performance of each feature subset was evaluated. Conclusion: BLProt achieves 80% accuracy from training (5-fold cross-validation) and 80.06% accuracy from testing. The performance of BLProt was compared with BLAST and HMM. High prediction accuracy and successful prediction of hypothetical proteins suggest that BLProt can be a useful approach to identify bioluminescent proteins from sequence information, irrespective of their sequence similarity.
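
    A minimal sequence-based sketch in the same spirit: amino acid composition as a crude physicochemical-style feature vector feeding an SVM. The sequences and labels are toy examples, not the BLProt training data, and the ReliefF/infogain/mRMR selection step is omitted.

```python
# Amino acid composition features + SVM for a toy protein classification task.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac_features(sequence):
    seq = sequence.upper()
    return np.array([seq.count(a) / max(1, len(seq)) for a in AMINO_ACIDS])

train_seqs = ["MKTAYIAKQR", "GGLWWAATTY", "MLLFVNKQED", "PPGGAAYYWW"]
train_y = [1, 0, 1, 0]                      # 1 = bioluminescent, 0 = not (toy labels)
X = np.vstack([aac_features(s) for s in train_seqs])

clf = SVC(kernel="rbf", gamma="scale").fit(X, train_y)
print(clf.predict([aac_features("MKKAYLAQQR")]))
```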

  12. Machine Fault Detection Based on Filter Bank Similarity Features Using Acoustic and Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Mauricio Holguín-Londoño

    2016-01-01

    Full Text Available Vibration and acoustic analysis actively support the nondestructive and noninvasive fault diagnostics of rotating machines at early stages. Nonetheless, the acoustic signal is less used because of its vulnerability to external interferences, hindering an efficient and robust analysis for condition monitoring (CM). This paper presents a novel methodology to characterize different failure signatures from rotating machines using either acoustic or vibration signals. Firstly, the signal is decomposed into several narrow-band spectral components applying different filter bank methods such as empirical mode decomposition, wavelet packet transform, and Fourier-based filtering. Secondly, a feature set is built using a proposed similarity measure termed the cumulative spectral density index, used to estimate the mutual statistical dependence between each bandwidth-limited component and the raw signal. Finally, a classification scheme is carried out to distinguish the different types of faults. The methodology is tested in two laboratory experiments, including turbine blade degradation and rolling element bearing faults. The robustness of our approach is validated by contaminating the signal with several levels of additive white Gaussian noise, obtaining high-performance outcomes that make the usage of vibration, acoustic, and vibroacoustic measurements in different applications comparable. As a result, the proposed fault detection based on filter bank similarity features is a promising methodology to implement in CM of rotating machinery, even using measurements with a low signal-to-noise ratio.

  13. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, in contrast to other simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  14. Near-Surface Crevasse Detection in Ice Sheets using Feature-Based Machine Learning

    Science.gov (United States)

    Ray, L.; Walker, B.; Lever, J.; Arcone, S. A.

    2015-12-01

    In 2014, a team of Dartmouth, CRREL, and University of Maine researchers conducted the first of three annual ground-penetrating radar surveys of the McMurdo Shear Zone using robot-towed instruments. This survey provides over 100 transects of a 5.7 km x 5.0 km grid spanning the width of the shear zone at spacing of approximately 50 m. Transect direction was orthogonal to ice flow. Additionally, a dense 200 m x 200 m grid was surveyed at 10 m spacing in both the N-S and W-E directions. Radar settings provided 20 traces/sec, which combined with an average robot speed of 1.52 m/s, provides a trace every 7.6 cm. The robot towed two antenna units at 400 MHz and 200 MHz center frequencies, with the former penetrating to approximately 19 m. We establish boundaries for the shear zone over the region surveyed using the 400 MHz antenna data, and we geo-locate crevasses using feature-based machine learning classification of GPR traces into one of three classes - 1) firn, 2) distinct crevasses, and 3) less distinct or deeper features originating within the 19 m penetration depth. Distinct crevasses feature wide, hyperbolic reflections with strike angles of 35-40° to transect direction and clear voids. Less distinct or deeper features range from broad diffraction patterns with no clear void to overlapping diffractions extending tens of meters in width with or without a clear void. The classification is derived from statistical features of unprocessed traces and thus provides a computationally efficient means for eventual real-time classification of GPR traces. Feature-based classification is shown to be insensitive to artifacts related to rolling or pitching motion of the instrument sled and also provides a means of assessing crevasse width and depth. In subsequent years, we will use feature-based classification to estimate ice flow and evolution of individual crevasses.

  15. Specific Features of Chip Making and Work-piece Surface Layer Formation in Machining Thermal Coatings

    Directory of Open Access Journals (Sweden)

    V. M. Yaroslavtsev

    2016-01-01

    Full Text Available Wear- and erosion-resistant high-temperature coatings made by thermal spraying methods are characterized by a wide range of unique engineering structural and performance properties inherent in metallic composites. This allows their use both in manufacturing processes, to enhance the wear strength of products that have to operate under cyclic loading, high contact pressures, corrosion and high temperatures, and in product renewal. Thermal coatings contribute to a qualitative improvement of the technical level of production and product restoration using ceramic composite materials. However, the possibility of significantly increasing product performance and reducing factory labour hours and the materials/output ratio in manufacturing and restoration largely depends on the surface layer quality of products at the finishing stage, which is usually provided by different kinds of machining. When machining plasma-sprayed thermal coatings, the removal of the cut-off layer of material is determined by the coating's distinctive features, such as a layered structure, high internal stresses, low ductility, a strong tendency of the surface layer to strengthening and rehardening, porosity, high abrasive properties, etc. When coatings are machined, these properties result in specific characteristics of chip formation and in the conditions under which the billet surface layer is formed. The chip formation of plasma-sprayed coatings was studied at micro-velocities using an experimental tool-setting microscope-based setup created in BMSTU. The setup allowed simultaneous recording of both the individual stages (phases) of the chip formation process and the operating force factors. It is found that the formation of individual chip elements is accompanied by multiple micro-cracks that cause chipping-off of small particles of material. The emerging main crack in the cut-off layer of material leads to separation of the largest chip element. Then all the stages

  16. A machine-learning approach for predicting palmitoylation sites from integrated sequence-based features.

    Science.gov (United States)

    Li, Liqi; Luo, Qifa; Xiao, Weidong; Li, Jinhui; Zhou, Shiwen; Li, Yongsheng; Zheng, Xiaoqi; Yang, Hua

    2017-02-01

    Palmitoylation is the covalent attachment of lipids to amino acid residues in proteins. As an important form of protein posttranslational modification, it increases the hydrophobicity of proteins, which contributes to protein transport, organelle localization, and function, and it therefore plays an important role in a variety of cell biological processes. Identification of palmitoylation sites is necessary for understanding protein-protein interaction, protein stability, and activity. Since conventional experimental techniques to determine palmitoylation sites in proteins are both labor intensive and costly, a fast and accurate computational approach to predict palmitoylation sites from protein sequences is urgently needed. In this study, a support vector machine (SVM)-based method was proposed through integrating PSI-BLAST profile, physicochemical properties, [Formula: see text]-mer amino acid compositions (AACs), and [Formula: see text]-mer pseudo AACs into the principal feature vector. A recursive feature selection scheme was subsequently implemented to single out the most discriminative features. Finally, an SVM method was implemented to predict palmitoylation sites in proteins based on the optimal features. The proposed method achieved an accuracy of 99.41% and a Matthews Correlation Coefficient of 0.9773 for a benchmark dataset. The result indicates the efficiency and accuracy of our method in the prediction of palmitoylation sites based on protein sequences.
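
    The recursive feature selection step can be sketched with a linear SVM and scikit-learn's RFE; the PSI-BLAST profile, physicochemical and composition features are replaced here by a random placeholder matrix, and the selected-feature count is an arbitrary assumption.

```python
# Recursive feature elimination wrapped around a linear SVM (synthetic features).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 120))                 # concatenated sequence-derived features (placeholder)
y = (X[:, :6].sum(axis=1) > 0).astype(int)      # 1 = palmitoylation site (synthetic labels)

selector = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=20, step=10)
selector.fit(X, y)
X_sel = selector.transform(X)
print("selected feature count:", X_sel.shape[1])
print("CV accuracy: %.3f"
      % cross_val_score(LinearSVC(dual=False, max_iter=5000), X_sel, y, cv=5).mean())
```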

  17. Microcanonical Annealing and Threshold Accepting for Parameter Determination and Feature Selection of Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Seyyid Ahmed Medjahed

    2016-12-01

    Full Text Available Support vector machine (SVM) is a popular classification technique with many diverse applications. Parameter determination and feature selection significantly influence the classification accuracy rate and the SVM model quality. This paper proposes two novel approaches based on Microcanonical Annealing (MA-SVM) and Threshold Accepting (TA-SVM) to determine the optimal parameter value and the relevant feature subset, without reducing SVM classification accuracy. In order to evaluate the performance of MA-SVM and TA-SVM, several public datasets are employed to compute the classification accuracy rate. The proposed approaches were tested in the context of medical diagnosis. Also, we tested the approaches on DNA microarray datasets used for cancer diagnosis. The results obtained by the MA-SVM and TA-SVM algorithms are shown to be superior and gave good performance on the DNA microarray datasets, which are characterized by a large number of features. Therefore, the MA-SVM and TA-SVM approaches are well suited for parameter determination and feature selection in SVM.
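
    Threshold accepting itself is a few lines: propose a random move and accept it if it is no more than a (shrinking) threshold worse than the current solution. The sketch below applies that rule to the SVM hyperparameters C and gamma only; the feature-selection part and the microcanonical-annealing variant described above are not reproduced, and the schedule constants are guesses.

```python
# Threshold accepting over SVM hyperparameters (C, gamma) on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
rng = np.random.default_rng(10)

def accuracy(log_c, log_gamma):
    return cross_val_score(SVC(C=10**log_c, gamma=10**log_gamma), X, y, cv=5).mean()

state = np.array([0.0, -2.0])                 # log10(C), log10(gamma)
current_acc = accuracy(*state)
best, best_acc = state.copy(), current_acc
threshold = 0.05                              # accept moves at most this much worse
for step in range(60):
    candidate = state + rng.normal(scale=0.3, size=2)
    acc = accuracy(*candidate)
    if acc >= current_acc - threshold:        # threshold-accepting rule
        state, current_acc = candidate, acc
    if acc > best_acc:
        best, best_acc = candidate.copy(), acc
    threshold *= 0.95                         # tighten the threshold over time

print("best C=%.3g, gamma=%.3g, CV accuracy=%.3f" % (10**best[0], 10**best[1], best_acc))
```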

  18. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis

    Science.gov (United States)

    Li, Qiang; Zhao, Xuehua; Cai, ZhenNao; Tong, Changfei; Liu, Wenbin; Tian, Xin

    2017-01-01

    In this study, a new predictive framework is proposed by integrating an improved grey wolf optimization (IGWO) and kernel extreme learning machine (KELM), termed as IGWO-KELM, for medical diagnosis. The proposed IGWO feature selection approach is used for the purpose of finding the optimal feature subset for medical data. In the proposed approach, genetic algorithm (GA) was firstly adopted to generate the diversified initial positions, and then grey wolf optimization (GWO) was used to update the current positions of population in the discrete searching space, thus getting the optimal feature subset for the better classification purpose based on KELM. The proposed approach is compared against the original GA and GWO on the two common disease diagnosis problems in terms of a set of performance metrics, including classification accuracy, sensitivity, specificity, precision, G-mean, F-measure, and the size of selected features. The simulation results have proven the superiority of the proposed method over the other two competitive counterparts. PMID:28246543

  19. PROSODIC FEATURE BASED TEXT DEPENDENT SPEAKER RECOGNITION USING MACHINE LEARNING ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Sunil Agrawal

    2010-10-01

    Full Text Available Most of us are aware of the fact that the voices of different individuals do not sound alike. The ability to recognize a person solely from his or her voice is known as speaker recognition. Speaker recognition can not only assist in building better access control systems and security apparatus, it can also be a useful tool in many other areas such as forensic speech analysis. The choice of features plays an important role in the performance of a machine learning (ML) algorithm. Here we propose prosodic-feature-based, text-dependent speaker recognition, where the prosodic features are extracted through linear predictive coding. Formants are efficient parameters for characterizing a speaker's voice. The formants are combined with their corresponding amplitudes, the fundamental frequency, the duration of the speech utterance, and the energy of the windowed section. This feature vector is input to ML algorithms for recognition. We investigate the performance of four ML algorithms, namely MLP, RBFN, C4.5 decision tree, and BayesNet. Among these, the C4.5 decision tree gives the most consistent performance, MLP performs better for gender recognition, and experimental results show that RBFN performs better as the population size increases.

  20. Digital Library Image Retrieval using Scale Invariant Feature and Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hongtao Zhang

    2014-10-01

    Full Text Available With the advance of the digital library, digital content has developed rich information connotations. Traditional information retrieval methods based on external characteristics and text descriptions are unable to sufficiently reveal and express the substance and semantic relations of multimedia information, and unable to fully describe its representative characteristics. Because of the rich connotation of image content and people's subjective abstraction when interpreting images, the visual features of an image are difficult to describe with keywords. Such methods therefore cannot always meet users' needs, and the study of content-based digital library image retrieval is important to both academic research and application. At present, image retrieval methods are mainly based on text and content, but these existing algorithms have shortcomings such as large errors and slow speeds. Motivated by the above, we propose in this paper a new approach based on the relevance vector machine (RVM). The proposed approach first extracts patch-level scale invariant image features (SIFT) and then constructs global features for the images. The image features are then fed into the RVM for retrieval. We evaluate the proposed approach on the Corel dataset. The experimental results show that the proposed method achieves high accuracy when retrieving images.

  1. A fuzzy based feature selection from independent component subspace for machine learning classification of microarray data

    Directory of Open Access Journals (Sweden)

    Rabia Aziz

    2016-06-01

    Full Text Available Feature (gene) selection and classification of microarray data are two of the most interesting machine learning challenges. In the present work, two existing feature selection/extraction algorithms, independent component analysis (ICA) and fuzzy backward feature elimination (FBFE), are used in a new combination. The main objective of this paper is to select the independent components of the DNA microarray data using FBFE to improve the performance of support vector machine (SVM) and Naïve Bayes (NB) classifiers, while keeping the computational expense affordable. To show the validity of the proposed method, it is applied to reduce the number of genes for five DNA microarray datasets: colon cancer, acute leukemia, prostate cancer, lung cancer II, and high-grade glioma. These reduced datasets are then classified using SVM and NB classifiers. Experimental results on these five microarray datasets demonstrate that the genes selected by the proposed approach effectively improve the classification accuracy of the SVM and NB classifiers. We compare our proposed method with principal component analysis (PCA) as a standard extraction algorithm and find that the proposed method obtains better classification accuracy with the SVM and NB classifiers using a smaller number of selected genes than PCA. The curve of average error rate against number of genes for each dataset indicates the number of genes required for the highest accuracy with our proposed method for both classifiers. ROC curves show the best subset of genes for both classifiers on the different datasets with the proposed method.
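
    A rough scikit-learn sketch of the ICA-then-backward-elimination idea: project the expression matrix onto independent components, then greedily drop the component whose removal hurts cross-validated accuracy the least. The fuzzy membership criterion of FBFE is replaced here by a plain accuracy criterion and the data are random placeholders, so this only approximates the method summarized above.

      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 500))           # placeholder expression matrix (samples x genes)
      y = rng.integers(0, 2, size=60)          # placeholder class labels

      Z = FastICA(n_components=20, random_state=1).fit_transform(X)   # sample scores on ICs

      kept = list(range(Z.shape[1]))
      while len(kept) > 5:                     # backward elimination down to 5 components
          accs = []
          for j in kept:
              trial = [k for k in kept if k != j]
              accs.append(cross_val_score(GaussianNB(), Z[:, trial], y, cv=5).mean())
          drop = kept[int(np.argmax(accs))]    # removing this component hurts least
          kept.remove(drop)

      print("kept components:", kept)
      print("final CV accuracy:", cross_val_score(GaussianNB(), Z[:, kept], y, cv=5).mean())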

  2. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    Science.gov (United States)

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple and fast, and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing time and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source codes of all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
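
    One generic way to attach statistically interpretable p-values to machine-learning importance scores, in the spirit of the procedures evaluated above, is to build a null distribution of each feature's importance by refitting the model on label-permuted data. The sketch below illustrates this with random forest importances on simulated data; it is not the authors' exact algorithm.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 30))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100) > 0).astype(int)

      def importances(X, y):
          return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y).feature_importances_

      obs = importances(X, y)
      n_perm = 50
      null = np.empty((n_perm, X.shape[1]))
      for b in range(n_perm):
          null[b] = importances(X, rng.permutation(y))   # importances under the null

      # Empirical p-value: how often a permuted importance reaches the observed one.
      pvals = (1 + (null >= obs).sum(axis=0)) / (n_perm + 1)
      print("features with p < 0.05:", np.where(pvals < 0.05)[0])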

  3. Feature-matching pattern-based support vector machines for robust peptide mass fingerprinting.

    Science.gov (United States)

    Li, Youyuan; Hao, Pei; Zhang, Siliang; Li, Yixue

    2011-12-01

    Peptide mass fingerprinting, although it has become complementary to tandem mass spectrometry for protein identification, is still the subject of in-depth study because of its higher sample throughput, higher level of specificity for single peptides and lower level of sensitivity to unexpected post-translational modifications compared with tandem mass spectrometry. In this study, we propose, implement and evaluate a uniform approach using support vector machines to incorporate individual concepts and conclusions for accurate PMF. We focus on the inherent attributes and critical issues of the theoretical spectrum (peptides), the experimental spectrum (peaks) and spectrum (masses) alignment. Eighty-one feature-matching patterns derived from the cleavage type, uniqueness and variable masses of theoretical peptides, together with the intensity rank of experimental peaks, were proposed to characterize the matching profile of the peptide mass fingerprinting procedure. We developed a new strategy that includes matched peak intensity redistribution to handle shared peak intensities, and 440 parameters were generated to digitalize each feature-matching pattern. A high performance for an evaluation data set of 137 items was finally achieved by the optimal multi-criteria support vector machines approach, with 491 final features out of a feature vector of 35,640 normalized features, through cross training and validating a publicly available "gold standard" peptide mass fingerprinting data set of 1733 items. Compared with the Mascot, MS-Fit, ProFound and Aldente algorithms commonly used for MS-based protein identification, the feature-matching patterns algorithm has a greater ability to clearly separate correct identifications from random matches, with the highest values for sensitivity (82%), precision (97%) and F1-measure (89%) of protein identification. Several conclusions reached via this research make general contributions to MS-based protein identification. Firstly

  4. Feature Selection Based on Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    葛敏敏; 范丽亚

    2011-01-01

    This paper studies a feature selection method based on support vector machines, the feature weight method. Experiments with two kinds of data taken from the UCI machine learning repository show that the feature weight method is superior to the F-score method and to the SVM in classification performance.

  5. A Novel Approach for Multi Class Fault Diagnosis in Induction Machine Based on Statistical Time Features and Random Forest Classifier

    Science.gov (United States)

    Sonje, M. Deepak; Kundu, P.; Chowdhury, A.

    2017-08-01

    Fault diagnosis and detection is an important area in the health monitoring of electrical machines. This paper applies a recently developed machine learning classifier to multi-class fault diagnosis in an induction machine. The classification is based on the random forest (RF) algorithm. Initially, stator currents are acquired from the induction machine under various conditions. After preprocessing the currents, fourteen statistical time features are estimated for each phase of the current. These parameters are considered as inputs to the classifier. The main scope of the paper is to evaluate the effectiveness of the RF classifier for individual and mixed fault diagnosis in the induction machine. Stator, rotor and mixed faults (stator and rotor faults) are classified using the proposed classifier. The obtained performance measures are compared with those of a multilayer perceptron neural network (MLPNN) classifier. The results show much better performance measures and higher accuracy than the MLPNN classifier. To demonstrate the proposed fault diagnosis algorithm, experimentally obtained results are used, which makes the classifier more practical.
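
    A minimal sketch of the feature pipeline summarized above: statistical time-domain features computed per phase current and fed to a random forest. The paper's exact list of fourteen features is not quoted here, so the feature set and the placeholder current windows below are illustrative assumptions.

      import numpy as np
      from scipy import stats
      from sklearn.ensemble import RandomForestClassifier

      def time_features(x):
          """Statistical time-domain features of one current window."""
          return np.array([
              x.mean(), x.std(), x.min(), x.max(),
              np.sqrt(np.mean(x ** 2)),               # RMS
              stats.skew(x), stats.kurtosis(x),
              np.ptp(x),                              # peak-to-peak
              np.abs(x).mean(),                       # mean absolute value
          ])

      rng = np.random.default_rng(3)
      # Placeholder stator-current windows: (n_windows, n_phases, n_samples), with class labels.
      currents = rng.normal(size=(300, 3, 1024))
      labels = rng.integers(0, 4, size=300)           # e.g. healthy / stator / rotor / mixed

      X = np.array([np.concatenate([time_features(w[p]) for p in range(3)]) for w in currents])
      clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
      print("training accuracy:", clf.score(X, labels))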

  6. Textbook readability and ESL learners

    Directory of Open Access Journals (Sweden)

    Daniel Kasule

    2011-05-01

    Full Text Available This paper reports activities (as part of a university course in language teacher education on teaching reading) in which primary school student teachers (all ESL in-service teacher trainees) explored their own skills of determining textbook readability using an online software tool and a cloze test completed by two hundred and seventy-eight Grade Seven primary school pupils. Findings from the online tool were that the text was difficult. The cloze test confirmed this when it showed that only eighteen pupils could read the text unassisted while the rest were frustrated by it. The paper uses these findings to describe the challenges pupils face and how readability research is beneficial to the reading development of ESL learners if reading of academic texts is approached from the principles of the interactive view of reading; of cognitive learning theory; and of second language acquisition theory. It is concluded that teachers’ awareness of readability issues is helpful for effective reading instruction during the critical formative years of school.

  7. Classifying spatially heterogeneous wetland communities using machine learning algorithms and spectral and textural features.

    Science.gov (United States)

    Szantoi, Zoltan; Escobedo, Francisco J; Abd-Elrahman, Amr; Pearlstine, Leonard; Dewitt, Bon; Smith, Scot

    2015-05-01

    Mapping of wetlands (marsh vs. swamp vs. upland) is a common remote sensing application. Yet, discriminating between similar freshwater communities such as graminoid/sedge from remotely sensed imagery is more difficult. Most of this activity has been performed using medium to low resolution imagery. There are only a few studies using high spatial resolution imagery and machine learning image classification algorithms for mapping heterogeneous wetland plant communities. This study addresses this void by analyzing whether machine learning classifiers such as decision trees (DT) and artificial neural networks (ANN) can accurately classify graminoid/sedge communities using high resolution aerial imagery and image texture data in the Everglades National Park, Florida. In addition to spectral bands, the normalized difference vegetation index, and first- and second-order texture features derived from the near-infrared band were analyzed. Classifier accuracies were assessed using confusion tables and the calculated kappa coefficients of the resulting maps. The results indicated that an ANN (multilayer perceptron based on backpropagation) algorithm produced a statistically significantly higher accuracy (82.04%) than the DT (QUEST) algorithm (80.48%) or the maximum likelihood (80.56%) classifier (α<0.05). Findings show that using multiple window sizes provided the best results. First-order texture features also provided computational advantages and results that were not significantly different from those using second-order texture features.

  8. Feature extraction and classification for EEG signals using wavelet transform and machine learning techniques.

    Science.gov (United States)

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Ahmad, Rana Fayyaz; Badruddin, Nasreen; Kamel, Nidal; Hussain, Muhammad; Chooi, Weng-Tink

    2015-03-01

    This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied to EEG signals and the relative wavelet energy is calculated in terms of the detailed coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for classification. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) EEG signals recorded during a complex cognitive task (Raven's advanced progressive matrices test) and (2) EEG signals recorded in the rest condition (eyes open). The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity and precision. Accuracy above 98% was achieved by the support vector machine, multi-layer perceptron and K-nearest neighbor classifiers with the approximation (A4) and detailed coefficients (D4), which represent the frequency ranges of 0.53-3.06 and 3.06-6.12 Hz, respectively. The findings of this study demonstrate that the proposed feature extraction approach has the potential to classify EEG signals recorded during a complex cognitive task with a high accuracy rate.
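
    The relative-wavelet-energy extraction can be sketched in a few lines with PyWavelets: decompose each EEG channel and use each sub-band's energy relative to the total as a feature. The wavelet family ('db4') and the decomposition level are assumptions rather than values quoted from the paper.

      import numpy as np
      import pywt

      def relative_wavelet_energy(signal, wavelet="db4", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)   # [A4, D4, D3, D2, D1]
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          return energies / energies.sum()

      rng = np.random.default_rng(0)
      eeg_epoch = rng.normal(size=(8, 1024))                    # 8 channels x 1024 samples
      features = np.concatenate([relative_wavelet_energy(ch) for ch in eeg_epoch])
      print(features.shape)   # 8 channels x 5 sub-bands = 40 features per epoch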

  9. Two applications of small feature dimensional measurements on a coordinate measuring machine with a fiber probe

    Science.gov (United States)

    Stanfield, Eric; Muralikrishnan, Bala; Doiron, Ted; Zheng, Alan; Orandi, Shahram; Duquette, David

    2013-10-01

    This paper describes two applications of dimensional measurements performed using a contact fiber probe on a commercial coordinate measuring machine (CMM). Both examples involve artifacts that serve as reference standards and contain features in the 100 μm to 500 μm range. The first application involves measuring the spacing between features, either holes or rectangular prisms, on a cylinder that is approximately the size of a finger. The artifact, referred to as the fingerprint target, serves as a standard for verifying the performance of fingerprint scanners. The second application involves measuring the volume of small three-dimensional features such as cylinders and rectangular prisms that rise from a plate. This artifact is referred to as the volume target in this paper; these targets serve as volume standards for manufacturers and users of solder paste inspection systems. In each case, the measurement challenges presented by these artifacts are discussed and the measurand, the measurement plan, error sources, and uncertainty budget are described.

  10. Machine Learning Approaches to Classification of Seafloor Features from High Resolution Sonar Data

    Science.gov (United States)

    Smith, D. G.; Ed, L.; Sofge, D.; Elmore, P. A.; Petry, F.

    2014-12-01

    Navigation charts provide topographic maps of the seafloor created from swaths of sonar data. Converting sonar data to a topographic map is a manual, labor-intensive process that can be greatly assisted by contextual information obtained from automated classification of geomorphological structures. Finding structures such as seamounts can be challenging, as there are no established rules that can be used for decision-making. Often times, it is a determination that is made by human expertise. A variety of feature metrics may be useful for this task and we use a large number of metrics relevant to the task of finding seamounts. We demonstrate this ability in locating seamounts by two related machine learning techniques. As well as achieving good accuracy in classification, the human-understandable set of metrics that are most important for the results are discussed.

  11. FEATURE RANKING BASED NESTED SUPPORT VECTOR MACHINE ENSEMBLE FOR MEDICAL IMAGE CLASSIFICATION.

    Science.gov (United States)

    Varol, Erdem; Gaonkar, Bilwaj; Erus, Guray; Schultz, Robert; Davatzikos, Christos

    2012-01-01

    This paper presents a method for classification of structural magnetic resonance images (MRI) of the brain. An ensemble of linear support vector machine classifiers (SVMs) is used for classifying a subject as either patient or normal control. Image voxels are first ranked based on the voxel wise t-statistics between the voxel intensity values and class labels. Then voxel subsets are selected based on the rank value using a forward feature selection scheme. Finally, an SVM classifier is trained on each subset of image voxels. The class label of a test subject is calculated by combining individual decisions of the SVM classifiers using a voting mechanism. The method is applied for classifying patients with neurological diseases such as Alzheimer's disease (AD) and autism spectrum disorder (ASD). The results on both datasets demonstrate superior performance as compared to two state of the art methods for medical image classification.
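
    A simplified sketch of the nested-subset ensemble idea described above: rank voxels by a two-sample t-statistic, train one linear SVM per nested voxel subset, and combine their decisions by a vote. The subset sizes, the voting rule, and the placeholder data are assumptions for illustration only.

      import numpy as np
      from scipy import stats
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(7)
      X = rng.normal(size=(80, 2000))           # placeholder voxel intensities (subjects x voxels)
      y = rng.integers(0, 2, size=80)           # patient vs control labels

      t, _ = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
      order = np.argsort(-np.abs(t))            # voxels ranked by |t|

      subset_sizes = [50, 100, 200, 400, 800]
      models = []
      for k in subset_sizes:                    # one SVM per nested voxel subset
          idx = order[:k]
          models.append((idx, LinearSVC(C=1.0, dual=False).fit(X[:, idx], y)))

      def predict(x_new):
          votes = [m.predict(x_new[idx].reshape(1, -1))[0] for idx, m in models]
          return int(np.round(np.mean(votes)))  # simple vote over the ensemble

      print("vote for first subject:", predict(X[0]))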

  12. Extending the features of RBMK refuelling machine simulator with a training tool based on virtual reality

    Energy Technology Data Exchange (ETDEWEB)

    Khoudiakov, M.; Slonimsky, V.; Mitrofanov, S. (and others)

    2004-07-01

    should include a training methodology, simulation models/malfunctions and VR-models to support the maintenance personnel. That work is to be based on the design and creation of a multi-machine computer complex, the development of software and information support (database), and the development of new and/or upgraded technology system models and training support methodology. The paper gives the background for developing the training system, and the features and structure of the system, in addition to the current status of the development process. The final system will be delivered to LNPP in November 2004. (Author)

  13. Pain Intensity Recognition Rates via Biopotential Feature Patterns with Support Vector Machines.

    Directory of Open Access Journals (Sweden)

    Sascha Gruss

    Full Text Available The clinically used methods of pain diagnosis do not allow for objective and robust measurement, and physicians must rely on the patient's report of the pain sensation. Verbal scales, visual analog scales (VAS) or numeric rating scales (NRS) count among the most common tools, which are restricted to patients with normal mental abilities. There also exist instruments for pain assessment in people with verbal and/or cognitive impairments and instruments for pain assessment in people who are sedated and automatically ventilated. However, all these diagnostic methods either have limited reliability and validity or are very time-consuming. In contrast, biopotentials can be automatically analyzed with machine learning algorithms to provide a surrogate measure of pain intensity. In this context, we created a database of biopotentials to advance an automated pain recognition system, determine its theoretical testing quality, and optimize its performance. Eighty-five participants were subjected to painful heat stimuli (baseline, pain threshold, two intermediate thresholds, and pain tolerance threshold) under controlled conditions, and the signals of electromyography, skin conductance level, and electrocardiography were collected. A total of 159 features were extracted from the mathematical groupings of amplitude, frequency, stationarity, entropy, linearity, variability, and similarity. We achieved classification rates of 90.94% for baseline vs. pain tolerance threshold and 79.29% for baseline vs. pain threshold. The most selected pain features stemmed from the amplitude and similarity groups and were derived from facial electromyography. The machine learning measurement of pain in patients could provide valuable information for a clinical team and thus support treatment assessment.

  14. Support vector machine model for diagnosing pneumoconiosis based on wavelet texture features of digital chest radiographs.

    Science.gov (United States)

    Zhu, Biyun; Chen, Hui; Chen, Budong; Xu, Yan; Zhang, Kuan

    2014-02-01

    This study aims to explore the classification ability of decision trees (DTs) and support vector machines (SVMs) to discriminate between the digital chest radiographs (DRs) of pneumoconiosis patients and control subjects. Twenty-eight wavelet-based energy texture features were calculated at the lung fields on DRs of 85 healthy controls and 40 patients with stage I and stage II pneumoconiosis. DTs with algorithm C5.0 and SVMs with four different kernels were trained by samples with two combinations of the texture features to classify a DR as of a healthy subject or of a patient with pneumoconiosis. All of the models were developed with fivefold cross-validation, and the final performances of each model were compared by the area under receiver operating characteristic (ROC) curve. For both SVM (with a radial basis function kernel) and DT (with algorithm C5.0), areas under ROC curves (AUCs) were 0.94 ± 0.02 and 0.86 ± 0.04 (P = 0.02) when using the full feature set and 0.95 ± 0.02 and 0.88 ± 0.04 (P = 0.05) when using the selected feature set, respectively. When built on the selected texture features, the SVM with a polynomial kernel showed a higher diagnostic performance with an AUC value of 0.97 ± 0.02 than SVMs with a linear kernel, a radial basis function kernel and a sigmoid kernel with AUC values of 0.96 ± 0.02 (P = 0.37), 0.95 ± 0.02 (P = 0.24), and 0.90 ± 0.03 (P = 0.01), respectively. The SVM model with a polynomial kernel built on the selected feature set showed the highest diagnostic performance among all tested models when using either all the wavelet texture features or the selected ones. The model has a good potential in diagnosing pneumoconiosis based on digital chest radiographs.

  15. Study of Machine-Learning Classifier and Feature Set Selection for Intent Classification of Korean Tweets about Food Safety

    Directory of Open Access Journals (Sweden)

    Yeom, Ha-Neul

    2014-09-01

    Full Text Available In recent years, several studies have proposed making use of the Twitter micro-blogging service to track various trends in online media and discussion. In this study, we specifically examine the use of Twitter to track discussions of food safety in the Korean language. Given the irregularity of keyword use in most tweets, we focus on optimistic machine-learning and feature set selection to classify collected tweets. We build the classifier model using Naive Bayes & Naive Bayes Multinomial, Support Vector Machine, and Decision Tree Algorithms, all of which show good performance. To select an optimum feature set, we construct a basic feature set as a standard for performance comparison, so that further test feature sets can be evaluated. Experiments show that precision and F-measure performance are best when using a Naive Bayes Multinomial classifier model with a test feature set defined by extracting Substantive, Predicate, Modifier, and Interjection parts of speech.

  16. Text and Language-Independent Speaker Recognition Using Suprasegmental Features and Support Vector Machines

    Science.gov (United States)

    Bajpai, Anvita; Pathangay, Vinod

    In this paper, presence of the speaker-specific suprasegmental information in the Linear Prediction (LP) residual signal is demonstrated. The LP residual signal is obtained after removing the predictable part of the speech signal. This information, if added to existing speaker recognition systems based on segmental and subsegmental features, can result in better performing combined system. The speaker-specific suprasegmental information can not only be perceived by listening to the residual, but can also be seen in the form of excitation peaks in the residual waveform. However, the challenge lies in capturing this information from the residual signal. Higher order correlations among samples of the residual are not known to be captured using standard signal processing and statistical techniques. The Hilbert envelope of residual is shown to further enhance the excitation peaks present in the residual signal. A speaker-specific pattern is also observed in the autocorrelation sequence of the Hilbert envelope, and further in the statistics of this autocorrelation sequence. This indicates the presence of the speaker-specific suprasegmental information in the residual signal. In this work, no distinction between voiced and unvoiced sounds is done for extracting these features. Support Vector Machine (SVM) is used to classify the patterns in the variance of the autocorrelation sequence for the speaker recognition task.

  17. Improved residue contact prediction using support vector machines and a large feature set

    Directory of Open Access Journals (Sweden)

    Baldi Pierre

    2007-04-01

    Full Text Available Abstract Background Predicting protein residue-residue contacts is an important 2D prediction task. It is useful for ab initio structure prediction and understanding protein folding. In spite of steady progress over the past decade, contact prediction still remains largely unsolved. Results Here we develop a new contact map predictor (SVMcon) that uses support vector machines to predict medium- and long-range contacts. SVMcon integrates profiles, secondary structure, relative solvent accessibility, contact potentials, and other useful features. On the same test data set, SVMcon's accuracy is 4% higher than the latest version of the CMAPpro contact map predictor. SVMcon recently participated in the seventh edition of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7) experiment and was evaluated along with seven other contact map predictors. SVMcon was ranked as one of the top predictors, yielding the second best coverage and accuracy for contacts with sequence separation >= 12 on 13 de novo domains. Conclusion We describe SVMcon, a new contact map predictor that uses SVMs and a large set of informative features. SVMcon yields good performance on medium- to long-range contact predictions and can be modularly incorporated into a structure prediction pipeline.

  18. Intelligent Video Object Classification Scheme using Offline Feature Extraction and Machine Learning based Approach

    Directory of Open Access Journals (Sweden)

    Chandra Mani Sharma

    2012-01-01

    Full Text Available Classification of objects in a video stream is important because of its applications in many emerging areas such as visual surveillance, content-based video retrieval and indexing. The task is challenging because video data is heavy and highly variable in nature, and it must be processed in real time. This paper presents a multiclass object classification technique using a machine learning approach. Haar-like features are used for training the classifier. The feature calculation is performed using an integral image representation, and we train the classifier offline using Stage-wise Additive Modeling using a Multiclass Exponential loss function (SAMME). The validity of the method has been verified through the implementation of a real-time human-car detector. Experimental results show that the proposed method can accurately classify objects in video into their respective classes. The proposed object classifier works well in outdoor environments in the presence of moderate lighting conditions and variable scene backgrounds. The proposed technique is compared with other object classification techniques based on various performance parameters.
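
    The integral-image trick that makes Haar-like features cheap to evaluate is easy to show directly: any rectangle sum becomes four table look-ups. The two-rectangle feature below is just one of the many Haar-like patterns a detector of the kind described would use.

      import numpy as np

      def integral_image(img):
          """Summed-area table with a zero border so rectangle sums need no edge cases."""
          ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
          ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
          return ii

      def rect_sum(ii, r, c, h, w):
          """Sum of pixels in the h x w rectangle whose top-left corner is (r, c)."""
          return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

      def haar_two_rect_vertical(ii, r, c, h, w):
          """Left half minus right half: responds to vertical edges."""
          half = w // 2
          return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

      img = np.arange(36, dtype=np.float64).reshape(6, 6)
      ii = integral_image(img)
      assert rect_sum(ii, 1, 1, 3, 3) == img[1:4, 1:4].sum()   # sanity check
      print(haar_two_rect_vertical(ii, 0, 0, 6, 6))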

  19. A Generalizable Brain-Computer Interface (BCI) Using Machine Learning for Feature Discovery.

    Directory of Open Access Journals (Sweden)

    Ewan S Nurse

    Full Text Available This work describes a generalized method for classifying motor-related neural signals for a brain-computer interface (BCI), based on a stochastic machine learning method. The method differs from the various feature extraction and selection techniques employed in many other BCI systems. The classifier does not use extensive a priori information, resulting in reduced reliance on highly specific domain knowledge. Instead of pre-defining features, the time-domain signal is input to a population of multi-layer perceptrons (MLPs) in order to perform a stochastic search for the best structure. The results showed that the average performance of the new algorithm outperformed other published methods using the Berlin BCI IV (2008) competition dataset and was comparable to the best results in the Berlin BCI II (2002-3) competition dataset. The new method was also applied to electroencephalography (EEG) data recorded from five subjects undertaking a hand squeeze task and demonstrated high levels of accuracy, with a mean classification accuracy of 78.9% after five-fold cross-validation. Our new approach has been shown to give accurate results across different motor tasks and signal types as well as between subjects.

  20. Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features

    Science.gov (United States)

    Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate

    2017-08-01

    Objective. Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (IC) as artifact or EEG is not fully automated at present. Approach. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. Main results. We compared the performance of our classifiers with the visual classification results given by experts. The best result, with an accuracy rate of 95%, was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Significance. Compared with the existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or number of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.

  1. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part II—Experimental Implementation

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-10-01

    Full Text Available Coordinate measuring machines (CMM) are the main instruments of measurement in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out based on direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to the CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features is accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table, and roundness of a precision glass hemisphere are presented under a setup of repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as well as its practical use and its capability to contribute to the improvement of current standard CMM measuring capabilities.

  2. Histogram of Intensity Feature Extraction for Automatic Plastic Bottle Recycling System Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Suzaimah Ramli

    2008-01-01

    Full Text Available Currently, many recycling activities adopt manual sorting for plastic recycling that relies on plant personnel who visually identify and pick plastic bottles as they travel along the conveyor belt. These bottles are then sorted into their respective containers. Manual sorting may not be a suitable option for recycling facilities with high throughput. It has also been noted that the high turnover among sorting line workers has caused difficulties in achieving consistency in the plastic separation process. As a result, an intelligent system for automated sorting is greatly needed to replace the manual sorting system. The core components of machine vision for this intelligent sorting system are image recognition and classification. In this research, the overall plastic bottle sorting system is described. Additionally, the feature extraction algorithm used is discussed in detail, since it is the core component of the overall system that determines the success rate. The performance of the proposed feature extraction was evaluated in terms of classification accuracy, and the results obtained showed an accuracy of more than 80%.
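
    A minimal version of the histogram-of-intensity idea: an n-bin grey-level histogram, normalised to sum to one, used as the feature vector for a standard classifier. The bin count, the nearest-neighbour classifier, and the placeholder images are assumptions, not details taken from the paper.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def intensity_histogram(gray_img, bins=32):
          hist, _ = np.histogram(gray_img, bins=bins, range=(0, 256))
          return hist / hist.sum()

      rng = np.random.default_rng(5)
      # Placeholder grey-scale bottle images (n_images x 64 x 64) and their plastic-type labels.
      images = rng.integers(0, 256, size=(120, 64, 64))
      labels = rng.integers(0, 3, size=120)

      X = np.array([intensity_histogram(img) for img in images])
      clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
      print("training accuracy:", clf.score(X, labels))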

  3. Ischemia episode detection in ECG using kernel density estimation, support vector machine and feature selection

    Directory of Open Access Journals (Sweden)

    Park Jinho

    2012-06-01

    Full Text Available Abstract Background Myocardial ischemia can develop into more serious diseases. Detecting the ischemic syndrome in the electrocardiogram (ECG) early, accurately and automatically can prevent it from developing into a catastrophic disease. To this end, we propose a new method, which employs wavelets and simple feature selection. Methods For training and testing, the European ST-T database is used, which comprises 367 ischemic ST episodes in 90 records. We first remove baseline wandering and detect the time positions of QRS complexes with a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features which can be used to differentiate ST episodes from normal: (1) the area between the QRS offset and T-peak points, (2) the normalized and signed sum from the QRS offset to the effective zero voltage point, and (3) the slope from the QRS onset to the offset point. We average the feature values over five successive beats to reduce the effect of outliers. Finally we apply classifiers to those features. Results We evaluated the algorithm with kernel density estimation (KDE) and support vector machine (SVM) methods. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively. The KDE classifier detects 349 ischemic ST episodes out of the total of 367 ST episodes. Sensitivity and specificity of SVM were 0.941 and 0.923, respectively. The SVM classifier detects 355 ischemic ST episodes. Conclusions We proposed a new method for detecting ischemia in ECG. It contains signal processing techniques for removing baseline wandering and detecting the time positions of QRS complexes by discrete wavelet transform, and explicit feature extraction from the morphology of ECG waveforms. It was shown that the number of selected features was sufficient to discriminate ischemic ST episodes from normal ones. We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical

  4. Computer-Aided Diagnosis for Breast Ultrasound Using Computerized BI-RADS Features and Machine Learning Methods.

    Science.gov (United States)

    Shan, Juan; Alam, S Kaisar; Garra, Brian; Zhang, Yingtao; Ahmed, Tahira

    2016-04-01

    This work identifies effective computable features from the Breast Imaging Reporting and Data System (BI-RADS), to develop a computer-aided diagnosis (CAD) system for breast ultrasound. Computerized features corresponding to ultrasound BI-RADs categories were designed and tested using a database of 283 pathology-proven benign and malignant lesions. Features were selected based on classification performance using a "bottom-up" approach for different machine learning methods, including decision tree, artificial neural network, random forest and support vector machine. Using 10-fold cross-validation on the database of 283 cases, the highest area under the receiver operating characteristic (ROC) curve (AUC) was 0.84 from a support vector machine with 77.7% overall accuracy; the highest overall accuracy, 78.5%, was from a random forest with the AUC 0.83. Lesion margin and orientation were optimum features common to all of the different machine learning methods. These features can be used in CAD systems to help distinguish benign from worrisome lesions.

  5. Depth-based human fall detection via shape features and improved extreme learning machine.

    Science.gov (United States)

    Ma, Xin; Wang, Haibo; Xue, Bingxia; Zhou, Mingang; Ji, Bing; Li, Yibin

    2014-11-01

    Falls are one of the major causes of injury to elderly people. Using wearable devices for fall detection has a high cost and may cause inconvenience in the daily lives of the elderly. In this paper, we present an automated fall detection approach that requires only a low-cost depth camera. Our approach combines two computer vision techniques: shape-based fall characterization and a learning-based classifier to distinguish falls from other daily actions. Given a fall video clip, we extract curvature scale space (CSS) features of human silhouettes at each frame and represent the action by a bag of CSS words (BoCSS). Then, we utilize the extreme learning machine (ELM) classifier to identify the BoCSS representation of a fall from those of other actions. In order to eliminate the sensitivity of the ELM to its hyperparameters, we present a variable-length particle swarm optimization algorithm to optimize the number of hidden neurons, the corresponding input weights, and the biases of the ELM. Using a low-cost Kinect depth camera, we build an action dataset that consists of six types of actions (falling, bending, sitting, squatting, walking, and lying) from ten subjects. Experiments with the dataset show that our approach can achieve up to 91.15% sensitivity, 77.14% specificity, and 86.83% accuracy. On a public dataset, our approach performs comparably to state-of-the-art fall detection methods that need multiple cameras.
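
    For orientation, the sketch below is a bare-bones extreme learning machine of the kind being tuned above: random hidden-layer weights, a sigmoid activation, and output weights solved by least squares. The variable-length particle swarm optimization of the hidden layer described in the abstract is not reproduced, and the bag-of-CSS-words features are replaced by random placeholders.

      import numpy as np

      class ELMClassifier:
          def __init__(self, n_hidden=100, seed=0):
              self.n_hidden = n_hidden
              self.rng = np.random.default_rng(seed)

          def _hidden(self, X):
              return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

          def fit(self, X, y):
              self.classes_ = np.unique(y)
              T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
              self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
              self.b = self.rng.normal(size=self.n_hidden)
              H = self._hidden(X)
              self.beta = np.linalg.pinv(H) @ T                     # least-squares output weights
              return self

          def predict(self, X):
              return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 40))          # placeholder bag-of-CSS-words features
      y = rng.integers(0, 2, size=200)        # fall vs non-fall labels
      print("training accuracy:", (ELMClassifier(64).fit(X, y).predict(X) == y).mean())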

  6. Prognosis Essay Scoring and Article Relevancy Using Multi-Text Features and Machine Learning

    Directory of Open Access Journals (Sweden)

    Arif Mehmood

    2017-01-01

    Full Text Available This study develops a model for essay scoring and article relevancy. Essay scoring is a costly process when we consider the time spent by an evaluator, and it may lead to inequalities in the effort various evaluators make to apply the same evaluation criteria. Bibliometric research instead uses evaluation criteria to find the relevancy of articles. Researchers mostly face relevancy issues while searching for articles and therefore classify articles manually; however, manual classification is burdensome due to the time needed for evaluation. The proposed model performs automatic essay evaluation using multi-text features and ensemble machine learning. The proposed method is implemented on two data sets: a Kaggle short-answer data set for essay scoring that includes four ranges of disciplines (Science, Biology, English, and English Language Arts), and a bibliometric data set having IoT (Internet of Things) and non-IoT classes. The efficacy of the model is measured against the Tandalla and AutoP approaches using Cohen's kappa. The model achieves kappa values of 0.80 and 0.83 for the first and second data sets, respectively. These kappa values show that the proposed model performs better than the earlier approaches.

  7. Machine-learning-based diagnosis of schizophrenia using combined sensor-level and source-level EEG features.

    Science.gov (United States)

    Shim, Miseon; Hwang, Han-Jeong; Kim, Do-Won; Lee, Seung-Hwan; Im, Chang-Hwan

    2016-10-01

    Recently, an increasing number of researchers have endeavored to develop practical tools for diagnosing patients with schizophrenia using machine learning techniques applied to EEG biomarkers. Although a number of studies showed that source-level EEG features can potentially be applied to the differential diagnosis of schizophrenia, most studies have used only sensor-level EEG features such as ERP peak amplitude and power spectrum for machine learning-based diagnosis of schizophrenia. In this study, we used both sensor-level and source-level features extracted from EEG signals recorded during an auditory oddball task for the classification of patients with schizophrenia and healthy controls. EEG signals were recorded from 34 patients with schizophrenia and 34 healthy controls while each subject was asked to attend to oddball tones. Our results demonstrated higher classification accuracy when source-level features were used together with sensor-level features, compared to when only sensor-level features were used. In addition, the selected sensor-level features were mostly found in the frontal area, and the selected source-level features were mostly extracted from the temporal area, which coincide well with the well-known pathological region of cognitive processing in patients with schizophrenia. Our results suggest that our approach would be a promising tool for the computer-aided diagnosis of schizophrenia.

  8. Artificial immune system based on adaptive clonal selection for feature selection and parameters optimisation of support vector machines

    Science.gov (United States)

    Sadat Hashemipour, Maryam; Soleimani, Seyed Ali

    2016-01-01

    An artificial immune system (AIS) algorithm based on the clonal selection method can be defined as a soft computing method, inspired by the theoretical immune system, for solving science and engineering problems. The support vector machine (SVM) is a popular pattern classification method with many diverse applications. Kernel parameter setting in the SVM training procedure, along with feature selection, significantly impacts the classification accuracy rate. In this study, an AIS based on an Adaptive Clonal Selection (AISACS) algorithm has been used to optimise the SVM parameters and the feature subset selection without degrading the SVM classification accuracy. Several public datasets of the University of California Irvine (UCI) machine learning repository are employed to calculate the classification accuracy rate in order to evaluate the AISACS approach, which is then compared with a grid search algorithm and a Genetic Algorithm (GA) approach. The experimental results show that the feature reduction rate and running time of the AISACS approach are better than those of the GA approach.

  9. Feature Based Machining Process Planning Modeling and Integration for Life Cycle Engineering

    Institute of Scientific and Technical Information of China (English)

    LIU Changyi

    2006-01-01

    Machining process data is the core of computer aided process planning (CAPP) application systems. It also provides essential content for product life cycle engineering (LCE). The character of CAPP that supports product LCE and virtual manufacturing is analyzed. The structure and content of machining process data concerning green manufacturing are also examined. A logic model of machining process data has been built based on an object-oriented approach using UML technology, together with a physical model of machining process data that utilizes XML technology. To realize the integration of design and process, an approach based on graph-based volume decomposition is proposed. To solve the problem of machining process generation, case-based reasoning and rule-based reasoning have been applied synthetically. Finally, the integration framework and the interfaces that deal with CAPP integration with CAD, CAM, PDM, and ERP are discussed.

  10. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation.

    Science.gov (United States)

    Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-08-04

    Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problems do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning. This can be done by using a pretrained convolutional neural network (CNN) developed for machine vision purposes for exercise classification effort. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was collected. The ability of the
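
    The data-preparation step described above can be sketched as follows: render each inertial-sensor window as an image and file it under its exercise label, ready for retraining a pretrained image network. The figure size, folder layout, and placeholder IMU windows are assumptions, and the actual retraining of an Inception-style network is framework-specific and omitted.

      import pathlib
      import numpy as np
      import matplotlib
      matplotlib.use("Agg")                     # render off-screen
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      out_root = pathlib.Path("exercise_plots")

      def save_window_as_image(window, label, index):
          """window: (n_samples, n_channels) accelerometer/gyroscope block."""
          fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)
          ax.plot(window)                       # one line per sensor channel
          ax.axis("off")
          target = out_root / label
          target.mkdir(parents=True, exist_ok=True)
          fig.savefig(target / f"{index:05d}.png", bbox_inches="tight")
          plt.close(fig)

      # Placeholder IMU windows: 5 s at 100 Hz, 6 channels (3 accel + 3 gyro).
      for i in range(3):
          save_window_as_image(rng.normal(size=(500, 6)), label="squat", index=i)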

  11. Diagnosis of Alzheimer's Disease Based on Structural MRI Images Using a Regularized Extreme Learning Machine and PCA Features

    Science.gov (United States)

    Lama, Ramesh Kumar; Gwak, Jeonghwan; Park, Jeong-Seon

    2017-01-01

    Alzheimer's disease (AD) is a progressive, neurodegenerative brain disorder that attacks neurotransmitters, brain cells, and nerves, affecting brain functions, memory, and behaviors and finally causing dementia in elderly people. Despite its significance, there is currently no cure for it. However, there are medicines available on prescription that can help delay the progress of the condition. Thus, early diagnosis of AD is essential for patient care and relevant research. Major challenges in the proper diagnosis of AD using existing classification schemes are the availability of a small number of training samples and the large number of possible feature representations. In this paper, we present and compare AD diagnosis approaches using structural magnetic resonance (sMR) images to discriminate AD, mild cognitive impairment (MCI), and healthy control (HC) subjects using a support vector machine (SVM), an import vector machine (IVM), and a regularized extreme learning machine (RELM). The greedy score-based feature selection technique is employed to select important feature vectors. In addition, a kernel-based discriminative approach is adopted to deal with complex data distributions. We compare the performance of these classifiers for volumetric sMR image data from the Alzheimer's disease neuroimaging initiative (ADNI) datasets. Experiments on the ADNI datasets showed that RELM with the feature selection approach can significantly improve the classification accuracy of AD versus MCI and HC subjects.

  12. Diagnosis of Alzheimer’s Disease Based on Structural MRI Images Using a Regularized Extreme Learning Machine and PCA Features

    Directory of Open Access Journals (Sweden)

    Ramesh Kumar Lama

    2017-01-01

    Full Text Available Alzheimer’s disease (AD) is a progressive, neurodegenerative brain disorder that attacks neurotransmitters, brain cells, and nerves, affecting brain functions, memory, and behaviors and finally causing dementia in elderly people. Despite its significance, there is currently no cure for it. However, there are medicines available on prescription that can help delay the progress of the condition. Thus, early diagnosis of AD is essential for patient care and relevant research. Major challenges in the proper diagnosis of AD using existing classification schemes are the availability of a small number of training samples and the large number of possible feature representations. In this paper, we present and compare AD diagnosis approaches using structural magnetic resonance (sMR) images to discriminate AD, mild cognitive impairment (MCI), and healthy control (HC) subjects using a support vector machine (SVM), an import vector machine (IVM), and a regularized extreme learning machine (RELM). The greedy score-based feature selection technique is employed to select important feature vectors. In addition, a kernel-based discriminative approach is adopted to deal with complex data distributions. We compare the performance of these classifiers for volumetric sMR image data from Alzheimer’s disease neuroimaging initiative (ADNI) datasets. Experiments on the ADNI datasets showed that RELM with the feature selection approach can significantly improve the classification accuracy of AD versus MCI and HC subjects.

  13. Automated Classification of L/R Hand Movement EEG Signals using Advanced Feature Extraction and Machine Learning

    Directory of Open Access Journals (Sweden)

    Mohammad H. Alomari

    2013-07-01

    Full Text Available In this paper, we propose an automated computer platform for classifying electroencephalography (EEG) signals associated with left and right hand movements, using a hybrid system that combines advanced feature extraction techniques and machine learning algorithms. EEG represents brain activity through the electrical voltage fluctuations along the scalp, and a Brain-Computer Interface (BCI) is a device that enables the use of the brain's neural activity to communicate with others or to control machines, artificial limbs, or robots without direct physical movement. In our research work, we aspired to find the best feature extraction method that enables the differentiation between left and right executed fist movements through various classification algorithms. The EEG dataset used in this research was created and contributed to PhysioNet by the developers of the BCI2000 instrumentation system. Data were preprocessed using the EEGLAB MATLAB toolbox, and artifact removal was done using AAR. Data were epoched on the basis of Event-Related (De)Synchronization (ERD/ERS) and movement-related cortical potential (MRCP) features. Mu/beta rhythms were isolated for the ERD/ERS analysis and delta rhythms were isolated for the MRCP analysis. The Independent Component Analysis (ICA) spatial filter was applied on related channels for noise reduction and isolation of both artifactually and neutrally generated EEG sources. The final feature vector included the ERD, ERS, and MRCP features in addition to the mean, power and energy of the activations of the resulting Independent Components (ICs) of the epoched feature datasets. The datasets were input into two machine-learning algorithms: Neural Networks (NNs) and Support Vector Machines (SVMs). Intensive experiments were carried out, and optimum classification performances of 89.8 and 97.1 were obtained using NN and SVM, respectively. This research shows that this method of feature extraction

  14. Control-group feature normalization for multivariate pattern analysis of structural MRI data using the support vector machine.

    Science.gov (United States)

    Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T

    2016-05-15

    Normalization of feature vector values is a common practice in machine learning. Generally, each feature value is standardized to the unit hypercube or by normalizing to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases.
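
    The control-based normalisation amounts to a few lines: estimate per-feature means and standard deviations on the control subjects only and apply them to everyone, instead of standardising with the pooled statistics of patients plus controls. The sketch below uses random placeholder data.

      import numpy as np

      def control_group_scale(X, is_control):
          """Z-score every feature using statistics from the control subjects only."""
          mu = X[is_control].mean(axis=0)
          sd = X[is_control].std(axis=0, ddof=1)
          sd[sd == 0] = 1.0                     # guard against constant features
          return (X - mu) / sd

      rng = np.random.default_rng(2)
      X = rng.normal(size=(100, 500))           # placeholder voxel/ROI features
      is_control = rng.random(100) < 0.5        # boolean control-group indicator
      X_scaled = control_group_scale(X, is_control)
      print(X_scaled[is_control].mean(axis=0)[:3])   # approximately zero for controls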

  15. A Comparison of Algorithms for the Small Sample Size Problem

    Institute of Scientific and Technical Information of China (English)

    张先荣; 范丽亚

    2011-01-01

    Combining the ideas of uncorrelated linear discriminant analysis (ULDA) and null space linear discriminant analysis (NLDA), six algorithms for handling the small sample size problem are proposed, and experiments demonstrate the classification effectiveness of these six algorithms.

  16. Multimodal Discrimination of Schizophrenia Using Hybrid Weighted Feature Concatenation of Brain Functional Connectivity and Anatomical Features with an Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Muhammad Naveed Iqbal Qureshi

    2017-09-01

    Full Text Available Multimodal features of structural and functional magnetic resonance imaging (MRI) of the human brain can assist in the diagnosis of schizophrenia. We performed a classification study on age-, sex-, and handedness-matched subjects. The dataset we used is publicly available from the Center for Biomedical Research Excellence (COBRE) and consists of two groups: patients with schizophrenia and healthy controls. We performed an independent component analysis and calculated globally averaged functional connectivity-based features from the resting-state functional MRI data for all cortical and subcortical anatomical parcellations. Cortical thickness along with its standard deviation, surface area, volume, curvature, white matter volume, and intensity measures from the cortical parcellation, as well as volume and intensity from the subcortical parcellation and the overall cortex volume, were extracted from the structural MRI data. A novel hybrid weighted feature concatenation method was used to acquire a maximal accuracy of 99.29% (P < 0.0001), which preserves high discriminatory power through the weight of each individual feature type. The classification was performed by an extreme learning machine, and its efficiency was compared to linear and non-linear (radial basis function) support vector machines, linear discriminant analysis, and random forest bagged tree ensemble algorithms. This article reports the predictive accuracy of both unimodal and multimodal features after 10-by-10-fold nested cross-validation. A permutation test followed the classification experiment to assess the statistical significance of the classification results. It was concluded that, from a clinical perspective, this feature concatenation approach may assist clinicians in schizophrenia diagnosis.

  17. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    Science.gov (United States)

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953

  18. Applying machine learning and image feature extraction techniques to the problem of cerebral aneurysm rupture

    Directory of Open Access Journals (Sweden)

    Steren Chabert

    2017-01-01

    Full Text Available Cerebral aneurysm is a cerebrovascular disorder characterized by a bulging in a weak area of the wall of an artery that supplies blood to the brain. It is important to understand the mechanisms leading to the formation of aneurysms, to their growth and, more importantly, to their rupture. The purpose of this study is to examine the impact on aneurysm rupture of the combination of different parameters, instead of focusing on only one factor at a time as is frequently done in the literature, using machine learning and feature extraction techniques. This question is relevant in the context of the complex decision physicians face when choosing which therapy to apply, since each intervention carries its own risks and requires a complex ensemble of resources (human resources, OR, etc.) in hospitals that are always under a very high workload. The project arose within our current working team, composed of an interventional neuroradiologist, a radiologic technologist, informatics engineers and biomedical engineers from the Valparaiso public hospital, Hospital Carlos van Buren, and from Universidad de Valparaíso – Facultad de Ingeniería and Facultad de Medicina. This team has been working together over the last few years and is now participating in the implementation of an “interdisciplinary platform for innovation in health”, as part of a larger project led by Universidad de Valparaiso (PMI UVA1402). The present proposal arises from the observation in the current literature that existing indicators, whether based on the morphological description of the aneurysm, on the characterization of biomechanical factors, or on other criteria, were shown not to provide sufficient information in order

  19. Automatic Detection of Diabetes Diagnosis using Feature Weighted Support Vector Machines based on Mutual Information and Modified Cuckoo Search

    CERN Document Server

    Giveki, Davar; Bahmanyar, GholamReza; Khademian, Younes

    2012-01-01

    Diabetes is a major health problem in both developing and developed countries and its incidence is rising dramatically. In this study, we investigate a novel automatic approach to diagnose diabetes based on Feature Weighted Support Vector Machines (FW-SVMs) and Modified Cuckoo Search (MCS). The proposed model consists of three stages. Firstly, PCA is applied to select an optimal subset of features out of the set of all features. Secondly, Mutual Information is employed to construct the FWSVM by weighting different features based on their degree of importance. Finally, since parameter selection plays a vital role in the classification accuracy of SVMs, MCS is applied to select the best parameter values. The proposed MI-MCS-FWSVM method obtains 93.58% accuracy on the UCI dataset. The experimental results demonstrate that our method outperforms the previous methods by not only giving more accurate results but also significantly speeding up the classification procedure.
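
    As a loose illustration of the feature-weighting idea (not the exact MI-MCS-FWSVM pipeline), the sketch below scales each input feature by its estimated mutual information with the class label before fitting an RBF SVM; the stand-in dataset, scaling scheme, and hyperparameters are assumptions.

```python
from sklearn.datasets import load_breast_cancer   # stand-in dataset, not the diabetes data
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Weight each feature by its mutual information with the label.
# (In practice the weights should be estimated inside the CV folds to avoid leakage.)
mi = mutual_info_classif(X, y, random_state=0)
X_weighted = X * (mi / mi.max())

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print(cross_val_score(clf, X_weighted, y, cv=5).mean())
```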

  20. Using machine learning to classify image features from canine pelvic radiographs

    DEFF Research Database (Denmark)

    McEvoy, Fintan; Amigo Rubio, Jose Manuel

    2013-01-01

    As the number of images per study increases in the field of veterinary radiology, there is a growing need for computer-assisted diagnosis techniques. The purpose of this study was to evaluate two machine learning statistical models for automatically identifying image regions that contain the canine...

  1. Multi-script handwritten character recognition : Using feature descriptors and machine learning

    NARCIS (Netherlands)

    Surinta, Olarik

    2016-01-01

    Handwritten character recognition plays an important role in transforming raw visual image data obtained from handwritten documents using for example scanners to a format which is understandable by a computer. It is an important application in the field of pattern recognition, machine learning and a

  3. Compiling scheme using abstract state machines

    OpenAIRE

    2003-01-01

    The project investigates the use of Abstract State Machines in the process of computer program compilation. Compilation is the production of machine code from a source program written in a high-level language, and a compiler is a program written for that purpose. Machine code is the computer-readable representation of sequences of computer instructions. An Abstract State Machine (ASM) is a notional computing machine, developed by Yuri Gurevich, for accurately and easily representing the semantics of...

  4. Improving model predictions for RNA interference activities that use support vector machine regression by combining and filtering features

    Directory of Open Access Journals (Sweden)

    Peek Andrew S

    2007-06-01

    Full Text Available Abstract Background RNA interference (RNAi) is a naturally occurring phenomenon that results in the suppression of a target RNA sequence utilizing a variety of possible methods and pathways. To dissect the factors that result in effective siRNA sequences, a regression kernel Support Vector Machine (SVM) approach was used to quantitatively model RNA interference activities. Results Eight overall feature mapping methods were compared in their abilities to build SVM regression models that predict published siRNA activities. The primary factors in predictive SVM models are position-specific nucleotide compositions. The secondary factors are position-independent sequence motifs (N-grams) and guide strand to passenger strand sequence thermodynamics. Finally, the factors that are least contributory but are still predictive of efficacy are measures of intramolecular guide strand secondary structure and target strand secondary structure. Of these, the site of the 5'-most base of the guide strand is the most informative. Conclusion The capacity of specific feature mapping methods and their ability to build predictive models of RNAi activity suggest a relative biological importance of these features. Some feature mapping methods are more informative in building predictive models, and overall t-test filtering provides a method to remove some noisy features or make comparisons among datasets. Together, these features can yield predictive SVM regression models with increased agreement between predicted and observed activities, both within datasets by cross validation and between independently collected RNAi activity datasets. Feature filtering to remove features should be approached carefully, in that it is possible to reduce the feature set size without substantially weakening the predictive models, but the features retained in the candidate models become increasingly distinct. Software to perform feature prediction and SVM training and testing on nucleic acid

  5. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features.

    Science.gov (United States)

    Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-07-18

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of various machine learning methods in differentiating low-grade gliomas (LGGs) from high-grade gliomas (HGGs), as well as WHO grade II, III and IV gliomas, based on multi-parametric MRI images was performed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influence of parameter selection on classification performance was investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classification accuracies of 0.945 for LGG versus HGG and 0.961 for grade II, III and IV gliomas were achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classification accuracies. Besides, the performance of the LibSVM, SMO and IBk classifiers was influenced by key parameters such as kernel type, c, gamma, K, etc. SVM is a promising tool in developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.
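
    A minimal sketch of recursive feature elimination wrapped around a linear SVM, of the kind referenced in this record, is shown below; the synthetic data, the number of retained features, and the use of scikit-learn are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for histogram/texture attributes of 120 patients.
X, y = make_classification(n_samples=120, n_features=60, n_informative=10,
                           random_state=0)

# RFE repeatedly drops the features with the smallest |coef_| of a linear SVM.
model = make_pipeline(
    StandardScaler(),
    RFE(estimator=SVC(kernel="linear", C=1.0), n_features_to_select=10),
    SVC(kernel="linear", C=1.0),
)
print(cross_val_score(model, X, y, cv=5).mean())
```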

  6. Using Trajectory Clusters to Define the Most Relevant Features for Transient Stability Prediction Based on Machine Learning Method

    Directory of Open Access Journals (Sweden)

    Luyu Ji

    2016-11-01

    Full Text Available To achieve rapid real-time transient stability prediction, a power system transient stability prediction method based on the extraction of the post-fault trajectory cluster features of generators is proposed. This approach is conducted using data-mining techniques and support vector machine (SVM) models. First, the post-fault rotor angles and generator terminal voltage magnitudes are considered as the input vectors. Second, we construct a high-confidence dataset by extracting the 27 trajectory cluster features obtained from the chosen databases. Then, by applying a filter–wrapper algorithm for feature selection, we obtain the final feature set composed of the eight most relevant features for transient stability prediction, called the global trajectory clusters feature subset (GTCFS), which are validated by receiver operating characteristic (ROC) analysis. Comprehensive simulations are conducted on a New England 39-bus system under various operating conditions, load levels and topologies, and the transient stability predicting capability of the SVM model based on the GTCFS is extensively tested. The experimental results show that the selected GTCFS features improve the prediction accuracy with high computational efficiency. The proposed method has distinct advantages for transient stability prediction when faced with incomplete Wide Area Measurement System (WAMS) information, unknown operating conditions and unknown topologies and significantly improves the robustness of the transient stability prediction system.

  7. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Support vector machine-based feature extractor for L/H transitions in JETa)

    Science.gov (United States)

    González, S.; Vega, J.; Murari, A.; Pereira, A.; Ramírez, J. M.; Dormido-Canto, S.; Jet-Efda Contributors

    2010-10-01

    Support vector machines (SVM) are machine learning tools originally developed in the field of artificial intelligence to perform both classification and regression. In this paper, we show how SVM can be used to determine the most relevant quantities to characterize the confinement transition from low to high confinement regimes in tokamak plasmas. A set of 27 signals is used as a starting point. The signals are discarded one by one until an optimal number of relevant waveforms is reached, which is the best tradeoff between keeping a limited number of quantities and not losing essential information. The method has been applied to a database of 749 JET discharges, and an additional database of 150 JET discharges has been used to test the results obtained.

  9. [Readability of surgical informed consent in Spain].

    Science.gov (United States)

    San Norberto, Enrique María; Gómez-Alonso, Daniel; Trigueros, José M; Quiroga, Jorge; Gualis, Javier; Vaquero, Carlos

    2014-03-01

    To assess the readability of informed consent documents (IC) of the different national surgical societies. During January 2012 we collected 504 IC protocols of different specialties. To calculate readability parameters the following criteria were assessed: number of words, syllables and phrases, syllables/word and word/phrase averages, word correlation index, Flesch-Szigriszt index, Fernández-Huerta index, Inflesz scale degree and the Gunning-Fog index. The mean Flesch-Szigriszt index was 50.65 ± 6.72, so readability is considered normal. There are significant differences between specialties such as Urology (43.00 ± 4.17) and Angiology and Vascular Surgery (63.00 ± 3.26, P<.001). No IC would be appropriate for adult readability according to the Fernández-Huerta index (total mean 55.77 ± 6.57); the IC of Angiology and Vascular Surgery were the closest ones (67.85 ± 3.20). Considering the Inflesz scale degree (total mean of 2.84 ± 3.23), IC can be described as «somewhat difficult». There are significant differences between the IC of Angiology and Vascular Surgery (3.23 ± 0.47), which could be qualified as normal, or Cardiovascular Surgery (2.79 ± 0.43) as «nearly normal readability»; and others such as Urology (1.70 ± 0.46, P<.001) and Thoracic Surgery (1.90 ± 0.30, P<.001), with a readability between «very» and «somewhat» difficult. The Gunning-Fog indexes are far from the readability suitable for a general audience (total mean of 26.29 ± 10.89). IC developed by scientific societies of different surgical specialties do not have an adequate readability for patients. We recommend the use of readability indexes during the writing of these consent forms. Copyright © 2012 AEC. Published by Elsevier España. All rights reserved.
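
    For reference, a small sketch of how a Flesch-Szigriszt style score of the kind used in this study might be computed is given below; the syllable counter is a crude approximation and the formula is the commonly cited form (206.835 - 62.3 * syllables/words - words/sentences), so treat it as illustrative rather than as the exact implementation used in the study.

```python
import re

VOWELS = "aeiouáéíóúü"

def count_syllables(word):
    """Very rough Spanish syllable count: each run of vowels counts as one syllable."""
    return max(1, len(re.findall(f"[{VOWELS}]+", word.lower())))

def flesch_szigriszt(text):
    """Commonly cited Flesch-Szigriszt perspicuity index for Spanish text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[\wáéíóúüñ]+", text.lower())
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 62.3 * (syllables / len(words)) - (len(words) / len(sentences))

consent = ("El procedimiento quirúrgico conlleva riesgos. "
           "Usted puede retirar su consentimiento en cualquier momento.")
print(round(flesch_szigriszt(consent), 1))
```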

  10. A NOVEL FEATURE SET FOR RECOGNITION OF SIMILAR SHAPED HANDWRITTEN HINDI CHARACTERS USING MACHINE LEARNING

    OpenAIRE

    Sheetal Dabra; Sunil Agrawal; Rama Krishna Challa

    2011-01-01

    The growing need of handwritten Hindi character recognition in Indian offices such as passport, railway etc, has made it a vital area of research. Similar shaped characters are more prone to misclassification. In this paper four Machine Learning (ML) algorithms namely Bayesian Network, Radial Basis Function Network (RBFN), Multilayer Perceptron (MLP), and C4.5 Decision Tree are used for recognition of Similar Shaped Handwritten Hindi Characters (SSHHC) and their performance is ...

  11. Brain cells in the avian 'prefrontal cortex' code for features of slot-machine-like gambling.

    Directory of Open Access Journals (Sweden)

    Damian Scarf

    Full Text Available Slot machines are the most common and addictive form of gambling. In the current study, we recorded from single neurons in the 'prefrontal cortex' of pigeons while they played a slot-machine-like task. We identified four categories of neurons that coded for different aspects of our slot-machine-like task. Reward-Proximity neurons showed a linear increase in activity as the opportunity for a reward drew near. I-Won neurons fired only when the fourth stimulus of a winning (four-of-a-kind) combination was displayed. I-Lost neurons changed their firing rate at the presentation of the first nonidentical stimulus, that is, when it was apparent that no reward was forthcoming. Finally, Near-Miss neurons also changed their activity the moment it was recognized that a reward was no longer available, but more importantly, the activity level was related to whether the trial contained one, two, or three identical stimuli prior to the display of the nonidentical stimulus. These findings not only add to recent neurophysiological research employing simulated gambling paradigms, but also add to research addressing the functional correspondence between the avian NCL and primate PFC.

  12. Integrated Features by Administering the Support Vector Machine (SVM) of Translational Initiation Sites in Alternative Polymorphic Context

    Directory of Open Access Journals (Sweden)

    Nurul Arneida Husin

    2012-04-01

    Full Text Available Many algorithms and methods have been proposed for classification problems in bioinformatics. In this study, a discriminative approach, in particular support vector machines (SVM), is employed to recognize the studied TIS patterns. The discriminative approach is used to learn discriminant functions from samples that have been labelled as positive or negative. After learning, the discriminant functions are employed to decide whether a new sample is true or false. Here, support vector machines (SVM) are employed to recognize the patterns of the studied translational initiation sites in an alternative weak context. The method has been optimized with the best parameters selected: c = 100, E = 10^-6 and ex = 2 for the nonlinear kernel function. Results show that with the top 5 features and a nonlinear kernel, the best prediction accuracy achieved is 95.8%. The J48 algorithm is applied for comparison with SVM using the top 15 features, and the results show a good prediction accuracy of 95.8%. This indicates that the top 5 features selected by the IGR method and used by SVM are sufficient for the prediction of TIS in weak contexts.

  13. Understanding and Writing G & M Code for CNC Machines

    Science.gov (United States)

    Loveland, Thomas

    2012-01-01

    In modern CAD and CAM manufacturing companies, engineers design parts for machines and consumable goods. Many of these parts are cut on CNC machines. Whether using a CNC lathe, milling machine, or router, the ideas and designs of engineers must be translated into a machine-readable form called G & M Code that can be used to cut parts to precise…

  15. Computer-aided classification of Alzheimer's disease based on support vector machine with combination of cerebral image features in MRI

    Science.gov (United States)

    Jongkreangkrai, C.; Vichianin, Y.; Tocharoenchai, C.; Arimura, H.; Alzheimer's Disease Neuroimaging Initiative

    2016-03-01

    Several studies have differentiated Alzheimer's disease (AD) using cerebral image features derived from MR brain images. In this study, we were interested in combining hippocampus and amygdala volumes and entorhinal cortex thickness to improve the performance of AD differentiation. Thus, our objective was to investigate the useful features obtained from MRI for classification of AD patients using a support vector machine (SVM). T1-weighted MR brain images of 100 AD patients and 100 normal subjects were processed using FreeSurfer software to measure hippocampus and amygdala volumes and entorhinal cortex thicknesses in both brain hemispheres. Relative volumes of the hippocampus and amygdala were calculated to correct for variation in individual head size. SVM was employed with five combinations of features (H: hippocampus relative volumes, A: amygdala relative volumes, E: entorhinal cortex thicknesses, HA: hippocampus and amygdala relative volumes and ALL: all features). Receiver operating characteristic (ROC) analysis was used to evaluate the method. AUC values of the five combinations were 0.8575 (H), 0.8374 (A), 0.8422 (E), 0.8631 (HA) and 0.8906 (ALL). Although “ALL” provided the highest AUC, there were no statistically significant differences among them except for the “A” feature. Our results showed that all suggested features may be feasible for computer-aided classification of AD patients.

  16. Effects of Semantic Features on Machine Learning-Based Drug Name Recognition Systems: Word Embeddings vs. Manually Constructed Dictionaries

    Directory of Open Access Journals (Sweden)

    Shengyu Liu

    2015-12-01

    Full Text Available Semantic features are very important for machine learning-based drug name recognition (DNR) systems. The semantic features used in most DNR systems are based on drug dictionaries manually constructed by experts. Building large-scale drug dictionaries is a time-consuming task, and adding new drugs to existing drug dictionaries immediately after they are developed is also a challenge. In recent years, word embeddings that contain rich latent semantic information of words have been widely used to improve the performance of various natural language processing tasks. However, they have not been used in DNR systems. Compared to semantic features based on drug dictionaries, the advantage of word embeddings lies in that learning them is unsupervised. In this paper, we investigate the effect of semantic features based on word embeddings on DNR and compare them with semantic features based on three drug dictionaries. We propose a conditional random fields (CRF)-based system for DNR. The skip-gram model, an unsupervised algorithm, is used to induce word embeddings on about 17.3 GigaByte (GB) of unlabeled biomedical texts collected from MEDLINE (National Library of Medicine, Bethesda, MD, USA). The system is evaluated on the drug-drug interaction extraction (DDIExtraction 2013) corpus. Experimental results show that word embeddings significantly improve the performance of the DNR system and are competitive with semantic features based on drug dictionaries. The F-score is improved by 2.92 percentage points when word embeddings are added into the baseline system. This is comparable with the improvements from semantic features based on drug dictionaries. Furthermore, word embeddings are complementary to the semantic features based on drug dictionaries. When both word embeddings and semantic features based on drug dictionaries are added, the system achieves the best performance with an F-score of 78.37%, which outperforms the best system of the DDIExtraction 2013

  17. Readability of online materials for Dupuytren's contracture.

    Science.gov (United States)

    Santos, Pauline Joy F; Daar, David A; Badeau, Austin; Leis, Amber

    2017-08-23

    Descriptive. Dupuytren's contracture is a common disorder involving fibrosis of the palmar fascia. As patients are increasingly using online materials to gather health care information, it is imperative to assess the readability and appropriateness of this content. The recommended grade level for patient educational materials is seventh to eighth grade according to the National Institutes of Health. This study aims to assess the readability and content of online patient resources for Dupuytren's contracture. Evaluate readability of online patient education materials for Dupuytren's contracture. The largest public search engine, Google, was queried using the term "Dupuytren's contracture surgery" on February 26, 2016. Location filters were disabled, and sponsored results were excluded to avoid any inadvertent search bias. The 10 most popular Web sites were identified, and all relevant patient-directed information within 1 click from the original site was downloaded and saved as plain text. Readability was analyzed using 6 established analyses (Readable.io, Added Bytes, Ltd, UK). Analysis of 10 Web sites demonstrates an average grade level of at least 11th grade (Flesch-Kincaid grade level, 10.2; Gunning-Fog grade level, 13.1; Coleman-Liau grade level, 14.4; Simple Measure of Gobbledygook grade level, 10.0; automated readability grade level, 9.7; and average grade level, 11.5). Overall Flesch-Kincaid reading ease index was 46.4, which is difficult. No single article was at the recommended reading level. Online materials available for treatment of Dupuytren's contracture are above recommended reading levels and do not include a comprehensive explanation of treatment options, which may negatively impact decision making in patients seeking treatment for this condition. Surgeons and hand therapists alike should be cognizant of available online patient materials and make efforts to develop and provide more appropriate materials. V. Copyright © 2017 Hanley & Belfus
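
    For orientation, the sketch below computes the standard Flesch-Kincaid grade level (0.39 * words/sentences + 11.8 * syllables/words - 15.59) for a snippet of patient-facing text; the naive syllable counter and the sample sentence are assumptions, and published readability tools apply more careful tokenization.

```python
import re

def count_syllables(word):
    """Crude English syllable estimate: vowel groups, minus a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(1, count)

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

sample = ("Dupuytren's contracture causes the tissue under the skin of the palm "
          "to thicken. Over time the fingers may be pulled toward the palm.")
print(round(flesch_kincaid_grade(sample), 1))
```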

  18. Decision forests for machine learning classification of large, noisy seafloor feature sets

    Science.gov (United States)

    Lawson, Ed; Smith, Denson; Sofge, Donald; Elmore, Paul; Petry, Frederick

    2017-02-01

    Extremely randomized trees (ET) classifiers, an extension of random forests (RF), are applied to the classification of features such as seamounts derived from bathymetry data. This data is characterized by sparse training data and large, noisy feature sets, such as are often found in other geospatial data. A variety of feature metrics may be useful for this task, and we use a large number of metrics relevant to the task of finding seamounts. The major results described include: an outstanding seamount classification accuracy of 97%; an automated process to produce the classification features that are most useful and relevant to geophysical scientists (as represented by the feature metrics); and a demonstration that topography provides the most important data representation for classification. As well as achieving good classification accuracy, the human-understandable set of metrics that the classifier identifies as most relevant to the results is discussed.
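
    A brief sketch of an extremely randomized trees classifier with feature-importance ranking, in the spirit of this record, is given below; the synthetic bathymetry-like features, class imbalance, and the scikit-learn estimator are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-cell bathymetry metrics (slope, curvature, relief, ...).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)   # seamounts are rare

clf = ExtraTreesClassifier(n_estimators=300, class_weight="balanced", random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean())

# Rank the metrics that the ensemble found most useful.
clf.fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("top features:", ranking[:5])
```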

  19. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Maolong Xi

    2016-01-01

    Full Text Available This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms.

  20. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine.

    Science.gov (United States)

    Xi, Maolong; Sun, Jun; Liu, Li; Fan, Fangyun; Wu, Xiaojun

    2016-01-01

    This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms.

  1. Comparative study of shape, intensity and texture features and support vector machine for white blood cell classification

    Directory of Open Access Journals (Sweden)

    Mehdi Habibzadeh

    2013-04-01

    Full Text Available The complete blood count (CBC) is a widely used test for counting and categorizing the various peripheral particles in the blood. The main goal of this paper is to count and classify white blood cells (leukocytes) in microscopic images into five major categories using shape, intensity and texture features. The first critical step of the counting and classification procedure involves segmentation of individual cells in cytological images of thin blood smears. The quality of segmentation has a significant impact on cell type identification, but poor quality, noise, and/or low-resolution images make segmentation less reliable. We analyze the performance of our system for three different sets of features and determine that the best performance is achieved by wavelet features using the Dual-Tree Complex Wavelet Transform (DT-CWT), which is based on the multi-resolution characteristics of the image. These features are combined with a Support Vector Machine (SVM), which classifies white blood cells into their five primary types. This approach was validated with experiments conducted on low-resolution digital images of normal blood smears.

  2. Hybrid Feature Selection Based Weighted Least Squares Twin Support Vector Machine Approach for Diagnosing Breast Cancer, Hepatitis, and Diabetes

    Directory of Open Access Journals (Sweden)

    Divya Tomar

    2015-01-01

    Full Text Available There is a necessity for the analysis of large amounts of data in many fields such as healthcare, business, industry, and agriculture. Therefore, the need for feature selection (FS) techniques is quite evident in many fields of science, especially in computer science. Furthermore, an effective FS technique that is best suited to a particular learning algorithm is of great help to researchers. Hence, this paper proposes a hybrid feature selection (HFS) based efficient disease diagnostic model for Breast Cancer, Hepatitis, and Diabetes. HFS is an efficient method that combines the positive aspects of both filter and wrapper FS approaches. The proposed model adopts the weighted least squares twin support vector machine (WLSTSVM) as the classification approach, sequential forward selection (SFS) as the search strategy, and correlation feature selection (CFS) to evaluate the importance of each feature. This model not only selects a relevant feature subset but also efficiently deals with the data imbalance problem. The effectiveness of the HFS based WLSTSVM approach is examined on three well-known disease datasets taken from the UCI repository with the help of predictive accuracy, sensitivity, specificity, and geometric mean. The experiments confirm that our proposed HFS based WLSTSVM disease diagnostic model can result in positive outcomes.

  3. Feature Selection Method Based on Artificial Bee Colony Algorithm and Support Vector Machines for Medical Datasets Classification

    Directory of Open Access Journals (Sweden)

    Mustafa Serter Uzer

    2013-01-01

    Full Text Available This paper offers a hybrid approach that uses the artificial bee colony (ABC) algorithm for feature selection and support vector machines for classification. The purpose of the paper is to test the effect of eliminating unimportant and obsolete features of the datasets on the success of classification with the SVM classifier. The approach is developed for the diagnostics of liver diseases and diabetes, which are commonly observed conditions that reduce quality of life. For the diagnosis of these diseases, the hepatitis, liver disorders and diabetes datasets from the UCI database were used, and the proposed system reached classification accuracies of 94.92%, 74.81%, and 79.29%, respectively. For these datasets, the classification accuracies were obtained with the help of the 10-fold cross-validation method. The results show that the performance of the method is highly successful compared to other reported results and seems very promising for pattern recognition applications.

  4. Testing Occam's razor to characterize high-order connectivity in pore networks of granular media: Feature selection in machine learning

    Science.gov (United States)

    van der Linden, Joost; Tordesillas, Antoinette; Narsilio, Guillermo

    2017-06-01

    A perennial challenge for the characterization and modelling of phenomena involving granular media is that the internal connectivity of, and interactions between, the pores and the particles exhibit hallmarks of complexity: multi-scale and nonlinear interactions that lead to a plethora of patterns at the mesoscale, including fluid flow patterns that ultimately render a permeability of the granular media at the macroscale. A multitude of physical parameters exist to characterize geometry and structure, including pore/particle shape, volume and surface area, while a rich class of complex network parameters quantifies internal connectivity of the pore and particles in the material. A large collection of such variables is likely to exhibit a high degree of redundancy. Here we demonstrate how to use feature selection in machine learning theory to identify the most informative and non-redundant, yet parsimonious set of features that optimally characterizes the interstitial flow properties of porous, granular media, e.g., permeability, from high resolution data.

  5. Icing Forecasting of High Voltage Transmission Line Using Weighted Least Square Support Vector Machine with Fireworks Algorithm for Feature Selection

    Directory of Open Access Journals (Sweden)

    Tiannan Ma

    2016-12-01

    Full Text Available Accurate forecasting of icing thickness has great significance for ensuring the security and stability of the power grid. In order to improve the forecasting accuracy, this paper proposes an icing forecasting system based on the fireworks algorithm and weighted least square support vector machine (W-LSSVM). The fireworks algorithm is employed to select the proper input features with the purpose of eliminating redundant influence. In addition, the aim of the W-LSSVM model is to train and test the historical data-set with the selected features. The capability of this proposed icing forecasting model and framework is tested through simulation experiments using real-world icing data from the monitoring center of the key laboratory of anti-ice disaster, Hunan, South China. The results show that the proposed W-LSSVM-FA method has a higher prediction accuracy and it may be a promising alternative for icing thickness forecasting.

  6. Feature Selection based on Machine Learning in MRIs for Hippocampal Segmentation

    CERN Document Server

    Tangaro, Sabina; Brescia, Massimo; Cavuoti, Stefano; Chincarini, Andrea; Errico, Rosangela; Inglese, Paolo; Longo, Giuseppe; Maglietta, Rosalia; Tateo, Andrea; Riccio, Giuseppe; Bellotti, Roberto

    2015-01-01

    Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic Resonance Imaging (MRI) scans can show these variations and therefore be used as a supportive feature for a number of neurodegenerative diseases. The hippocampus is known to be a biomarker for Alzheimer disease and other neurological and psychiatric diseases. However, this requires accurate, robust and reproducible delineation of hippocampal structures. Fully automatic methods usually follow a voxel-based approach, in which a number of local features are calculated for each voxel. In this paper we compared four different techniques for feature selection from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; two wrapper methods, namely (ii) Sequential Forward Selection and (iii) Sequential Backward Elimination; and (iv) an embedded method based on the Random Forest Classifier, applied to a set of 10 T1-weighted brain MRIs and tested on an independent set of 25 subjects...

  7. Classifying depression patients and normal subjects using machine learning techniques and nonlinear features from EEG signal.

    Science.gov (United States)

    Hosseinifard, Behshad; Moradi, Mohammad Hassan; Rostami, Reza

    2013-03-01

    Diagnosing depression in its early, curable stages is very important and may even save the life of a patient. In this paper, we study nonlinear analysis of the EEG signal for discriminating depression patients from normal controls. Forty-five unmedicated depressed patients and 45 normal subjects participated in this study. The power of four EEG bands and four nonlinear features, including detrended fluctuation analysis (DFA), Higuchi fractal dimension, correlation dimension and Lyapunov exponent, were extracted from the EEG signal. For discriminating the two groups, k-nearest neighbor, linear discriminant analysis and logistic regression were then used as classifiers. Among the individual nonlinear features, the highest classification accuracy of 83.3% is obtained by the correlation dimension with the LR classifier. For further improvement, all nonlinear features are combined and applied to the classifiers. A classification accuracy of 90% is achieved with all nonlinear features and the LR classifier. In all experiments, a genetic algorithm is employed to select the most important features. The proposed technique is compared and contrasted with other reported methods, and it is demonstrated that combining nonlinear features enhances performance. This study shows that nonlinear analysis of EEG can be a useful method for discriminating depressed patients and normal subjects. It is suggested that this analysis may be a complementary tool to help psychiatrists in diagnosing depressed patients. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
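
    As a rough sketch of the "combine features, select, then classify" pipeline described in this record, the snippet below concatenates two hypothetical feature blocks (band powers and nonlinear measures), applies a simple univariate selector as a stand-in for the genetic-algorithm selection used in the study, and scores a logistic-regression classifier; all data and dimensions are invented for illustration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_subjects = 90
band_power = rng.standard_normal((n_subjects, 4))     # delta, theta, alpha, beta
nonlinear = rng.standard_normal((n_subjects, 4))      # DFA, fractal dim., corr. dim., Lyapunov
X = np.hstack([band_power, nonlinear])
y = rng.integers(0, 2, size=n_subjects)               # 0 = control, 1 = depressed

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=5),      # stand-in for the GA-based feature selection
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(pipe, X, y, cv=10).mean())
```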

  8. Machine Learning Approach for Classifying Multiple Sclerosis Courses by Combining Clinical Data with Lesion Loads and Magnetic Resonance Metabolic Features

    Directory of Open Access Journals (Sweden)

    Adrian Ion-Mărgineanu

    2017-07-01

    Full Text Available Purpose: The purpose of this study is to classify multiple sclerosis (MS) patients into the four clinical forms defined by the McDonald criteria using machine learning algorithms trained on clinical data combined with lesion loads and magnetic resonance metabolic features. Materials and Methods: Eighty-seven MS patients [12 Clinically Isolated Syndrome (CIS), 30 Relapse Remitting (RR), 17 Primary Progressive (PP), and 28 Secondary Progressive (SP)] and 18 healthy controls were included in this study. Longitudinal data available for each MS patient included clinical measures (e.g., age, disease duration, Expanded Disability Status Scale), conventional magnetic resonance imaging and spectroscopic imaging. We extract N-acetyl-aspartate (NAA), Choline (Cho), and Creatine (Cre) concentrations, and we compute three features for each spectroscopic grid by averaging metabolite ratios (NAA/Cho, NAA/Cre, Cho/Cre) over good-quality voxels. We built linear mixed-effects models to test for statistically significant differences between MS forms. We test nine binary classification tasks on clinical data, lesion loads, and metabolic features, using a leave-one-patient-out cross-validation method based on 100 random patient-based bootstrap selections. We compute F1-scores and BAR values after tuning Linear Discriminant Analysis (LDA), Support Vector Machines with Gaussian kernel (SVM-rbf), and Random Forests. Results: Statistically significant differences were found between the disease starting points of each MS form using four different response variables: lesion load, NAA/Cre, NAA/Cho, and Cho/Cre ratios. Training SVM-rbf on clinical data and lesion loads yields F1-scores of 71–72% for CIS vs. RR and CIS vs. RR+SP, respectively. For RR vs. PP we obtained good classification results (maximum F1-score of 85%) after training LDA on clinical and metabolic features, while for RR vs. SP we obtained slightly higher classification results (maximum F1-score of 87%) after training LDA and SVM

  9. Man-machine interface in a submarine command and weapon control system: features and design experience

    Directory of Open Access Journals (Sweden)

    Johan H. Aas

    1989-01-01

    Full Text Available Important man-machine interface (MMI) issues concerning a submarine command and weapon control system (CWCS), such as crew organization, automation level and decision support, are discussed in this paper. Generic submarine CWCS functions and operating conditions are outlined. Detailed, dynamic and real-time prototypes were used to support the MMI design. The prototypes are described and experience with detailed prototyping is discussed. Some of the main interaction principles are summarized and a restricted example of the resulting design is given. Our design experience and current work have been used to outline future perspectives of MMI design in naval CWCSs. The need for both formal and experimental approaches is emphasized.

  10. Readability Study in Reading and Math.

    Science.gov (United States)

    Horn, Joan M.

    Two readability formulas were applied to two math and two reading series textbooks at the fourth, fifth, and sixth grade levels to see whether the publishers' suggested reading levels were accurate. Five samples were taken from each text and compared to the publishers' suggested usage according to grade level. Results showed that although formulas…

  11. Readability of Special Education Procedural Safeguards

    Science.gov (United States)

    Mandic, Carmen Gomez; Rudd, Rima; Hehir, Thomas; Acevedo-Garcia, Dolores

    2012-01-01

    This study focused on literacy-related barriers to understanding the rights of students with disabilities and their parents within the special education system. SMOG readability scores were determined for procedural safeguards documents issued by all state departments of education. The average reading grade level was 16; 6% scored in the high…

  12. The Machine Recognition for Population Feature of Wheat Images Based on BP Neural Network

    Institute of Scientific and Technical Information of China (English)

    LI Shao-kun; SUO Xing-mei; BAI Zhong-ying; QI Zhi-li; Liu Xiao-hong; GAO Shi-ju; ZHAO Shuang-ning

    2002-01-01

    Recognition and analysis of dynamic information in wheat population images during the growth periods can serve as the basis for quantitative diagnosis of wheat growth. A recognition system based on a self-learning BP neural network for feature data of wheat population images, such as total green area and leaf area, was designed in this paper. In addition, some techniques to create favorable conditions for image recognition were discussed, as follows: (1) a method for collecting images with a digital camera and assistant equipment under natural field conditions; (2) a pixel-labeling algorithm used to segment the images and extract features; (3) a Laplacian-based high-pass filter used to strengthen image information. The results showed that the ANN system was effective for image recognition of wheat population features.

  13. Automatically Identifying Morphological Relations in Machine-Readable Dictionaries

    CERN Document Server

    Pentheroudakis, J; Pentheroudakis, Joseph; Vanderwende, Lucy

    1994-01-01

    We describe an automated method for identifying classes of morphologically related words in an on-line dictionary, and for linking individual senses in the derived form to one or more senses in the base form by means of morphological relation attributes. We also present an algorithm for computing a score reflecting the system's certainty in these derivational links; this computation relies on the content of semantic relations associated with each sense, which are extracted automatically by parsing each sense definition and subjecting the parse structure to automated semantic analysis. By processing the entire set of headwords in the dictionary in this fashion we create a large set of directed derivational graphs, which can then be accessed by other components in our broad-coverage NLP system. Spurious or unlikely derivations are not discarded, but are rather added to the dictionary and assigned a negative score; this allows the system to handle non-standard uses of these forms.

  14. Adolescent Fertility: National File [Machine-Readable Data File].

    Science.gov (United States)

    Moore, Kristin A.; And Others

    This computer file contains recent cross sectional data on adolescent fertility in the United States for 1960, 1965, 1970, 1975 and 1980-85. The following variables are included: (1) births; (2) birth rates; (3) abortions; (4) non-marital childbearing; (5) infant mortality; and (6) low birth weight. Data for both teenagers and women aged 20-24 are…

  15. ENTREVIS - a Spanish machine-readable text corpus

    DEFF Research Database (Denmark)

    Jensen, Kjær

    1991-01-01

    Presentation of the first half of a Spanish text corpus consisting of all interviews with Spaniards in the two weekly magazines Cambio16 and Tiempo in 1990. This corpus has since been supplemented with all interviews from the same magazines in 1995. Total size of the corpus: over 1.2 million words...

  16. Finite State Machine with Adaptive Electromyogram (EMG) Feature Extraction to Drive Meal Assistance Robot

    Science.gov (United States)

    Zhang, Xiu; Wang, Xingyu; Wang, Bei; Sugi, Takenao; Nakamura, Masatoshi

    Surface electromyogram (EMG) from the elbow, wrist and hand has been widely used as an input to multifunction prostheses for many years. However, for patients with high-level limb deficiencies, muscle activities in the upper limbs are not strong enough to be used as control signals. In this paper, EMG from the lower limbs is acquired and applied to drive a meal assistance robot. An onset detection method with an adaptive threshold based on EMG power is proposed to recognize different muscle contractions. Predefined control commands are output by a finite state machine (FSM) and applied to operate the robot. The performance of EMG control is compared with joystick control using both objective and subjective indices. The results show that the FSM provides the user with an easy-to-perform control strategy, which successfully operates robots with complicated control commands using limited muscle motions. The high accuracy and comfort of the EMG-controlled meal assistance robot make it feasible for users with upper-limb motor disabilities.
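
    A compact sketch of the two ingredients named in this record, adaptive-threshold onset detection on EMG power and a finite state machine that maps detected contractions to robot commands, is shown below; the threshold rule, the states, and the command names are assumptions for illustration.

```python
import numpy as np

def detect_onsets(emg, fs, win=0.1, k=3.0, baseline_s=1.0):
    """Flag windows whose mean power exceeds an adaptive baseline threshold.

    The threshold is mean + k * std of the power during an initial rest period,
    one simple way to adapt to each user's resting EMG level.
    """
    n = int(win * fs)
    power = np.array([np.mean(emg[i:i + n] ** 2) for i in range(0, len(emg) - n, n)])
    base = power[: int(baseline_s / win)]
    return power > base.mean() + k * base.std()

class MealRobotFSM:
    """Toy finite state machine: counts of detected contraction windows drive commands."""
    def __init__(self):
        self.state = "IDLE"

    def step(self, contraction_windows):
        if self.state == "IDLE" and contraction_windows >= 1:
            self.state = "SELECT_DISH"
            return "rotate_tray"
        if self.state == "SELECT_DISH" and contraction_windows >= 3:
            self.state = "FEED"
            return "scoop_and_deliver"
        if self.state == "FEED":
            self.state = "IDLE"
            return "return_home"
        return "wait"

# Illustrative run on synthetic EMG (1 kHz, burst after 2 s of rest).
fs = 1000
rng = np.random.default_rng(7)
emg = rng.normal(0, 0.05, 4 * fs)
emg[2 * fs: 3 * fs] += rng.normal(0, 0.5, fs)         # simulated contraction
fsm = MealRobotFSM()
print(fsm.step(int(detect_onsets(emg, fs).sum())))
```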

  17. Classifying Cyst and Tumor Lesion Using Support Vector Machine Based on Dental Panoramic Images Texture Features

    OpenAIRE

    Nurtanio, Ingrid

    2013-01-01

    Dental radiographs are essential in diagnosing pathology of the jaw. However, the similar radiographic appearance of jaw lesions causes difficulties in differentiating cysts from tumors. Therefore, we developed a computer-aided classification system for cyst and tumor lesions in dental panoramic images. The proposed system consists of feature extraction based on texture using first-order statistics texture (FO), the Gray Level Co-occurrence Matrix (GLCM) and Gray Level Run ...

  18. Practical data mining and machine learning for optics applications: introduction to the feature issue.

    Science.gov (United States)

    Abdulla, Ghaleb; Awwal, Abdul; Borne, Kirk; Ho, Tin Kam; Vestrand, W Thomas

    2011-08-01

    Data mining algorithms utilize search techniques to explore hidden patterns and correlations in the data, which otherwise require a tremendous amount of human time to explore. This feature issue explores the use of such techniques to help understand the data, build better simulators, explain outlier behavior, and build better predictive models. We hope that this issue will spur discussions and expose a set of tools that can be useful to the optics community.

  19. Application of Multi-task Sparse Lasso Feature Extraction and Support Vector Machine Regression in the Stellar Atmospheric Parameterization

    Science.gov (United States)

    Gao, Wei; Li, Xiang-ru

    2017-07-01

    Multi-task learning analyzes multiple tasks jointly in order to exploit the correlations among them and thereby improve the accuracy of the results. Methods of this kind have been widely applied in machine learning, pattern recognition, computer vision, and other related fields. This paper investigates the application of multi-task learning to estimating the stellar atmospheric parameters, including the surface temperature (Teff), surface gravitational acceleration (lg g), and chemical abundance ([Fe/H]). Firstly, the spectral features of the three stellar atmospheric parameters are extracted by using the multi-task sparse group Lasso algorithm; then the support vector machine is used to estimate the atmospheric physical parameters. The proposed scheme is evaluated on both the Sloan stellar spectra and the theoretical spectra computed from Kurucz's New Opacity Distribution Function (NEWODF) model. The mean absolute errors (MAEs) on the Sloan spectra are: 0.0064 for lg (Teff /K), 0.1622 for lg (g/(cm · s-2)), and 0.1221 dex for [Fe/H]; the MAEs on the synthetic spectra are 0.0006 for lg (Teff /K), 0.0098 for lg (g/(cm · s-2)), and 0.0082 dex for [Fe/H]. Experimental results show that the proposed scheme has a rather high accuracy for the estimation of stellar atmospheric parameters.

  20. Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.

    Science.gov (United States)

    Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A

    2014-01-01

    Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a wider range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple-output regressor based on support vector machines. We also include a stage of feature selection developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple-output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals most informative for arousal and valence discrimination are the EEG, the Electrooculogram/Electromyogram (EOG/EMG), and the Galvanic Skin Response (GSR).
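
    The embedded feature-selection stage can be sketched with scikit-learn's RFE wrapped around a linear support vector regressor. For brevity a single output (e.g. valence) is used instead of the multiple-output regressor, and the feature matrix and target are synthetic placeholders.

```python
# Minimal sketch of SVM-based Recursive Feature Elimination for a regression target.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))                                    # 200 trials x 40 physiological features
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=200)    # toy valence scores

rfe = RFE(estimator=SVR(kernel="linear"), n_features_to_select=10, step=1)
rfe.fit(X, y)
print("selected feature indices:", np.where(rfe.support_)[0])
print("ranking of first 10 features:", rfe.ranking_[:10])
```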

  1. Feature Selection Based on a Modified Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    陈振洲; 邹丽珊

    2007-01-01

    Building on a careful analysis of the idea of feature selection, this paper embeds the feature selection process into the learning machine and proposes a feature selection algorithm based on a modified support vector machine (Feature Selection via Modified Support Vector Machines). The method performs feature selection by ranking the weights of the features, so that the feature selection process and the learning process are unified in a single framework. Experiments show that, compared with other methods, the proposed approach achieves good results.

  2. An analysis of feature relevance in the classification of astronomical transients with machine learning methods

    CERN Document Server

    D'Isanto, Antonio; Brescia, Massimo; Donalek, Ciro; Longo, Giuseppe; Riccio, Giuseppe; Djorgovski, Stanislav G

    2016-01-01

    The exploitation of present and future synoptic (multi-band and multi-epoch) surveys requires an extensive use of automatic methods for data processing and data interpretation. In this work, using data extracted from the Catalina Real Time Transient Survey (CRTS), we investigate the classification performance of some well tested methods: Random Forest, MLPQNA (Multi Layer Perceptron with Quasi Newton Algorithm) and K-Nearest Neighbors, paying special attention to the feature selection phase. In order to do so, several classification experiments were performed. Namely: identification of cataclysmic variables, separation between galactic and extra-galactic objects and identification of supernovae.
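
    A small sketch of one way to examine feature relevance with one of the methods named above (Random Forest) is given below; the feature names, labels, and data are synthetic stand-ins, not CRTS attributes.

```python
# Minimal sketch of impurity-based feature-importance ranking with a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(11)
feature_names = [f"feat_{i}" for i in range(15)]          # e.g. amplitude, period, colors ...
X = rng.normal(size=(800, 15))
y = (X[:, 2] - 0.5 * X[:, 7] > 0).astype(int)             # toy class labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1])
for name, imp in ranked[:5]:
    print(f"{name}: {imp:.3f}")
```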

  3. Virtual machines placement algorithm based on resource utilization feature-matching

    Institute of Scientific and Technical Information of China (English)

    冯伟; 陈静怡; 吴杰

    2012-01-01

    VMP-RUFM (virtual machines placement algorithm based on resource utilization feature-matching) is proposed to address the problem of inefficient resource utilization in data centers. Taking into account the performance characteristics and access patterns of virtual machine applications, the algorithm builds a feature model of resource utilization. It then selects the associated set of virtual machines whose resource utilization features match the resource configuration of the physical machine. The experimental results show that this approach can effectively optimize the degree of matching between the aggregate resource utilization of the virtual machines and the resource configuration of the corresponding physical machine.

  4. ECG quality assessment based on a kernel support vector machine and genetic algorithm with a feature matrix

    Institute of Scientific and Technical Information of China (English)

    Ya-tao ZHANG; Cheng-yu LIU; Shou-shui WEI; Chang-zhi WEI; Fei-fei LIU

    2014-01-01

    We propose a systematic ECG quality classification method based on a kernel support vector machine (KSVM) and a genetic algorithm (GA) to determine whether ECGs collected via mobile phone are acceptable or not. This method includes three main modules, i.e., lead-fall detection, feature extraction, and intelligent classification. First, lead-fall detection is executed to make the initial classification. Then the power spectrum, baseline drift, amplitude difference, and other time-domain features of the ECGs are analyzed and quantified to form the feature matrix. Finally, the feature matrix is assessed using KSVM and GA to determine the ECG quality classification results. A Gaussian radial basis function (GRBF) is employed as the kernel function of the KSVM and its performance is compared with that of the Mexican hat wavelet function (MHWF). GA is used to determine the optimal parameters of the KSVM classifier and its performance is compared with that of the grid search (GS) method. The performance of the proposed method was tested on a database from the PhysioNet/Computing in Cardiology Challenge 2011, which includes 1500 12-lead ECG recordings. True positive (TP) rate, false positive (FP) rate, and classification accuracy were used as the assessment indices. For training database set A (1000 recordings), the optimal results were obtained using the combination of lead-fall, GA, and GRBF methods, and the corresponding results were: TP 92.89%, FP 5.68%, and classification accuracy 94.00%. For test database set B (500 recordings), the optimal results were also obtained using the combination of lead-fall, GA, and GRBF methods, and the classification accuracy was 91.80%.
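
    The grid-search baseline mentioned above can be sketched with scikit-learn as follows; the feature matrix, labels, and parameter grid are illustrative assumptions, and the paper's GA-based tuning is not reproduced here.

```python
# Minimal sketch of an RBF-kernel SVM tuned by cross-validated grid search.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 12))                   # 300 recordings x 12 quality features (toy)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # toy acceptable/unacceptable labels

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
    cv=5, scoring="accuracy",
)
grid.fit(X, y)
print("best parameters:", grid.best_params_)
print("cross-validated accuracy:", round(grid.best_score_, 3))
```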

  5. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type.

  6. Feature Extraction and Classification of EHG between Pregnancy and Labour Group Using Hilbert-Huang Transform and Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Lili Chen

    2017-01-01

    Full Text Available Preterm birth (PTB) is the leading cause of perinatal mortality and long-term morbidity, which results in significant health and economic problems. The early detection of PTB has great significance for its prevention. The electrohysterogram (EHG) related to uterine contraction is a noninvasive, real-time, and automatic novel technology which can be used to detect, diagnose, or predict PTB. This paper presents a method for feature extraction and classification of EHG between pregnancy and labour group, based on Hilbert-Huang transform (HHT) and extreme learning machine (ELM). For each sample, each channel was decomposed into a set of intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). Then, the Hilbert transform was applied to IMF to obtain analytic function. The maximum amplitude of analytic function was extracted as feature. The identification model was constructed based on ELM. Experimental results reveal that the best classification performance of the proposed method can reach an accuracy of 88.00%, a sensitivity of 91.30%, and a specificity of 85.19%. The area under receiver operating characteristic (ROC) curve is 0.88. Finally, experimental results indicate that the method developed in this work could be effective in the classification of EHG between pregnancy and labour group.

  7. Feature Extraction and Classification of EHG between Pregnancy and Labour Group Using Hilbert-Huang Transform and Extreme Learning Machine

    Science.gov (United States)

    Hao, Yaru

    2017-01-01

    Preterm birth (PTB) is the leading cause of perinatal mortality and long-term morbidity, which results in significant health and economic problems. The early detection of PTB has great significance for its prevention. The electrohysterogram (EHG) related to uterine contraction is a noninvasive, real-time, and automatic novel technology which can be used to detect, diagnose, or predict PTB. This paper presents a method for feature extraction and classification of EHG between pregnancy and labour group, based on Hilbert-Huang transform (HHT) and extreme learning machine (ELM). For each sample, each channel was decomposed into a set of intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). Then, the Hilbert transform was applied to IMF to obtain analytic function. The maximum amplitude of analytic function was extracted as feature. The identification model was constructed based on ELM. Experimental results reveal that the best classification performance of the proposed method can reach an accuracy of 88.00%, a sensitivity of 91.30%, and a specificity of 85.19%. The area under receiver operating characteristic (ROC) curve is 0.88. Finally, experimental results indicate that the method developed in this work could be effective in the classification of EHG between pregnancy and labour group. PMID:28316639
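
    An extreme learning machine of the kind used in both versions of this record can be written in a few lines of numpy: random input weights, a sigmoid hidden layer, and output weights solved by least squares. The hidden-layer size and the toy features below are assumptions, not the authors' settings.

```python
# Minimal numpy sketch of an extreme learning machine (ELM) classifier.
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random input weights
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(int(y.max()) + 1)[y]                              # one-hot targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ T              # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 16))                    # e.g. per-channel amplitude features (toy)
y = (X[:, 0] - X[:, 1] > 0).astype(int)           # toy pregnancy/labour labels
model = ELM(n_hidden=80).fit(X[:200], y[:200])
print("test accuracy:", (model.predict(X[200:]) == y[200:]).mean())
```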

  8. Design of a Closed-Loop, Bidirectional Brain Machine Interface System With Energy Efficient Neural Feature Extraction and PID Control.

    Science.gov (United States)

    Liu, Xilin; Zhang, Milin; Richardson, Andrew G; Lucas, Timothy H; Van der Spiegel, Jan

    2016-12-16

    This paper presents a bidirectional brain machine interface (BMI) microsystem designed for closed-loop neuroscience research, especially experiments in freely behaving animals. The system-on-chip (SoC) consists of 16-channel neural recording front-ends, neural feature extraction units, 16-channel programmable neural stimulator back-ends, in-channel programmable closed-loop controllers, global analog-digital converters (ADC), and peripheral circuits. The proposed neural feature extraction units include 1) an ultra low-power neural energy extraction unit enabling a 64-step natural logarithmic domain frequency tuning, and 2) a current-mode action potential (AP) detection unit with a time-amplitude window discriminator. A programmable proportional-integral-derivative (PID) controller has been integrated in each channel, enabling a variety of closed-loop operations. The implemented ADCs include a 10-bit voltage-mode successive approximation register (SAR) ADC for the digitization of the neural feature outputs and/or local field potential (LFP) outputs, and an 8-bit current-mode SAR ADC for the digitization of the action potential outputs. The multi-mode stimulator can be programmed to perform monopolar or bipolar, symmetrical or asymmetrical charge-balanced stimulation with a maximum current of 4 mA in an arbitrary channel configuration. The chip has been fabricated in 0.18 μm CMOS technology, occupying a silicon area of 3.7 mm². The chip dissipates 56 μW/ch on average. A general-purpose low-power microcontroller with a Bluetooth module is integrated in the system to provide a wireless link and SoC configuration. The methods, circuit techniques, and system topology proposed in this work can be used in a wide range of relevant neurophysiology research, especially closed-loop BMI experiments.

  9. Rank by Readability: Document Weighting for Information Retrieval

    Science.gov (United States)

    Newbold, Neil; McLaughlin, Harry; Gillam, Lee

    In this paper, we present a new approach to ranking that considers the reading ability (and motivation) of the user. Web pages can, increasingly, be badly written, with unfamiliar words, poor use of syntax, ambiguous phrases and so on. Readability research suggests that experts and motivated readers may overcome confusingly written text, but nevertheless find it an irritation. We investigate using readability to re-rank web pages. We take an extended view of readability that considers the reading level of retrieved web pages using techniques that account for both textual and cognitive factors. The readability of a selection of query results is examined, and a re-ranking based on readability is compared to the original ranking. Results to date suggest that considering a view of readability for each reader may increase the probability of relevance to a particular user.

  10. Feature Model Conversion Based on Max Machining Body

    Institute of Scientific and Technical Information of China (English)

    刘景; 朱英; 陈正鸣

    2011-01-01

    An incremental, intermediate-model-based approach for converting a design feature model into a machining feature model is presented for the class of rough machining parts produced by milling or turning operations. A new concept named Max Machining Body (MMB) is proposed. The proposed method consists of three steps. Firstly, all the basic MMBs are generated incrementally based on the design feature history. Secondly, new MMBs are generated by merging the basic MMBs according to the types of their originating features. Lastly, the intermediate model, which is composed of all the MMBs and their associated machining parameters, is converted incrementally into the machining feature model based on machining priority rules and user interaction strategies. The examples show that the proposed method can generate multiple meaningful machining interpretations, so the user can obtain a reasonable machining feature interpretation automatically or with little effort.

  11. How readable are Australian paediatric oral health education materials?

    OpenAIRE

    Arora, Amit; Lam, Andy SF; Karami, Zahra; Do, Loc Giang; Harris, Mark Fort

    2014-01-01

    Background The objective of this study was to analyse the readability of paediatric oral health education leaflets available in Australia. Methods Forty paediatric oral health education materials were analysed for general readability according to the following parameters: Thoroughness; Textual framework; Terminology; and Readability (Flesch-Kincaid grade level (FKGL), Gunning Fog index (Fog) and Simplified Measure of Gobbledygook (SMOG)). Results Leaflets produced by the industry were among t...
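
    The grade-level formulas cited in this and several of the following records (Flesch-Kincaid, Gunning Fog, SMOG) can be computed with a short script such as the sketch below. The syllable counter is a crude vowel-group heuristic, so the scores are only approximate; published calculators use dictionary-based syllabification.

```python
# Minimal sketch of the FKGL, Gunning Fog, and SMOG readability formulas.
import re

def count_syllables(word):
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    fkgl = 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59
    fog = 0.4 * (n_words / sentences + 100 * complex_words / n_words)
    smog = 1.043 * (complex_words * 30 / sentences) ** 0.5 + 3.1291
    return {"FKGL": round(fkgl, 1), "Fog": round(fog, 1), "SMOG": round(smog, 1)}

print(readability("Brush your child's teeth twice a day. Use a small amount of "
                  "fluoride toothpaste and supervise brushing until age eight."))
```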

  12. Wavelet Correlation Feature Scale Entropy and Fuzzy Support Vector Machine Approach for Aeroengine Whole-Body Vibration Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Cheng-Wei Fei

    2013-01-01

    Full Text Available In order to correctly analyze aeroengine whole-body vibration signals, a Wavelet Correlation Feature Scale Entropy (WCFSE) and Fuzzy Support Vector Machine (FSVM) (WCFSE-FSVM) method was proposed by fusing the advantages of the WCFSE method and the FSVM method. The wavelet coefficients were known to be located in high Signal-to-Noise Ratio (S/N or SNR) scales and were obtained by the Wavelet Transform Correlation Filter Method (WTCFM). This method was applied to address the whole-body vibration signals. The WCFSE method was derived from the integration of the information entropy theory and WTCFM, and was applied to extract the WCFSE values of the vibration signals. Among the WCFSE values, the WCFSE1 and WCFSE2 values on scales 1 and 2, from the high band of the vibration signal, were believed to acceptably reflect the vibration feature and were selected to construct the eigenvectors of the vibration signals as fault samples to establish the WCFSE-FSVM model. This model was applied to aeroengine whole-body vibration fault diagnosis. Through the diagnosis of four vibration fault modes and the comparison of the analysis results of four methods (SVM, FSVM, WESE-SVM, WCFSE-FSVM), it is shown that the WCFSE-FSVM method is characterized by higher learning ability, higher generalization ability and higher anti-noise ability than the other methods in aeroengine whole-body vibration fault analysis. Meanwhile, the present study provides a useful insight for the vibration fault diagnosis of complex machinery besides aeroengines.

  13. Feature selection for speech emotion recognition in Spanish and Basque: on the use of machine learning to improve human-computer interaction.

    Directory of Open Access Journals (Sweden)

    Andoni Arruti

    Full Text Available Study of emotions in human-computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish Languages using different methods for feature selection. RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek for the most relevant feature subset. The three phases approach was selected to check the validity of the proposed approach. Achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean of 80,05% emotion recognition rate in Basque and a 74,82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between them is provided. Based on achieved results, a set of most relevant non-speaker dependent features is proposed for both languages and new perspectives are suggested.

  14. Feature selection for speech emotion recognition in Spanish and Basque: on the use of machine learning to improve human-computer interaction.

    Science.gov (United States)

    Arruti, Andoni; Cearreta, Idoia; Alvarez, Aitor; Lazkano, Elena; Sierra, Basilio

    2014-01-01

    Study of emotions in human-computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish Languages using different methods for feature selection. RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek for the most relevant feature subset. The three phases approach was selected to check the validity of the proposed approach. Achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean of 80,05% emotion recognition rate in Basque and a 74,82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between them is provided. Based on achieved results, a set of most relevant non-speaker dependent features is proposed for both languages and new perspectives are suggested.

  15. Machining Feature Recognition Based on Surface Clustering

    Institute of Scientific and Technical Information of China (English)

    汤岑书; 褚学宁; 孙习武; 苏於梁

    2009-01-01

    To realize the effective integration of CAD and CAPP systems, an approach to machining feature recognition was proposed based on the generation and clustering of surface machining methods. Three kinds of information models were built: manufacturing resource, machined surface, and machining method. A concept of cutting mode was proposed and used for generating the machining methods of part surfaces. Aiming at minimizing the number of tool types and the number of setups used, an optimization model for surface clustering was established to select the best machining method for every surface. Surfaces which can be machined by a common type of tool in the same setup were then recognized as a machining feature. Finally, an example part was used to test the validity and effectiveness of the proposed approach, and the results illustrate that it can effectively solve problems such as the recognition of intersecting features, which are difficult for traditional feature recognition approaches.

  16. Filter-based feature selection and support vector machine for false positive reduction in computer-aided mass detection in mammograms

    Science.gov (United States)

    Nguyen, V. D.; Nguyen, D. T.; Nguyen, T. D.; Phan, V. A.; Truong, Q. D.

    2015-02-01

    In this paper, a method for reducing false positives in computer-aided mass detection in screening mammograms is proposed. A set of 32 features, including First Order Statistics (FOS) features, Gray-Level Co-occurrence Matrix (GLCM) features, Block Difference Inverse Probability (BDIP) features, and Block Variation of Local Correlation coefficients (BVLC), are extracted from detected Regions-Of-Interest (ROIs). An optimal subset of 8 features is selected from the full feature set by means of a filter-based Sequential Backward Selection (SBS). Then, a Support Vector Machine (SVM) is utilized to classify the ROIs into massive regions or normal regions. The method's performance is evaluated using the area under the Receiver Operating Characteristic (ROC) curve (AUC or AZ). On a dataset consisting of about 2700 ROIs detected from the mini-MIAS database of mammograms, the proposed method achieves AZ=0.938.
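
    A rough sketch of backward feature selection followed by an SVM classifier is given below. scikit-learn's SequentialFeatureSelector is used in place of the paper's filter-based SBS, so this is only an approximation, and the 32-feature matrix is a random stand-in for the ROI features.

```python
# Minimal sketch: backward selection of 8 of 32 features, then SVM classification.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 32))                        # 400 ROIs x 32 texture features (toy)
y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)     # toy mass / normal labels

svm = SVC(kernel="rbf", C=1.0)
sbs = SequentialFeatureSelector(svm, n_features_to_select=8, direction="backward", cv=3)
sbs.fit(X, y)
selected = np.where(sbs.get_support())[0]
print("8 selected features:", selected)

svm.fit(X[:, selected], y)
print("training accuracy on selected subset:", round(svm.score(X[:, selected], y), 3))
```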

  17. Measuring the Readability of Elementary Algebra Using the Cloze Technique.

    Science.gov (United States)

    Kulm, Gerald

    The relationship to readability of ten variables characterizing structural properties of mathematical prose was investigated in elementary algebra textbooks. Readability was measured by algebra students' responses to two forms of cloze tests. Linear and curvilinear correlations were calculated between each structural variable and the cloze test.…

  18. Reading and Readability Research in the Armed Services. Final Report.

    Science.gov (United States)

    Sticht, Thomas G., Ed.; Zapf, Diana Welty, Ed.

    The Conference on Reading and Readability Research in the Armed Services brought together reading, technical writing, and readability experts from civilian research and development (R & D) centers with R & D specialists from the armed services for discussion of reading and text-design problems in the military. This report of the proceedings…

  19. Best Sellers Evaluated for Readability and Portrayal of Female Characters.

    Science.gov (United States)

    Schulze, Lydia Domkiw

    This paper contains a historical review of the literature pertaining to American popular literature and analyses of two different samples of adult fiction best sellers: one an analysis of readability levels and one an analysis of the portrayal of female characters. In the first analysis, the Fry Readability Formula was applied to each of 60…

  20. Machine learning in geosciences and remote sensing

    Institute of Scientific and Technical Information of China (English)

    David J. Lary; Amir H. Alavi; Amir H. Gandomi; Annette L. Walker

    2016-01-01

    Learning incorporates a broad range of complex procedures. Machine learning (ML) is a subdivision of artificial intelligence based on the biological learning process. The ML approach deals with the design of algorithms to learn from machine readable data. ML covers main domains such as data mining, difficult-to-program applications, and software applications. It is a collection of a variety of algorithms (e.g. neural networks, support vector machines, self-organizing map, decision trees, random forests, case-based reasoning, genetic programming, etc.) that can provide multivariate, nonlinear, nonparametric regression or classification. The modeling capabilities of the ML-based methods have resulted in their extensive applications in science and engineering. Herein, the role of ML as an effective approach for solving problems in geosciences and remote sensing will be highlighted. The unique features of some of the ML techniques will be outlined with a specific attention to genetic programming paradigm. Furthermore, nonparametric regression and classification illustrative examples are presented to demonstrate the efficiency of ML for tackling the geosciences and remote sensing problems.

  1. Machine learning in geosciences and remote sensing

    Directory of Open Access Journals (Sweden)

    David J. Lary

    2016-01-01

    Full Text Available Learning incorporates a broad range of complex procedures. Machine learning (ML) is a subdivision of artificial intelligence based on the biological learning process. The ML approach deals with the design of algorithms to learn from machine readable data. ML covers main domains such as data mining, difficult-to-program applications, and software applications. It is a collection of a variety of algorithms (e.g. neural networks, support vector machines, self-organizing map, decision trees, random forests, case-based reasoning, genetic programming, etc.) that can provide multivariate, nonlinear, nonparametric regression or classification. The modeling capabilities of the ML-based methods have resulted in their extensive applications in science and engineering. Herein, the role of ML as an effective approach for solving problems in geosciences and remote sensing will be highlighted. The unique features of some of the ML techniques will be outlined with a specific attention to genetic programming paradigm. Furthermore, nonparametric regression and classification illustrative examples are presented to demonstrate the efficiency of ML for tackling the geosciences and remote sensing problems.

  2. Gloved Human-Machine Interface

    Science.gov (United States)

    Adams, Richard (Inventor); Olowin, Aaron (Inventor); Hannaford, Blake (Inventor)

    2015-01-01

    Certain exemplary embodiments can provide a system, machine, device, manufacture, circuit, composition of matter, and/or user interface adapted for and/or resulting from, and/or a method and/or machine-readable medium comprising machine-implementable instructions for, activities that can comprise and/or relate to: tracking movement of a gloved hand of a human; interpreting a gloved finger movement of the human; and/or in response to interpreting the gloved finger movement, providing feedback to the human.

  3. A hybrid feature selection algorithm integrating an extreme learning machine for landslide susceptibility modeling of Mt. Woomyeon, South Korea

    Science.gov (United States)

    Vasu, Nikhil N.; Lee, Seung-Rae

    2016-06-01

    An ever-increasing trend of extreme rainfall events in South Korea owing to climate change is causing shallow landslides and debris flows in mountains that cover 70% of the total land area of the nation. These catastrophic, gravity-driven processes cost the government several billion KRW (South Korean Won) in losses in addition to fatalities every year. The most common type of landslide observed is the shallow landslide, which occurs at 1-3 m depth, and may mobilize into more catastrophic flow-type landslides. Hence, to predict potential landslide areas, susceptibility maps are developed in a geographical information system (GIS) environment utilizing available morphological, hydrological, geotechnical, and geological data. Landslide susceptibility models were developed using 163 landslide points and an equal number of nonlandslide points in Mt. Woomyeon, Seoul, and 23 landslide conditioning factors. However, because not all of the factors contribute to the determination of the spatial probability for landslide initiation, and a simple filter or wrapper-based approach is not efficient in identifying all of the relevant features, a feedback-loop-based hybrid algorithm was implemented in conjunction with a learning scheme called an extreme learning machine, which is based on a single-layer, feed-forward network. Validation of the constructed susceptibility model was conducted using a testing set of landslide inventory data through a prediction rate curve. The model selected 13 relevant conditioning factors out of the initial 23; and the resulting susceptibility map shows a success rate of 85% and a prediction rate of 89.45%, indicating a good performance, in contrast to the low success and prediction rate of 69.19% and 56.19%, respectively, as obtained using a wrapper technique.

  4. An easy to compare tool for more readable (physics) textbooks

    Science.gov (United States)

    Skorecova, I.; Teleki, A.; Lacsny, B.; Zelenicky, L.

    2016-11-01

    In this article, we show an easy way to compare the readability of two physics school texts written in the same language. We show that the readability of school texts depends on the frequency of terms. We compare the readability of texts from two physics textbooks written in Slovak. In these comparisons, we used the frequency of words and terms with a given length (a probability distribution) instead of a readability formula. The concept of analysing the frequency of words in a text agrees with our observations made with eye tracking systems. We verified our results with cloze tests. The method of comparing the probability distributions of words and terms in two texts (written in the same language) is very simple and intuitive, as well as language independent (in contrast to readability formulae).

  5. Multi-modal, Multi-measure, and Multi-class Discrimination of ADHD with Hierarchical Feature Extraction and Extreme Learning Machine Using Structural and Functional Brain MRI.

    Science.gov (United States)

    Qureshi, Muhammad Naveed Iqbal; Oh, Jooyoung; Min, Beomjun; Jo, Hang Joon; Lee, Boreom

    2017-01-01

    Structural and functional MRI unveil many hidden properties of the human brain. We performed this multi-class classification study on selected subjects from the publicly available attention deficit hyperactivity disorder (ADHD-200) dataset of patients and healthy children. The dataset has three groups, namely, ADHD inattentive, ADHD combined, and typically developing. We calculated the globally averaged functional connectivity maps across the whole cortex to extract anatomical atlas parcellation based features from the resting-state fMRI (rs-fMRI) data and cortical parcellation based features from the structural MRI (sMRI) data. In addition, the preprocessed image volumes from both of these modalities underwent separate ANOVA analyses using all the voxels. This study utilized the average measures from the most significant regions acquired from ANOVA as features for classification, in addition to the multi-modal and multi-measure features of structural and functional MRI data. We extracted the most discriminative features by a hierarchical sparse feature elimination and selection algorithm. These features include cortical thickness, image intensity, volume, cortical thickness standard deviation, surface area, and ANOVA-based features. An extreme learning machine performed both the binary and multi-class classifications in comparison with support vector machines. This article reports the prediction accuracy of both unimodal and multi-modal features on test data. We achieved classification accuracies of 76.190% in the multi-class setting and 92.857% at best, suggesting that the multi-modal group analysis approach with multi-measure features may improve the accuracy of the ADHD differential diagnosis.

  6. Modeling the Relationship between Vibration Features and Condition Parameters Using Relevance Vector Machines for Health Monitoring of Rolling Element Bearings under Varying Operation Conditions

    Directory of Open Access Journals (Sweden)

    Lei Hu

    2015-01-01

    Full Text Available Rotational speed and load usually change when rotating machinery works. Both this kind of changing operational condition and machine faults can make the mechanical vibration characteristics change. Therefore, an effective health monitoring method for rotating machinery must be able to adjust during changes of operational conditions. This paper presents an adaptive threshold model for the health monitoring of bearings under changing operational conditions. Relevance vector machines (RVMs) are used for regression of the relationships between the adaptive parameters of the threshold model and the statistical characteristics of vibration features. The adaptive threshold model is constructed based on these relationships. The health status of bearings can be indicated by detecting whether vibration features exceed the adaptive threshold. This method is validated on bearings running at changing speeds. The monitoring results show that this method is effective as long as the rotational speed is higher than a relatively small value.

  7. From consciousness to computation: a spectrum of theories of consciousness and selected salient features germane to the development of thinking machines

    OpenAIRE

    2013-01-01

    This study investigated the field of consciousness to isolate concepts that might be useful in producing thinking machines, potentially with full consciousness. Questions that informed the research were: Is it possible to identify “successful” theories of consciousness? Can there be a set of salient features that would be useful in the evaluation of theories of consciousness? A literature survey identifies ways in which enduring problems in discussing intelligence, cognition and conscious...

  8. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...
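
    A minimal sketch of how such a model might be encoded as data structures is shown below; the connection types, the compatibility table, and the proximity threshold are illustrative assumptions, not the encoding claimed in the patent.

```python
# Minimal sketch: construction elements with typed connection elements and a proximity check.
from dataclasses import dataclass, field
from math import dist

@dataclass
class ConnectionElement:
    position: tuple          # (x, y, z) in model coordinates
    connection_type: str     # e.g. "stud" or "tube" (illustrative types)

@dataclass
class ConstructionElement:
    name: str
    connections: list = field(default_factory=list)

COMPATIBLE = {("stud", "tube"), ("tube", "stud")}   # assumed compatibility table

def find_connections(a, b, proximity=0.5):
    """Return (i, j) pairs of connection elements of a and b that are close and compatible."""
    pairs = []
    for i, ca in enumerate(a.connections):
        for j, cb in enumerate(b.connections):
            if dist(ca.position, cb.position) <= proximity and \
               (ca.connection_type, cb.connection_type) in COMPATIBLE:
                pairs.append((i, j))
    return pairs

brick = ConstructionElement("brick", [ConnectionElement((0, 0, 1), "stud")])
plate = ConstructionElement("plate", [ConnectionElement((0, 0, 1.2), "tube")])
print(find_connections(brick, plate))
```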

  9. Principal Components of Superhigh-Dimensional Statistical Features and Support Vector Machine for Improving Identification Accuracies of Different Gear Crack Levels under Different Working Conditions

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    Full Text Available Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial in preventing any unexpected machine breakdown and reducing economic loss, because gear cracks lead to gear tooth breakage. In this paper, an intelligent fault diagnosis method for the identification of different gear crack levels under different working conditions is proposed. First, superhigh-dimensional statistical features are extracted from the continuous wavelet transform at different scales. The number of statistical features extracted by the proposed method is 920, so the extracted statistical features are superhigh dimensional. To reduce the dimensionality of the extracted statistical features and generate new significant low-dimensional statistical features, a simple and effective method called principal component analysis is used. To further improve the identification accuracies of different gear crack levels under different working conditions, a support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method. Comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracies among all existing methods.
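
    The dimensionality-reduction step described above can be sketched as a standard PCA-plus-SVM pipeline; the 920-column feature matrix is random, and the number of retained components is an assumption.

```python
# Minimal sketch: PCA on a superhigh-dimensional feature matrix, then SVM classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(240, 920))            # 240 samples x 920 wavelet-scale statistics (toy)
y = rng.integers(0, 4, size=240)           # toy labels: four crack levels

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf", C=10.0))
clf.fit(X[:180], y[:180])
print("held-out accuracy:", round(clf.score(X[180:], y[180:]), 3))
```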

  10. Readability analysis of online health information about overactive bladder.

    Science.gov (United States)

    Koo, Kevin; Shee, Kevin; Yap, Ronald L

    2017-09-01

    Despite the prevalence of overactive bladder (OAB) and the widespread accessibility of patient education information on the Internet, the readability of this information and its potential impact on patient decision-making are not known. This study evaluates the readability of OAB material online in the context of website ownership and the Health on the Net standard for information reliability. Three Internet search platforms were queried daily with OAB-related keywords for 30 days. Readability analysis was performed using the SMOG test, Dale-Chall readability formula, and Fry readability graph. Websites were stratified by ownership type and Health on the Net certification to compare readability metrics. After 270 total searches, 57 websites were analyzed. Mean SMOG reading grade was 10.7 (SD = 1.6) and 10.1 in an adjusted calculation to reduce overestimation from medical jargon. Mean Dale-Chall score was 9.2 (SD = 0.9), or grade 13-15. Mean Fry graph coordinates (177 syllables, 5.9 sentences) corresponded to grade 15. Only seven sites (12%) were predicted to be readable by the average adult with an eighth-grade reading level. Mean reading grades were not significantly different between academic versus commercial sites and Health on the Net-certified versus non-certified sites. A large majority of online information about OAB treatment exceeds the reading ability of most adults. Neither websites sponsored by academic institutions nor those certified by the Health on the Net standard have easier readability. The readability of health information online may be distinct from reliability in the context of urological literacy. © 2017 Wiley Periodicals, Inc.

  11. An AAG-Based Method of Machining Feature Recognition

    Institute of Scientific and Technical Information of China (English)

    刘文剑; 顾琳; 常伟; 杨乐民

    2001-01-01

    The recognition of machining features is a key technology in the integration of CAD/CAPP/CAM. The interacting feature regions must be decomposed reasonably, while avoiding improperly splitting a single feature into two or more features during recognition. This paper extends the attributed adjacency graph (AAG) representation of the part by combining assistant faces and extended faces, and introduces the concept of the united machining feature (UMF) together with its recognition method, so that the extended graph, once decomposed, truly represents the manufacturing features of the part and the reasoning about machining methods becomes more accurate and faster. The exact type of a feature is obtained by matching the sub-graphs divided from the NAAG.

  12. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Joonwhoan Lee

    2013-06-01

    Full Text Available Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames, using displacements based on elastic bunch graph matching displacement estimation. Feature vectors from individual landmarks, as well as pairs of landmarks tracking results, are extracted and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost with dynamic time warping similarity distance between the feature vector of the input facial expression and the prototypical facial expression is used as a weak classifier to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented, either by using multi-class AdaBoost with dynamic time warping, or by using a support vector machine on the boosted feature vectors. The results on the Cohn-Kanade (CK+) facial expression database show a recognition accuracy of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively.

  13. Readability of patient information and consent documents in rheumatological studies

    DEFF Research Database (Denmark)

    Hamnes, Bente; van Eijk-Hustings, Yvonne; Primdahl, Jette

    2016-01-01

    BACKGROUND: Before participation in medical research an informed consent must be obtained. This study investigates whether the readability of patient information and consent documents (PICDs) corresponds to the average educational level of participants in rheumatological studies in the Netherlands...

  14. Readability of patient information and consent documents in rheumatological studies

    DEFF Research Database (Denmark)

    Hamnes, Bente; van Eijk-Hustings, Yvonne; Primdahl, Jette

    2016-01-01

    …in rheumatological studies in the Netherlands, Denmark, and Norway. METHODS: 24 PICDs from the studies were collected and readability was assessed independently using Gunning's Fog Index (FOG) and Simple Measure of Gobbledygook (SMOG) grading. RESULTS: The mean scores for the FOG and SMOG grades were 14.2 (9.0-19.0) and 14.2 (12-17), respectively. Guidelines for how to write clear and unambiguous PICDs in simple and easily understandable language could increase the focus on the readability of PICDs.

  15. SU-F-R-08: Can Normalization of Brain MRI Texture Features Reduce Scanner-Dependent Effects in Unsupervised Machine Learning?

    Energy Technology Data Exchange (ETDEWEB)

    Ogden, K; O’Dwyer, R [SUNY Upstate Medical University, Syracuse, NY (United States); Bradford, T [Syracuse University, Syracuse, NY (United States); Cussen, L [Rochester Institute of Technology, Rochester, NY (United States)

    2016-06-15

    Purpose: To reduce differences in features calculated from MRI brain scans acquired at different field strengths with or without Gadolinium contrast. Methods: Brain scans were processed for 111 epilepsy patients to extract hippocampus and thalamus features. Scans were acquired on 1.5 T scanners with Gadolinium contrast (group A), 1.5T scanners without Gd (group B), and 3.0 T scanners without Gd (group C). A total of 72 features were extracted. Features were extracted from original scans and from scans where the image pixel values were rescaled to the mean of the hippocampi and thalami values. For each data set, cluster analysis was performed on the raw feature set and for feature sets with normalization (conversion to Z scores). Two methods of normalization were used: The first was over all values of a given feature, and the second by normalizing within the patient group membership. The clustering software was configured to produce 3 clusters. Group fractions in each cluster were calculated. Results: For features calculated from both the non-rescaled and rescaled data, cluster membership was identical for both the non-normalized and normalized data sets. Cluster 1 was comprised entirely of Group A data, Cluster 2 contained data from all three groups, and Cluster 3 contained data from only groups 1 and 2. For the categorically normalized data sets there was a more uniform distribution of group data in the three Clusters. A less pronounced effect was seen in the rescaled image data features. Conclusion: Image Rescaling and feature renormalization can have a significant effect on the results of clustering analysis. These effects are also likely to influence the results of supervised machine learning algorithms. It may be possible to partly remove the influence of scanner field strength and the presence of Gadolinium based contrast in feature extraction for radiomics applications.
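
    The two normalization schemes compared above (Z scores over all patients versus Z scores within each group) can be sketched as follows, with k-means producing three clusters as in the study; the group sizes and feature values are synthetic stand-ins for the 72 hippocampus/thalamus features.

```python
# Minimal sketch: overall vs within-group Z-score normalization before k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
groups = np.repeat(np.array(["A", "B", "C"]), [40, 35, 36])   # scanner/contrast groups
X = rng.normal(size=(111, 72)) + (groups == "A")[:, None]     # offset for group A mimics a scanner effect

def zscore_overall(X):
    return (X - X.mean(axis=0)) / X.std(axis=0)

def zscore_within_group(X, groups):
    Z = np.empty_like(X)
    for g in np.unique(groups):
        m = groups == g
        Z[m] = (X[m] - X[m].mean(axis=0)) / X[m].std(axis=0)
    return Z

for name, Z in [("overall", zscore_overall(X)), ("within-group", zscore_within_group(X, groups))]:
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
    print(name, "cluster sizes per group:")
    for g in "ABC":
        print(" ", g, np.bincount(labels[groups == g], minlength=3))
```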

  16. Hybrid Recognition of Machining Features Based on Graph

    Institute of Scientific and Technical Information of China (English)

    李大磊; 陈广飞; 尹跃峰

    2013-01-01

    Since traditional graph-based feature recognition methods have difficulty recognizing intersecting features and features with variable topology, this paper presents a hybrid graph-based feature recognition method. First, merged faces are split by creating split lines in order to separate intersecting features. Second, the extended attribute adjacency graph is rebuilt and decomposed into several minimal condition sub-graphs (MCSGs). Finally, according to the boundary patterns of quadric features and of features composed of planes, separate feature recognition knowledge bases are constructed, and features are recognized by reasoning over the knowledge tree. The results show that the method can reasonably separate intersecting features and effectively recognize common machining features.

  17. Classification of basal cell carcinoma in human skin using machine learning and quantitative features captured by polarization sensitive optical coherence tomography.

    Science.gov (United States)

    Marvdashti, Tahereh; Duan, Lian; Aasi, Sumaira Z; Tang, Jean Y; Ellerbee Bowden, Audrey K

    2016-09-01

    We report the first fully automated detection of basal cell carcinoma (BCC), the most commonly occurring type of skin cancer, in human skin using polarization-sensitive optical coherence tomography (PS-OCT). Our proposed automated procedure entails building a machine-learning based classifier by extracting image features from the two complementary image contrasts offered by PS-OCT, intensity and phase retardation (PR), and selecting a subset of features that yields a classifier with the highest accuracy. Our classifier achieved 95.4% sensitivity and specificity, validated by leave-one-patient-out cross validation (LOPOCV), in detecting BCC in human skin samples collected from 42 patients. Moreover, we show the superiority of our classifier over the best possible classifier based on features extracted from intensity-only data, which demonstrates the significance of PR data in detecting BCC.
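
    Leave-one-patient-out cross validation of the kind reported above can be sketched with scikit-learn's LeaveOneGroupOut splitter; the features, labels, and patient grouping below are synthetic placeholders, and a plain RBF SVM stands in for the paper's classifier.

```python
# Minimal sketch of leave-one-patient-out cross validation (LOPOCV).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(7)
patients = np.repeat(np.arange(42), 5)                 # 42 patients x 5 image regions (toy)
X = rng.normal(size=(len(patients), 24))               # intensity + phase-retardation features (toy)
y = (X[:, 0] + X[:, 5] > 0).astype(int)                # toy BCC / normal labels

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=LeaveOneGroupOut(), groups=patients)
print("mean LOPOCV accuracy:", round(scores.mean(), 3))
```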

  18. Determining Readability: How to Select and Apply Easy-to-Use Readability Formulas to Assess the Difficulty of Adult Literacy Materials

    Science.gov (United States)

    Burke, Victoria; Greenberg, Daphne

    2010-01-01

    There are many readability tools that instructors can use to help adult learners select reading materials. We describe and compare different types of readability tools: formulas calculated by hand, tools found on the Web, tools embedded in a word processing program, and readability tools found in a commercial software program. Practitioners do not…

  19. Low-Resolution Tactile Image Recognition for Automated Robotic Assembly Using Kernel PCA-Based Feature Fusion and Multiple Kernel Learning-Based Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2014-01-01

    Full Text Available In this paper, we propose a robust tactile sensing image recognition scheme for automatic robotic assembly. First, an image preprocessing procedure is designed to enhance the contrast of the tactile image. In the second layer, geometric features and Fourier descriptors are extracted from the image. Then, kernel principal component analysis (kernel PCA) is applied to transform the features into ones with better discriminating ability, which is the kernel PCA-based feature fusion. The transformed features are fed into the third layer for classification. In this paper, we design a classifier by combining the multiple kernel learning (MKL) algorithm and a support vector machine (SVM). We also design and implement a tactile sensing array consisting of 10-by-10 sensing elements. Experimental results, carried out on real tactile images acquired by the designed tactile sensing array, show that the kernel PCA-based feature fusion can significantly improve the discriminating performance of the geometric features and Fourier descriptors. Also, the designed MKL-SVM outperforms the regular SVM in terms of recognition accuracy. The proposed recognition scheme is able to achieve a high recognition rate of over 85% for the classification of 12 commonly used metal parts in industrial applications.
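
    A reduced sketch of kernel PCA-based feature fusion followed by an SVM is shown below; a single RBF-kernel SVM stands in for the MKL-SVM, which scikit-learn does not provide, and the geometric/Fourier feature matrix is synthetic.

```python
# Minimal sketch: kernel PCA feature fusion, then SVM classification.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(600, 30))            # 600 tactile images x 30 geometric/Fourier features (toy)
y = rng.integers(0, 12, size=600)         # toy labels: 12 metal part classes

kpca = KernelPCA(n_components=15, kernel="rbf", gamma=0.05).fit(X[:450])
Z_train, Z_test = kpca.transform(X[:450]), kpca.transform(X[450:])

svm = SVC(kernel="rbf", C=10.0).fit(Z_train, y[:450])
print("held-out accuracy:", round(svm.score(Z_test, y[450:]), 3))
```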

  20. Combining cluster analysis, feature selection and multiple support vector machine models for the identification of human ether-a-go-go related gene channel blocking compounds.

    Science.gov (United States)

    Nisius, Britta; Göller, Andreas H; Bajorath, Jürgen

    2009-01-01

    Blockade of the human ether-a-go-go related gene potassium channel is regarded as a major cause of drug toxicity and associated with severe cardiac side-effects. A variety of in silico models have been reported to aid in the identification of compounds blocking the human ether-a-go-go related gene channel. Herein, we present a classification approach for the detection of diverse human ether-a-go-go related gene blockers that combines cluster analysis of training data, feature selection and support vector machine learning. Compound learning sets are first divided into clusters of similar molecules. For each cluster, independent support vector machine models are generated utilizing preselected MACCS structural keys as descriptors. These models are combined to predict human ether-a-go-go related gene inhibition of our large compound data set with consistent experimental measurements (i.e. only patch clamp measurements on mammalian cell lines). Our combined support vector machine model achieves a prediction accuracy of 85% on this data set and performs better than alternative methods used for comparison. We also find that structural keys selected on the basis of statistical criteria are associated with molecular substructures implicated in human ether-a-go-go related gene channel binding.
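
    The combined scheme can be sketched as clustering the training compounds and training one SVM per cluster, with a test compound scored by the model of its nearest cluster. The random fingerprint bits below stand in for the preselected MACCS keys, and in practice each cluster must contain both classes.

```python
# Minimal sketch: cluster the training set, fit one SVM per cluster, route test compounds.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X = rng.integers(0, 2, size=(1000, 166)).astype(float)   # toy 166-bit structural keys
y = (X[:, :10].sum(axis=1) > 5).astype(int)              # toy blocker / non-blocker labels

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
models = {c: SVC(kernel="rbf").fit(X[kmeans.labels_ == c], y[kmeans.labels_ == c])
          for c in range(5)}

def predict(x):
    cluster = kmeans.predict(x.reshape(1, -1))[0]        # nearest cluster decides which SVM scores x
    return models[cluster].predict(x.reshape(1, -1))[0]

print("prediction for first compound:", predict(X[0]))
```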

  1. Readability of medicinal package leaflets: a systematic review.

    Science.gov (United States)

    Pires, Carla; Vigário, Marina; Cavaco, Afonso

    2015-01-01

    OBJECTIVE To review studies on the readability of package leaflets of medicinal products for human use. METHODS We conducted a systematic literature review between 2008 and 2013 using the keywords "Readability and Package Leaflet" and "Readability and Package Insert" in the academic search engine Biblioteca do Conhecimento Online, comprising different bibliographic resources/databases. The preferred reporting items for systematic reviews and meta-analyses (PRISMA) criteria were applied to prepare the draft of the report. Quantitative and qualitative original studies were included. Opinion or review studies not written in English, Portuguese, Italian, French, or Spanish were excluded. RESULTS We identified 202 studies, of which 180 were excluded and 22 were enrolled [two enrolling healthcare professionals, 10 enrolling other types of participants (including patients), three focused on adverse reactions, and 7 descriptive studies]. The package leaflets presented various readability problems, such as complex and difficult to understand texts, small font size, or few illustrations. The main methods to assess the readability of the package leaflets were usability tests or legibility formulae. Limitations of these methods included the reduced number of participants; the lack of readability formulas specifically validated for specific languages (e.g., Portuguese); and the absence of an assessment of patients' literacy, health knowledge, cognitive skills, levels of satisfaction, and opinions. CONCLUSIONS Overall, the package leaflets presented various readability problems. In this review, some methodological limitations were identified, including the participation of a limited number of patients and healthcare professionals, the absence of prior assessments of participant literacy, humor or sense of satisfaction, and the predominance of studies not based on role-plays about the use of medicines. These limitations should be avoided in future studies and be considered when interpreting the results.

  2. Readability assessment of the American Rhinologic Society patient education materials.

    Science.gov (United States)

    Kasabwala, Khushabu; Misra, Poonam; Hansberry, David R; Agarwal, Nitin; Baredes, Soly; Setzen, Michael; Eloy, Jean Anderson

    2013-04-01

    The extensive amount of medical literature available on the Internet is frequently accessed by patients. To effectively contribute to healthcare decision-making, these online resources should be worded at a level that is readable by any patient seeking information. The American Medical Association and National Institutes of Health recommend the readability of patient information material should be between a 4th to 6th grade level. In this study, we evaluate the readability of online patient education information available from the American Rhinologic Society (ARS) website using 9 different assessment tools that analyze the materials for reading ease and grade level of the target audience. Online patient education material from the ARS was downloaded in February 2012 and assessed for level of readability using the Flesch Reading Ease, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook (SMOG) Grading, Coleman-Liau Index, Gunning-Fog Index, FORCAST formula, Raygor Readability Estimate, the Fry Graph, and the New Dale-Chall Readability Formula. Each article was pasted as plain text into a Microsoft® Word® document and each subsection was analyzed using the software package Readability Studio Professional Edition Version 2012.1. All healthcare education materials assessed were written between a 9th grade and graduate reading level and were considered "difficult" to read by the assessment scales. Online patient education materials on the ARS website are written above the recommended 6th grade level and may require revision to make them easily understood by a broader audience. © 2013 ARS-AAOA, LLC.

  3. Features and machine learning classification of connected speech samples from patients with autopsy proven Alzheimer's disease with and without additional vascular pathology.

    Science.gov (United States)

    Rentoumi, Vassiliki; Raoufian, Ladan; Ahmed, Samrah; de Jager, Celeste A; Garrard, Peter

    2014-01-01

    Mixed vascular and Alzheimer-type dementia and pure Alzheimer's disease are both associated with changes in spoken language. These changes have, however, seldom been subjected to systematic comparison. In the present study, we analyzed language samples obtained during the course of a longitudinal clinical study from patients in whom one or other pathology was verified at post mortem. The aims of the study were twofold: first, to confirm the presence of differences in language produced by members of the two groups using quantitative methods of evaluation; and secondly to ascertain the most informative sources of variation between the groups. We adopted a computational approach to evaluate digitized transcripts of connected speech along a range of language-related dimensions. We then used machine learning text classification to assign the samples to one of the two pathological groups on the basis of these features. The classifiers' accuracies were tested using simple lexical features, syntactic features, and more complex statistical and information theory characteristics. Maximum accuracy was achieved when word occurrences and frequencies alone were used. Features based on syntactic and lexical complexity yielded lower discrimination scores, but all combinations of features showed significantly better performance than a baseline condition in which every transcript was assigned randomly to one of the two classes. The classification results illustrate the word content specific differences in the spoken language of the two groups. In addition, those with mixed pathology were found to exhibit a marked reduction in lexical variation and complexity compared to their pure AD counterparts.
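
    The best-performing configuration reported above (word occurrences and frequencies with a standard classifier) can be sketched as a simple bag-of-words pipeline; the classifier choice and the toy transcripts are assumptions, not the authors' exact setup.

```python
# Minimal sketch: word-count features and cross-validated text classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

transcripts = [
    "well I went to the the shop and um bought some bread",
    "she was talking about the garden and the flowers there",
    "it was um it was the thing the thing you know",
    "we visited my sister and had a long conversation about books",
] * 10                                        # placeholder connected-speech samples
labels = [1, 0, 1, 0] * 10                    # 1 = mixed pathology, 0 = pure AD (toy labels)

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, transcripts, labels, cv=5)
print("mean cross-validated accuracy:", round(scores.mean(), 2))
```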

  4. Application of Multi-task Sparse Group Lasso Feature Extraction and Support Vector Machine Regression in the Stellar Atmospheric Parametrization

    Science.gov (United States)

    Gao, W.; Li, X. R.

    2016-07-01

    Multi-task learning analyzes multiple tasks jointly in order to discover the correlations between them, which can improve the accuracy of the results. Methods of this kind have been widely studied in machine learning, pattern recognition, computer vision, and other related fields. This paper investigates the application of multi-task learning to estimating the effective temperature (T_{eff}), surface gravity (lg g), and chemical abundance ([Fe/H]). Firstly, the spectral features relevant to the three atmospheric physical parameters are extracted using the multi-task Sparse Group Lasso algorithm; then a support vector machine is used to estimate the atmospheric physical parameters. The proposed scheme is evaluated on both Sloan stellar spectra and theoretical spectra computed from Kurucz's New Opacity Distribution Function (NEWODF) model. The mean absolute errors (MAEs) on the Sloan spectra are 0.0064 for lg (T_{eff}/K), 0.1622 for lg (g/(cm\cdot s^{-2})), and 0.1221 dex for [Fe/H]; the MAEs on the synthetic spectra are 0.0006 for lg (T_{eff}/K), 0.0098 for lg (g/(cm\cdot s^{-2})), and 0.0082 dex for [Fe/H]. Experimental results show that the proposed scheme is effective for atmospheric parameter estimation.
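
    As a rough illustration of the two-stage idea described above (joint sparse feature selection followed by per-parameter regression), the sketch below uses scikit-learn's MultiTaskLasso as a stand-in for the paper's Sparse Group Lasso and a plain RBF support vector regressor, all on a synthetic flux matrix rather than real Sloan spectra; the sizes, regularization strength, and SVR settings are illustrative assumptions only.

        import numpy as np
        from sklearn.linear_model import MultiTaskLasso
        from sklearn.svm import SVR
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 200))           # synthetic "spectra": 500 stars x 200 flux bins
        true_w = np.zeros((200, 3))
        true_w[:20] = rng.normal(size=(20, 3))    # only the first 20 bins carry signal
        Y = X @ true_w + 0.05 * rng.normal(size=(500, 3))   # columns: Teff, log g, [Fe/H] (toy scale)

        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

        # Stage 1: joint (multi-task) sparse selection of flux bins shared by all three parameters.
        selector = MultiTaskLasso(alpha=0.05).fit(X_tr, Y_tr)
        keep = np.any(np.abs(selector.coef_) > 1e-6, axis=0)   # coef_ shape: (n_tasks, n_features)

        # Stage 2: one support vector regressor per atmospheric parameter on the selected bins.
        for k, name in enumerate(["Teff", "log g", "[Fe/H]"]):
            svr = SVR(kernel="rbf", C=10.0).fit(X_tr[:, keep], Y_tr[:, k])
            mae = np.mean(np.abs(svr.predict(X_te[:, keep]) - Y_te[:, k]))
            print(f"{name}: kept {keep.sum()} bins, MAE = {mae:.4f}")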

  5. A fast approach for detection of erythemato-squamous diseases based on extreme learning machine with maximum relevance minimum redundancy feature selection

    Science.gov (United States)

    Liu, Tong; Hu, Liang; Ma, Chao; Wang, Zhi-Yan; Chen, Hui-Ling

    2015-04-01

    In this paper, a novel hybrid method, which integrates an effective maximum relevance minimum redundancy (MRMR) filter and a fast extreme learning machine (ELM) classifier, is introduced for diagnosing erythemato-squamous (ES) diseases. In the proposed method, MRMR is employed as a feature selection tool for dimensionality reduction in order to further improve the diagnostic accuracy of the ELM classifier. The impact of the type of activation function, the number of hidden neurons, and the size of the feature subset on the performance of ELM has been investigated in detail. The effectiveness of the proposed method has been rigorously evaluated on the benchmark ES disease dataset from the UCI machine learning repository in terms of classification accuracy. Experimental results demonstrate that the method achieves a best classification accuracy of 98.89% and an average accuracy of 98.55% via a 10-fold cross-validation technique. The proposed method may serve as a promising tool for diagnosing ES diseases.
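
    The pipeline can be approximated with ordinary tools: a mutual-information filter as a simplified stand-in for mRMR (it scores relevance but not redundancy), followed by a minimal extreme learning machine, i.e. a random hidden layer whose output weights are solved by least squares. The sketch below runs on synthetic data because the UCI dermatology dataset is not bundled here; all sizes and parameters are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=366, n_features=34, n_informative=10,
                                   n_classes=6, n_clusters_per_class=1, random_state=0)

        # Filter step: rank features by mutual information (relevance only; mRMR also penalizes redundancy).
        mi = mutual_info_classif(X, y, random_state=0)
        top = np.argsort(mi)[::-1][:15]

        X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, random_state=0)

        # Minimal extreme learning machine: random hidden layer + least-squares output weights.
        n_hidden = 100
        rng = np.random.default_rng(0)
        W = rng.normal(size=(X_tr.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H_tr = np.tanh(X_tr @ W + b)
        T = np.eye(6)[y_tr]                         # one-hot targets
        beta, *_ = np.linalg.lstsq(H_tr, T, rcond=None)

        H_te = np.tanh(X_te @ W + b)
        pred = np.argmax(H_te @ beta, axis=1)
        print("test accuracy:", np.mean(pred == y_te))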

  6. The Readability of Malaysian English Children Books: A Multilevel Analysis

    Directory of Open Access Journals (Sweden)

    Adlina Ismail

    2016-11-01

    Full Text Available These days, there are more English books for children published by local publishers in Malaysia. It is a positive development because the books will be more accessible to children. However, these books have not yet been studied and evaluated in depth. One important factor in assessing reading materials is readability. Readability determines whether a text is easy or difficult to understand, and a balanced mix of both can promote learning and language development. Various researchers have proposed a multilevel framework of discourse that any language assessment of a text should take into account; the proposed levels are word, syntax, textbase, situation model, and genre and rhetorical structure. Traditional readability measures such as the Flesch Reading Ease formula, Gunning Readability Index, Fog Count, and Fry Grade Level are not able to address these multiple levels because they are based on shallow variables. In contrast, Coh-Metrix TERA provides five indices that are correlated with grade level and aligned with the multilevel framework. This study analyzed ten Malaysian English chapter books for children using Coh-Metrix TERA. The results revealed that the Malaysian English children's books were easy at the shallow level, but there was possible difficulty at the textbase and situation model levels because of a lack of cohesion. In conclusion, more attention should be given to deeper levels of text rather than just the word and syntax levels. Keywords: Readability, assessment of reading materials, children books, Coh-Metrix

  7. The readability and suitability of sexual health promotion leaflets.

    Science.gov (United States)

    Corcoran, Nova; Ahmad, Fatuma

    2016-02-01

    To investigate the readability and suitability of sexual health promotion leaflets. Application of SMOG, FRY and SAM tests to assess the readability and suitability of a selection of sexual health leaflets. SMOG and FRY scores illustrate an average reading level of grade 9. SAM scores indicate that 59% of leaflets are superior in design and 41% are average in design. Leaflets generally perform well in the categories of content, literacy demand, typography and layout. They perform poorly in use of graphics, learning stimulation/motivation and cultural appropriateness. Sexual health leaflets have a reading level that is too high. Leaflets perform well on the suitability scores indicating they are reasonably suitable. There are a number of areas where sexual health leaflets could improve their design. Numerous practical techniques are suggested for improving the readability and suitability of sexual health leaflets. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Building Support Vector Machine with Reduced Feature Complexity

    Institute of Scientific and Technical Information of China (English)

    宇缨

    2007-01-01

    Support Vector Machines (SVMs) show better generalization than conventional machine learning methods. However, real data often contain a large number of redundant, noisy, or unreliable features, which strongly degrade SVM performance. It is therefore necessary to reduce feature complexity in order to obtain better SVM results. This paper proposes a feature optimization method within an embedded framework, based on a Genetic Algorithm (GA), to build a modified SVM. Experimental results on the UCI Adult data set show that, compared with the original SVM classifier, the modified SVM uses fewer support vectors and achieves better classification accuracy.

  9. The readability of online breast cancer risk assessment tools.

    Science.gov (United States)

    Cortez, Sarah; Milbrandt, Melissa; Kaphingst, Kimberly; James, Aimee; Colditz, Graham

    2015-11-01

    Numerous breast cancer risk assessment tools that allow users to input personal risk information and obtain a personalized breast cancer risk estimate are available on the Internet. The goal of these tools is to increase screening awareness and identify modifiable health behaviors; however, the utility of this risk information is limited by the readability of the material. We undertook this study to assess the overall readability of breast cancer risk assessment tools and accompanying information, as well as to identify areas of suggested improvement. We searched for breast cancer risk assessment tools, using five search terms, on three search engines. All searches were performed on June 12, 2014. Sites that met inclusion criteria were then assessed for readability using the suitability assessment of materials (SAM) and the SMOG readability formula (July 1, 2014–January 31, 2015). The primary outcomes are the frequency distribution of overall SAM readability category (superior, adequate, or not suitable) and mean SMOG reading grade level. The search returned 42 sites eligible for assessment, only 9 (21.4 %) of which achieved an overall SAM superior rating, and 27 (64.3 %) were deemed adequate. The average SMOG reading grade level was grade 12.1 (SD 1.6, range 9–15). The readability of breast cancer risk assessment tools and the sites that host them is an important barrier to risk communication. This study demonstrates that most breast cancer risk assessment tools are not accessible to individuals with limited health literacy skills. More importantly, this study identifies potential areas of improvement and has the potential to heighten a physician’s awareness of the Internet resources a patient might navigate in their quest for breast cancer risk information.

  10. Manufacturability Driven Interacting Machining Feature Recognition Algorithms for 3D CAD Models

    Institute of Scientific and Technical Information of China (English)

    黄瑞; 张树生; 白晓亮

    2013-01-01

    To realize the effective integration of CAD, CAPP, and CAM systems, we present a manufacturability-driven method for recognizing interacting machining features in 3D CAD models. Firstly, the accessibility cone for each machining face is computed based on heuristic rules. Machining region subgraphs are then constructed by a machining face clustering method that takes manufacturing semantics into account. Finally, using dimension semantic information, interacting machining features are recognized with the machining region subgraphs used as feature hints. The proposed approach has been implemented and tested on hundreds of mechanical parts. Preliminary results show that the method can effectively recognize complex interacting machining features on complex parts, and that its efficiency meets the requirements of engineering applications.

  11. Machine vision: an incremental learning system based on features derived using fast Gabor transforms for the identification of textural objects

    Science.gov (United States)

    Clark, Richard M.; Adjei, Osei; Johal, Harpal

    2001-11-01

    This paper proposes a fast, effective and highly adaptable incremental learning system for identifying textures based on features extracted from Gabor space. The Gabor transform is a useful technique for feature extraction since it exhibits properties similar to those of biological visual systems such as the mammalian visual cortex. Although two-dimensional Gabor filters have been applied successfully to a variety of tasks such as text segmentation, object detection and fingerprint analysis, the work of this paper extends previous work by incorporating incremental learning to facilitate easier training. The proposed system transforms textural images into Gabor space, and a non-linear threshold function is then applied to extract feature vectors that bear signatures of the textural images. The mean and variance of each training group are computed, and a Kohonen network is then used to cluster these features. The centers of these clusters form the basis of an incremental learning paradigm that allows new information to be integrated into the existing knowledge. A number of experiments are conducted for real-time identification or discrimination of textural images.
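
    A minimal version of the feature-extraction step can be written with scikit-image's gabor filter: each texture is passed through a small bank of frequencies and orientations and summarized by the mean and variance of the response magnitudes. This sketch uses toy synthetic textures, assumes scikit-image is available, and omits the paper's fast Gabor transform, non-linear thresholding, and Kohonen clustering stages.

        import numpy as np
        from skimage.filters import gabor

        def gabor_features(image, frequencies=(0.1, 0.2, 0.3),
                           thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
            # Mean and variance of Gabor response magnitudes over a small filter bank.
            feats = []
            for f in frequencies:
                for t in thetas:
                    real, imag = gabor(image, frequency=f, theta=t)
                    mag = np.hypot(real, imag)
                    feats.extend([mag.mean(), mag.var()])
            return np.array(feats)

        # Two toy "textures": an oriented grating and white noise.
        rng = np.random.default_rng(0)
        yy, xx = np.mgrid[0:128, 0:128]
        grating = np.sin(2 * np.pi * 0.15 * xx)
        noise = rng.normal(size=(128, 128))

        for name, img in [("grating", grating), ("noise", noise)]:
            v = gabor_features(img)
            print(name, "feature vector length:", v.size, "first entries:", v[:4].round(3))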

  12. A readability comparison of anti- versus pro-influenza vaccination online messages in Japan

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Okuhara

    2017-06-01

    When health professionals prepare pro-influenza vaccination materials for publication online, we recommend they check for readability using readability assessment tools and improve the text for easy reading if necessary.

  13. Model-Based Comparison of Deep Brain Stimulation Array Functionality with Varying Number of Radial Electrodes and Machine Learning Feature Sets.

    Science.gov (United States)

    Teplitzky, Benjamin A; Zitella, Laura M; Xiao, YiZi; Johnson, Matthew D

    2016-01-01

    Deep brain stimulation (DBS) leads with radially distributed electrodes have potential to improve clinical outcomes through more selective targeting of pathways and networks within the brain. However, increasing the number of electrodes on clinical DBS leads by replacing conventional cylindrical shell electrodes with radially distributed electrodes raises practical design and stimulation programming challenges. We used computational modeling to investigate: (1) how the number of radial electrodes impact the ability to steer, shift, and sculpt a region of neural activation (RoA), and (2) which RoA features are best used in combination with machine learning classifiers to predict programming settings to target a particular area near the lead. Stimulation configurations were modeled using 27 lead designs with one to nine radially distributed electrodes. The computational modeling framework consisted of a three-dimensional finite element tissue conductance model in combination with a multi-compartment biophysical axon model. For each lead design, two-dimensional threshold-dependent RoAs were calculated from the computational modeling results. The models showed more radial electrodes enabled finer resolution RoA steering; however, stimulation amplitude, and therefore spatial extent of the RoA, was limited by charge injection and charge storage capacity constraints due to the small electrode surface area for leads with more than four radially distributed electrodes. RoA shifting resolution was improved by the addition of radial electrodes when using uniform multi-cathode stimulation, but non-uniform multi-cathode stimulation produced equivalent or better resolution shifting without increasing the number of radial electrodes. Robust machine learning classification of 15 monopolar stimulation configurations was achieved using as few as three geometric features describing a RoA. The results of this study indicate that, for a clinical-scale DBS lead, more than four radial

  14. Microtopographic Features of Metal Surfaces Machined via Micro-Ploughing

    Institute of Scientific and Technical Information of China (English)

    王清辉; 张小明; 郑旭; 黄祥; 李静蓉

    2012-01-01

    In this paper, the microtopography of metal surfaces machined via micro-ploughing is analyzed by combining roughness and fractal dimension evaluation methods, and the topographical features of micro-groove surfaces and the effects of machining parameters on these features are investigated. The results show that the distributions of the fractal dimension and roughness of contour profiles at different positions on the micro-ploughed groove surfaces follow specific statistical laws, that there is a relatively strong positive correlation between the average fractal dimension and the average roughness, and that, within a certain range of machining parameters, an increasing groove depth and a decreasing machining feed cause the average fractal dimension and roughness to increase. In addition, based on the experimental data, digital models are constructed with an improved W-M fractal function to describe micro-ploughed groove surfaces with specific topographical features, which supports the active design and simulation of functional surface structures fabricated via micro-ploughing.

  15. SU-D-204-01: A Methodology Based On Machine Learning and Quantum Clustering to Predict Lung SBRT Dosimetric Endpoints From Patient Specific Anatomic Features

    Energy Technology Data Exchange (ETDEWEB)

    Lafata, K; Ren, L; Wu, Q; Kelsey, C; Hong, J; Cai, J; Yin, F [Duke University Medical Center, Durham, NC (United States)]

    2016-06-15

    Purpose: To develop a data-mining methodology based on quantum clustering and machine learning to predict expected dosimetric endpoints for lung SBRT applications based on patient-specific anatomic features. Methods: Ninety-three patients who received lung SBRT at our clinic from 2011–2013 were retrospectively identified. Planning information was acquired for each patient, from which various features were extracted using in-house semi-automatic software. Anatomic features included tumor-to-OAR distances, tumor location, total-lung-volume, GTV and ITV. Dosimetric endpoints were adopted from RTOG-0195 recommendations, and consisted of various OAR-specific partial-volume doses and maximum point-doses. First, PCA and unsupervised quantum clustering were used to explore the feature space to identify potentially strong classifiers. Secondly, a multi-class logistic regression algorithm was developed and trained to predict dose-volume endpoints based on patient-specific anatomic features. Classes were defined by discretizing the dose-volume data, and the feature space was zero-mean normalized. Fitting parameters were determined by minimizing a regularized cost function, and optimization was performed via gradient descent. As a pilot study, the model was tested on two esophageal dosimetric planning endpoints (maximum point-dose, dose-to-5cc), and its generalizability was evaluated with leave-one-out cross-validation. Results: Quantum clustering demonstrated a strong separation of the feature space at 15 Gy across the first and second principal components of the data when the dosimetric endpoints were retrospectively identified. Maximum point dose prediction to the esophagus demonstrated a cross-validation accuracy of 87%, and the maximum dose to 5cc demonstrated a respective value of 79%. The largest optimized weighting factor was placed on GTV-to-esophagus distance (a factor of 10 greater than the second largest weighting factor), indicating an intuitively strong

  16. Comparison of Different Machine Learning Algorithms for Lithological Mapping Using Remote Sensing Data and Morphological Features: A Case Study in Kurdistan Region, NE Iraq

    Science.gov (United States)

    Othman, Arsalan; Gloaguen, Richard

    2015-04-01

    Topographic effects and complex vegetation cover hinder lithology classification in mountainous regions, based not only on field data but also on reflectance remote sensing data. The area of interest, "Bardi-Zard", is located in NE Iraq. It is part of the Zagros orogenic belt, where seven lithological units outcrop, and is known for its chromite deposits. The aim of this study is to compare three machine learning algorithms (MLAs): Maximum Likelihood (ML), Support Vector Machines (SVM), and Random Forest (RF) for a supervised lithology classification task using Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER) satellite data, its derivatives, spatial information (spatial coordinates) and geomorphic data. We emphasize the improvement in remote sensing lithological mapping accuracy that arises from integrating geomorphic features and spatial information (spatial coordinates) into the classification. This study finds that RF outperforms the ML and SVM algorithms in almost all of the sixteen dataset combinations that were tested. The overall accuracy of the best dataset combination with the RF map for all seven classes reaches ~80%, the producer's and user's accuracies are ~73.91% and ~76.09% respectively, and the kappa coefficient is ~0.76. TPI is more effective with the SVM algorithm than with the RF algorithm. This paper demonstrates that adding geomorphic indices such as TPI and spatial information to the dataset increases the lithological classification accuracy.
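
    The core comparison can be reproduced in miniature with scikit-learn, treating each pixel as a feature vector of spectral bands, geomorphic indices and coordinates, and scoring the classifiers with overall accuracy and Cohen's kappa. The data below are a synthetic stand-in, not ASTER imagery, and the hyperparameters are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, cohen_kappa_score

        # Synthetic stand-in: each pixel = spectral bands + geomorphic indices (e.g. TPI) + x/y coordinates.
        X, y = make_classification(n_samples=3000, n_features=14, n_informative=8,
                                   n_classes=7, n_clusters_per_class=1, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        models = {
            "SVM (RBF)": SVC(kernel="rbf", C=10.0, gamma="scale"),
            "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
        }
        for name, model in models.items():
            pred = model.fit(X_tr, y_tr).predict(X_te)
            print(f"{name}: OA={accuracy_score(y_te, pred):.3f}  kappa={cohen_kappa_score(y_te, pred):.3f}")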

  17. The Short-Term Power Load Forecasting Based on Sperm Whale Algorithm and Wavelet Least Square Support Vector Machine with DWT-IR for Feature Selection

    Directory of Open Access Journals (Sweden)

    Jin-peng Liu

    2017-07-01

    Full Text Available Short-term power load forecasting is an important basis for the operation of an integrated energy system, and the accuracy of load forecasting directly affects the economy of system operation. To improve the forecasting accuracy, this paper proposes a load forecasting system based on a wavelet least square support vector machine and the sperm whale algorithm. Firstly, the discrete wavelet transform and an inconsistency rate model (DWT-IR) are used to select the optimal features, which aims to reduce the redundancy of the input vectors. Secondly, the kernel function of the least square support vector machine (LSSVM) is replaced by a wavelet kernel function to improve the nonlinear mapping ability of LSSVM. Lastly, the parameters of W-LSSVM are optimized by the sperm whale algorithm, and the short-term load forecasting method W-LSSVM-SWA is established. Additionally, the example verification results show that the proposed model outperforms other alternative methods and has strong effectiveness and feasibility in short-term power load forecasting.
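
    A heavily simplified sketch of the wavelet-plus-SVM idea follows: a synthetic load series is smoothed with a discrete wavelet transform (PyWavelets), lagged values of the smoothed series form the inputs, and a plain RBF support vector regressor forecasts one step ahead. The inconsistency-rate feature selection, wavelet kernel, and sperm whale optimization of the paper are not reproduced; the series and parameters are invented for illustration.

        import numpy as np
        import pywt
        from sklearn.svm import SVR

        # Synthetic hourly "load": trend + daily cycle + noise (stand-in for real data).
        t = np.arange(24 * 60)
        rng = np.random.default_rng(0)
        load = 100 + 0.01 * t + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

        # Wavelet step: zero the finest detail coefficients to obtain a smoothed series.
        coeffs = pywt.wavedec(load, "db4", level=3)
        coeffs[-1] = np.zeros_like(coeffs[-1])
        smooth = pywt.waverec(coeffs, "db4")[: load.size]

        # Lagged smoothed values predict the next raw load value.
        lags = 24
        X = np.array([smooth[i - lags:i] for i in range(lags, load.size)])
        y = load[lags:]
        split = int(0.8 * len(X))
        svr = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X[:split], y[:split])
        mae = np.mean(np.abs(svr.predict(X[split:]) - y[split:]))
        print("one-step-ahead MAE on the held-out tail:", round(float(mae), 3))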

  18. IDEPI: rapid prediction of HIV-1 antibody epitopes and other phenotypic features from sequence data using a flexible machine learning platform.

    Directory of Open Access Journals (Sweden)

    N Lance Hepler

    2014-09-01

    Full Text Available Since its identification in 1983, HIV-1 has been the focus of a research effort unprecedented in scope and difficulty, whose ultimate goals--a cure and a vaccine--remain elusive. One of the fundamental challenges in accomplishing these goals is the tremendous genetic variability of the virus, with some genes differing at as many as 40% of nucleotide positions among circulating strains. Because of this, the genetic bases of many viral phenotypes, most notably the susceptibility to neutralization by a particular antibody, are difficult to identify computationally. Drawing upon open-source general-purpose machine learning algorithms and libraries, we have developed a software package IDEPI (IDentify EPItopes) for learning genotype-to-phenotype predictive models from sequences with known phenotypes. IDEPI can apply learned models to classify sequences of unknown phenotypes, and also identify specific sequence features which contribute to a particular phenotype. We demonstrate that IDEPI achieves performance similar to or better than that of previously published approaches on four well-studied problems: finding the epitopes of broadly neutralizing antibodies (bNab), determining coreceptor tropism of the virus, identifying compartment-specific genetic signatures of the virus, and deducing drug-resistance associated mutations. The cross-platform Python source code (released under the GPL 3.0 license), documentation, issue tracking, and a pre-configured virtual machine for IDEPI can be found at https://github.com/veg/idepi.

  19. IDEPI: rapid prediction of HIV-1 antibody epitopes and other phenotypic features from sequence data using a flexible machine learning platform.

    Science.gov (United States)

    Hepler, N Lance; Scheffler, Konrad; Weaver, Steven; Murrell, Ben; Richman, Douglas D; Burton, Dennis R; Poignard, Pascal; Smith, Davey M; Kosakovsky Pond, Sergei L

    2014-09-01

    Since its identification in 1983, HIV-1 has been the focus of a research effort unprecedented in scope and difficulty, whose ultimate goals--a cure and a vaccine--remain elusive. One of the fundamental challenges in accomplishing these goals is the tremendous genetic variability of the virus, with some genes differing at as many as 40% of nucleotide positions among circulating strains. Because of this, the genetic bases of many viral phenotypes, most notably the susceptibility to neutralization by a particular antibody, are difficult to identify computationally. Drawing upon open-source general-purpose machine learning algorithms and libraries, we have developed a software package IDEPI (IDentify EPItopes) for learning genotype-to-phenotype predictive models from sequences with known phenotypes. IDEPI can apply learned models to classify sequences of unknown phenotypes, and also identify specific sequence features which contribute to a particular phenotype. We demonstrate that IDEPI achieves performance similar to or better than that of previously published approaches on four well-studied problems: finding the epitopes of broadly neutralizing antibodies (bNab), determining coreceptor tropism of the virus, identifying compartment-specific genetic signatures of the virus, and deducing drug-resistance associated mutations. The cross-platform Python source code (released under the GPL 3.0 license), documentation, issue tracking, and a pre-configured virtual machine for IDEPI can be found at https://github.com/veg/idepi.

  20. The ATOS[TM] Readability Formula for Books and How It Compares to Other Formulas. Report.

    Science.gov (United States)

    School Renaissance Inst., Inc., Madison, WI.

    Readability formulas estimate how difficult text is to read. The resulting "readability level" helps teachers and school librarians match students to appropriate books. Guiding students to appropriate-level books is now easier and more accurate with the ATOS (Advantage-TASA Open Standard) Readability Formula for Books, the new…

  1. Recent Advances on Permanent Magnet Machines

    Institute of Scientific and Technical Information of China (English)

    诸自强

    2012-01-01

    This paper overviews advances in permanent magnet (PM) brushless machines over the last 30 years, with particular reference to new and novel machine topologies. These include the current status and trends for surface-mounted and interior PM machines, electrically and mechanically adjusted variable-flux PM machines including the memory machine, hybrid PM machines which uniquely integrate PM technology into induction machines, switched and synchronous reluctance machines and wound-field machines, Halbach PM machines, dual-rotor PM machines, and magnetically geared PM machines, etc. The paper highlights their features and applications in various market sectors.

  2. readability of comprehension passages in junior high school (jhs)

    African Journals Online (AJOL)

    CHARLES

    by the way in which we teach, or lack of intelligence of the learner but may be the result of a ... that is complex, indirect, uneconomical, and unfamiliar affects readability of a text. In addition, the ...... American Association for Artificial Intelligence.

  3. Tools for Assessing Readability of Statistics Teaching Materials

    Science.gov (United States)

    Lesser, Lawrence; Wagler, Amy

    2016-01-01

    This article provides tools and rationale for instructors in math and science to make their assessment and curriculum materials (more) readable for students. The tools discussed (MSWord, LexTutor, Coh-Metrix TEA) are readily available linguistic analysis applications that are grounded in current linguistic theory, but present output that can…

  4. An Analysis of the Readability of Financial Accounting Textbooks.

    Science.gov (United States)

    Smith, Gerald; And Others

    1981-01-01

    The Flesch formula was used to calculate the readability of 15 financial accounting textbooks. The 15 textbooks represented introductory, intermediate, and advanced levels and also were classified by five different publishers. Two-way analysis of variance and Tukey's post hoc analysis revealed some significant differences. (Author/CT)

  5. Home Pregnancy Test Kits: How Readable Are the Instructions?

    Science.gov (United States)

    Holcomb, Carol Ann

    At the conclusion of their study on home pregnancy test kits, Valinas and Perlman (1982) suggested that the instructions accompanying the kits be revised to make them easier to read. A study was undertaken to determine the readability of the printed instructions accompanying five home pregnancy test kits (Daisy II, Answer, Acu-Test, Predictor, and…

  7. Global and Local Features Based Classification for Bleed-Through Removal

    Science.gov (United States)

    Hu, Xiangyu; Lin, Hui; Li, Shutao; Sun, Bin

    2016-12-01

    The text on one side of a historical document often seeps through and appears on the other side, so bleed-through is a common problem in historical document images. It makes the document images hard to read and the text difficult to recognize. To improve image quality and readability, the bleed-through has to be removed. This paper proposes a bleed-through removal method based on the extraction of global and local features. A Gaussian mixture model is used to obtain the global features of the images, and local features are extracted from the patch around each pixel. An extreme learning machine classifier is then used to classify the scanned images into foreground text and the bleed-through component. Experimental results on real document image datasets show that the proposed method outperforms state-of-the-art bleed-through removal methods and preserves the text strokes well.
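
    The global/local split described above can be sketched on a toy image: a Gaussian mixture over pixel intensities supplies global posterior features, patch statistics supply local features, and a classifier separates foreground text from the rest. A logistic regression stands in for the paper's extreme learning machine, and the image, labels, and patch size are invented purely for illustration.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Toy grayscale page: light background, a dark foreground stroke, a fainter bleed-through stroke.
        img = np.full((64, 64), 0.9) + 0.02 * rng.normal(size=(64, 64))
        img[20:24, 5:60] = 0.1           # foreground text stroke
        img[40:44, 5:60] = 0.6           # bleed-through stroke
        labels = np.zeros((64, 64), dtype=int)
        labels[20:24, 5:60] = 1          # 1 = foreground, 0 = background/bleed-through

        # Global feature: posterior probability of each pixel under a 3-component intensity GMM.
        gmm = GaussianMixture(n_components=3, random_state=0).fit(img.reshape(-1, 1))
        post = gmm.predict_proba(img.reshape(-1, 1))

        # Local feature: mean and std of the 5x5 patch around each pixel (edges padded by reflection).
        pad = np.pad(img, 2, mode="reflect")
        patches = np.lib.stride_tricks.sliding_window_view(pad, (5, 5)).reshape(-1, 25)
        local = np.column_stack([patches.mean(axis=1), patches.std(axis=1)])

        X = np.column_stack([post, local, img.reshape(-1, 1)])
        y = labels.reshape(-1)

        # Logistic regression used here purely as a stand-in for the ELM classifier of the paper.
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        print("training accuracy of foreground vs. bleed-through/background:", round(float(clf.score(X, y)), 3))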

  8. Evaluating the Readability of Radio Frequency Identification for Construction Materials

    Directory of Open Access Journals (Sweden)

    Younghan Jung

    2017-01-01

    Full Text Available Radio Frequency Identification (RFID), which was originally introduced to improve material handling and speed production as part of supply chain management, has become a globally accepted technology that is now applied on many construction sites to facilitate real-time information visibility and traceability. This paper describes a senior undergraduate project for a Construction Management (CM) program that was specifically designed to give the students a greater insight into technical research in the CM area. The students were asked to determine whether it would be possible to utilize an RFID system capable of tracking tagged equipment, personnel and materials across an entire construction site. This project required them to set up an experimental program, execute a series of experiments, analyze the results and summarize them in a report. The readability test was performed using an active Ultra-High Frequency (UHF, 433.92 MHz) RFID system with various construction materials, including metal, concrete, wood, plastic, and aluminum. The readable distances were measured for each of the six scenarios. The distance at which a tag was readable with no obstructions was found to be an average of 133.9m based on three measurements, with a standard deviation of 3.9m. This result confirms the manufacturer’s claimed distance of 137.2m. The RFID tag embedded under 50.8mm of concrete was readable for an average distance of only 12.2m, the shortest readable distance of any of the scenarios tested. At the end of the semester, faculty advisors held an open discussion session to gather feedback and elicit the students’ reflections on their research experiences, revealing that the students’ overall impressions of their undergraduate research had positively affected their postgraduate education plans.

  9. An Efficient Diagnosis System for Parkinson’s Disease Using Kernel-Based Extreme Learning Machine with Subtractive Clustering Features Weighting Approach

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2014-01-01

    Full Text Available A novel hybrid method named SCFW-KELM, which integrates effective subtractive clustering features weighting (SCFW) and a fast kernel-based extreme learning machine (KELM) classifier, has been introduced for the diagnosis of PD. In the proposed method, SCFW is used as a data preprocessing tool, which aims at decreasing the variance in the features of the PD dataset, in order to further improve the diagnostic accuracy of the KELM classifier. The impact of the type of kernel function on the performance of KELM has been investigated in detail. The efficiency and effectiveness of the proposed method have been rigorously evaluated on the PD dataset in terms of classification accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), f-measure, and kappa statistic value. Experimental results have demonstrated that the proposed SCFW-KELM significantly outperforms SVM-based, KNN-based, and ELM-based approaches and other methods in the literature, and achieved the highest classification results reported so far via a 10-fold cross-validation scheme, with a classification accuracy of 99.49%, a sensitivity of 100%, a specificity of 99.39%, an AUC of 99.69%, an f-measure value of 0.9964, and a kappa value of 0.9867. Promisingly, the proposed method may serve as a powerful tool for the diagnosis of PD.

  10. Online Capacity Estimation of Lithium-Ion Batteries Based on Novel Feature Extraction and Adaptive Multi-Kernel Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2015-11-01

    Full Text Available Prognostics is necessary to ensure the reliability and safety of lithium-ion batteries for hybrid electric vehicles or satellites. This process can be achieved by capacity estimation, which is a direct fading indicator for assessing the state of health of a battery. However, the capacity of a lithium-ion battery onboard is difficult to monitor. This paper presents a data-driven approach for online capacity estimation. First, six novel features are extracted from cyclic charge/discharge cycles and used as indirect health indicators. An adaptive multi-kernel relevance vector machine (MKRVM) based on an accelerated particle swarm optimization algorithm is used to determine the optimal parameters of MKRVM and characterize the relationship between the extracted features and battery capacity. The overall estimation process comprises offline and online stages. A supervised learning step in the offline stage is established for model verification to ensure the generalizability of MKRVM for online application. Cross-validation is further conducted to validate the performance of the proposed model. Experiment and comparison results show the effectiveness, accuracy, efficiency, and robustness of the proposed approach for online capacity estimation of lithium-ion batteries.

  11. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines to be modeled? Is a capacity for e...

  12. Moving Beyond Readability Metrics for Health-Related Text Simplification.

    Science.gov (United States)

    Kauchak, David; Leroy, Gondy

    2016-01-01

    Limited health literacy is a barrier to understanding health information. Simplifying text can reduce this barrier and possibly other known disparities in health. Unfortunately, few tools exist to simplify text with demonstrated impact on comprehension. By leveraging modern data sources integrated with natural language processing algorithms, we are developing the first semi-automated text simplification tool. We present two main contributions. First, we introduce our evidence-based development strategy for designing effective text simplification software and summarize initial, promising results. Second, we present a new study examining existing readability formulas, which are the most commonly used tools for text simplification in healthcare. We compare syllable count, the proxy for word difficulty used by most readability formulas, with our new metric 'term familiarity' and find that syllable count measures how difficult words 'appear' to be, but not their actual difficulty. In contrast, term familiarity can be used to measure actual difficulty.

  13. Readability and comprehensibility of the "exercise lite" brochure.

    Science.gov (United States)

    Cardinal, B J; Seidler, T L

    1995-04-01

    Studies suggest that exercise literature tends to be very difficult to read and that the writing is often not matched to the reading ability of the audience for which it was intended. Two studies were conducted to describe the readability and comprehensibility of the recently developed U.S. Centers for Disease Control and American College of Sports Medicine "Exercise Lite" brochure. In Study 1, the brochure's readability was assessed using four different formulas. This study showed that the brochure was written at a level equivalent to that of a scientific journal article. In Study 2, 56 participants (two-thirds of whom were college graduates) were tested to assess whether they could comprehend the brochure's message. Results showed that, without supplemental instruction, the Exercise Lite brochure was incomprehensible for 69.6% (n = 39) of the subjects.

  14. Numerical prediction of milling forces based on machining features

    Institute of Scientific and Technical Information of China (English)

    赵凯; 刘战强

    2014-01-01

    Thin-walled titanium alloy structures are widely used in aero-engines. However, because of the low rigidity of thin-walled parts, deflection induced by the milling force during machining easily causes machining deformation and reduces the finished part quality. To reduce machining deformation and improve machining quality, the milling force must be accurately predicted. A machining-feature-based numerical prediction model for the milling force of Ti-6Al-4V titanium alloy is therefore developed based on the Johnson-Cook constitutive equation, UG geometric modeling and Deform-3D finite element simulation, taking into account the effects of the material's thermo-mechanical dynamic properties and fracture criterion on the milling force. Firstly, a knowledge base of machining features for aero-engine components is created through secondary development of the UG software with the UG/Open toolkit. Then, a finite element model of the Ti-6Al-4V milling process is established in Deform-3D by modeling the material constitutive relationship, chip separation and chip fracture criteria, and is used to predict the milling force. Milling force experiments confirm the feasibility of the prediction model. Finally, the established finite element model is used to study the effect of the workpiece curvature radius on the milling force; the results show that the milling force is larger when milling concave (internal) arc contours and smaller when milling convex (external) arc contours.

  15. Reliability, Readability and Quality of Online Information about Femoroacetabular Impingement

    Directory of Open Access Journals (Sweden)

    Fatih Küçükdurmaz

    2015-07-01

    Conclusion: According to our results, the websites intended to attract patients searching for information regarding femoroacetabular impingement provide a highly accessible, readable information source, but do not appear to apply a level of rigor comparable to that of the scientific literature or healthcare practitioner websites in matters such as citing sources of information, supplying methodology and including a publication date. This indicates that while these resources are easily accessed by patients, there is potential for them to be a source of misinformation.

  16. Online Tonsillectomy Resources: Are Parents Getting Consistent and Readable Recommendations?

    Science.gov (United States)

    Wozney, Lori; Chorney, Jill; Huguet, Anna; Song, Jin Soo; Boss, Emily F; Hong, Paul

    2017-05-01

    Objective Parents frequently refer to information on the Internet to confirm or broaden their understanding of surgical procedures and to research postoperative care practices. Our study evaluated the readability, comprehensiveness, and consistency around online recommendations directed at parents of children undergoing tonsillectomy. Study Design A cross-sectional study design was employed. Setting Thirty English-language Internet websites. Subjects and Methods Three validated measures of readability were applied and content analysis was employed to evaluate the comprehensiveness of information in domains of perioperative education. Frequency effect sizes and percentile ranks were calculated to measure dispersion of recommendations across sites. Results The mean readability level of all sites was above a grade 10 level with fewer than half of the sites (n = 14, 47%) scoring at or below the eight-grade level. Provided information was often incomplete with a noted lack of psychosocial support and skills-training recommendations. Content analysis showed 67 unique recommendations spanning the full perioperative period. Most recommendations had low consensus, being reported in 5 or fewer sites (frequency effect size information easier to read.

  17. Readability assessment of online urology patient education materials.

    Science.gov (United States)

    Colaco, Marc; Svider, Peter F; Agarwal, Nitin; Eloy, Jean Anderson; Jackson, Imani M

    2013-03-01

    The National Institutes of Health, American Medical Association, and United States Department of Health and Human Services recommend that patient education materials be written at a fourth to sixth grade reading level to facilitate comprehension. We examined and compared the readability and difficulty of online patient education materials from the American Urological Association and academic urology departments in the Northeastern United States. We assessed the online patient education materials for difficulty level with 10 commonly used readability assessment tools, including the Flesch Reading Ease Score, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook, Gunning Frequency of Gobbledygook, New Dale-Chall Test, Coleman-Liau index, New Fog Count, Raygor Readability Estimate, FORCAST test and Fry score. Most patient education materials on the websites of these programs were written at or above the eleventh grade reading level. Urological online patient education materials are written above the recommended reading level. They may need to be simplified to facilitate better patient understanding of urological topics. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  18. Modifications to standard forms of contract: the impact on readability

    Directory of Open Access Journals (Sweden)

    Raufdeen Rameezdeen

    2014-06-01

    Full Text Available Lack of clarity in contract documents can lead to disputes between contracting parties. Standard form contracts have evolved due to construction business becoming increasingly complex and the difficulty in drafting bespoke conditions of contract for each project. Numerous advantages have been identified in using standard forms of contract. However, clients often modify some clauses in order to include specific requirements for a project. While the consequences of ill-modifications to standard forms have been researched, no study has been done on the impact of these modifications on the clarity and readability of the document. Using 281 modified clauses from large infrastructure projects implemented in Sri Lanka, this study found that on balance modifications generally make the document more difficult to read; 60% of the sample clauses were more difficult to read compared to 40% becoming easier. More than 50% of the original and modified clauses were still at the ‘very difficult’ level of readability, which requires the equivalent of post-graduate level to understand. The study contends that modifications have not resulted in improved readability. The study highlights the necessity of clear and plain language when modifying contract documents.

  20. The readability of information and consent forms in clinical research in France.

    Directory of Open Access Journals (Sweden)

    Véronique Ménoni

    Full Text Available BACKGROUND: Quantitative tools have been developed to evaluate the readability of written documents and have been used in several studies to evaluate information and consent forms. These studies all showed that such documents had a low level of readability. Our objective is to evaluate the readability of Information and Consent Forms (ICFs) used in clinical research. METHODS AND FINDINGS: Clinical research protocols were collected from four public clinical research centers in France. Readability was evaluated based on three criteria: the presence of an illustration, the length of the text and its Flesch score. Potential effects of protocol characteristics on the length and readability of the ICFs were determined. Medical and statutory parts of the ICF were analyzed separately. The readability of these documents was compared with that of everyday contracts, press articles, literary extracts and political speeches. We included 209 protocols and the corresponding 275 ICFs. The median length was 1304 words. Their Flesch readability scores were low (median: 24), and only about half that of selected press articles. ICFs for industrially sponsored and randomized protocols were the longest and had the highest readability scores. More than half (52%) of the text in ICFs concerned medical information, and this information was statistically (p<0.05) more readable (Flesch: 28) than statutory information (Flesch: 21). CONCLUSION: Regardless of the field of research, the ICFs for the protocols included had poor readability scores. However, a prospective analysis of this test in French should be carried out before it is put into general use.

  1. A new model of flavonoids affinity towards P-glycoprotein: genetic algorithm-support vector machine with features selected by a modified particle swarm optimization algorithm.

    Science.gov (United States)

    Cui, Ying; Chen, Qinggang; Li, Yaxiao; Tang, Ling

    2017-02-01

    Flavonoids exhibit a high affinity for the purified cytosolic NBD (C-terminal nucleotide-binding domain) of P-glycoprotein (P-gp). To explore the affinity of flavonoids for P-gp, quantitative structure-activity relationship (QSAR) models were developed using support vector machines (SVMs). A novel method coupling a modified particle swarm optimization algorithm with a random mutation strategy and a genetic algorithm coupled with SVM was proposed to simultaneously optimize the kernel parameters of the SVM and determine the subset of optimized features for the first time. Using DRAGON descriptors to represent the compounds for QSAR, three subsets (training, prediction and external validation set) derived from the dataset were employed to investigate the QSAR. After excluding one outlier, the correlation coefficient (R²) of the whole training set (training and prediction) was 0.924, and the R² of the external validation set was 0.941. The root-mean-square error (RMSE) of the whole training set was 0.0588; the RMSE of the cross-validation of the external validation set was 0.0443. The mean Q² value of leave-many-out cross-validation was 0.824. Together with the results of the randomization analysis and the applicability domain, these findings indicate that the proposed model has good predictive ability and stability.
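
    Wrapper-style feature selection of this kind can be illustrated with a tiny genetic algorithm over boolean descriptor masks, scored by the cross-validated R^2 of an RBF support vector regressor. The sketch below uses synthetic regression data rather than DRAGON descriptors, a fixed SVR in place of simultaneous kernel-parameter optimization, and small population and generation counts to keep it fast; it does not reproduce the paper's exact PSO/GA hybrid.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X, y = make_regression(n_samples=120, n_features=40, n_informative=8, noise=0.3, random_state=0)

        def fitness(mask):
            # Cross-validated R^2 of an RBF SVR restricted to the selected descriptors.
            if mask.sum() == 0:
                return -np.inf
            return cross_val_score(SVR(kernel="rbf", C=10.0), X[:, mask], y, cv=5, scoring="r2").mean()

        # Tiny genetic algorithm over boolean feature masks: elitism, tournament selection,
        # uniform crossover, and bit-flip mutation.
        pop = rng.random((20, X.shape[1])) < 0.3
        for _ in range(15):
            fit = np.array([fitness(ind) for ind in pop])
            children = [pop[np.argmax(fit)].copy()]          # keep the best individual
            while len(children) < len(pop):
                i1, i2, i3, i4 = rng.choice(len(pop), 4, replace=False)
                p1 = pop[i1] if fit[i1] >= fit[i2] else pop[i2]
                p2 = pop[i3] if fit[i3] >= fit[i4] else pop[i4]
                child = np.where(rng.random(X.shape[1]) < 0.5, p1, p2)
                child ^= rng.random(X.shape[1]) < 0.02
                children.append(child)
            pop = np.array(children)

        best = max(pop, key=fitness)
        print("selected descriptors:", int(best.sum()), " CV R^2:", round(float(fitness(best)), 3))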

  2. Using machine learning to classify image features from canine pelvic radiographs: evaluation of partial least squares discriminant analysis and artificial neural network models.

    Science.gov (United States)

    McEvoy, Fintan J; Amigo, José M

    2013-01-01

    As the number of images per study increases in the field of veterinary radiology, there is a growing need for computer-assisted diagnosis techniques. The purpose of this study was to evaluate two machine learning statistical models for automatically identifying image regions that contain the canine hip joint on ventrodorsal pelvis radiographs. A training set of images (120 of the hip and 80 from other regions) was used to train a linear partial least squares discriminant analysis (PLS-DA) model and a nonlinear artificial neural network (ANN) model to classify hip images. Performance of the models was assessed using a separate test image set (36 containing hips and 20 from other areas). Partial least squares discriminant analysis model achieved a classification error, sensitivity, and specificity of 6.7%, 100%, and 89%, respectively. The corresponding values for the ANN model were 8.9%, 86%, and 100%. Findings indicated that statistical classification of veterinary images is feasible and has the potential for grouping and classifying images or image features, especially when a large number of well-classified images are available for model training. © 2012 Veterinary Radiology & Ultrasound.
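
    For a binary image-region task like this, PLS-DA can be emulated by regressing the 0/1 label with PLSRegression and thresholding the output, while a small MLP serves as the nonlinear ANN; sensitivity and specificity then come from the confusion matrix. The sketch below runs on synthetic feature vectors, not radiograph pixels, and the component and architecture choices are arbitrary assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.neural_network import MLPClassifier
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix

        # Synthetic stand-in for image feature vectors (hip region vs. other region).
        X, y = make_classification(n_samples=256, n_features=50, n_informative=10, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        def report(name, y_true, y_pred):
            tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
            err = (fp + fn) / len(y_true)
            print(f"{name}: error={err:.3f}  sensitivity={tp/(tp+fn):.3f}  specificity={tn/(tn+fp):.3f}")

        # PLS-DA: regress the binary label on the features, threshold the continuous output at 0.5.
        pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
        report("PLS-DA", y_te, (pls.predict(X_te).ravel() >= 0.5).astype(int))

        # Nonlinear ANN baseline.
        ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
        report("ANN", y_te, ann.predict(X_te))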

  3. Decomposition of forging dies for machining planning

    CERN Document Server

    Tapie, Laurent; Anselmetti, Bernard

    2009-01-01

    This paper provides a method to decompose forging dies for machining planning in the case of high-speed machining finishing operations. The method relies on the machining feature approach model presented in the following paper. The two main decomposition phases, called Basic Machining Features Extraction and Process Planning Generation, are presented. These two decomposition phases integrate machining resource models and expert machining knowledge to provide an outstanding process planning.

  4. Decomposition of forging dies for machining planning

    OpenAIRE

    Tapie, Laurent; Mawussi, Kwamiwi; Anselmetti, Bernard

    2009-01-01

    International audience; This paper provides a method to decompose forging dies for machining planning in the case of high-speed machining finishing operations. The method relies on the machining feature approach model presented in the following paper. The two main decomposition phases, called Basic Machining Features Extraction and Process Planning Generation, are presented. These two decomposition phases integrate machining resource models and expert machining knowledge to provide an outs...

  5. Machining-Feature Based 3-Axis Automatic NC Programming for High-Speed Milling

    Institute of Scientific and Technical Information of China (English)

    孙全平; 汪通悦; 廖文和; 何宁

    2007-01-01

    运用面向对象技术,描述了待加工件的制造特征.利用模糊最大隶属原则,实现了加工区域几何制造特征的识别.以高速加工工艺数据库和范例库为支撑,采用IFTHEN规则和模糊匹配方法,提取出了适合高速铣削加工的工艺信息.提出了以切削时间短、加工成本低、表面质量高为目标的工艺方案寻优模型,该模型有助于形成成功的加工范例.依据已有加工范例和提取的工艺信息,实现了3轴高速铣削加工的自动编程.%Machining-features of the workplace are described by using of the object-oriented (O-O) technology. Geometrical machining-features are recognized in the given cut region by using the maximum membership priciple about the fuzzy set. Depending on the IF-THEN rule and the fuzzy matching method, the rough information of the machining-process for high-speed milling (HSM) is extracted based on the database of machining-process for HSM. The optimization model of machining-process scheme is established to obtain shorter cut time, lower cost or higher surface quality. It is helpful to form successful cases for HSM. NC programming for HSM is realized according to optimized machining-process data from HSM cases selected by the optimization model and the extracted information of machining-process.

  6. A readability analysis of elementary-level science textbooks

    Science.gov (United States)

    Trainer, Robyn

    Given both the unprecedented attention to the importance of providing children with the best possible science textbooks and the overwhelming evidence that students in the United States are severely lacking the most basic science knowledge, the decline in the number of students pursuing science degrees is alarming. In spite of all the efforts being made, a disparity still exists between (1) the wealth of science information available, (2) the apparent ease of access to scientific information, and (3) the lack of scientific academic progress being made in classrooms across the United States. A literature review was conducted covering textbook analysis, textbook readability levels, and findings from recently published books about textbook readability. The majority of the literature reflected an urgent need for science textbooks to be revised. Based on the information gathered during the literature review, the study examined the readability levels of elementary-level science textbooks published by six textbook publishers. Results from the study revealed that, when used properly, readability formulas provide an objective look at textbooks. After applying these formulas to the selected elementary-level science textbooks, it became clear that very few changes were implemented between the most recent previous editions and the current editions. The textbooks remain too difficult for the students using them. The findings from this study will help science textbook publishers and textbook writers see that changes need to be made in the way their textbooks are written. In order to maintain a competitive edge in the global marketplace, more students need to pursue science degrees, and in order to do so they need a certain degree of confidence and level of interest in the subject matter. For

  7. Pattern recognition & machine learning

    CERN Document Server

    Anzai, Y

    1992-01-01

    This is the first text to provide a unified and self-contained introduction to visual pattern recognition and machine learning. It is useful as a general introduction to artificial intelligence and knowledge engineering, and no previous knowledge of pattern recognition or machine learning is necessary. It covers the basics of various pattern recognition and machine learning methods. Translated from Japanese, the book also features chapter exercises, keywords, and summaries.

  8. Methods, apparatuses, and computer-readable media for projectional morphological analysis of N-dimensional signals

    Science.gov (United States)

    Glazoff, Michael V.; Gering, Kevin L.; Garnier, John E.; Rashkeev, Sergey N.; Pyt'ev, Yuri Petrovich

    2016-05-17

    Embodiments discussed herein in the form of methods, systems, and computer-readable media deal with the application of advanced "projectional" morphological algorithms for solving a broad range of problems. In a method of performing projectional morphological analysis, an N-dimensional input signal is supplied. At least one N-dimensional form indicative of at least one feature in the N-dimensional input signal is identified. The N-dimensional input signal is filtered relative to the at least one N-dimensional form and an N-dimensional output signal is generated indicating results of the filtering at least as differences in the N-dimensional input signal relative to the at least one N-dimensional form.
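    The filtering the claims describe descends from Pyt'ev-style morphological analysis (Pyt'ev is among the inventors), in which a signal is projected onto the "shape" of a form and the residual exposes features that do not fit that form. Below is a simplified 1-D sketch of one classical version of the projection, replacing the signal by its mean over each level set of the form; it is an assumption-laden illustration, not the patented algorithm.

      import numpy as np

      def morphological_projection(signal, form):
          """Project `signal` onto the shape of `form`: replace it by its mean
          over each level set (region of constant value) of the form."""
          projected = np.empty_like(signal, dtype=float)
          for level in np.unique(form):
              mask = (form == level)
              projected[mask] = signal[mask].mean()
          return projected

      # Toy 1-D example: a piecewise-constant form and a signal with one outlier.
      form = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
      signal = np.array([1.0, 1.2, 0.9, 5.1, 4.8, 9.0, 2.0, 2.1, 1.9])
      projected = morphological_projection(signal, form)
      residual = signal - projected    # large where the signal departs from the form
      print(projected.round(2))
      print(residual.round(2))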

  9. When Machines Design Machines!

    DEFF Research Database (Denmark)

    2011-01-01

    Until recently we were the sole designers, alone in the driving seat making all the decisions. But, we have created a world of complexity way beyond human ability to understand, control, and govern. Machines now do more trades than humans on stock markets, they control our power, water, gas and food supplies, manage our elevators, microclimates, automobiles and transport systems, and manufacture almost everything. It should come as no surprise that machines are now designing machines. The chips that power our computers and mobile phones, the robots and commercial processing plants on which we depend, all are now largely designed by machines. So what of us - will we be totally usurped, or are we looking at a new symbiosis with human and artificial intelligences combined to realise the best outcomes possible. In most respects we have no choice! Human abilities alone cannot solve any of the major...

  10. Machining strategy choice: performance VIEWER

    CERN Document Server

    Tapie, Laurent; Anselmetti, Bernard

    2009-01-01

    Nowadays high speed machining (HSM) machine tools combine productivity and part quality, so mould and die makers have invested in HSM. Die and mould features are increasingly complex in shape, so it is difficult to choose the best machining strategy according to part shape. Geometrical analysis of machining features is not sufficient to make an optimal choice. Some research shows that security, technical, functional and economic constraints must be taken into account to elaborate a machining strategy. During complex shape machining, production system limits induce feed rate decreases, and thus a loss of productivity, in some areas of the part. In this paper we propose to analyse these areas by estimating tool path quality. First we perform experiments on an HSM machine tool to determine the impact of the trajectory on machine tool behaviour. Then, we extract critical criteria and establish models of performance loss. Our work is focused on machine tool kinematical performance and numerical controller unit calculation capacity. We implement...

  11. Readability of the written study information in pediatric research in France.

    Directory of Open Access Journals (Sweden)

    Véronique Ménoni

    Full Text Available BACKGROUND: The aim was to evaluate the readability of research information leaflets (RIL) for minors asked to participate in biomedical research studies and to assess the factors influencing this readability. METHODS AND FINDINGS: All the pediatric protocols from three French pediatric clinical research units were included (N = 104). Three criteria were used to evaluate readability: length of the text, Flesch's readability score and presence of illustrations. We compared the readability of RIL to texts specifically written for children (school textbooks, school exams or extracts from literary works). We assessed the effect of protocol characteristics on readability. The RIL had a median length of 608 words [350 words, 25th percentile; 1005 words, 75th percentile], corresponding to two pages. The readability of the RIL, with a median Flesch score of 40 [30; 47], was much poorer than that of pediatric reference texts, with a Flesch score of 67 [60; 73]. A small proportion of RIL (13/91; 14%) were illustrated. The RIL were longer (p<0.001), more readable (p<0.001) and more likely to be illustrated (p<0.009) for industrial than for institutional sponsors. CONCLUSION: Researchers should routinely compute the reading ease of study information sheets and make greater efforts to improve the readability of written documents for potential participants.

  12. Debugging the virtual machine

    Energy Technology Data Exchange (ETDEWEB)

    Miller, P.; Pizzi, R.

    1994-09-02

    A computer program is really nothing more than a virtual machine built to perform a task. The program's source code expresses abstract constructs using low level language features. When a virtual machine breaks, it can be very difficult to debug because typical debuggers provide only low level machine implementation information to the software engineer. We believe that the debugging task can be simplified by introducing aspects of the abstract design into the source code. We introduce OODIE, an object-oriented language extension that allows programmers to specify a virtual debugging environment which includes the design and abstract data types of the virtual machine.

  13. Electrical machines & drives

    CERN Document Server

    Hammond, P

    1985-01-01

    Containing approximately 200 problems (100 worked), the text covers a wide range of topics concerning electrical machines, placing particular emphasis upon electrical-machine drive applications. The theory is concisely reviewed and focuses on features common to all machine types. The problems are arranged in order of increasing levels of complexity and discussions of the solutions are included where appropriate to illustrate the engineering implications. This second edition includes an important new chapter on mathematical and computer simulation of machine systems and revised discussions o

  14. Printed health information materials: evaluation of readability and suitability.

    Science.gov (United States)

    Shieh, Carol; Hosei, Barbara

    2008-01-01

    This study examined readability and suitability of printed health information materials collected from multiple sources. In phase I, nursing students used the Simple Measure of Gobbledygook (SMOG; McLaughlin, 1969) to assess the readability of 21 materials collected from the community. In phases II and III, nursing students and registered nurses used SMOG and the Suitability Assessment of Materials (SAM; Doak, Doak, & Root, 1996) to evaluate 15 prenatal materials from a Healthy Start program. SMOG assigns a reading grade level based on the number of words with 3 or more syllables. SAM has 22 items in 6 evaluation areas: content, literacy demand, graphics, layout and typography, learning stimulation and motivation, and cultural appropriateness. Major findings included that 53% to 86% of the printed materials had a reading level at or higher than 9th grade; materials lacked summaries, interaction, and modeled behaviors; and registered nurses rated more materials as not suitable, and fewer as superior, than the students did. Improving printed materials to have lower reading levels and better suitability qualities is indicated.
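    SMOG's grading rule is compact enough to show directly. The sketch below applies the standard SMOG formula; the syllable counter is a rough vowel-group heuristic (the validated procedure counts polysyllabic words by hand over a 30-sentence sample), so treat it as illustrative.

      import math
      import re

      def count_syllables(word):
          """Rough vowel-group heuristic standing in for manual counts."""
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def smog_grade(text):
          """SMOG grade = 3.1291 + 1.043 * sqrt(polysyllables * 30 / sentences)."""
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          words = re.findall(r"[A-Za-z']+", text)
          polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
          return 3.1291 + 1.0430 * math.sqrt(polysyllables * 30 / len(sentences))

      print(round(smog_grade("Take the medication twice daily. Contact your "
                             "physician immediately if dizziness occurs."), 1))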

  15. Readability of Air Force Publications: A Criterion Referenced Evaluation. Final Report.

    Science.gov (United States)

    Hooke, Lydia R.; And Others

    In a study of the readability of Air Force regulations, the writer-estimated reading grade level (RGL) for each regulation was rechecked by using the FORCAST readability formula. In four of the seven cases, the regulation writers underestimated the RGL of their regulation by more than one grade level. None of the writers produced a document with…

  16. Language, Reading, and Readability Formulas: Implications for Developing and Adapting Tests

    Science.gov (United States)

    Oakland, Thomas; Lane, Holly B.

    2004-01-01

    Issues pertaining to language and reading while developing and adapting tests are examined. Strengths and limitations associated with the use of readability formulas are discussed. Their use should be confined to paragraphs and longer passages, not items. Readability methods that consider both quantitative and qualitative variables and are…

  17. Readability of Questionnaires Assessing Listening Difficulties Associated with (Central) Auditory Processing Disorders

    Science.gov (United States)

    Atcherson, Samuel R.; Richburg, Cynthia M.; Zraick, Richard I.; George, Cassandra M.

    2013-01-01

    Purpose: Eight English-language, student- or parent proxy-administered questionnaires for (central) auditory processing disorders, or (C)APD, were analyzed for readability. For student questionnaires, readability levels were checked against the approximate reading grade levels by intended administration age per the questionnaires' developers. For…

  18. Using Readability Formulas to Establish the Grade Level Difficulty of Software.

    Science.gov (United States)

    Clariana, Roy B.

    1993-01-01

    Compared the grade level difficulty obtained from seven readability formulas to student reading levels from two national standardized tests to determine which formulas best determined the readability of computer-based text. Found that the Flesch-Kincaid, FOG, and ARI formulas provided the best estimate of reading grade level of computer-based text…

  19. Readability and Its Effects on Reading Rate, Subjective Judgments of Comprehensibility and Comprehension.

    Science.gov (United States)

    Coke, Esther U.

    Prose passages read aloud or silently were rated for pronounceability and comprehensibility. The relationships of text-derived readability indices to reading rate, comprehensibility ratings and comprehension test scores were explored. Reading rate in syllables per minute was unrelated to readability. The high correlation between rate in words per…

  20. 快速数控编程系统的制造特征构建研究%Research on Reconstruction of Machining Feature for Rapid NC Programming System

    Institute of Scientific and Technical Information of China (English)

    李铁钢; 付春林; 于天彪; 王宛山

    2012-01-01

    Aiming at the recognition and reconstruction of features from different 3D CAD models, this paper presents the implementation process of feature recognition. Firstly, an attributed adjacency graph (AAG) is created from the geometry and topology in the STEP file by lexical analysis. Based on an analysis of the machining features of structural parts in terms of NC programming cutting logic, machining features are then recognized and reconstructed. The machining features are provided as XML, which can be used by the CAM system. A case study validates the proposed method, which improves the efficiency and quality of NC programming for structural parts.%针对快速数控编程系统中不同CAD模型的特征识别和构建,论述了基于STEP文件的特征识别技术及其实现过程:首先利用词法分析器解析STEP中性文件,按照STEP的文件拓扑结构生成属性邻接图(AAG);在总结典型结构件拓扑特征基础上,结合数控编程切削逻辑,以切削级为基础进行特征识别和特征构建;最后以XML形式构造制造特征森林以供CAM系统使用.实例证明了文中方法的有效性,提高了结构件数控编程的效率和质量.
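    The attributed adjacency graph named in the abstract records part faces as nodes and shared edges as arcs attributed with their convexity; machining features then show up as characteristic subgraphs. A minimal sketch with networkx follows; the faces, edges and the concave-subgraph heuristic are hypothetical stand-ins for data parsed from a real STEP file.

      import networkx as nx

      # Faces become nodes; shared edges become arcs attributed with convexity.
      # concave=True marks faces meeting at an inner angle, the usual cue that
      # material was removed there (a machining feature).
      aag = nx.Graph()
      aag.add_edge("F1", "F2", concave=False)   # outer faces of the stock
      aag.add_edge("F2", "F3", concave=True)    # pocket wall meets pocket floor
      aag.add_edge("F3", "F4", concave=True)
      aag.add_edge("F4", "F2", concave=True)

      # Candidate features: connected subgraphs made only of concave edges.
      concave = [(u, v) for u, v, d in aag.edges(data=True) if d["concave"]]
      print(list(nx.connected_components(nx.Graph(concave))))
      # -> [{'F2', 'F3', 'F4'}] : one pocket-like feature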

  1. The structure and features of a large-scale three-roller bending machine from America%一种美制大型三辊卷板机的结构与特点

    Institute of Scientific and Technical Information of China (English)

    赵学; 王吉龙; 顾富生

    2011-01-01

    The text introduces the bending principle, technical performance and mechanical structure features of a large American-made three-roller bending machine, and analyses its bending process and drive mode as well as how its structure and performance differ from those of other domestic three-roller bending machines. The machine offers high bending precision, high load capacity, good vibration resistance and easy operation and maintenance, making it ideal equipment for bending thick and heavy plate.%重点介绍了一种美制大型三辊卷板机的卷板原理、技术性能及机械结构特点.分析了该卷板机的卷制工艺、传动方式以及与其他一些国产三辊卷板机的结构和性能上的不同.该卷板机具有卷板精度高、承载能力强、抗振性能好、操作维护方便等优点,是卷制厚重板材的理想设备.

  2. MONTO: A Machine-Readable Ontology for Teaching Word Problems in Mathematics

    Science.gov (United States)

    Lalingkar, Aparna; Ramnathan, Chandrashekar; Ramani, Srinivasan

    2015-01-01

    The Indian National Curriculum Framework has as one of its objectives the development of mathematical thinking and problem solving ability. However, recent studies conducted in Indian metros have expressed concern about students' mathematics learning. Except in some private coaching academies, regular classroom teaching does not include problem…

  3. Toolsets for Airborne Data (TAD): Improving Machine Readability for ICARTT Data Files

    Science.gov (United States)

    Early, Amanda Benson; Beach, Aubrey; Northup, Emily; Wang, Dali; Kusterer, John; Quam, Brandi; Chen, Gao

    2015-01-01

    The Atmospheric Science Data Center (ASDC) at NASA Langley Research Center is responsible for the ingest, archive, and distribution of NASA Earth Science data in the areas of radiation budget, clouds, aerosols, and tropospheric chemistry. The ASDC specializes in atmospheric data that is important to understanding the causes and processes of global climate change and the consequences of human activities on the climate. The ASDC currently supports more than 44 projects and has over 1,700 archived data sets, which increase daily. ASDC customers include scientists, researchers, federal, state, and local governments, academia, industry, and application users, the remote sensing community, and the general public.

  5. 78 FR 28111 - Making Open and Machine Readable the New Default for Government Information

    Science.gov (United States)

    2013-05-14

    ... each stage of the information lifecycle to identify information that should not be released. These... continued to develop a vast range of useful new products and businesses using these public information...

  6. Sunspot latitudes during the Maunder Minimum: a machine-readable catalogue from previous studies

    OpenAIRE

    J. M. Vaquero; Nogales, J. M.; Sánchez-Bajo, F.

    2015-01-01

    The Maunder Minimum (1645-1715 approximately) was a period of very low solar activity and a strong hemispheric asymmetry, with most sunspots in the southern hemisphere. In this paper, two data sets of sunspot latitudes during the Maunder minimum have been recovered for the international scientific community. The first data set is constituted by latitudes of sunspots appearing in the catalogue published by Gustav Spörer nearly 130 years ago. The second data set is based on the sunspot lat...

  7. Available Methods in Farsi-English Cross Language Information Retrieval Using Machine-readable, Bilingual Glossary

    Directory of Open Access Journals (Sweden)

    Hamid Alizadeh

    2009-12-01

    Full Text Available In this paper the impact of Natural Language Processing (NLP) on translating search statements was determined by testing research hypotheses. The NLP techniques employed for search statement processing included text parsing, linguistic forms identification, stopword removal, morphological analysis, and tokenization. Examination of the hypotheses indicated that translating only the first equivalent term selected, versus selecting all equivalent terms, contributed to increased retrieval efficiency; that although morphological analysis of the terms not translated by the glossary would increase the retrieval precision cutoff, no significant difference was established in the absence of such analysis; and that sentence translation, as opposed to term-by-term translation, increased the efficiency of Farsi-English cross-language retrieval. Other findings are also presented.

  8. Formalizing the Safety of Java, the Java Virtual Machine and Java Card

    NARCIS (Netherlands)

    Hartel, Pieter H.; Moreau, Luc

    2001-01-01

    We review the existing literature on Java safety, emphasizing formal approaches, and the impact of Java safety on small footprint devices such as smart cards. The conclusion is that while a lot of good work has been done, a more concerted effort is needed to build a coherent set of machine readable

  9. 三维工艺设计中基于加工特征的工序模型生成技术%Generation of Intermediate Process Model Based on Machining Features in 3D Process Planning

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    In the 3D process design of machined parts, machining features need to be recognized according to the part's shape, dimensional tolerances and so on; process design and process route planning are then carried out by machining feature, and a solid model of every operation, from the blank to the final part, is generated. Combining the geometric topology of the part with its manufacturing process information, a feature definition and classification system oriented to machining is established. On the basis of feature recognition and 3D process design technology, the concept of the intermediate process model and a model-recovery method for generating it automatically are put forward. According to the process method and process parameters of each machining feature, intermediate process models are generated automatically along the part's machining route. Verification with examples shows that the method offers a reference for achieving 3D model-based machining process design.%  在机械加工零件三维工艺设计中,需要根据零件的形状、尺寸公差等识别加工特征,按照加工特征进行工艺设计和工艺路线规划,生成从零件毛坯到最终零件的各个工序的实体模型。结合零件的几何拓扑结构和制造工艺信息,建立一套面向机械加工的特征定义和分类体系,在特征识别和三维工艺设计技术的基础上,提出中间工序模型的概念和中间工序模型自动生成的模型恢复方法,根据各个加工特征的工艺方法和工艺参数,按照零件的加工路线自动生成中间工序模型。实例验证结果证明,该方法可为实现基于三维模型的机加工艺设计提供参考。

  10. Blind and readable image watermarking using wavelet tree quantization

    Institute of Scientific and Technical Information of China (English)

    HU Yuping; YU Shengsheng; ZHOU JingLi; SHI Lei

    2004-01-01

    A blind and readable image watermarking scheme using wavelet tree quantization is proposed. In order to increase the algorithm robustness and ensure the watermark integrity, error correction coding techniques are used to encode the embedded watermark. In the watermark embedding process, the wavelet coefficients of the host image are grouped into wavelet trees and each watermark bit is embedded by using two trees. The trees are so quantized that they exhibit a large enough statistical difference, which will later be used for watermark extraction. The experimental results show that the proposed algorithm is effective and robust to common image processing operations and some geometric operations such as JPEG compression, JPEG2000 compression, filtering, Gaussian noise attack, and row-column removal. It is demonstrated that the proposed technique is practical.
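    The embedding idea, quantizing two groups ("trees") of wavelet coefficients per watermark bit until their statistics differ by a detectable margin, can be sketched in a few lines. The grouping, quantization step and detection rule below are simplified assumptions and not the authors' exact scheme; real code would first run a wavelet decomposition of the host image.

      import numpy as np

      Q = 8.0   # quantization strength (assumed)

      def embed_bit(tree_a, tree_b, bit):
          """Shift two coefficient groups so that sign(mean_a - mean_b) encodes the bit."""
          a, b = tree_a.copy(), tree_b.copy()
          target = Q if bit == 1 else -Q
          shift = (target - (a.mean() - b.mean())) / 2
          return a + shift, b - shift

      def extract_bit(tree_a, tree_b):
          return 1 if tree_a.mean() - tree_b.mean() > 0 else 0

      rng = np.random.default_rng(0)
      trees = rng.normal(0, 5, size=(2, 16))        # stand-ins for two wavelet trees
      wa, wb = embed_bit(trees[0], trees[1], bit=1)
      print(extract_bit(wa, wb))                               # -> 1
      print(extract_bit(wa + rng.normal(0, 1, 16), wb))        # still 1 under mild noise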

  11. Development of Correlations for Windage Power Losses Modeling in an Axial Flux Permanent Magnet Synchronous Machine with Geometrical Features of the Magnets

    Directory of Open Access Journals (Sweden)

    Alireza Rasekh

    2016-11-01

    Full Text Available In this paper, a set of correlations for the windage power losses in a 4 kW axial flux permanent magnet synchronous machine (AFPMSM is presented. In order to have an efficient machine, it is necessary to optimize the total electromagnetic and mechanical losses. Therefore, fast equations are needed to estimate the windage power losses of the machine. The geometry consists of an open rotor–stator with sixteen magnets at the periphery of the rotor with an annular opening in the entire disk. Air can flow in a channel being formed between the magnets and in a small gap region between the magnets and the stator surface. To construct the correlations, computational fluid dynamics (CFD simulations through the frozen rotor (FR method are performed at the practical ranges of the geometrical parameters, namely the gap size distance, the rotational speed of the rotor, the magnet thickness and the magnet angle. Thereafter, two categories of formulations are defined to make the windage losses dimensionless based on whether the losses are mainly due to the viscous forces or the pressure forces. At the end, the correlations can be achieved via curve fittings from the numerical data. The results reveal that the pressure forces are responsible for the windage losses for the side surfaces in the air-channel, whereas for the surfaces facing the stator surface in the gap, the viscous forces mainly contribute to the windage losses. Additionally, the results of the parametric study demonstrate that the overall windage losses in the machine escalate with an increase in either the rotational Reynolds number or the magnet thickness ratio. By contrast, the windage losses decrease once the magnet angle ratio enlarges. Moreover, it can be concluded that the proposed correlations are very useful tools in the design and optimizations of this type of electrical machine.
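    The paper's workflow, running CFD over the parameter ranges, non-dimensionalizing the losses and then curve-fitting a correlation, can be illustrated with a generic power-law fit. The functional form, parameter values and synthetic samples below are assumptions, not the published correlations.

      import numpy as np

      # Synthetic stand-ins for CFD samples: rotational Reynolds number, gap ratio
      # and a dimensionless windage-loss coefficient (values are illustrative only).
      Re  = np.array([1e5, 2e5, 4e5, 1e5, 2e5, 4e5])
      gap = np.array([0.01, 0.01, 0.01, 0.02, 0.02, 0.02])
      Cm  = 0.02 * Re**-0.2 * gap**0.1 * (1 + np.random.default_rng(1).normal(0, 0.01, 6))

      # Assume a power-law correlation Cm = c * Re**a * gap**b; in log space this
      # becomes an ordinary linear least-squares problem.
      A = np.column_stack([np.ones_like(Re), np.log(Re), np.log(gap)])
      coef, *_ = np.linalg.lstsq(A, np.log(Cm), rcond=None)
      c, a, b = np.exp(coef[0]), coef[1], coef[2]
      print(round(c, 3), round(a, 3), round(b, 3))   # close to the 0.02, -0.2, 0.1 used above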

  12. Introduction to machine learning.

    Science.gov (United States)

    Baştanlar, Yalin; Ozuysal, Mustafa

    2014-01-01

    The machine learning field, which can be briefly defined as enabling computers to make successful predictions using past experiences, has exhibited impressive development recently with the help of the rapid increase in the storage capacity and processing power of computers. Together with many other disciplines, machine learning methods have been widely employed in bioinformatics. The difficulties and cost of biological analyses have led to the development of sophisticated machine learning approaches for this application area. In this chapter, we first review the fundamental concepts of machine learning such as feature assessment, unsupervised versus supervised learning and types of classification. Then, we point out the main issues of designing machine learning experiments and their performance evaluation. Finally, we introduce some supervised learning methods.
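    The chapter's themes, supervised learning on labelled data, feature scaling and honest performance evaluation, map onto a few lines of scikit-learn. A generic sketch; the dataset and model choices are illustrative, not taken from the chapter.

      from sklearn.datasets import load_breast_cancer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      X, y = load_breast_cancer(return_X_y=True)       # labelled data -> supervised learning
      model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      scores = cross_val_score(model, X, y, cv=5)      # evaluate by cross-validation rather
      print(scores.mean().round(3), scores.std().round(3))   # than on the training data itself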

  13. Readability evaluation of Internet-based patient education materials related to the anesthesiology field.

    Science.gov (United States)

    De Oliveira, Gildasio S; Jung, Michael; Mccaffery, Kirsten J; McCarthy, Robert J; Wolf, Michael S

    2015-08-01

    The main objective of the current investigation was to assess the readability of Internet-based patient education materials related to the field of anesthesiology. We hypothesized that the majority of patient education materials would not be written according to the currently recommended readability grade level. Online patient education materials describing procedures, risks, and management of anesthesia-related topics were identified with the search engine Google (available at www.google.com) using the terms anesthesia, anesthesiology, anesthesia risks, and anesthesia care. Cross-sectional evaluation. None. Assessments of content readability were performed using validated instruments (Flesch-Kincaid Grade Formulae, the Gunning Frequency of Gobbledygook, the New Dale-Chall Test, the Fry graph, and the Flesch Reading Ease score). Ninety-six Web sites containing Internet patient education materials (IPEMs) were evaluated. The median (interquartile range) readability grade level for all evaluated IPEMs was 13.5 (12.0-14.6). All the evaluated documents were classified at a greater readability level than the current recommended readability grade, P materials related to the field of anesthesiology are currently written far above the recommended readability grade level. High complexity of written education materials likely limits access of information to millions of American patients. Redesign of online content of Web sites that provide patient education material regarding anesthesia could be an important step in improving access to information for patients with poor health literacy. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A readability comparison of anti- versus pro-influenza vaccination online messages in Japan.

    Science.gov (United States)

    Okuhara, Tsuyoshi; Ishikawa, Hirono; Okada, Masahumi; Kato, Mio; Kiuchi, Takahiro

    2017-06-01

    Historically, anti-vaccination sentiment has existed in many populations. Mass media plays a large role in disseminating and sensationalizing vaccine objections, especially via the medium of the Internet. Based on studies of processing fluency, we assumed anti-influenza vaccination online messages to be more readable and more fluently processed than pro-influenza vaccination online messages, which may consequently sway the opinions of some audiences. The aim of this study was to compare the readability of anti- and pro-influenza vaccination online messages in Japan using a measure of readability. Web searches were conducted at the end of August 2016 using two major Japanese search engines (Google.jp and Yahoo!.jp). The included websites were classified as "anti", "pro", or "both" depending on the claims, and "health professional" or "non-health professional" depending on the writers' expertise. Readability was determined using a validated measure of Japanese readability (the Japanese sentence difficulty discrimination system). Readability of "health professional" websites was compared with that of "non-health professional" websites, and readability of "anti" websites was compared with that of "pro" websites, using the t-test. From a total of 145 websites, the online messages written by non-health professionals were significantly easier to read than those written by health professionals (p = 0.002, Cohen's d = 0.54). Anti-influenza vaccination messages were significantly easier to read than pro-influenza vaccination messages (p vaccination materials for publication online, we recommend they check for readability using readability assessment tools and improve the text for easy reading if necessary.

  15. Readability of patient education materials on the American Orthopaedic Society for Sports Medicine website.

    Science.gov (United States)

    Eltorai, Adam E M; Han, Alex; Truntzer, Jeremy; Daniels, Alan H

    2014-11-01

    The American Medical Association (AMA) and National Institutes of Health (NIH) recommend that patient education materials be written at no greater than a sixth-grade reading level. However, online resources may be too complex for some patients to understand, and poor health literacy predicts inferior health-related quality of life outcomes. This study evaluated whether the American Orthopaedic Society for Sports Medicine (AOSSM) website's patient education materials meet recommended readability guidelines for medical information. We hypothesized that the readability of these online materials would have a Flesch-Kincaid formula grade above the sixth grade. All 65 patient education entries of the AOSSM website were analyzed for grade level readability using the Flesch-Kincaid formula, a widely used and validated tool to evaluate the text reading level. The average (standard deviation) readability of all 65 articles was grade level 10.03 (1.44); 64 articles had a readability score above the sixth-grade level, which is the maximum level recommended by the AMA and NIH. Mean readability of the articles exceeded this level by 4.03 grade levels (95% CI, 3.7-4.4; P education materials exceeds the readability level recommended by the AMA and NIH, and is above the average reading level of the majority of US adults. This online information may be of limited utility to most patients due to a lack of comprehension. Our study provides a clear example of the need to improve the readability of specific education material in order to maximize the efficacy of multimedia sources.
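    For reference, the Flesch-Kincaid grade used in this and several of the surrounding studies is a simple formula over word, sentence and syllable counts. A sketch with a rough syllable heuristic (published counts are more careful), shown only to make the metric concrete.

      import re

      def count_syllables(word):
          """Rough vowel-group heuristic; published counts are more careful."""
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def flesch_kincaid_grade(text):
          """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          words = re.findall(r"[A-Za-z']+", text)
          syllables = sum(count_syllables(w) for w in words)
          return (0.39 * len(words) / len(sentences)
                  + 11.8 * syllables / len(words) - 15.59)

      sample = ("A partial tear of the anterior cruciate ligament may require "
                "reconstruction followed by supervised rehabilitation.")
      print(round(flesch_kincaid_grade(sample), 1))   # about 19: far above sixth grade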

  16. A level set methodology for predicting the effect of mask wear on surface evolution of features in abrasive jet micro-machining

    Science.gov (United States)

    Burzynski, T.; Papini, M.

    2012-07-01

    A previous implementation of narrow-band level set methodology developed by the authors was extended to allow for the modelling of mask erosive wear in abrasive jet micro-machining (AJM). The model permits the prediction of the surface evolution of both the mask and the target simultaneously, by representing them as a hybrid and continuous mask-target surface. The model also accounts for the change in abrasive mass flux incident to both the target surface and, for the first time, the eroding mask edge, that is brought about by the presence of the mask edge itself. The predictions of the channel surface and eroded mask profiles were compared with measurements on channels machined in both glass and poly-methyl-methacrylate (PMMA) targets at both normal and oblique incidence, using tempered steel and elastomeric masks. A much better agreement between the predicted and measured profiles was found when mask wear was taken into account. Mask wear generally resulted in wider and deeper glass target profiles and wider PMMA target profiles, respectively, when compared to cases where no mask wear was present. This work has important implications for the AJM of complex MEMS and microfluidic devices that require longer machining times.
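    In its simplest form the evolution being modelled is the level-set equation phi_t + F*|grad phi| = 0, with F the local erosion rate, solved with an upwind scheme. Below is a bare-bones 2-D sketch without narrow banding, mask wear or the paper's flux model; the grid, speeds and geometry are illustrative.

      import numpy as np

      def evolve(phi, F, h, steps):
          """First-order upwind update of phi_t + F*|grad phi| = 0 for F >= 0."""
          dt = 0.5 * h / F.max()                         # CFL-limited time step
          for _ in range(steps):
              p = np.pad(phi, 1, mode="edge")            # replicate-edge boundaries
              dxm = (phi - p[:-2, 1:-1]) / h             # backward / forward differences
              dxp = (p[2:, 1:-1] - phi) / h
              dym = (phi - p[1:-1, :-2]) / h
              dyp = (p[1:-1, 2:] - phi) / h
              grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                             np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
              phi = phi - dt * F * grad
          return phi

      n, h = 64, 1.0
      depth = np.arange(n) * h
      phi = np.tile(depth - 5.0, (n, 1)).T       # zero level set: flat surface at depth 5
      F = np.zeros((n, n))
      F[:, 20:44] = 1.0                          # erosion only under the mask opening
      phi = evolve(phi, F, h, steps=40)
      print((phi[:, 32] < 0).sum())              # about 25: the front advanced ~20 cells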

  17. Machine Translation

    Institute of Scientific and Technical Information of China (English)

    张严心

    2015-01-01

    As an ancillary translation tool, machine translation has long received increasing attention and study from many researchers and scholars. Knowing what machine translation is and analysing its benefits and problems is significant for translators who want to make good use of it, and helpful for developing and improving machine translation systems in the future.

  18. 基于特征加工元的复杂箱体类零件工艺路线优化%Process Route Optimization of Complex Housing-type Parts Based on Feature Machining Element

    Institute of Scientific and Technical Information of China (English)

    徐立云; 史楠; 段建国; 李爱平

    2013-01-01

    针对工艺设计过程中工艺路线的优化问题,通过分析复杂箱体类零件特征,并将其细分为加工元,在考虑优化过程中存在的问题和相关工艺约束的基础上,将工艺路线的优化转化为加工元的优先排序.以机床、夹具和刀具变换次数最少建立目标优化模型,利用改进的遗传算法进行求解,避免了遗传算法"早熟"的缺陷.以某型号缸体为研究对象验证该改进算法的有效性,结果表明该算法具有很好的收敛性.%To address the problem of process route optimization in process design, the features of complex housing-type parts are analysed and subdivided into machining elements; taking the problems arising during optimization and the related process constraints into account, process route optimization is thereby transformed into the sequencing of these machining elements. An optimization model is built with the objective of minimizing the number of changes of machine tools, fixtures and cutting tools, and it is solved with an improved genetic algorithm that avoids the premature convergence of the standard genetic algorithm. Finally, a cylinder block case study verifies the effectiveness of the improved algorithm, and the results show that it has good convergence.
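    The optimization target described above, a machining-element sequence that minimizes machine, fixture and tool changes subject to precedence constraints, is easy to make concrete. The sketch below shows the objective and constraint handling with a deliberately degenerate search (population of one, mutation only); the element data are invented and the paper's improved genetic algorithm is not reproduced.

      import random

      # Each machining element: (machine, fixture, tool); data are illustrative.
      ELEMENTS = {
          "E1": ("M1", "F1", "T1"), "E2": ("M1", "F1", "T2"),
          "E3": ("M2", "F2", "T2"), "E4": ("M2", "F2", "T3"),
      }
      PRECEDENCE = [("E1", "E3")]   # E1 must precede E3 (e.g. roughing before finishing)

      def changes(route):
          """Objective: total machine + fixture + tool changes along the route."""
          return sum(a != b for prev, cur in zip(route, route[1:])
                     for a, b in zip(ELEMENTS[prev], ELEMENTS[cur]))

      def feasible(route):
          return all(route.index(a) < route.index(b) for a, b in PRECEDENCE)

      def mutate(route):
          i, j = random.sample(range(len(route)), 2)
          child = route[:]
          child[i], child[j] = child[j], child[i]
          return child

      random.seed(0)
      best = ["E2", "E1", "E3", "E4"]            # any feasible starting sequence
      for _ in range(200):
          cand = mutate(best)
          if feasible(cand) and changes(cand) < changes(best):
              best = cand
      print(best, changes(best))                 # -> ['E1', 'E2', 'E3', 'E4'] 4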

  19. Computer-aided Selection System for Cutting Tools and Parameters Based on Machining Features%面向加工特征的刀具和切削参数计算机辅助选择系统

    Institute of Scientific and Technical Information of China (English)

    郝传海; 刘战强; 任小平; 万熠

    2012-01-01

    Cutting tool manufacturers face increasing demands to provide comprehensive advice on selecting appropriate tools and cutting parameters for a wide variety of part materials and machining features. Selecting the appropriate cutting tools and machining parameters is also the central element of process planning. However, attention has mainly been paid to the part material alone, which causes mismatches between workpieces and tools. This study describes the development of a computer-aided selection system for cutting tools and cutting parameters based on machining features (FTCPS), designed to cover different component shapes including turning, milling, drilling and boring operation features. A simple icon-based interface is used to input the machined surface features, and a relational database combined with a data-driven method and rule-based decision logic is used to select cutting tool geometry and machining parameters for a range of machining operations. The system also uses mathematical models to calculate processing conditions (machining time per pass, cutting power, maximum surface roughness, etc.), and process planning is completed on this basis. The selection of turning tools and turning parameters is taken as an example to show how the system is realized. FTCPS will help designers and manufacturing planners select an optimal set of cutting tools and cutting conditions.%切削刀具制造商面临围绕大量工件材料和加工特征为客户提供合理刀具和切削参数的现状,切削工艺规划的核心步骤也是刀具和切削参数的确定.确定刀具和切削参数一般多从零件材料角度出发,可能导致工件与刀具不匹配.文中提出面向加工特征的刀具和切削参数计算机辅助选择系统的开发.系统包括车削特征、铣削特征、钻削和镗削加工特征,系统利用特征
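    Rule-based selection of tool grade and cutting parameters, followed by a simple machining-time calculation, can be pictured as a lookup keyed by operation and material plus the standard turning relations n = 1000*vc/(pi*D) and t = L/(f*n). The rule table and values below are illustrative, not FTCPS data.

      import math

      # Toy rule base keyed by (operation, work material); values are illustrative.
      RULES = {
          ("turning", "steel"):     {"grade": "P25", "vc": 200.0, "f": 0.25},  # m/min, mm/rev
          ("turning", "aluminium"): {"grade": "N10", "vc": 500.0, "f": 0.30},
      }

      def select(operation, material, diameter_mm, length_mm):
          """Pick grade and parameters from the rule base, then estimate spindle
          speed n = 1000*vc/(pi*D) and single-pass time t = L/(f*n)."""
          rec = RULES[(operation, material)]
          n = 1000.0 * rec["vc"] / (math.pi * diameter_mm)   # rev/min
          t = length_mm / (rec["f"] * n)                     # min
          return {**rec, "n_rpm": round(n), "time_min": round(t, 2)}

      print(select("turning", "steel", diameter_mm=50.0, length_mm=120.0))
      # -> grade P25, about 1273 rpm, about 0.38 min per pass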

  20. Sustainable machining

    CERN Document Server

    2017-01-01

    This book provides an overview of current sustainable machining. Its chapters cover the concept in economic, social and environmental dimensions. It provides the reader with proper ways to handle several pollutants produced during the machining process. The book is useful at both undergraduate and postgraduate levels and is of interest to all those working with manufacturing and machining technology.

  1. Automatic generation of NC program for cylinder-type parts based on machining features%基于加工特征的缸体类零件数控程序自动生成

    Institute of Scientific and Technical Information of China (English)

    杨鹏宇; 曹忠亮; 张楠; 张旭堂

    2011-01-01

    根据企业的产品特点和现有的制造资源,提出了一种新的缸体类的加工特征模型.通过该模型的使用,在设计数控工艺时就能根据加工特征的种类,以模块化的方式进行数控程序的生成,很好地解决了传统工艺和数控工序的集成问题.同时以加工特征为单元,采用参数化技术实现数控程序的派生式生成,在三维环境下进行动态仿真,提高了数控程序设计效率和质量,从而实现了数控程序的模块化设计,提高了数控工艺设计的柔性和自适应能力.该方法在企业的工艺自动化系统中得到了验征.%Based on the product characteristics and existing manufacturing resources of an enterprise, a new machining feature model for cylinder-type parts is proposed. Using this model, the NC program is generated in a modular way according to the type of machining feature during process design, which solves the integration problem between traditional process planning and NC operations. Taking the machining feature as the unit, parametric programming is applied to derive the NC program, and dynamic simulation is performed in a three-dimensional environment. This improves the efficiency and quality of NC program design, realizes modular design of NC programs, and increases the flexibility and adaptability of NC process design. The approach has been validated in the enterprise's process automation system.
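    Feature-driven, modular NC programming of the kind described here can be pictured as parameterized code templates selected by feature type. A toy sketch emitting a simplified drilling cycle follows; the template, parameters and G-codes are illustrative, not the enterprise system's output.

      def drill_cycle(x, y, depth, retract=5.0, feed=100):
          """Parametric template for a drilling feature (simplified G-code)."""
          return "\n".join([
              f"G0 X{x:.3f} Y{y:.3f} Z{retract:.3f}",
              f"G81 Z{-depth:.3f} R{retract:.3f} F{feed}",
              "G80",
          ])

      TEMPLATES = {"drill": drill_cycle}

      # A cylinder-block-style feature list drives program generation module by module.
      features = [
          {"type": "drill", "x": 10.0, "y": 20.0, "depth": 12.0},
          {"type": "drill", "x": 30.0, "y": 20.0, "depth": 12.0},
      ]
      program = "\n".join(
          TEMPLATES[f["type"]](**{k: v for k, v in f.items() if k != "type"})
          for f in features)
      print(program)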

  2. Feature based man-hour forecasting model for aircraft structure parts NC machining%基于特征的飞机结构件数控加工工时预测模型

    Institute of Scientific and Technical Information of China (English)

    刘长青; 李迎光; 王伟; 林勇

    2011-01-01

    针对目前数控加工工时预测方法不能兼顾精度和效率的问题,通过分析飞机结构件的结构特点和工艺特点,基于加工特征属性提炼工时的影响因素,提出了一种基于特征的两级结构工时预测模型。首先依据加工特征属性的数据类型把加工特征分为枚举型和数值型,然后以枚举型特征属性作为分类器的输入构建模型第一级结构,数值型特征属性作为反向传播神经网络的输入构建模型第二级结构。基于该模型开发的系统已经在某大型数控企业得到了良好的应用,效率高且误差在10%以内。%Existing prediction methods for Numerical Control (NC) machining man-hours cannot take precision and efficiency into account simultaneously. By analyzing the structural and process characteristics of aircraft structural parts, a feature-based two-level man-hour forecasting model was proposed, with the man-hour influencing factors extracted from machining feature attributes. Firstly, the machining feature attributes were classified into enumerative and numerical types. Then, the enumerative attributes were used as the input of a classifier to establish the first level of the model, and the numerical attributes were used as the input of a back-propagation neural network to set up the second level. A system developed on this model has been applied in a large NC enterprise with high efficiency, and the prediction error is within 10%.
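    The two-level structure, routing on the enumerative (categorical) attributes and then regressing man-hours from the numerical attributes with a back-propagation network, can be sketched with scikit-learn. The attributes, data and network size below are synthetic placeholders.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      # Level 1: the enumerative attribute (feature type) routes to a per-class model.
      # Level 2: a small BP network maps numerical attributes to machining hours.
      feature_type = rng.choice(["pocket", "rib", "hole"], size=300)
      numeric = rng.uniform(0, 1, size=(300, 3))          # e.g. volume, depth, tolerance grade
      hours = numeric @ np.array([2.0, 1.0, 0.5]) + (feature_type == "pocket") * 1.5

      models = {}
      for cls in np.unique(feature_type):
          mask = feature_type == cls
          models[cls] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                     random_state=0).fit(numeric[mask], hours[mask])

      def predict(ftype, attrs):
          return float(models[ftype].predict([attrs])[0])

      print(round(predict("pocket", [0.5, 0.5, 0.5]), 2))   # close to the 3.25 implied above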

  3. 印尼 ASAHAN No.1水电站清污机设计特点%Design Features of Trashrack Cleaning Machine, ASAHAN No.1 Hydropower Station, Indonesia

    Institute of Scientific and Technical Information of China (English)

    刘大宏; 张继雄; 杨鹏隆

    2012-01-01

    By introducing the design features and operating experience of the trashrack cleaning machine of the ASAHAN No.1 Hydropower Station, Indonesia, this paper shows that the arrangement and type selection of the trashrack cleaning machine at the power intake are very important for the safe operation and economic benefit of the station. A proper arrangement and reasonable design of the trashrack cleaning machine not only make operation and management convenient but also create excellent economic benefits.%  通过对印尼Asahan No.1水电站清污机设计特点及运行概况介绍,指出电站取水口清污设备的布置及选型对电站的安全运行和经济效益十分重要,清污设备布置恰当、设计合理不仅给电站的运行管理带来方便,更能创造出良好的经济效益.

  4. Readability Levels of the Science Textbooks Most Used in Secondary Schools.

    Science.gov (United States)

    Chiang-Soong, Betty; Yager, Robert E.

    1993-01-01

    Evaluated and compared 12 science textbooks with respect to their readability levels and agreement with the intended reader level. Four of the books were determined to be unsatisfactory for their intended grade levels. (20 references) (MDH)

  5. Novel Switched Flux Permanent Magnet Machine Topologies

    Institute of Scientific and Technical Information of China (English)

    诸自强

    2012-01-01

    This paper overviews various switched flux permanent magnet machines and their design and performance features, with particular emphasis on machine topologies with reduced magnet usage or without using magnets, as well as those with variable flux capability.

  6. Assessing Implicit Knowledge in BIM Models with Machine Learning

    DEFF Research Database (Denmark)

    Krijnen, Thomas; Tamke, Martin

    2015-01-01

    The promise, which comes along with Building Information Models, is that they are information rich, machine readable and represent the insights of multiple building disciplines within single or linked models. However, this knowledge has to be stated explicitly in order to be understood. Trained architects and engineers are able to deduce non-explicitly stated information, which is often the core of the transported architectural information. This paper investigates how machine learning approaches allow a computational system to deduce implicit knowledge from a set of BIM models.

  8. Readability assessment of patient education materials on major otolaryngology association websites.

    Science.gov (United States)

    Eloy, Jean Anderson; Li, Shawn; Kasabwala, Khushabu; Agarwal, Nitin; Hansberry, David R; Baredes, Soly; Setzen, Michael

    2012-11-01

    Various otolaryngology associations provide Internet-based patient education material (IPEM) to the general public. However, this information may be written above the fourth- to sixth-grade reading level recommended by the American Medical Association (AMA) and National Institutes of Health (NIH). The purpose of this study was to assess the readability of otolaryngology-related IPEMs on various otolaryngology association websites and to determine whether they are above the recommended reading level for patient education materials. Analysis of patient education materials from 9 major otolaryngology association websites. The readability of 262 otolaryngology-related IPEMs was assessed with 8 numerical and 2 graphical readability tools. Averages were evaluated against national recommendations and between each source using analysis of variance (ANOVA) with post hoc Tukey's honestly significant difference (HSD) analysis. Mean readability scores for each otolaryngology association website were compared. Mean website readability scores using Flesch Reading Ease test, Flesch-Kincaid Grade Level, Coleman-Liau Index, SMOG grading, Gunning Fog Index, New Dale-Chall Readability Formula, FORCAST Formula, New Fog Count Test, Raygor Readability Estimate, and the Fry Readability Graph ranged from 20.0 to 57.8, 9.7 to 17.1, 10.7 to 15.9, 11.6 to 18.2, 10.9 to 15.0, 8.6 to 16.0, 10.4 to 12.1, 8.5 to 11.8, 10.5 to 17.0, and 10.0 to 17.0, respectively. ANOVA results indicate a significant difference (P otolaryngology association websites exceed the recommended fourth- to sixth-grade reading level.

  9. [Global analysis of the readability of the informed consent forms used in public hospitals of Spain].

    Science.gov (United States)

    Mariscal-Crespo, M I; Coronado-Vázquez, M V; Ramirez-Durán, M V

    To analyse the readability of informed consent forms (ICF) used in public hospitals throughout Spain, with the aim of checking their function of providing comprehensible information to people who are making any health decision, no matter where they are in Spain. A descriptive study was performed on a total of 11,339 ICF received from all over Spanish territory, of which 1,617 ICF were collected from 4 health portal web pages and the rest (9,722) were received through email and/or telephone contact from March 2012 to February 2013. The readability level was studied using the Inflesz tool. A total of 372 ICF were selected and analysed using simple random sampling. The Inflesz scale and the Flesch-Szigriszt index were used to analyse the readability. The readability results showed that 62.4% of the ICF were rated as "a little difficult", 23.4% as "normal", and 13.4% as "very difficult". The highest readability means using the Flesch index were scored in Andalusia, with a mean of 56.99 (95% CI; 55.42-58.57), and Valencia, with a mean of 51.93 (95% CI; 48.4-55.52). The lowest readability means were in Galicia, with a mean of 40.77 (95% CI; 9.83-71.71), and Melilla, with a mean of 41.82 (95% CI; 35.5-48.14). The readability level of Spanish informed consent forms must be improved, because their scores on readability tools could not be classified as normal. Furthermore, there was very wide variability among Spanish ICF, which shows a lack of equity in information access among Spanish citizens. Copyright © 2017 SECA. Publicado por Elsevier España, S.L.U. All rights reserved.

  10. Readability Comparison of Pro- and Anti-Cancer Screening Online Messages in Japan

    Science.gov (United States)

    Okuhara, Tsuyoshi; Ishikawa, Hirono; Okada, Masahumi; Kato, Mio; Kiuchi, Takahiro

    2016-12-01

    Background: Cancer screening rates are lower in Japan than those in western countries. Health professionals publish pro-cancer screening messages on the internet to encourage audiences to undergo cancer screening. However, the information provided is often difficult for lay persons to read. Further, anti-cancer screening activists warn against cancer screening with messages on the Internet. We aimed to assess and compare the readability of pro- and anti-cancer screening online messages in Japan using a measure of readability. Methods: We conducted web searches at the beginning of September 2016 using two major Japanese search engines (Google.jp and Yahoo!.jp). The included websites were classified as "anti", "pro", or "neutral" depending on the claims, and "health professional" or "non-health professional" depending on the writers. Readability was determined using a validated measure of Japanese readability. Statistical analysis was conducted using two-way ANOVA. Results: Of the 159 websites analyzed, anti-cancer screening online messages were generally easier to read than pro-cancer screening online messages. Messages written by health professionals were more difficult to read than those written by non-health professionals. The claim × writer interaction was not significant. Conclusion: When health professionals prepare pro-cancer screening materials for publication online, we recommend they check for readability using readability assessment tools and improve the text for easy comprehension when necessary.

  11. Readability of informed consent documents (1987-2007) for clinical trials: a linguistic analysis.

    Science.gov (United States)

    Sand, K; Eik-Nes, N L; Loge, J H

    2012-10-01

    We investigated the readability of informed consent documents linguistically and compared old and new ICDs. Twenty ICDs (ten from 1987-1992 and ten from 2006-2007) were included. The Evaluative Linguistic Framework (ELF) was used to analyze the texts. The ELF evaluates the following items: main themes, order of themes, rhetorical functions, the relationship between reader and writer, metadiscourse, headings, expert terminology, and visual aspects. An ICD is considered readable if it achieves the goal of inviting the reader to participate and explaining the implication of participation. The new ICDs were more readable than the old ones, as they were more oriented towards research, contained instructions about how to consent, and provided clear contact information. Aspects that reduced the readability of the new ICDs were the large number of topics, details, and actors presented. The readability of the old ICDs was enhanced by fewer topics, a clear presentation of the involved actors, and brevity. However, their readability was reduced by the inclusion of a vast amount of information about the reader's diagnosis and treatment.

  12. Readability and suitability assessment of patient education materials in rheumatic diseases.

    Science.gov (United States)

    Rhee, Rennie L; Von Feldt, Joan M; Schumacher, H Ralph; Merkel, Peter A

    2013-10-01

    Web-based patient education materials and printed pamphlets are frequently used by providers to inform patients about their rheumatic disease. Little attention has been given to the readability and appropriateness of patient materials. The objective of this study was to examine the readability and suitability of commonly used patient education materials for osteoarthritis (OA), rheumatoid arthritis, systemic lupus erythematosus, and vasculitis. Five or 6 popular patient resources for each disease were chosen for evaluation. Readability was measured using the Flesch-Kincaid reading grade level and suitability was determined by the Suitability Assessment of Materials (SAM), a score that considers characteristics such as content, graphics, layout/topography, and cultural appropriateness. Three different reviewers rated the SAM score and means were used in the analysis. Twenty-three resources written on the 4 diseases were evaluated. The education material for all 4 diseases studied had readability above the eighth-grade level and readability did not differ among the diseases. Only 5 of the 23 resources received superior suitability scores, and 3 of these 5 resources were written for OA. All 4 diseases received adequate suitability scores, with OA having the highest mean suitability score. Most patient education materials for rheumatic diseases are written at readability levels above the recommended sixth-grade reading level and have only adequate suitability. Developing more appropriate educational resources for patients with rheumatic diseases may improve patient comprehension. Copyright © 2013 by the American College of Rheumatology.

  13. EFFECT OF ENERGY EFFICIENT LIGHT SOURCES ON READABILITY OF STUDENTS – AN EXPERIMENTAL APPROACH

    Directory of Open Access Journals (Sweden)

    SATHYA P.

    2016-01-01

    Full Text Available The objective of this paper is to investigate the effect of light sources on the readability of students using psychophysical methods. Light sources such as the compact fluorescent lamp (CFL) and light emitting diode lamp (LED) of the same power rating were used in this research work because of their high lighting efficiency and uniformity of illuminance compared to those of the incandescent lamp (IL) and fluorescent lamp (FL). A group of prospective students with normal vision, and with vision abnormalities such as myopia, hypermetropia and astigmatism, were involved in the test process. Three types of tests (Snellen visual acuity, a color contrast test and a readability test) were conducted on the student participants under different lighting conditions. Test results showed that the visibility and color contrast sensitivity of the students were high under LED illumination. The quantitative measurement of readability under different circumstances showed that the lightness difference of text under different color combinations and font sizes affected readability. The computed average results confirmed that luminance and color contrast were improved under LED illumination, which also gave a high readability measure in the experiments. Both sets of psychophysical test results showed that LED lighting was the best lighting system for color distinction and readability.

  14. Readability of patient-reported outcome questionnaires for use with persons who stutter.

    Science.gov (United States)

    Zraick, Richard I; Atcherson, Samuel R; Brown, Angela M

    2012-03-01

    The purpose of this study was to examine the readability of several published patient-reported outcome (PRO) questionnaires for use with persons who stutter, and to compare the readability results to existing data about average reading levels for English-speaking adults living in the United States. Published PRO questionnaires were identified that are traditionally completed by persons who stutter in a self-administered format. Reading grade levels were analyzed using the Flesch Reading Ease, FOG, and FORCAST formulas as computed by a readability calculations software package. Descriptive statistics were computed across the questionnaires. The results of this study demonstrate that many of the PRO questionnaires exceeded the fifth- to sixth-grade reading levels recommended by health literacy experts. The clinician should consider the average reading level needed to understand a particular PRO questionnaire when administering it to a patient or their proxy. Likewise, developers of PRO questionnaires should consider the reading level of respondents and include information about this when reporting psychometric data. The reader will get an overview of the literature on patient-reported outcome (PRO) questionnaires and their use with persons who stutter and will be able to: (1) define readability, (2) describe how reading levels are determined for a given PRO questionnaire, (3) list the strengths and limitations of readability assessment in the evaluation of persons who stutter and (4) analyze the role of readability assessment in future PRO questionnaire development. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Multi-color 2D datamatrix codes with poorly readable colors

    Directory of Open Access Journals (Sweden)

    Arjana Žitnik

    2010-09-01

    Full Text Available Datamatrix code is a type of 2D code that can encode much more data in the same or a smaller area than linear barcodes. This makes 2D codes usable for marking even very small items. 2D codes can be decoded by the readers used in retail but also with mobile phones equipped with a camera and appropriate software. 2D codes can be depicted in different materials or printed on different printing substrates. The application area of the codes is broad, from magazines and newspapers to posters and packaging. Successful reading of 2D codes is possible if the code is printed with appropriate contrast between the printing ink and substrate, like black ink printed on white matt paper. Problems can occur if the code is printed in colors. The readability of 2D Datamatrix codes printed in cyan, magenta, yellow and black was studied. Yellow proved to be poorly readable. In addition, bi-colored and multi-colored 2D Datamatrix codes were studied. When four colors are used in the creation of the 2D Datamatrix code, poorly readable elements, yellow codewords, may cause reading failure. 2D Datamatrix codes are capable of ensuring good readability even if they contain a defined number of poorly readable codewords, due to the Reed-Solomon error-correction system. The aim of the study was to investigate the effect of using yellow printed, poorly readable codewords in the multi-colored 2D Datamatrix code on the code readability.

  16. Punctuation as readability and textuality factor in technical discourse

    Directory of Open Access Journals (Sweden)

    Carmen Sancho Guinda

    2002-04-01

    Full Text Available This paper studies the effect of punctuation on the reading comprehension of technical discourse and its role as a factor of textuality. Starting out from the notions of textuality and punctuation functions formulated by different linguistic approaches, an analysis has been made to quantify the decoding skills and punctuating competence of 60 Aeronautical Engineering students, as well as to determine the nature and effects of their punctuation errors. The survey has been focused on the full stop, the comma and the hyphen due to their highly conflicting uses as regards the identification of immediate sentence constituents and semantic relationships. The results obtained suggest that most students have a poor knowledge of punctuation rules and little awareness of punctuation as a textual element affecting readability. Errors are in the main related to comma use and produced by transference, either of Spanish punctuating habits into English, or of individual prosodic patterns into writing, while meaning appears to be the prevailing punctuating criterion over sound and syntax. Punctuation proves an effective tool for the anticipation of implicit meanings and an untapped resource in the teaching of the diverse communicative and stylistic possibilities offered by technical texts.

  17. A Radar Active Jamming Sorting Based on Feature Weighted and Support Vector Machine%基于特征加权与SVM的雷达有源干扰分类技术

    Institute of Scientific and Technical Information of China (English)

    唐翥; 张兵; 李广强; 沈浩浩

    2014-01-01

    In order to effectively improve the sorting accuracy for active jamming, a sorting method based on feature weighting and a support vector machine is proposed. Because the feature parameters of a signal differ in their importance for classification, the concept of feature weighting is introduced. Grey relational analysis is used to obtain the weight of each feature, which prevents weak features from having an excessive impact on the classification results. Finally, a support vector machine classifier is used to classify and identify active radar jamming signals. Simulation experiments show that the method can effectively improve the recognition rate of active radar jamming signal types and has good generality.%为了有效提高雷达有源干扰分类正确率,提出一种基于特征加权与支持向量机的分类方法。针对分类过程中各信号特征参数对信号分类的重要度不同,引入特征加权的概念。利用灰色关联分析方法求取各特征权重,避免一些弱特征对分类结果产生较大影响。最后利用支持向量机分类器,对雷达有源干扰信号进行了分类识别。通过仿真实验证明,该方法可以有效提高雷达有源干扰信号类型的识别率,具有很好的通用性。
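    Feature weighting by grey relational analysis followed by an SVM can be sketched as: compute each feature's grey relational grade against the class label, scale the feature columns by those grades, then train the classifier. The weighting below follows the textbook grey relational formula on synthetic data and is not necessarily the paper's exact procedure.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def grey_relational_grade(feature, reference, rho=0.5):
          """Textbook grey relational grade of one normalized feature vs. the reference."""
          delta = np.abs(reference - feature)
          coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
          return coeff.mean()

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 4))                    # two informative, two noise features
      y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(int)

      norm = lambda v: (v - v.min()) / (v.max() - v.min())
      weights = np.array([grey_relational_grade(norm(X[:, j]), norm(y))
                          for j in range(X.shape[1])])

      X_weighted = X * weights                         # scale columns by their grades
      print(weights.round(2))
      print(cross_val_score(SVC(), X_weighted, y, cv=5).mean().round(3))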

  18. Fishery text categorization method based on feature word weight and support vector machine%基于特征词权值的渔业文本分类研究

    Institute of Scientific and Technical Information of China (English)

    谷军; 何南

    2014-01-01

    Fishery text categorization is an effective way to make full use of fishery information resources. According to the structural characteristics of Chinese literature, a fishery text categorization method based on feature word weights and a support vector machine is put forward. The text vector space is constructed using the vector space model (VSM), and the feature items in every text feature vector are calculated taking the feature word weights into account. A support vector machine (SVM) is employed as the classifier, with standard documents downloaded from CNKI as the test data. The experiments were evaluated with precision and recall, and the results show that the proposed fishery text categorization method achieves satisfactory categorization performance.%渔业文本分类是充分利用渔业信息资源的有效途径。针对中文文献资料的结构特点,提出一种结合特征词权值和支持向量机(Support Vector Machine,SVM)的渔业文本分类方法,利用向量空间模型(Vector Space Model, VSM)构建文本向量空间,并结合特征词权值计算文本特征向量中的各特征项,将构建的文本向量送入 SVM 进行渔业文本分类。采用中国知网下载的标准文档进行了实验测试,并考察了准确率和召回率两个指标,实验结果表明,文章提出的渔业文本分类方法具有较好的分类效果。

  19. Extension Association Method of Machining Features for Shaft-Type Parts Based on Structural Similarity

    Institute of Scientific and Technical Information of China (English)

    黄风立; 徐春光; 顾金梅; 钱苏翔

    2015-01-01

    Building on existing similarity retrieval methods for shaft-type parts, the information model of a shaft part is divided into a structural feature layer and a machining process layer, and both are formally described with the extension basic-element method. A similarity retrieval method covering both the structure and the machining process of shaft parts is proposed: first, structural features are retrieved by similarity using a matrix-based structure representation, and 2 to 5 similar instances are retrieved from the case base; then extension comprehensive correlation functions are used to match the similarity of machining features and to identify the part most similar to the new one. A worked example shows that the method is feasible, that the coefficient levels can be adjusted dynamically during retrieval, and that the retrieval adapts well to different cases.

  20. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

    Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines, including levers, inclined planes, wedges, screws, pulleys, and wheels and axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver, as all the reading passages and student activities are provided. Presented in s

  1. A systematic review of readability and comprehension instruments used for print and web-based cancer information.

    Science.gov (United States)

    Friedman, Daniela B; Hoffman-Goetz, Laurie

    2006-06-01

    Adequate functional literacy skills positively influence individuals' ability to take control of their health. Print and Web-based cancer information is often written at difficult reading levels. This systematic review evaluates readability instruments (FRE, F-K, Fog, SMOG, Fry) used to assess print and Web-based cancer information and word recognition and comprehension tests (Cloze, REALM, TOFHLA, WRAT) that measure people's health literacy. Articles on readability and comprehension instruments explicitly used for cancer information were assembled by searching MEDLINE and PsycINFO from 1993 to 2003. In all, 23 studies were included: 16 on readability, 6 on comprehension, and 1 on both readability and comprehension. Of the readability investigations, 14 focused on print materials, and 2 assessed Internet information. Comprehension and word recognition measures were not applied to Web-based information. None of the formulas were designed to determine the effects of visuals or design factors that could influence readability and comprehension of cancer education information.
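
    For reference, several of the grade-level formulas named above can be computed directly from word, sentence and syllable counts. The sketch below uses the standard published coefficients; the counts themselves are hypothetical, and automatic syllable counting (the hard part in practice) is deliberately left out.

```python
"""Worked example of three standard readability formulas.

The coefficients are the published ones; the counts below are invented for
illustration only.
"""
import math


def flesch_kincaid_grade(words, sentences, syllables):
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


def flesch_reading_ease(words, sentences, syllables):
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def smog(sentences, polysyllables):
    # SMOG expects at least 30 sentences; the 30/sentences factor rescales shorter samples.
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291


# Hypothetical counts for a short pamphlet excerpt.
words, sentences, syllables, polysyllables = 310, 22, 480, 35
print("FKGL:", round(flesch_kincaid_grade(words, sentences, syllables), 1))
print("FRE: ", round(flesch_reading_ease(words, sentences, syllables), 1))
print("SMOG:", round(smog(sentences, polysyllables), 1))
```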

  2. STEP-NC-oriented part setup planning based on machining feature clustering

    Institute of Scientific and Technical Information of China (English)

    欧阳华兵; 沈斌

    2012-01-01

    To address the setup planning problem in part process planning, a heuristic clustering method oriented to STEP-NC machining features is proposed. Based on an analysis of the STEP-NC data model and its machining units, a mathematical model for setup planning is established. The machining units are clustered according to manufacturing-priority rules of the machining features to form a set of candidate setup schemes, which are then ranked through expert scoring and an evaluation strategy to produce a setup plan that meets the part's setup requirements. The planning takes into account constraints such as the concrete shape and stability of the part and determines the locating surface and clamping surface of each machining unit, so that it closely reflects how the part is actually fixtured during machining. Setup plan generation was implemented on the SolidWorks 3D computer-aided design platform, and the proposed algorithm was verified with examples.

  3. Target Recognition Based on Global and Local Feature Fusion with a Support Vector Machine (SVM)

    Institute of Scientific and Technical Information of China (English)

    易晓柯

    2011-01-01

    This paper proposes a target recognition method based on global and local feature fusion with a support vector machine and applies it to radar one-dimensional range profile recognition. Nonlinear discriminant analysis and locality preserving projection are used to extract the global and local features of the samples, respectively, and the two feature sets are fused to obtain a more complete description of each sample and a more accurate recognition result. A support vector machine is then used for classification; its strength on nonlinear, small-sample problems further improves the recognition performance. Experiments on measured range profiles of three aircraft targets demonstrate the effectiveness of the method.

  4. Language, terminology and the readability of online cancer information.

    Science.gov (United States)

    Peters, Pam; Smith, Adam; Funk, Yasmin; Boyages, John

    2016-03-01

    Medical terms are a recognised problem in doctor-patient consultations. By contrast, the language difficulties of online healthcare documents are underestimated, even though patients are often encouraged to go to the internet for information. Literacy levels in the community vary, and for patients, carers and health workers with limited reading skills (including first- and second-language users of English), the language of web-based health documents may be challenging or impenetrable. Online delivery of health information is inherently problematic because it cannot provide two-way discussion; and amid the range of health documents on the web, the intended readership (whether general or specialist) is rarely indicated up front. In this research study, we focus on the language and readability of web-based cancer documents, using lexicostatistical methods to profile the vocabularies in two large test databases of breast cancer information, one consisting of material designed for health professionals, the other for the general public. They yielded significantly different word frequency rankings and keyness values, broadly correlating with their different readerships, that is, scientifically literate readers for the professional dataset, and non-specialist readers for the public dataset. The higher type/token ratio in the professional dataset confirms its greater lexical demands, with no concessions to the variable language and literacy skills among second-language health workers. Their language needs can, however, be addressed by a new online multilingual termbank of breast cancer vocabulary, HealthTermFinder, designed to sit alongside health documents on the internet, and provide postconsultation help for patients and carers at their point of need.

  5. Feature gene selection for Chinese hamster classification based on a support vector machine

    Institute of Scientific and Technical Information of China (English)

    杨俊丽; 刘田福

    2011-01-01

    To cope with the high dimensionality and small sample size of Chinese hamster gene expression profile data, a feature gene selection method based on a support vector machine (SVM) is proposed. An improved Fisher discriminant ratio (FDR) scoring criterion is used to remove genes irrelevant to the classification, and a new distance measure combining spatial distance and functional distance is proposed as the similarity criterion for removing redundant genes. An SVM classifier is then used to validate the classification performance of the selected feature genes. The experimental results show that the method effectively removes irrelevant and redundant genes and selects the smallest set of feature genes that still classifies Chinese hamsters correctly.
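
    The filtering step can be illustrated with the standard two-class Fisher discriminant ratio. The paper's improved criterion and its combined spatial/functional redundancy distance are not specified in the abstract, so the sketch below uses the plain FDR on synthetic expression data and checks the result with a cross-validated SVM.

```python
"""Sketch of FDR-based gene filtering followed by an SVM check.

Only the standard two-class FDR score is used; the data are synthetic and stand
in for a high-dimensional, small-sample expression matrix.
"""
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def fdr_scores(X, y):
    """Standard Fisher discriminant ratio per gene for a two-class problem."""
    a, b = X[y == 0], X[y == 1]
    return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)


rng = np.random.default_rng(1)
X = rng.normal(size=(40, 500))        # 40 samples, 500 genes (high-dim, small sample)
y = rng.integers(0, 2, size=40)
X[y == 1, :10] += 1.5                 # make the first 10 genes informative

keep = np.argsort(fdr_scores(X, y))[-10:]                    # keep the top-scoring genes
acc = cross_val_score(SVC(kernel="linear"), X[:, keep], y, cv=5).mean()
print("selected genes:", sorted(keep), "CV accuracy:", round(acc, 2))
```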

  6. Insights into Protein Sequence and Structure-Derived Features Mediating 3D Domain Swapping Mechanism using Support Vector Machine Based Approach

    Directory of Open Access Journals (Sweden)

    Khader Shameer

    2010-06-01

    Three-dimensional domain swapping is a mechanism by which two or more protein molecules form higher-order oligomers by exchanging identical or similar subunits. Recently, this phenomenon has received much attention in the context of prions and neurodegenerative diseases because of its role in functional regulation, formation of higher-order oligomers, protein misfolding, aggregation and related processes. While 3D domain swapping can be detected from three-dimensional structures, it remains a formidable challenge to derive common sequence or structural patterns from the proteins involved in swapping. We have developed an SVM-based classifier to predict domain swapping events using a set of features derived from sequence and structural data. The SVM classifier was trained on features derived from 150 proteins reported to be involved in 3D domain swapping and 150 proteins not known to be involved in a swapped conformation or related to proteins involved in the swapping phenomenon. Testing was performed using 63 proteins from the positive dataset and 63 proteins from the negative dataset; we obtained 76.33% accuracy in training and 73.81% accuracy in testing. Given the high diversity in the sequences, structures and functions of proteins involved in domain swapping, an algorithm that predicts swapping events from sequence- and structure-derived features is an initial step towards identifying further putative proteins that may be involved in swapping or in deposition disease. The top features emerging from our feature selection method may also be analysed further to understand their roles in the mechanism of domain swapping.

  7. Electric machine

    Science.gov (United States)

    El-Refaie, Ayman Mohamed Fawzi [Niskayuna, NY; Reddy, Patel Bhageerath [Madison, WI

    2012-07-17

    An interior permanent magnet electric machine is disclosed. The interior permanent magnet electric machine comprises a rotor comprising a plurality of radially placed magnets each having a proximal end and a distal end, wherein each magnet comprises a plurality of magnetic segments and at least one magnetic segment towards the distal end comprises a high resistivity magnetic material.

  8. Visual Saliency Calculation of Shape Features for Human-Machine Interfaces

    Institute of Scientific and Technical Information of China (English)

    王宁; 余隋怀; 周宪; 肖琳臻

    2016-01-01

    Shape features are a key factor in the visual ergonomics of human-machine interfaces. To make such interfaces better match users' physiological and psychological characteristics and to improve the user experience, a model for calculating the visual saliency of shape features in human-machine interfaces is proposed. After analysing how shape features influence visual saliency, typical shapes that are used frequently in human-machine interfaces are identified. Each shape is segmented into several parts by its inscribed square, the visual saliency of each part is computed from an associated triangle, and the maximum of these values is taken as the visual saliency of the shape, giving a quantitative analysis and calculation of shape saliency. An eye-tracking experiment verifies the effectiveness of the proposed model.

  9. Optical security features by using information carrier digital screening

    Science.gov (United States)

    Koltai, Ferenc

    2002-04-01

    Jura is an Austrian-Hungarian company providing security printers with proprietary security printing design software, complete security printing pre-press systems (hardware and software), ultra-high-resolution image setters developed for the security printing market, security features developed by Jura for security printing in general, and proprietary security features intended for document personalization systems. In addition to supplying such products, Jura provides its customers with full technical support, including integration, installation, training, hot-line remote and/or on-site support, service and maintenance worldwide. Research and development have always been a focus of Jura's activity; development and testing of new software and new security features are the most important parts of the work. Jura was the first in the world to release its Engraver software, enabling artist-engravers to create engraving-styled portraits digitally. This development, in combination with Jura's security design software package, enabled a fully digital workflow for banknote origination. Jura has also made remarkable progress in developing security features for document personalization. This development links personal data with the photograph of the document holder by encoding the personal data into the photograph, invisibly to the naked human eye yet decodable by an appropriate decoding device. The feature also exists in a machine-readable digital version. Jura's experts started research and development on digital screening 15 years ago for commercial printing and 10 years ago on special screens for security printing technologies. At a very early stage of this development, when the ability to create each screen dot individually in shape, form and position was acquired, the idea was born to use the screen dots as secondary data holders for encoded messages.

  10. Feature extraction and classification method for multi-pose crop pests using machine vision

    Institute of Scientific and Technical Information of China (English)

    李文勇; 李明; 陈梅香; 钱建平; 孙传恒; 杜尚丰

    2014-01-01

    Pest identification and classification is time-consuming work that requires expert knowledge for integrated pest management. Automation, including machine vision combined with pattern recognition, has found applications in areas such as fruit sorting, robotic harvesting and quality detection, but automatic classification and counting of pests using machine vision remains a challenge because of the variable and uncertain poses of trapped pests. Therefore, using Pseudaletia separata, Conogethes punctiferalis, Helicoverpa armigera and Agrotis ypsilon in different poses as research objects, this paper presents a novel classification method for multi-pose pests based on color and texture feature groups and a multi-class support vector machine. A total of 320 images of field samples were taken at an original resolution of 4 288×2 848, and sub-images of pests of 640×640 pixels were extracted from the original images for computational efficiency. Color features in the RGB and HSV spaces, statistical texture features, and wavelet-based texture features were extracted, and six feature vector groups were constructed from these features. To select effective feature parameters for each group, a genetic algorithm was designed to optimize the feature vectors based on 10-fold cross-validation. Finally, the one-against-one directed acyclic graph multi-class support vector machine (DAG-MSVM) algorithm was applied to classify and recognize the four kinds of target pests and to find the best feature group. For each species, 80 images were used (60 for the training set and 20 for the testing set). Parameter numbers were calculated and analyzed after optimization, and the best parameters were selected for each group. The training time of the SVM model and the classification accuracy, including false negatives and false positives, were compared before and after optimization. The results showed that the highest parameter optimization ratio is from the sixth feature group with a dimension
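
    A reduced version of the feature-extraction and cross-validation pipeline is sketched below. Only RGB/HSV color moments are computed, the genetic-algorithm selection and the DAG multi-class SVM are omitted, and the images are random arrays standing in for pest sub-images.

```python
"""Sketch of color-moment feature extraction plus 10-fold cross-validated SVM.

This is not the authors' full pipeline: texture features, GA feature selection
and the DAG multi-class SVM are left out, and the 'images' are synthetic.
"""
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def color_features(img):
    """Mean and std of each RGB and HSV channel for one float image in [0, 1]."""
    hsv = rgb_to_hsv(img)
    channels = np.concatenate([img, hsv], axis=-1).reshape(-1, 6)
    return np.concatenate([channels.mean(0), channels.std(0)])   # 12-dim vector


rng = np.random.default_rng(2)
images = rng.random(size=(80, 64, 64, 3))          # 80 fake 64x64 RGB sub-images
labels = rng.integers(0, 4, size=80)               # four pest species

X = np.array([color_features(im) for im in images])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=10)
print("10-fold accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```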

  11. Readability of patient education materials in ophthalmology: a single-institution study and systematic review.

    Science.gov (United States)

    Williams, Andrew M; Muir, Kelly W; Rosdahl, Jullia A

    2016-08-03

    Patient education materials should be written at a level that is understandable for patients with low health literacy. The aims of this study are (1) to review the literature on readability of ophthalmic patient education materials and (2) to evaluate and revise our institution's patient education materials about glaucoma using evidence-based guidelines on writing for patients with low health literacy. A systematic search was conducted on the PubMed/MEDLINE database for studies that have evaluated readability level of ophthalmic patient education materials, and the reported readability scores were assessed. Additionally, we collected evidence-based guidelines for writing easy-to-read patient education materials, and these recommendations were applied to revise 12 patient education handouts on various glaucoma topics at our institution. Readability measures, including Flesch-Kincaid Grade Level (FKGL), and word count were calculated for the original and revised documents. The original and revised versions of the handouts were then scored in random order by two glaucoma specialists using the Suitability Assessment of Materials (SAM) instrument, a grading scale used to evaluate suitability of health information materials for patients. Paired t test was used to analyze changes in readability measures, word count, and SAM score between original and revised handouts. Finally, five glaucoma patients were interviewed to discuss the revised materials, and patient feedback was analyzed qualitatively. Our literature search included 13 studies that evaluated a total of 950 educational materials. Among the mean FKGL readability scores reported in these studies, the median was 11 (representing an eleventh-grade reading level). At our institution, handouts' readability averaged a tenth-grade reading level (FKGL = 10.0 ± 1.6), but revising the handouts improved their readability to a sixth-grade reading level (FKGL = 6.4 ± 1.2) (p materials are consistently
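
    The paired comparison of original versus revised handouts described above can be reproduced in outline as follows; the twelve FKGL values are invented placeholders, not the study's data.

```python
"""Sketch of a paired t test on original vs. revised handout readability.

The FKGL scores below are made up; only the shape of the analysis is shown.
"""
from scipy.stats import ttest_rel

fkgl_original = [10.1, 9.4, 11.2, 8.8, 10.5, 12.0, 9.9, 10.7, 8.5, 11.4, 9.2, 10.3]
fkgl_revised  = [ 6.3, 5.9,  7.1, 6.0,  6.8,  7.4, 5.7,  6.5, 5.2,  7.0, 6.1,  6.6]

t, p = ttest_rel(fkgl_original, fkgl_revised)   # paired comparison across the 12 handouts
print(f"t = {t:.2f}, p = {p:.4f}")
```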

  12. Calisthenics with Words: The Effect of Readability and Investor Sophistication on Investors’ Performance Judgment

    Directory of Open Access Journals (Sweden)

    Xiao Carol Cui

    2016-01-01

    Since the 1990s, the SEC has advocated for financial disclosures to be written in “plain English” so that they would be more readable and informative. Past research has shown that high readability is related to more extreme investor judgments of firm performance. Processing fluency is the prevalent theory used to explain this: higher readability increases the investor's subconscious reliance on the disclosure, so positive (negative) news leads to more positive (negative) judgments. The relationship may not be so simple, though: drawing on research from cognitive psychology, I predict and find that investor financial literacy simultaneously influences investor decision-making and that it has an interactive effect with readability. When presented with a financial disclosure containing conflicting financial information, investors with higher financial literacy make more negative judgments than investors with low financial literacy when the disclosure is easy to read, but the effect becomes insignificant when the disclosure is difficult to read. This effect is moderated by a comprehension gap between the two investor groups. Financial literacy and readability interact to affect both how and how well the investor processes financial information.

  13. Readability: an important issue impacting healthcare for women with postpartum depression.

    Science.gov (United States)

    Logsdon, M Cynthia; Hutti, Marianne H

    2006-01-01

    To evaluate the reading level of depression-screening instruments commonly used for postpartum depression (PPD) and the reading level of prevalent consumer pamphlets and books on PPD. This descriptive study evaluated the reading level of four PPD instruments (the Edinburgh Postnatal Depression Scale, the Center for Epidemiologic Studies Depression Scale, the Postpartum Depression Screening Scale, and the Beck Depression Inventory-II), five pamphlets from grassroots organizations, and seven consumer books using the Fry Readability Graph. The readability of the postpartum screening instruments varied, but all were at or below the recommended 6th-grade reading level; the CES-D had the lowest reading level (grade 2). The readability of the consumer publications also varied, but all had a higher reading level than the recommended 6th-grade level, some at the college reading level. Readability is an important consideration in the choice of depression-screening instruments and written materials for consumers. Nurses using any of the four postpartum screening instruments studied can feel confident that women who can read will be able to read them. The readability of a book, pamphlet, or instrument should be of concern to nurses who work with women during the postpartum period.

  14. Application of 'Numerical Control Special Features' on a Numerical Control Quenching Machine Tool

    Institute of Scientific and Technical Information of China (English)

    王正

    2014-01-01

    This paper describes special CNC functions developed for a CNC quenching machine tool: an "interrupt macro insertion" function, a "manual and automatic simultaneously active" function, and a "manual positioning" function. Applying these functions provides a new way to speed up the production cycle and improve production efficiency.

  15. Machine Learning Method Applied in Readout System of Superheated Droplet Detector

    Science.gov (United States)

    Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco

    2017-07-01

    Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Exploiting this characteristic, an imaging readout system analyzes images of the detector to read out the neutron dose. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning methods were developed: deep neural network and support vector machine algorithms were applied and compared with the commonly used Hough transform and curvature analysis methods. The machine learning methods showed much higher accuracy and better precision in recognizing circular gas bubbles.
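
    The Hough-transform baseline that the machine learning methods are compared against looks roughly like the sketch below. It is not the authors' readout system; it simply runs classical circle detection on a synthetic image with drawn bubble outlines and counts the hits.

```python
"""Sketch of a classical Hough-transform bubble count (the baseline method).

The image, circle positions and detector parameters are all invented for
illustration; real detector images would need tuned parameters.
"""
import cv2
import numpy as np

# Synthetic detector image: dark background with three bright bubble-like rings.
img = np.zeros((200, 200), dtype=np.uint8)
for (x, y, r) in [(50, 60, 12), (120, 90, 15), (160, 150, 10)]:
    cv2.circle(img, (x, y), r, 255, 2)

blurred = cv2.GaussianBlur(img, (5, 5), 1.5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=100, param2=20, minRadius=5, maxRadius=30)

count = 0 if circles is None else circles.shape[1]
print("bubbles detected:", count)
```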

  16. The Machine within the Machine

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Although Virtual Machines are widespread across CERN, you probably won't have heard of them unless you work for an experiment. Virtual machines - known as VMs - allow you to create a separate machine within your own, allowing you to run Linux on your Mac, or Windows on your Linux - whatever combination you need.   Using a CERN Virtual Machine, a Linux analysis software runs on a Macbook. When it comes to LHC data, one of the primary issues collaborations face is the diversity of computing environments among collaborators spread across the world. What if an institute cannot run the analysis software because they use different operating systems? "That's where the CernVM project comes in," says Gerardo Ganis, PH-SFT staff member and leader of the CernVM project. "We were able to respond to experimentalists' concerns by providing a virtual machine package that could be used to run experiment software. This way, no matter what hardware they have ...

  17. Machine Learning for Medical Imaging.

    Science.gov (United States)

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. (©)RSNA, 2017.

  18. Readability of Information Related to the Parenting of a Child With a Cleft.

    Science.gov (United States)

    De Felippe, Nanci; Kar, Farnaz

    2015-07-08

    Many parents look to various sources for information about parenting when their child has a cleft lip and/or palate. More than 8 million Americans perform health-related searches every day on the World Wide Web. Furthermore, a significant number of them report feeling "overwhelmed" by the language and content of the information. The purpose of this study is to determine the readability of information related to parenting a child with cleft lip and/or palate. It was hypothesized that the readability of such materials would be at a level higher than 6th grade. In February of 2012, a Web-based search was conducted using the search engine Google for the terms "parenting cleft lip and palate." A total of 15 websites, 7 books, and 8 booklets/factsheets (N=30) entered the readability analysis. Flesch-Kincaid Grade Level, Fog Scale Level, and Simple Measure of Gobbledygook (SMOG) index scores were calculated. The reading level of the websites and books ranged from 8th to 9th and 9th to 10th grade, respectively. The average reading level of the booklets/factsheets was 10th grade. Overall, the mean readability of the media resources analyzed was considered "hard to read." No statistically significant mean difference was found for the readability level across websites, books, and booklets/factsheets (Kruskal-Wallis test, significance level .05). When considering websites, books, booklets, and factsheets analyzed, the average readability level was between 8th and 10th grade. With the US national reading level average at 8th grade and the general recommendation that health-related information be written at a 6th grade level, many parents may find the text they are reading too difficult to comprehend. Therefore, many families might be missing out on the opportunity to learn parenting practices that foster optimal psychosocial development of their children.
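
    The Kruskal-Wallis comparison across websites, books and booklets/factsheets can be sketched as follows, with invented grade-level scores standing in for the study's 30 measurements.

```python
"""Sketch of a Kruskal-Wallis test across three groups of readability scores.

The score lists are placeholders; only the analysis structure is illustrated.
"""
from scipy.stats import kruskal

websites = [8.2, 9.1, 8.7, 9.4, 8.9, 9.0, 8.5, 9.3, 8.8, 9.2, 8.6, 9.5, 8.4, 9.1, 8.9]
books    = [9.3, 9.8, 10.1, 9.6, 9.9, 10.2, 9.5]
booklets = [9.8, 10.4, 10.1, 9.9, 10.6, 10.2, 10.0, 10.3]

h, p = kruskal(websites, books, booklets)
print(f"H = {h:.2f}, p = {p:.3f}")   # p > .05 would mirror the reported non-significant difference
```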

  19. Readability of spine-related patient education materials from subspecialty organization and spine practitioner websites.

    Science.gov (United States)

    Vives, Michael; Young, Lyle; Sabharwal, Sanjeev

    2009-12-01

    Analysis of spine-related websites available to the general public. To assess the readability of spine-related patient educational materials available on professional society and individual surgeon or practice-based websites. The Internet has become a valuable source of patient education material. A significant percentage of patients, however, find this Internet-based information confusing. Healthcare experts recommend that the readability of patient education material be less than the sixth grade level. The Flesch-Kincaid grade level is the most widely used method to evaluate the readability score of textual material, with lower scores suggesting easier readability. We conducted an Internet search of all patient education documents on the North American Spine Society (NASS), the American Association of Neurological Surgeons (AANS), and the American Academy of Orthopaedic Surgeons (AAOS) websites, and a sample of 10 individual surgeon or practice-based websites. The Flesch-Kincaid grade level of each article was calculated using widely available Microsoft Office Word software. The mean grade levels of articles on the various professional society and individual/practice-based websites were compared. A total of 121 articles from the various websites were available and analyzed. All 4 categories of websites had mean Flesch-Kincaid grade levels greater than 10. Only 3 articles (2.5%) were found to be at or below the sixth grade level, the recommended readability level for adult patients in the United States. There were no significant differences among the mean Flesch-Kincaid grade levels from the AAOS, NASS, AANS, and practice-based websites (P = 0.065, ANOVA). Our findings suggest that most of the spine-related patient education materials on professional society and practice-based websites have readability scores that may be too high, making comprehension difficult for a substantial portion of the United States adult population.

  20. Machine Learning

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.

  1. Readability Formulas and User Perceptions of Electronic Health Records Difficulty: A Corpus Study

    Science.gov (United States)

    Yu, Hong

    2017-01-01

    Background Electronic health records (EHRs) are a rich resource for developing applications to engage patients and foster patient activation, thus holding a strong potential to enhance patient-centered care. Studies have shown that providing patients with access to their own EHR notes may improve the understanding of their own clinical conditions and treatments, leading to improved health care outcomes. However, the highly technical language in EHR notes impedes patients’ comprehension. Numerous studies have evaluated the difficulty of health-related text using readability formulas such as Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Gunning-Fog Index (GFI). They conclude that the materials are often written at a grade level higher than common recommendations. Objective The objective of our study was to explore the relationship between the aforementioned readability formulas and the laypeople’s perceived difficulty on 2 genres of text: general health information and EHR notes. We also validated the formulas’ appropriateness and generalizability on predicting difficulty levels of highly complex technical documents. Methods We collected 140 Wikipedia articles on diabetes and 242 EHR notes with diabetes International Classification of Diseases, Ninth Revision code. We recruited 15 Amazon Mechanical Turk (AMT) users to rate difficulty levels of the documents. Correlations between laypeople’s perceived difficulty levels and readability formula scores were measured, and their difference was tested. We also compared word usage and the impact of medical concepts of the 2 genres of text. Results The distributions of both readability formulas’ scores (P<.001) and laypeople’s perceptions (P=.002) on the 2 genres were different. Correlations of readability predictions and laypeople’s perceptions were weak. Furthermore, despite being graded at similar levels, documents of different genres were still perceived with different
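
    The core analysis described above, relating formula scores to perceived difficulty, reduces to a rank correlation. The sketch below uses synthetic FKGL scores and ratings rather than the study's AMT data, and is meant only to show the shape of the computation.

```python
"""Sketch of correlating readability-formula scores with perceived difficulty.

Both variables are simulated with a deliberately weak relationship, echoing the
weak correlations the study reports; none of the study's data are reproduced.
"""
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
fkgl = rng.uniform(6, 16, size=60)                       # formula scores per document
perceived = 0.1 * fkgl + rng.normal(scale=1.0, size=60)  # weakly related difficulty ratings

rho, p = spearmanr(fkgl, perceived)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```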

  2. The Correlation between Readability and Circulation in Regard to the Twenty-Six Daily Newspapers of New Jersey.

    Science.gov (United States)

    Cottler, Risa

    A study examined 20 New Jersey daily newspapers to assess the impact of readability on circulation. It was hypothesized that the circulation of a newspaper would be inversely related to the readability level of its contents; that is to say, as reading material becomes harder to read, fewer people will read it. News pieces (divided into hard news,…

  3. A Study of the Subject Categorization of the MIS-related Journals in the ISI Databases Using Topical Features in the Text Content and Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2015-07-01

    In this study we analyzed and discussed whether the MIS-related journals under the ISI subject category IS&LS should simultaneously be assigned the subject category Management, using topic modeling, journal clustering and subject category prediction. In the journal clustering experiment, all journals under the subject category Management, together with other journals having similar topical features, could be gathered into one cluster whose common and most distinct topic is "management". Because the journals belonging to this cluster are almost the same as those in the MIS clusters generated by previous studies, we regarded it as the MIS cluster in this study. In the second experiment, we used the classification and regression tree (CART) technique to predict subject category assignment, taking as positive examples the journals in the original subject category Management and those in the MIS cluster produced in this study, respectively. The trees generated by the two tests both used the occurrence probability of the topic "management" as the main classification rule. However, in the latter test we not only obtained a simpler classification tree but also achieved fewer prediction errors. This suggests that if all journals in the MIS cluster were assigned the subject category Management, retrieval results would be more effective and complete.
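
    A CART-style prediction from topic proportions can be sketched with a shallow decision tree, as below. The topic shares and labels are simulated, with one topic playing the role of the "management" topic that drove the trees reported in the study.

```python
"""Sketch of a CART prediction of category membership from topic proportions.

Topic proportions and labels are simulated; only the analysis pattern is shown.
"""
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
topic_props = rng.dirichlet(np.ones(5), size=200)          # 5 topic proportions per journal
is_management = (topic_props[:, 0] > 0.3).astype(int)      # topic 0 plays the 'management' role

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(topic_props, is_management)
print(export_text(tree, feature_names=["management", "t1", "t2", "t3", "t4"]))
```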

  4. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Science.gov (United States)

    2010-07-01

    ... “Sequence Listing” file. (6) All computer readable forms must have a label permanently affixed thereto on...) Computer readable form files submitted may be in any of the following media: (1) Diskette: 3.50 inch, 1.44... nucleotide and/or amino acid sequence submissions in computer readable form. 1.824 Section 1.824...

  5. Attention: A Machine Learning Perspective

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2012-01-01

    We review a statistical machine learning model of top-down task driven attention based on the notion of ‘gist’. In this framework we consider the task to be represented as a classification problem with two sets of features — a gist of coarse grained global features and a larger set of low...

  6. Research Translation Strategies to Improve the Readability of Workplace Health Promotion Resources

    Science.gov (United States)

    Wallace, Alison; Joss, Nerida

    2016-01-01

    Without deliberate and resourced translation, research evidence is unlikely to inform policy and practice. This paper describes the processes and practical solutions used to translate evaluation research findings to improve the readability of print materials in a large scale worksite health programme. It is argued that a knowledge brokering and…

  7. Readability of Igbo Language Textbook in Use in Nigerian Secondary Schools

    Science.gov (United States)

    Eze, Nneka Justina

    2015-01-01

    This study assessed the readability of Igbo language textbooks in use in Nigerian secondary schools. Five Igbo language textbooks were evaluated. The study employed an evaluation research design and was conducted in the South Eastern geopolitical zone of Nigeria, which is predominantly Igbo. Four hundred secondary school…

  8. Health literacy and the Internet: a study on the readability of Australian online health information.

    Science.gov (United States)

    Cheng, Christina; Dunn, Matthew

    2015-08-01

    Almost 80% of Australian Internet users seek out health information online so the readability of this information is important. This study aimed to evaluate the readability of Australian online health information and determine if it matches the average reading level of Australians. Two hundred and fifty-one web pages with information on 12 common health conditions were identified across sectors. Readability was assessed by the Flesch-Kincaid (F-K), Simple Measure of Gobbledygook (SMOG) and Flesch Reading Ease (FRE) formulas, with grade 8 adopted as the average Australian reading level. The average reading grade measured by F-K and SMOG was 10.54 and 12.12 respectively. The mean FRE was 47.54, a 'difficult-to-read' score. Only 0.4% of web pages were written at or below grade 8 according to SMOG. Information on dementia was the most difficult to read overall, while obesity was the most difficult among government websites. The findings suggest that the readability of Australian health websites is above the average Australian levels of reading. A quantifiable guideline is needed to ensure online health information accommodates the reading needs of the general public to effectively use the Internet as an enabler of health literacy. © 2015 Public Health Association of Australia.

  9. Assessing the Readability of College Textbooks in Public Speaking: Attending to Entry Level Instruction

    Science.gov (United States)

    Schneider, David E.

    2011-01-01

    More research is needed that examines textbooks intended for the entry level college classroom. This study offers valuable information to academics that adopt a public speaking textbook for instruction as well as objective feedback to the collective authors. Readability levels of 22 nationally published textbooks, based on McGlaughlin's (1969)…

  10. Evaluation of Quality and Readability of Health Information Websites Identified through India's Major Search Engines.

    Science.gov (United States)

    Raj, S; Sharma, V L; Singh, A J; Goel, S

    2016-01-01

    Background. The available health information on websites should be reliable and accurate in order for the community to make informed decisions. This study was done to assess the quality and readability of health information websites on the World Wide Web in India. Methods. This cross-sectional study was carried out in June 2014. The key words "Health" and "Information" were used on the search engines "Google" and "Yahoo." Out of 50 websites (25 from each search engine), after exclusion, 32 websites were evaluated. The LIDA tool was used to assess quality, whereas readability was assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and SMOG. Results. Forty percent of websites (n = 13) were sponsored by government. Health On the Net Code of Conduct (HONcode) certification was present on 50% (n = 16) of websites. The mean LIDA score (74.31) was average. Only 3 websites scored high on the LIDA score. Only five had readability scores at the recommended sixth-grade level. Conclusion. Most health information websites had average quality, especially in terms of usability and reliability, and were written at high readability levels. Efforts are needed to develop health information websites which can help the general population in informed decision making.

  11. Effect of Textbook Readability on Student Achievement in High School Chemistry.

    Science.gov (United States)

    Rapp, D. Neil

    2001-01-01

    Notes the readability level of many high school chemistry textbooks is far above students' reading levels. Conducts two separate studies, making every effort to keep the two classes as similar as possible in all aspects except text. Finds strong evidence that changing the chemistry textbook resulted in an increase in student achievement. Suggests…

  12. Readability comparison of pro- and anti-HPV-vaccination online messages in Japan.

    Science.gov (United States)

    Okuhara, Tsuyoshi; Ishikawa, Hirono; Okada, Masahumi; Kato, Mio; Kiuchi, Takahiro

    2017-10-01

    In Japan, the HPV vaccination rate has sharply fallen to nearly 0% due to a series of sensational media reports of adverse events. Online anti-HPV-vaccination activists often warn readers of the vaccine's dangers. We aimed to examine the distribution and readability of pro- and anti-vaccination online messages in relation to their authors' professional expertise. We conducted online searches via two major search engines. Identified sites were classified as "anti," "pro," or "neutral" depending on their claims, and "health professional" or "non-health professional" depending on their authors' expertise. Readability was determined using a validated measure of Japanese readability. Statistical analysis was conducted using two-way analysis of variance and Tukey's test. Of the total 270 sites analyzed, up to 137 (50.7%) were deemed anti- and 101 (37.4%) pro-HPV-vaccination. Of the pro-vaccination sites, 71% were written by health professionals. Anti-vaccination messages were found to be considerably easier to read than pro-vaccination ones, both among those by health professionals and those by non-health professionals. Our findings substantiate concern that the anti-vaccination messages may serve to prolong the HPV vaccination crisis. We recommend that health professionals use readability assessment tools and improve the text for easier reading if necessary. Copyright © 2017 Elsevier B.V. All rights reserved.
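
    The two-way ANOVA with Tukey's test described above can be laid out as follows, using simulated difficulty scores with stance and author expertise as the two factors; none of the study's data or effect sizes are reproduced.

```python
"""Sketch of a two-way ANOVA plus Tukey comparison for readability scores.

Scores are simulated so that anti-vaccination pages come out easier to read,
mirroring the design (not the data) of the study above.
"""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "stance": np.repeat(["anti", "pro"], 60),
    "expertise": np.tile(np.repeat(["health", "lay"], 30), 2),
})
# Pro-vaccination pages get higher (harder) difficulty scores in this simulation.
df["difficulty"] = rng.normal(60, 5, 120) + np.where(df["stance"] == "pro", 8, 0)

model = smf.ols("difficulty ~ C(stance) * C(expertise)", data=df).fit()
print(anova_lm(model, typ=2))                              # two-way ANOVA table
print(pairwise_tukeyhsd(df["difficulty"], df["stance"]))   # Tukey comparison by stance
```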

  13. Reconstructing Readability: Recent Developments and Recommendations in the Analysis of Text Difficulty

    Science.gov (United States)

    Benjamin, Rebekah George

    2012-01-01

    Largely due to technological advances, methods for analyzing readability have increased significantly in recent years. While past researchers designed hundreds of formulas to estimate the difficulty of texts for readers, controversy has surrounded their use for decades, with criticism stemming largely from their application in creating new texts…

  14. Textbook Readability and Student Performance in Online Introductory Corporate Finance Classes

    Science.gov (United States)

    Peng, Chien-Chih

    2015-01-01

    This paper examines whether the choice of a more readable textbook can improve student performance in online introductory corporate finance classes. The ordinary least squares regression model is employed to analyze a sample of 206 students during the period from 2008 to 2012. The results of this study show that the student's age, student's major,…

  15. Readability assessment of Internet-based patient education materials related to endoscopic sinus surgery.

    Science.gov (United States)

    Cherla, Deepa V; Sanghvi, Saurin; Choudhry, Osamah J; Liu, James K; Eloy, Jean Anderson

    2012-08-01

    Numerous professional societies, clinical practices, and hospitals provide Internet-based patient education materials (PEMs) to the general public, but not all of this information is written at a reading level appropriate for the average patient. The National Institutes of Health and the US Department of Health and Human Services recommend that PEMs be written at or below the sixth-grade level. Our purpose was to assess the readability of endoscopic sinus surgery (ESS)-related PEMs available on the Internet and compare readability levels of PEMs provided by three sources: professional societies, clinical practices, and hospitals. A descriptive and correlational design was used for this study. The readability of 31 ESS-related PEMs was assessed with four different readability indices: Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES), Simple Measure of Gobbledygook (SMOG), and Gunning Frequency of Gobbledygook (Gunning FOG). Averages were evaluated against national recommendations and between each source using analysis of variance and t tests. The majority of PEMs (96.8%) were written above the recommended sixth-grade reading level, based on FKGL (P Society, Inc.

  16. Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Chikkagoudar, Satish; Chatterjee, Samrat; Thomas, Dennis G.; Carroll, Thomas E.; Muller, George

    2017-04-21

    The absence of a robust and unified theory of cyber dynamics presents challenges and opportunities for using machine learning based data-driven approaches to further the understanding of the behavior of such complex systems. Analysts can also use machine learning approaches to gain operational insights. In order to be operationally beneficial, cybersecurity machine learning based models need to have the ability to: (1) represent a real-world system, (2) infer system properties, and (3) learn and adapt based on expert knowledge and observations. Probabilistic models and Probabilistic graphical models provide these necessary properties and are further explored in this chapter. Bayesian Networks and Hidden Markov Models are introduced as an example of a widely used data driven classification/modeling strategy.
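
    As a concrete illustration of the Hidden Markov Model strategy mentioned above, the toy forward-algorithm sketch below scores an observation sequence under invented transition and emission probabilities; it does not represent any specific cybersecurity model from the chapter.

```python
"""Toy forward algorithm for a two-state Hidden Markov Model.

All probabilities and the observation alphabet are made up for illustration.
"""
import numpy as np

start = np.array([0.9, 0.1])           # P(initial state): [benign, compromised]
trans = np.array([[0.95, 0.05],        # P(next state | current state)
                  [0.10, 0.90]])
emit = np.array([[0.7, 0.2, 0.1],      # P(observation | state); columns: normal, warning, alert
                 [0.1, 0.3, 0.6]])

obs = [0, 1, 2, 2]                     # observed event sequence: normal, warning, alert, alert

alpha = start * emit[:, obs[0]]        # forward probabilities at t = 0
for o in obs[1:]:
    alpha = (alpha @ trans) * emit[:, o]

print("P(observation sequence) =", alpha.sum())
print("P(compromised at end)   =", alpha[1] / alpha.sum())
```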

  17. Readability of patient education materials available at the point of care.

    Science.gov (United States)

    Stossel, Lauren M; Segar, Nora; Gliatto, Peter; Fallar, Robert; Karani, Reena

    2012-09-01

    Many patient education materials (PEMs) available on the internet are written at high school or college reading levels, rendering them inaccessible to the average US resident, who reads at or below an 8th grade level. Currently, electronic health record (EHR) providers partner with companies that produce PEMs, allowing clinicians to access PEMs at the point of care. To assess the readability of PEMs provided by a popular EHR vendor as well as the National Library of Medicine (NLM). We included PEMs from Micromedex, EBSCO, and MedlinePlus. Micromedex and EBSCO supply PEMs to Meditech, a popular EHR supplier in the US. MedlinePlus supplies the NLM. These PEM databases have high market penetration and accessibility. Grade reading level of the PEMs was calculated using three validated indices: Simple Measure of Gobbledygook (SMOG), Gunning Fog (GFI), and Flesch-Kincaid (FKI). The percentage of documents above target readability and average readability scores from each database were calculated. We randomly sampled 100 disease-matched PEMs from three databases (n = 300 PEMs). Depending on the readability index used, 30-100% of PEMs were written above the 8th grade level. The average reading level for MedlinePlus, EBSCO, and Micromedex PEMs was 10.2 (1.9), 9.7 (1.3), and 8.6 (0.9), respectively (p ≤ 0.000) as estimated by the GFI. Estimates of readability using SMOG and FKI were similar. The majority of PEMs available through the NLM and a popular EHR were written at reading levels considerably higher than that of the average US adult.

  18. Readability of pediatric otolaryngology information by children's hospitals and academic institutions.

    Science.gov (United States)

    Wong, Kevin; Levi, Jessica R

    2017-04-01

    Evaluate the readability of pediatric otolaryngology-related patient education materials from leading online sources. Cross-sectional analysis. All pediatric otolaryngology-related articles from the online patient health libraries of the top 10 US News & World Report-ranked children's hospitals, top 5 Doximity-ranked pediatric otolaryngology fellowships, and the American Academy of Otolaryngology-Head and Neck Surgery were collected. Each article was copied in plain text format into a blank document. Web page navigation, appointment information, references, author information, acknowledgements, and disclaimers were removed. Follow-up editing was also performed to remove paragraph breaks, colons, semicolons, numbers, percentages, and bullets. Readability grade was calculated using the Flesch-Kincaid Grade Level, Flesch Reading Ease Score, Gunning-Fog Index, Coleman-Liau Index, Automated Readability Index, and Simple Measure of Gobbledygook. Intraobserver and interobserver reliability were assessed. A total of 502 articles were analyzed. Intraobserver and interobserver reliability were both excellent, with an intraclass correlation coefficient of 0.99 and 0.96, respectively. The average readability grade across all authorships and readability assessments exceeded the reading ability of the average American adult. Only 142 articles (28.3%) were written at or below the reading ability of the average American adult, whereas the remaining 360 articles (71.7%) were written above the reading level of the average adult. Current online health information related to pediatric otolaryngology may be too difficult for the average reader to understand. Revisions may be necessary for current materials to benefit a larger readership. NA Laryngoscope, 127:E138-E144, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  19. Analysis of the readability of patient education materials from surgical subspecialties.

    Science.gov (United States)

    Hansberry, David R; Agarwal, Nitin; Shah, Ravi; Schmitt, Paul J; Baredes, Soly; Setzen, Michael; Carmel, Peter W; Prestigiacomo, Charles J; Liu, James K; Eloy, Jean Anderson

    2014-02-01

    Patients are increasingly using the Internet as a source of information on medical conditions. Because the average American adult reads at a 7th- to 8th-grade level, the National Institutes of Health recommend that patient education material be written between a 4th- and 6th-grade level. In this study, we assess and compare the readability of patient education materials on major surgical subspecialty Web sites relative to otolaryngology. Descriptive and correlational design. Patient education materials from 14 major surgical subspecialty Web sites (American Society of Colon and Rectal Surgeons, American Association of Endocrine Surgeons, American Society of General Surgeons, American Society for Metabolic and Bariatric Surgery, American Association of Neurological Surgeons, American Congress of Obstetricians and Gynecologists, American Academy of Ophthalmology, American Academy of Orthopedic Surgeons, American Academy of Otolaryngology-Head and Neck Surgery, American Pediatric Surgical Association, American Society of Plastic Surgeons, Society for Thoracic Surgeons, and American Urological Association) were downloaded and assessed for their level of readability using 10 widely accepted readability scales. The readability level of patient education material from all surgical subspecialties was uniformly too high. Average readability levels across all subspecialties ranged from the 10th- to 15th-grade level. Otolaryngology and other surgical subspecialties Web sites have patient education material written at an education level that the average American may not be able to understand. To reach a broader population of patients, it might be necessary to rewrite patient education material at a more appropriate level. N/A. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  20. Readability assessment of internet-based patient education materials related to facial fractures.

    Science.gov (United States)

    Sanghvi, Saurin; Cherla, Deepa V; Shukla, Pratik A; Eloy, Jean Anderson

    2012-09-01

    Various professional societies, clinical practices, hospitals, and health care-related Web sites provide Internet-based patient education material (IPEMs) to the general public. However, this information may be written above the 6th-grade reading level recommended by the US Department of Health and Human Services. The purpose of this study is to assess the readability of facial fracture (FF)-related IPEMs and compare readability levels of IPEMs provided by four sources: professional societies, clinical practices, hospitals, and miscellaneous sources. Analysis of IPEMs on FFs available on Google.com. The readability of 41 FF-related IPEMs was assessed with four readability indices: Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES), Simple Measure of Gobbledygook (SMOG), and Gunning Frequency of Gobbledygook (Gunning FOG). Averages were evaluated against national recommendations and between each source using analysis of variance and t tests. Only 4.9% of IPEMs were written at or below the 6th-grade reading level, based on FKGL. The mean readability scores were: FRES 54.10, FKGL 9.89, SMOG 12.73, and Gunning FOG 12.98, translating into FF-related IPEMs being written at a "difficult" writing level, which is above the level of reading understanding of the average American adult. IPEMs related to FFs are written above the recommended 6th-grade reading level. Consequently, this information would be difficult to understand by the average US patient. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.

  1. Machine testing

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with a laboratory exercise of 3 hours duration as a part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The exercise includes a series of tests carried out by the student on a conventional and a numerically controlled lathe, respectively. This document

  2. Representational Machines

    DEFF Research Database (Denmark)

    Petersson, Dag; Dahlgren, Anna; Vestberg, Nina Lager

    to the enterprises of the medium. This is the subject of Representational Machines: How photography enlists the workings of institutional technologies in search of establishing new iconic and social spaces. Together, the contributions to this edited volume span historical epochs, social environments, technological...

  3. Xeml Lab: a tool that supports the design of experiments at a graphical interface and generates computer-readable metadata files, which capture information about genotypes, growth conditions, environmental perturbations and sampling strategy.

    Science.gov (United States)

    Hannemann, Jan; Poorter, Hendrik; Usadel, Björn; Bläsing, Oliver E; Finck, Alex; Tardieu, Francois; Atkin, Owen K; Pons, Thijs; Stitt, Mark; Gibon, Yves

    2009-09-01

    Data mining depends on the ability to access machine-readable metadata that describe genotypes, environmental conditions, and sampling times and strategy. This article presents Xeml Lab. The Xeml Interactive Designer provides an interactive graphical interface at which complex experiments can be designed, and concomitantly generates machine-readable metadata files. It uses a new eXtensible Mark-up Language (XML)-derived dialect termed XEML. Xeml Lab includes a new ontology for environmental conditions, called Xeml Environment Ontology. However, to provide versatility, it is designed to be generic and also accepts other commonly used ontology formats, including OBO and OWL. A review summarizing important environmental conditions that need to be controlled, monitored and captured as metadata is posted in a Wiki (http://www.codeplex.com/XeO) to promote community discussion. The usefulness of Xeml Lab is illustrated by two meta-analyses of a large set of experiments that were performed with Arabidopsis thaliana during 5 years. The first reveals sources of noise that affect measurements of metabolite levels and enzyme activities. The second shows that Arabidopsis maintains remarkably stable levels of sugars and amino acids across a wide range of photoperiod treatments, and that adjustment of starch turnover and the leaf protein content contribute to this metabolic homeostasis.
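
    The general idea of emitting machine-readable experiment metadata can be illustrated with a short XML-generation sketch. The element names below are placeholders and do not follow the actual XEML schema, which is not reproduced in this abstract; only the pattern of capturing genotype, growth conditions and sampling as structured XML is shown.

```python
"""Illustrative generation of a machine-readable experiment-metadata file.

All element and attribute names are invented stand-ins, NOT the real XEML
dialect; the point is only that metadata can be emitted as structured XML.
"""
import xml.etree.ElementTree as ET

experiment = ET.Element("experiment", id="exp-001")
ET.SubElement(experiment, "genotype", species="Arabidopsis thaliana", line="Col-0")

conditions = ET.SubElement(experiment, "growthConditions")
ET.SubElement(conditions, "photoperiod", hours="12")
ET.SubElement(conditions, "temperature", day="21", night="17", unit="C")

sampling = ET.SubElement(experiment, "sampling")
ET.SubElement(sampling, "timepoint", time="ZT4", tissue="rosette leaf")

ET.ElementTree(experiment).write("experiment_metadata.xml",
                                 encoding="utf-8", xml_declaration=True)
print(ET.tostring(experiment, encoding="unicode"))
```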

  4. Computer-aided diagnosis of Parkinson’s disease based on [123I]FP-CIT SPECT binding potential images, using the voxels-as-features approach and support vector machines

    Science.gov (United States)

    Oliveira, Francisco P. M.; Castelo-Branco, Miguel

    2015-04-01

    Objective. The aim of the present study was to develop a fully-automated computational solution for computer-aided diagnosis in Parkinson syndrome based on [123I]FP-CIT single photon emission computed tomography (SPECT) images. Approach. A dataset of 654 [123I]FP-CIT SPECT brain images from the Parkinson’s Progression Markers Initiative were used. Of these, 445 images were of patients with Parkinson’s disease at an early stage and the remainder formed a control group. The images were pre-processed using automated template-based registration followed by the computation of the binding potential at a voxel level. Then, the binding potential images were used for classification, based on the voxel-as-feature approach and using the support vector machines paradigm. Main results. The obtained estimated classification accuracy was 97.86%, the sensitivity was 97.75% and the specificity 98.09%. Significance. The achieved classification accuracy was very high and, in fact, higher than accuracies found in previous studies reported in the literature. In addition, results were obtained on a large dataset of early Parkinson’s disease subjects. In summation, the information provided by the developed computational solution potentially supports clinical decision-making in nuclear medicine, using important additional information beyond the commonly used uptake ratios and respective statistical comparisons. (ClinicalTrials.gov Identifier: NCT01141023)
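
    A minimal sketch of the voxels-as-features idea with scikit-learn is shown below, assuming the registered binding-potential images have already been flattened into one feature vector per subject. The random arrays stand in for real SPECT data; this is not the authors' implementation.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Placeholder data: 100 subjects, each a flattened binding-potential image of 5000 voxels.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 5000))       # voxels-as-features matrix
      y = rng.integers(0, 2, size=100)       # 1 = early Parkinson's disease, 0 = control

      classifier = make_pipeline(StandardScaler(), SVC(kernel="linear"))
      accuracy = cross_val_score(classifier, X, y, cv=5).mean()   # estimated accuracy
      print(round(accuracy, 3))

    With real images, the cross-validated accuracy plays the role of the 97.86% figure reported in the record.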

  5. Design features of a sulphuric acid plant based on lead and zinc sintering machine off-gas (Chinese-language title: Research and development of new corrosion-resistant alloys for sulphuric acid service)

    Institute of Scientific and Technical Information of China (English)

    刘焕安

    2001-01-01

    Design features of a 150 kt/a sulphuric acid plant based on lead and zinc sintering machine off-gases are described. The plant adopted a single-absorption technology including closed dilute-acid-scrubbing gas cleaning and ammonia-acid off-gas treatment. The dilute acid settling system, cooling water circulation system, installation of the electrostatic precipitator, the high-temperature absorption technology and acid distributor of the drying and absorption section, and the preheater, hot bypass and insulation of the conversion section are described in detail. The accompanying Chinese-language abstract discusses the particular corrosiveness of sulphuric acid to metals and the basic principles of alloy design, and introduces the research, development and application range of the high-silicon stainless steel HD-1 and alloyed ductile iron HD-3 for hot concentrated sulphuric acid, and of the high-molybdenum, nitrogen-bearing austenitic stainless steels HD-7 and HD-11 for dilute sulphuric acid.

  6. Adding machine and calculating machine

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In 1642 the French mathematician Blaise Pascal (1623-1662) invented a machine that could add and subtract. It had wheels, each with 1 to 10 marked off along its circumference. When the wheel at the right, representing units, made one complete circle, it engaged the wheel to its left, representing tens, and moved it forward one notch.
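
    The carry mechanism described above is easy to mimic in a few lines of code: each wheel holds one decimal digit, and a completed revolution advances the wheel to its left by one notch. This is only an illustrative sketch of the principle, not a model of the actual Pascaline gearing.

      def add_on_wheels(wheels, units_to_add):
          # wheels[0] is the units wheel; each wheel shows a digit 0-9.
          wheels = list(wheels)
          carry, position = units_to_add, 0
          while carry and position < len(wheels):
              total = wheels[position] + carry
              wheels[position] = total % 10   # digit now shown on this wheel
              carry = total // 10             # completed turns engage the next wheel
              position += 1
          return wheels

      print(add_on_wheels([7, 9, 0], 5))      # 097 + 5 -> [2, 0, 1], i.e. 102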

  7. Improving the human readability of Arden Syntax medical logic modules using a concept-oriented terminology and object-oriented programming expressions.

    Science.gov (United States)

    Choi, Jeeyae; Bakken, Suzanne; Lussier, Yves A; Mendonça, Eneida A

    2006-01-01

    Medical logic modules are a procedural representation for sharing task-specific knowledge for decision support systems. Based on the premise that clinicians may perceive object-oriented expressions as easier to read than procedural rules in Arden Syntax-based medical logic modules, we developed a method for improving the readability of medical logic modules. Two approaches were applied: exploiting the concept-oriented features of the Medical Entities Dictionary and building an executable Java program to replace Arden Syntax procedural expressions. The usability evaluation showed that 66% of participants successfully mapped all Arden Syntax rules to Java methods. These findings suggest that these approaches can play an essential role in the creation of human readable medical logic modules and can potentially increase the number of clinical experts who are able to participate in the creation of medical logic modules. Although our approaches are broadly applicable, we specifically discuss the relevance to concept-oriented nursing terminologies and automated processing of task-specific nursing knowledge.

  8. Performance of machine learning methods for classification tasks

    OpenAIRE

    B. Krithika; Dr. V. Ramalingam; Rajan, K

    2013-01-01

    In this paper, the performance of various machine learning methods on pattern classification and recognition tasks is evaluated. The proposed method for evaluating performance is based on the feature representation, the feature selection and the setting of model parameters. The nature of the data and the methods of feature extraction and feature representation are discussed. The results of the machine learning algorithms on the classification task are analysed. The performance of Machine Learning meth...

  9. Online Health Information Regarding Male Infertility: An Evaluation of Readability, Suitability, and Quality

    Science.gov (United States)

    Robins, Stephanie; Barr, Helena J; Idelson, Rachel; Lambert, Sylvie

    2016-01-01

    Background Many men lack knowledge about male infertility, and this may have consequences for their reproductive and general health. Men may prefer to seek health information online, but these sources of information vary in quality. Objective The objective of this study is to determine if online sources of information regarding male infertility are readable, suitable, and of appropriate quality for Internet users in the general population. Methods This study used a cross-sectional design to evaluate online sources resulting from search engine queries. The following categories of websites were considered: (1) Canadian fertility clinics, (2) North American organizations related to fertility, and (3) the first 20 results of Google searches using the terms “male infertility” and “male fertility preservation” set to the search locations worldwide, English Canada, and French Canada. Websites that met inclusion criteria (N=85) were assessed using readability indices, the Suitability Assessment of Materials (SAM), and the DISCERN tool. The associations between website affiliation (government, university/medical, non-profit organization, commercial/corporate, private practice) and Google placement to readability, suitability, and quality were also examined. Results None of the sampled websites met recommended levels of readability. Across all websites, the mean SAM score for suitability was 45.37% (SD 11.21), or “adequate”, while the DISCERN mean score for quality was 43.19 (SD 10.46) or “fair”. Websites that placed higher in Google obtained a higher overall score for quality with an r (58) value of -.328 and a P value of .012, but this position was not related to readability or suitability. In addition, 20% of fertility clinic websites did not include fertility information for men. Conclusions There is a lack of high quality online sources of information on male fertility. Many websites target their information to women, or fail to meet established

  10. Readability and comprehensibility of informed consent forms for clinical trials

    Directory of Open Access Journals (Sweden)

    Anvita Pandiya

    2010-01-01

    A shortened Informed Consent Form, containing the information that a reasonable person would want to understand along with the specific information that the individual wants in particular, would be a good option for improving understanding, or comprehensibility. Additional informational meetings with a qualified person, such as a counselor, could help comprehension. Questionnaires designed to test patient comprehension, peer review, and having the patient write down the salient features could help evaluate the comprehensibility of the Informed Consent Form.

  11. Development and Test of a Computer Readability Editing System (CRES).

    Science.gov (United States)

    1980-03-01

    Only a sentence fragment of this record's abstract survives legibly; the rest is an optical-character-recognition word list. The legible fragment reads: "... encountered in naval service. For all but the most simple types of equipment, manufacturers' instruction books are supplied. They contain detailed information ..."

  12. Genesis machines

    CERN Document Server

    Amos, Martyn

    2014-01-01

    Silicon chips are out. Today's scientists are using real, wet, squishy, living biology to build the next generation of computers. Cells, gels and DNA strands are the 'wetware' of the twenty-first century. Much smaller and more intelligent, these organic computers open up revolutionary possibilities. Tracing the history of computing and revealing a brave new world to come, Genesis Machines describes how this new technology will change the way we think not just about computers - but about life itself.

  13. A Knowledge base model for complex forging die machining

    CERN Document Server

    Mawussi, Kwamiwi; 10.1016/j.cie.2011.02.016

    2011-01-01

    Recent evolutions in the forging process induce more complex shapes on forging dies. These evolutions, combined with the High Speed Machining (HSM) process of forging dies, lead to an important increase in machining preparation time. In this context, an original approach for generating the machining process based on machining knowledge is proposed in this paper. The core of this approach is to decompose a CAD model of a complex forging die into geometric features. Technological data and topological relations are aggregated to a geometric feature in order to create machining features. Technological data, such as material, surface roughness and form tolerance, are defined during forging process and die design. These data are used to choose cutting tools and machining strategies. Topological relations define the relative positions between the surfaces of the die CAD model. After machining feature identification, the cutting tools and machining strategies currently used in HSM of forging dies are associated with them in order to generate mac...

  14. Readability dependence on lithography conditions for printing code marks using a squared optical fiber matrix and light-emitting diodes

    Science.gov (United States)

    Watanabe, Jun; Kato, Kazuhide; Iwasaki, Jun-ya; Horiuchi, Toshiyuki

    2015-06-01

    The direct readability of code marks printed using a new exposure system was investigated. In the new exposure system, code-mark patterns were printed using LEDs as exposure sources and squared optical-fiber ends as code-mark elements. A 10 × 10 fiber matrix was fabricated, and the light emitted from each LED was led to each fiber. Because gaps appeared between the code-mark cells, a long exposure time was adopted, and the gaps between cells were eliminated by giving an overdose of light. After code-mark patterns were stably printed, their readability was investigated using a commercial code-mark reader. It was found that all the printed code marks were readable without errors. Concretely, all 100 identical marks printed on a wafer were readable. Moreover, six kinds of marks were repeatedly detected more than 100 times with no reading errors.

  15. Exploring the Readability of Assessment Tasks: The Influence of Text and Reader Factors

    Directory of Open Access Journals (Sweden)

    David Wray

    2013-02-01

    Full Text Available Readability is the degree to which a text is matched to its intended and actual reader. The factors influencing readability, both text factors and reader factors, have been widely researched from the standpoint of attempts to maximise reader understanding of texts. Understandings in the area have not, however, always been applied systematically to the design and writing of assessment tasks, and consequently test items are sometimes less accessible to the intended test takers than they might be. This paper is an attempt to provide a wide-ranging review of literature which bears on the task of the assessment designer in ensuring that assessment items measure what they are supposed to measure, and not just the reading abilities of the test takers.

  16. Availability and Readability of Online Patient Education Materials Regarding Regional Anesthesia Techniques for Perioperative Pain Management.

    Science.gov (United States)

    Kumar, Gunjan; Howard, Steven K; Kou, Alex; Kim, T Edward; Butwick, Alexander J; Mariano, Edward R

    2016-08-02

    OBJECTIVE: Patient education materials (PEM) should be written at a sixth-grade reading level or lower. We evaluated the availability and readability of online PEM related to regional anesthesia and compared the readability and content of online PEM produced by fellowship and nonfellowship institutions. METHODS: With IRB exemption, we constructed a cohort of online regional anesthesia PEM by searching Websites from North American academic medical centers supporting a regional anesthesiology and acute pain medicine fellowship, and used a standardized Internet search engine protocol to identify additional nonfellowship Websites with regional anesthesia PEM based on relevant keywords. Readability metrics were calculated from PEM using the TextStat 0.1.4 textual analysis package for Python 2.7 and compared between institutions with and without a fellowship program. The presence of specific descriptive PEM elements related to regional anesthesia was also compared between groups. RESULTS: PEM from 17 fellowship and 15 nonfellowship institutions were included in the analyses. The mean (SD) Flesch-Kincaid Grade Level for PEM from the fellowship group was 13.8 (2.9) vs 10.8 (2.0) for the nonfellowship group (p = 0.002). We observed no other differences in readability metrics between fellowship and nonfellowship institutions. Fellowship-based PEM less commonly included descriptions of the following risks: local anesthetic systemic toxicity (p = 0.033) and injury due to an insensate extremity (p = 0.003). CONCLUSIONS: Available online PEM related to regional anesthesia are well above the recommended reading level. Further, fellowship-based PEM are posted at a higher reading level than PEM posted by nonfellowship institutions and are more likely to omit certain risk descriptions.
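
    The record above computes its metrics with the TextStat package for Python 2.7; the present-day textstat package on PyPI exposes similar one-line convenience functions, as in the sketch below. The sample sentences are invented, and exact scores vary slightly between package versions.

      # pip install textstat
      import textstat

      pem = ("A nerve block numbs part of your body so that surgery is more comfortable. "
             "The doctor injects medicine near the nerves that go to the surgical area.")

      print("Flesch Reading Ease: ", textstat.flesch_reading_ease(pem))
      print("Flesch-Kincaid Grade:", textstat.flesch_kincaid_grade(pem))
      print("SMOG Index:          ", textstat.smog_index(pem))
      print("Gunning Fog:         ", textstat.gunning_fog(pem))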

  17. A Practical Method to Increase the Frequency Readability for Vibration Signals

    Directory of Open Access Journals (Sweden)

    Jean Loius Ntakpe

    2016-10-01

    Full Text Available Damage detection and nondestructive evaluation of mechanical and civil engineering structures are nowadays very important to assess the integrity and ensure the reliability of structures. Thus, frequency evaluation becomes a crucial issue, since this modal parameter is mainly used in structural integrity assessment. The herein presented study highligts the possibility of increasing the frequency readability by involving a simple and cost-effective method.

  18. Simulating Turing machines on Maurer machines

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2008-01-01

    In a previous paper, we used Maurer machines to model and analyse micro-architectures. In the current paper, we investigate the connections between Turing machines and Maurer machines with the purpose to gain an insight into computability issues relating to Maurer machines. We introduce ways to

  19. Transrectal Ultrasound Guided Biopsy of the Prostate: Is the Information Accessible, Usable, Reliable and Readable?

    Science.gov (United States)

    Redmond, Ciaran E.; Nason, Gregory J.; Kelly, Michael E.; McMahon, Colm; Cantwell, Colin P.; Quinlan, David M.

    2015-01-01

    Background/Aims To evaluate the accessibility, usability, reliability and readability of Internet information regarding transrectal ultrasound (TRUS) guided biopsy of the prostate. Materials and Methods The terms “prostate biopsy”, “TRUS biopsy” and “transrectal ultrasound guided biopsy of the prostate” were separately entered into the each of the top 5 most accessed Internet search engines. Websites were evaluated for accessibility, usability and reliability using the LIDA tool – a validated tool for the assessment of health related websites. Website readability was assessed using the Flesch Reading Ease Score and the Flesch Kincaid Grade Level. Results Following the application of exclusion criteria, 82 unique websites were analyzed. There was a significant difference in scores depending on authorship categories (p ≤ 0.001), with health related charity websites scoring highest (mean 122.29 ± 13.98) and non-academic affiliated institution websites scoring lowest (mean 87 ± 19.76). The presence of advertisements on a website was associated with a lower mean overall LIDA tool score (p = 0.024). Only a single website adhered to the National Institutes for Health recommendations on readability. Conclusions This study demonstrates variability in the quality of information available to Internet users regarding TRUS biopsies. Collaboration of website design and clinical acumen are necessary to develop appropriate websites for patient benefit. PMID:26195961

  20. Readability of Educational Materials to Support Parent Sexual Communication With Their Children and Adolescents.

    Science.gov (United States)

    Ballonoff Suleiman, Ahna; Lin, Jessica S; Constantine, Norman A

    2016-05-01

    Sexual communication is a principal means of transmitting sexual values, expectations, and knowledge from parents to their children and adolescents. Many parents seek information and guidance to support talking with their children about sex and sexuality. Parent education materials can deliver this guidance but must use appropriate readability levels to facilitate comprehension and motivation. This study appraised the readability of educational materials to support parent sexual communication with their children. Fifty brochures, pamphlets, and booklets were analyzed using the Flesch-Kincaid, Gunning Fog, and Simple Measure of Gobbledygook (SMOG) index methods. Mean readability grade-level scores were 8.3 (range = 4.5-12.8), 9.7 (range = 5.5-14.9), and 10.1 (range = 6.7-13.9), respectively. Informed by National Institutes of Health-recommended 6th to 7th grade levels and American Medical Association-recommended 5th to 6th grade levels, percentages falling at or below the 7.0 grade level were calculated as 38%, 12%, and 2% and those falling at or below the 6.0 grade level were calculated as 12%, 2%, and 0% based on the Flesch-Kincaid, Gunning Fog, and SMOG methods, respectively. These analyses indicate that the majority of educational materials available online to support parents' communication with their children about sex and sexuality do not meet the needs of many or most parents. Efforts to improve the accessibility of these materials are warranted.
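
    Of the three index methods used in this record, SMOG has the simplest published formula: it depends only on the number of sentences and the number of words of three or more syllables. The sketch below applies McLaughlin's published constants; the regex-based polysyllable counter is a crude assumption used only for illustration.

      import math
      import re

      def is_polysyllabic(word):
          # Crude assumption: three or more vowel groups ~ three or more syllables.
          return len(re.findall(r"[aeiouy]+", word.lower())) >= 3

      def smog_grade(text):
          sentences = max(1, len(re.findall(r"[.!?]+", text)))
          polysyllables = sum(1 for w in re.findall(r"[A-Za-z']+", text) if is_polysyllabic(w))
          # SMOG grading formula (McLaughlin, 1969).
          return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

      sample = ("Talk with your teenager about relationships. "
                "Honest conversation builds confidence and trust.")
      print(round(smog_grade(sample), 1))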

  1. Readability and quality assessment of internet-based patient education materials related to laryngeal cancer.

    Science.gov (United States)

    Narwani, Vishal; Nalamada, Keerthana; Lee, Michael; Kothari, Prasad; Lakhani, Raj

    2016-04-01

    Patients are increasingly using the internet to access health-related information. The purpose of this study was to assess the readability and quality of laryngeal cancer-related websites. Patient education materials were identified by performing an internet search using 3 search engines. Readability was assessed using Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and Gunning Fog Index (GFI). The DISCERN instrument was utilized to assess quality of health information. A total of 54 websites were included in the analysis. The mean readability scores were as follows: FRES, 48.2 (95% confidence interval [CI] = 44.8-51.6); FKGL, 10.9 (95% CI = 10.3-11.5); and GFI, 13.8 (95% CI = 11.3-16.3). These scores suggest that, on average, online information about patients with laryngeal cancer is written at an advanced level. The mean DISCERN score was 49.8 (95% CI = 45.4-54.2), suggesting that online information is of variable quality. Our study suggests much of the laryngeal cancer information available online is of suboptimal quality and written at a level too difficult for the average adult to read comfortably. © 2015 Wiley Periodicals, Inc.

  2. A quantitative readability analysis of patient education resources from gastroenterology society websites.

    Science.gov (United States)

    Hansberry, David R; Patel, Sahil R; Agarwal, Prateek; Agarwal, Nitin; John, Elizabeth S; John, Ann M; Reynolds, James C

    2017-06-01

    The lay public frequently access and rely on online information as a source of their medical knowledge. Many medical societies are unaware of national patient education material guidelines and subsequently fail to meet them. The goal of the present study was to evaluate the readability of patient education materials within the medical field of gastroenterology. Two hundred fourteen articles pertaining to patient education materials were evaluated with ten well-established readability scales. The articles were available on the websites of the American College of Gastroenterology (ACG), the American Gastroenterological Association (AGA), the American Society of Gastrointestinal Endoscopy (ASGE), the British Society of Gastroenterology (BSG), and the NIH section National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). One-way analysis of variance (ANOVA) and Tukey's honest significant difference (HSD) post hoc analysis were conducted to determine any differences in level of readability between websites. The 214 articles were written at an 11.8 ± 2.1 grade level, with a range of 8.0 to 16.0. The one-way ANOVA and Tukey's HSD post hoc analysis determined that the readability of the ACG materials differed significantly from that of the other websites; none of the patient education materials were written at a level that met national guidelines. If the materials are redrafted, the general American public will likely have a greater understanding of the gastroenterology content.

  3. Environmentally Friendly Machining

    CERN Document Server

    Dixit, U S; Davim, J Paulo

    2012-01-01

    Environment-Friendly Machining provides an in-depth overview of environmentally-friendly machining processes, covering numerous different types of machining in order to identify which practice is the most environmentally sustainable. The book discusses three systems at length: machining with minimal cutting fluid, air-cooled machining and dry machining. Also covered is a way to conserve energy during machining processes, along with useful data and detailed descriptions for developing and utilizing the most efficient modern machining tools. Researchers and engineers looking for sustainable machining solutions will find Environment-Friendly Machining to be a useful volume.

  4. STEP based Finish Machining CAPP system

    OpenAIRE

    A Arivazhagan; Mehta, NK; Jain, PK

    2012-01-01

    This research paper presents various methodologies developed in a STEP based Computer Aided Process Planning (CAPP) system named "Finish Machining – CAPP" (FM-CAPP). It is developed to generate automatic process plans for finish machining prismatic parts. It is designed in a modular fashion consisting of three main modules, namely (i) Feature Recognition module (FRM) (ii) Machining Planning Module (MPM) and (iii) Setup Planning Module (SPM). The FRM Module analyses the geometrical and topolog...

  5. Machine Transliteration

    CERN Document Server

    Knight, K; Knight, Kevin; Graehl, Jonathan

    1997-01-01

    It is challenging to translate names and technical terms across languages with different alphabets and sound inventories. These items are commonly transliterated, i.e., replaced with approximate phonetic equivalents. For example, "computer" in English comes out as "konpyuutaa" in Japanese. Translating such items from Japanese back to English is even more challenging, and of practical interest, as transliterated items make up the bulk of text phrases not found in bilingual dictionaries. We describe and evaluate a method for performing backwards transliterations by machine. This method uses a generative model, incorporating several distinct stages in the transliteration process.

  6. Identification Method of Gas-Liquid Two-phase Flow Regime Based on Image Multi-feature Fusion and Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    周云龙; 陈飞; 孙斌

    2008-01-01

    The knowledge of flow regime is very important for quantifying the pressure drop and the stability and safety of two-phase flow systems. Based on image multi-feature fusion and a support vector machine, a new method to identify the flow regime in two-phase flow was presented. Firstly, gas-liquid two-phase flow images including bubbly flow, plug flow, slug flow, stratified flow, wavy flow, annular flow and mist flow were captured by digital high-speed video systems in the horizontal tube. The image moment invariants and gray level co-occurrence matrix texture features were extracted using image processing techniques. To improve the performance of the multiple classifier system, rough sets theory was used to remove inessential factors. Furthermore, the support vector machine was trained using these dimension-reduced eigenvectors as flow regime samples, and intelligent flow regime identification was realized. The test results showed that image features reduced with rough sets theory could excellently reflect the difference between the seven typical flow regimes, and that the successfully trained support vector machine could quickly and accurately identify the seven typical flow regimes of gas-liquid two-phase flow in the horizontal tube. The image multi-feature fusion method provided a new way to identify gas-liquid two-phase flow and achieved higher identification ability than that of a single feature. The overall identification accuracy was 100%, and the estimated image processing time for online flow regime identification was 8 ms.
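
    A minimal sketch of the texture-features-plus-SVM idea is given below using scikit-image and scikit-learn. The random frames stand in for real flow images and the rough-set feature reduction step is omitted, so this illustrates the general pipeline rather than the authors' implementation (older scikit-image releases spell the co-occurrence functions greycomatrix/greycoprops).

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      def texture_features(frame):
          # Gray-level co-occurrence features for one 8-bit grayscale frame.
          glcm = graycomatrix(frame, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          props = ["contrast", "homogeneity", "energy", "correlation"]
          return np.hstack([graycoprops(glcm, p).ravel() for p in props])

      # Placeholder data: random frames standing in for the seven flow regimes.
      rng = np.random.default_rng(1)
      frames = rng.integers(0, 256, size=(70, 64, 64), dtype=np.uint8)
      labels = rng.integers(0, 7, size=70)

      X = np.array([texture_features(f) for f in frames])
      classifier = SVC(kernel="rbf").fit(X, labels)
      print(classifier.predict(X[:3]))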

  7. Machine Protection

    CERN Document Server

    Schmidt, R

    2014-01-01

    The protection of accelerator equipment is as old as accelerator technology and was for many years related to high-power equipment. Examples are the protection of powering equipment from overheating (magnets, power converters, high-current cables), of superconducting magnets from damage after a quench and of klystrons. The protection of equipment from beam accidents is more recent. It is related to the increasing beam power of high-power proton accelerators such as ISIS, SNS, ESS and the PSI cyclotron, to the emission of synchrotron light by electron–positron accelerators and FELs, and to the increase of energy stored in the beam (in particular for hadron colliders such as LHC). Designing a machine protection system requires an excellent understanding of accelerator physics and operation to anticipate possible failures that could lead to damage. Machine protection includes beam and equipment monitoring, a system to safely stop beam operation (e.g. dumping the beam or stopping the beam at low energy) and an ...

  8. Clustering Categories in Support Vector Machines

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Nogales-Gómez, Amaya; Morales, Dolores Romero

    2017-01-01

    The support vector machine (SVM) is a state-of-the-art method in supervised classification. In this paper the Cluster Support Vector Machine (CLSVM) methodology is proposed with the aim to increase the sparsity of the SVM classifier in the presence of categorical features, leading to a gain in in...

  9. The Improved Relevance Voxel Machine

    DEFF Research Database (Denmark)

    Ganz, Melanie; Sabuncu, Mert; Van Leemput, Koen

    The concept of sparse Bayesian learning has received much attention in the machine learning literature as a means of achieving parsimonious representations of features used in regression and classification. It is an important family of algorithms for sparse signal recovery and compressed sensing...

  10. Analysis of machining and machine tools

    CERN Document Server

    Liang, Steven Y

    2016-01-01

    This book delivers the fundamental science and mechanics of machining and machine tools by presenting systematic and quantitative knowledge in the form of process mechanics and physics. It gives readers a solid command of machining science and engineering, and familiarizes them with the geometry and functionality requirements of creating parts and components in today’s markets. The authors address traditional machining topics, such as single- and multiple-point cutting processes, grinding, component accuracy and metrology, shear stress in cutting, cutting temperature and analysis, and chatter. They also address non-traditional machining, such as electrical discharge machining, electrochemical machining, and laser and electron beam machining. A chapter on biomedical machining is also included. This book is appropriate for advanced undergraduate and graduate mechanical engineering students, manufacturing engineers, and researchers. Each chapter contains examples, exercises and their solutions, and homework problems that re...

  11. Vehicle brand recognition based on HOG features and support vector machine

    Institute of Scientific and Technical Information of China (English)

    张小琴; 赵池航; 沙月进; 党倩; 张运胜

    2013-01-01

    In order to solve the problem of fake-licensed car and illegal car identification, a vehicle brand recognition method is proposed. It detects the car front face based on symmetry features, extracts the HOG (histogram of oriented gradients) feature of the car face and then uses a support vector machine (SVM) to classify the vehicle brand. According to the road bayonet pictures provided by the Suzhou Municipal Public Security Bureau, a car face database was built, which includes 3000 pictures of 15 vehicle brands such as Audi, Changan and Nissan. Based on the built database, experiments were conducted using the proposed method for vehicle brand recognition. The performance of three kinds of SVM kernel functions (linear kernel function, polynomial kernel function and radial basis kernel function) was compared and analyzed. The overall classification accuracies of the three kernel functions were 89.27%, 89.74% and 89.89%. Theoretical analysis and experimental results show that the proposed recognition method based on HOG features and SVM is feasible, and that the SVM classifier based on the radial basis function performs best.
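
    The HOG-plus-SVM pipeline and the three-kernel comparison described above can be sketched in a few lines with scikit-image and scikit-learn. The random images below are placeholders rather than the Suzhou car-face database, so the printed accuracies only demonstrate the workflow.

      import numpy as np
      from skimage.feature import hog
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(2)
      faces = rng.random(size=(90, 64, 64))        # placeholder car-front-face crops
      brands = np.repeat(np.arange(15), 6)         # 15 vehicle brands, 6 samples each

      # Histogram-of-oriented-gradients descriptor for each grayscale crop.
      X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                        cells_per_block=(2, 2)) for img in faces])

      for kernel in ("linear", "poly", "rbf"):
          accuracy = cross_val_score(SVC(kernel=kernel), X, brands, cv=3).mean()
          print(kernel, round(accuracy, 3))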

  12. Machine Learning in Parliament Elections

    Directory of Open Access Journals (Sweden)

    Ahmad Esfandiari

    2012-09-01

    Full Text Available Parliament is considered one of the most important pillars of a country's governance. Parliamentary elections, and predicting their outcome, have long been considered by scholars from various fields, such as political science. Several important features are used to model the results of consultative parliament elections: reputation and popularity, political orientation, tradesmen's support, clergymen's support, support from political wings, and the type of supportive wing. Two of these features, reputation and popularity and the support of clergymen and religious scholars, which have the greatest impact on reducing the prediction error, were used as input parameters in the implementation. In this study, the Iranian parliamentary elections are modeled and predicted using learning machines, namely a neural network and a neuro-fuzzy system. The neuro-fuzzy machine combines the knowledge-representation ability of fuzzy sets with the learning power of neural networks. In predicting this social and political behavior, the neural network is first trained by two learning algorithms on the training data set and then predicts the result on the test data. Next, the neuro-fuzzy inference machine is trained in the same way. Finally, the results of the two machines are compared.

  13. Machine Process Capability Information Through Six Sigma

    Energy Technology Data Exchange (ETDEWEB)

    Lackner, M.F.

    1998-03-13

    A project investigating details concerning machine process capability information and its accessibility has been conducted. The thesis of the project proposed designing a part (denoted as a machine capability workpiece) based on the major machining features of a given machine. Parts are machined and measured to gather representative production, short-term variation. The information is utilized to predict the expected defect rate, expressed in terms of a composite sigma level process capability index, for a production part. Presently, decisions concerning process planning, particularly what machine will statistically produce the minimum amount of defects based on machined features and associated tolerances, are rarely made. Six sigma tools and methodology were employed to conduct this investigation at AlliedSignal FM and T. Tools such as the thought process map, factor relationship diagrams, and components of variance were used. This study is progressing toward completion. This research study was an example of how machine process capability information may be gathered for milling planar faces (horizontal) and slot features. The planning method used to determine where and how to gather variation for the part to be designed is known as factor relationship diagramming. Components-of-variation is then applied to the gathered data to arrive at the contributing level of variation illustrated within the factor relationship diagram. The idea of using this capability information beyond process planning to the other business enterprise operations is proposed.
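
    The record does not spell out the composite index it uses, but the standard short-term capability quantities can be illustrated with a few lines of Python. The measurements and specification limits below are invented, and the conversion to a sigma level deliberately omits the conventional 1.5-sigma long-term shift; treat this as a sketch of the arithmetic, not the AlliedSignal procedure.

      import numpy as np
      from scipy.stats import norm

      # Hypothetical measurements (mm) of one machined slot width, with invented spec limits.
      measurements = np.array([10.012, 10.008, 10.015, 10.010, 10.006, 10.013, 10.009, 10.011])
      lsl, usl = 9.990, 10.030

      mean, sigma = measurements.mean(), measurements.std(ddof=1)
      cpk = min(usl - mean, mean - lsl) / (3 * sigma)          # short-term capability index

      # Expected out-of-spec fraction under a normal model, and the equivalent sigma level.
      p_defect = norm.cdf(lsl, mean, sigma) + norm.sf(usl, mean, sigma)
      sigma_level = norm.isf(p_defect)

      print(f"Cpk = {cpk:.2f}, defect rate = {p_defect:.2e}, sigma level = {sigma_level:.2f}")

    In a study like the one described, such numbers would be computed per machining feature on the capability workpiece and then rolled up into a composite figure for the whole machine.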

  14. Automation of printing machine

    OpenAIRE

    Sušil, David

    2016-01-01

    Bachelor thesis is focused on the automation of the printing machine and comparing the two types of printing machines. The first chapter deals with the history of printing, typesettings, printing techniques and various kinds of bookbinding. The second chapter describes the difference between sheet-fed printing machines and offset printing machines, the difference between two representatives of rotary machines, technological process of the products on these machines, the description of the mac...

  15. Weighted Feature Distance

    DEFF Research Database (Denmark)

    Ortiz-Arroyo, Daniel; Yazdani, Hossein

    2017-01-01

    The accuracy of machine learning methods for clustering depends on the optimal selection of similarity functions. Conventional distance functions for the vector space might cause an algorithm to be affected by some dominant features that may skew its final results. This paper introduces a flexib...

  16. Readability, complexity, and suitability of online resources for mastectomy and lumpectomy.

    Science.gov (United States)

    Tran, Bao Ngoc N; Singh, Mansher; Singhal, Dhruv; Rudd, Rima; Lee, Bernard T

    2017-05-15

    Nearly half of American adults have low or marginal health literacy. This negatively affects patients' participation, decision-making, satisfaction, and overall outcomes especially when there is a mismatch between information provided and the skills of the intended audience. Recommendations that patient information be written below the sixth grade level have been made for over three decades. This study compares online resources for mastectomy versus lumpectomy using expanded metrics including readability level, complexity, and density of data and overall suitability for public consumption. The 10 highest ranked Web sites for mastectomy and lumpectomy were identified using the largest Internet engine (Google). Each Web site was assessed for readability (Simple Measure of Gobbledygook), complexity (PMOSE/iKIRSCH), and suitability (Suitability Assessment of Materials). Scores were analyzed by each Web site and overall. Readability analysis showed a significant reading grade level difference between mastectomy and lumpectomy online information (15.4 and 13.9, P = 0.04, respectively). Complexity analysis via PMOSE/iKIRSCH revealed a mean score of 6.5 for mastectomy materials corresponding to "low" complexity and eighth to 12th grade education. Lumpectomy literature had a lower PMOSE/iKIRSCH score of 5.8 corresponding to a "very low" complexity and fourth to eighth grade education (P = 0.05). Suitability assessment showed mean values of 41% and 46% (P = 0.83) labeled as the lowest level of "adequacy" for mastectomy and lumpectomy materials, respectively. Inter-rater reliability was high for both complexity and suitability analysis. Online resources for the surgical treatment of breast cancer are above the recommended reading grade level. The suitability level is barely adequate indicating a need for revision. Online resources for mastectomy have a higher reading grade level than do materials for lumpectomy and tend to be more complex. Copyright © 2017 Elsevier

  17. Readability and Content Assessment of Informed Consent Forms for Medical Procedures in Croatia

    Science.gov (United States)

    Vučemilo, Luka; Borovečki, Ana

    2015-01-01

    Background High quality of informed consent form is essential for adequate information transfer between physicians and patients. Current status of medical procedure consent forms in clinical practice in Croatia specifically in terms of the readability and the content is unknown. The aim of this study was to assess the readability and the content of informed consent forms for diagnostic and therapeutic procedures used with patients in Croatia. Methods 52 informed consent forms from six Croatian hospitals on the secondary and tertiary health-care level were tested for reading difficulty using Simple Measure of Gobbledygook (SMOG) formula adjusted for Croatian language and for qualitative analysis of the content. Results The averaged SMOG grade of analyzed informed consent forms was 13.25 (SD 1.59, range 10–19). Content analysis revealed that informed consent forms included description of risks in 96% of the cases, benefits in 81%, description of procedures in 78%, alternatives in 52%, risks and benefits of alternatives in 17% and risks and benefits of not receiving treatment or undergoing procedures in 13%. Conclusions Readability of evaluated informed consent forms is not appropriate for the general population in Croatia. The content of the forms failed to include in high proportion of the cases description of alternatives, risks and benefits of alternatives, as well as risks and benefits of not receiving treatments or undergoing procedures. Data obtained from this research could help in development and improvement of informed consent forms in Croatia especially now when Croatian hospitals are undergoing the process of accreditation. PMID:26376183

  18. Assessing the quality, suitability and readability of internet-based health information about warfarin for patients

    Directory of Open Access Journals (Sweden)

    Sayeed Nasser

    2012-03-01

    Full Text Available Background: Warfarin is a high-risk medication where patient information may be critical to help ensure safe and effective treatment. Considering the time constraints of healthcare providers, the internet can be an important supplementary information resource for patients prescribed warfarin. The usefulness of internet-based patient information is often limited by challenges associated with finding valid and reliable health information. Given patients’ increasing access to the internet for information, this study investigated the quality, suitability and readability of patient information about warfarin presented on the internet. Method: Previously validated tools were used to evaluate the quality, suitability and readability of patient information about warfarin on selected websites. Results: The initial search yielded 200 websites, of which 11 fit the selection criteria, comprising seven non-commercial and four commercial websites. Regarding quality, most of the non-commercial sites (six out of seven) scored at least an ‘adequate’ score. With regard to suitability, 6 of the 11 websites (including two of the four commercial sites) attained an ‘adequate’ score. It was determined that information on 7 of the 11 sites (including two commercial sites) was written at reading grade levels beyond that considered representative of the adult patient population with poor literacy skills (e.g. school grade 8 or less). Conclusion: Despite the overall ‘adequate’ quality and suitability of the internet-derived patient information about warfarin, the actual usability of such websites may be limited due to their poor readability grades, particularly for patients with low literacy skills.

  19. GCP compliance and readability of informed consent forms from an emerging hub for clinical trials

    Directory of Open Access Journals (Sweden)

    Satish Chandrasekhar Nair

    2015-01-01

    Full Text Available Background: The rapid expansion of trials in emerging regions has raised valid concerns about research subject protection, particularly related to informed consent. The purpose of this study is to assess informed consent form (ICF) compliance with Good Clinical Practice (GCP) guidelines and the readability ease of the ICFs in Abu Dhabi, a potential destination for clinical trials in the UAE. Materials and Methods: A multicenter retrospective cross-sectional analysis of 140 ICFs from industry-sponsored and non-sponsored studies was conducted by comparing against a local standard ICF. The Flesch-Kincaid Reading Scale was used to assess the readability ease of the forms. Results: Non-sponsored studies had significantly lower overall GCP compliance, 55.8%, when compared to 79.5% for industry-sponsored studies. Only 33% of sponsored and 16% of non-sponsored studies included basic information on the participants' rights and responsibilities. The Flesch-Kincaid Reading Ease score for the informed consent forms from industry-sponsored studies was significantly higher, 48.9 ± 4.8, as compared to 38.5 ± 8.0 for non-sponsored studies, though both were more complex than recommended. Reading Grade Level scores were also higher than expected, but scores for the ICFs from the industry-sponsored studies, at 9.7 ± 0.7, were significantly lower than the 12.2 ± 1.3 for non-sponsored studies. Conclusion: In spite of the undisputed benefits of conducting research in emerging markets, readability and comprehension issues and the lack of basic essential information call for improvements in the ICFs to protect the rights of future research subjects enrolled in clinical trials in the UAE.

  20. Readability and comprehensibility of patient education material in hand-related web sites.

    Science.gov (United States)

    Wang, Steve W; Capo, John T; Orillaza, Nathaniel

    2009-09-01

    As patients are more frequently referring to the Internet for information on their musculoskeletal problems, the readability and comprehensibility of these educational materials become increasingly important to most of the lay public. In this study, we investigated the readability of the currently available web sites of the American Society for Surgery of the Hand (ASSH) and the American Academy of Orthopaedic Surgeons (AAOS) that pertain to hand and wrist problems, to assess their usefulness as a source for patient information. We analyzed all articles available in 2008 from the AAOS web site within the Patient Education Library under the heading, "Hand & Wrist" and from the ASSH web site under the heading, "Hand Conditions." A total of 83 articles were identified for hand conditions. Each article was analyzed by the Flesch-Kincaid program available in Microsoft Office Word software and the Dale-Chall grade-level assessor. These programs analyze all words in the specified text and return a grade level that corresponds to the difficulty level of the text. The AAOS web site contained 34 articles with a mean Flesch-Kincaid grade level of 8.5 and a mean Dale-Chall grade level of 8.8. The ASSH web site contained 49 articles showing a mean Flesch-Kincaid grade level of 10.4 and a mean Dale-Chall grade level of 10.8. Our results suggest that the patient education materials found on the AAOS and ASSH web sites have readability scores that are higher than the recommended reading levels and thus may be too difficult to be understood by a substantial portion of the U.S. population.

  1. Beauty and the Beast - on the readability of object-oriented example programs

    DEFF Research Database (Denmark)

    Börstler, Jürgen; Caspersen, Michael E.; Nordström, Marie

    2016-01-01

    Some solutions to a programming problem are more beautiful, elegant, and simple than others and thus more understandable for students. But why is it so, and can we quantify the notion of understandability of programs? We review desirable properties of program examples from a cognitive...... and a measurement point of view. It can be argued that some cognitive aspects of example programs are captured by common software measures, but we argue that they are not sufficient to capture the most important aspects of understandability. A key aspect of understandability is readability. The authors propose...

  2. Logic without unique readability - a study of semantic and syntactic ambiguity

    DEFF Research Database (Denmark)

    Bentzen, Martin Mose

    of logic such as definability, satisfiability and validity. Here follows two simple examples illustrating the relation between syntactic and semantic ambiguity. In some cases unique readability can be regained through careful construction of formulas. E.g., although an attempt to define p → q as ¬p ∨ q...... would be syntactically and semantically ambiguous, one may define it as q ∨ ¬p, which can be read only one way (but obviously this construction is not stable under substitution). Syntactic ambiguity does not imply semantic ambiguity, although it is typically the case. For instance, although the formula...

  3. Machine musicianship

    Science.gov (United States)

    Rowe, Robert

    2002-05-01

    The training of musicians begins by teaching basic musical concepts, a collection of knowledge commonly known as musicianship. Computer programs designed to implement musical skills (e.g., to make sense of what they hear, perform music expressively, or compose convincing pieces) can similarly benefit from access to a fundamental level of musicianship. Recent research in music cognition, artificial intelligence, and music theory has produced a repertoire of techniques that can make the behavior of computer programs more musical. Many of these were presented in a recently published book/CD-ROM entitled Machine Musicianship. For use in interactive music systems, we are interested in those which are fast enough to run in real time and that need only make reference to the material as it appears in sequence. This talk will review several applications that are able to identify the tonal center of musical material during performance. Beyond this specific task, the design of real-time algorithmic listening through the concurrent operation of several connected analyzers is examined. The presentation includes discussion of a library of C++ objects that can be combined to perform interactive listening and a demonstration of their capability.
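
    One widely used way for a program to identify a tonal centre while listening is to correlate a running pitch-class histogram with the 24 rotated major and minor key profiles (the Krumhansl-Schmuckler approach). The sketch below illustrates that general idea with NumPy; it is an editorial illustration, not the C++ library that accompanies Machine Musicianship.

      import numpy as np

      # Krumhansl-Kessler key profiles, indexed from the tonic.
      MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
      MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
      NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

      def tonal_centre(pitch_class_histogram):
          # Correlate the 12-bin histogram with every rotation of both profiles.
          counts = np.asarray(pitch_class_histogram, dtype=float)
          best = None
          for tonic in range(12):
              for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
                  r = np.corrcoef(counts, np.roll(profile, tonic))[0, 1]
                  if best is None or r > best[0]:
                      best = (r, f"{NAMES[tonic]} {mode}")
          return best[1]

      # Duration-weighted pitch classes of a passage emphasizing C, E and G.
      print(tonal_centre([4, 0, 0, 0, 3, 0, 0, 3, 0, 0, 0, 0]))   # -> C major

    In an interactive system the histogram would be updated incrementally as notes arrive, which keeps the per-event cost small enough for real-time use.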

  4. Electrical machines mathematical fundamentals of machine topologies

    CERN Document Server

    Gerling, Dieter

    2015-01-01

    Electrical Machines and Drives play a powerful role in industry with an ever increasing importance. This fact requires the understanding of machine and drive principles by engineers of many different disciplines. Therefore, this book is intended to give a comprehensive deduction of these principles. Special attention is given to the precise mathematical derivation of the necessary formulae to calculate machines and drives and to the discussion of simplifications (if applied) with the associated limits. The book shows how the different machine topologies can be deduced from general fundamentals, and how they are linked together. This book addresses graduate students, researchers, and developers of Electrical Machines and Drives, who are interested in getting knowledge about the principles of machine and drive operation and in detecting the mathematical and engineering specialties of the different machine and drive topologies together with their mutual links. The detailed - but nevertheless compact - mat...

  5. A Knowledge base model for complex forging die machining

    OpenAIRE

    Mawussi, Kwamiwi; Tapie, Laurent

    2011-01-01

    International audience; Recent evolutions in the forging process induce more complex shapes on forging dies. These evolutions, combined with the High Speed Machining (HSM) process of forging dies, lead to an important increase in machining preparation time. In this context, an original approach for generating the machining process based on machining knowledge is proposed in this paper. The core of this approach is to decompose a CAD model of a complex forging die into geometric features. Technological data and ...

  6. INVESTIGATION OF MAGNESIUM ALLOYS MACHINABILITY

    Directory of Open Access Journals (Sweden)

    Berat Barıs BULDUM

    2013-01-01

    Full Text Available Magnesium is the lightest structural metal. Magnesium alloys have a hexagonal lattice structure, which affects the fundamental properties of these alloys. Plastic deformation of the hexagonal lattice is more complicated than in cubic-latticed metals like aluminum, copper and steel. Magnesium alloy developments have traditionally been driven by industry requirements for lightweight materials to operate under increasingly demanding conditions. Magnesium alloys have always been attractive to designers due to their low density, only two thirds that of aluminium and its alloys [1]. The element and its alloys meet a large part of modern industry's needs. Nowadays magnesium alloys are used especially in automotive and mechanical engineering (train and wagon manufacture) because of their lightness and other features. Magnesium and magnesium alloys are the easiest of all metals to machine, allowing machining operations at extremely high speed. All standard machining operations, such as turning, drilling and milling, are commonly performed on magnesium parts.

  7. Laser machining of advanced materials

    CERN Document Server

    Dahotre, Narendra B

    2011-01-01

    Contents: Advanced materials: Introduction; Applications; Structural ceramics; Biomaterials; Composites; Intermetallics. Machining of advanced materials: Introduction; Fabrication techniques; Mechanical machining; Chemical machining (CM); Electrical machining; Radiation machining; Hybrid machining. Laser machining: Introduction; Absorption of laser energy and multiple reflections; Thermal effects. Laser machining of structural ceramics: Introdu...

  8. Readable relativity

    CERN Document Server

    Durell, Clement V

    2003-01-01

    Concise and practical, this text by a renowned teacher sketches the mathematical background essential to understanding the fundamentals of relativity theory. Subjects include the velocity of light, measurement of time and distance, and properties of mass and momentum, with numerous diagrams, formulas, and examples, plus exercises and solutions. 1960 edition.

  9. Template-Directed Biopolymerization: Tape-Copying Turing Machines

    Science.gov (United States)

    Sharma, Ajeet K.; Chowdhury, Debashish

    2012-10-01

    DNA, RNA and proteins are among the most important macromolecules in a living cell. These molecules are polymerized by molecular machines. These natural nano-machines polymerize such macromolecules, adding one monomer at a time, using another linear polymer as the corresponding template. The machine utilizes input chemical energy to move along the template which also serves as a track for the movements of the machine. In the Alan Turing year 2012, it is worth pointing out that these machines are "tape-copying Turing machines". We review the operational mechanisms of the polymerizer machines and their collective behavior from the perspective of statistical physics, emphasizing their common features in spite of the crucial differences in their biological functions. We also draw the attention of the physics community to another class of modular machines that carry out a different type of template-directed polymerization. We hope this review will inspire new kinetic models for these modular machines.

  10. Template-directed biopolymerization: tape-copying Turing machines

    CERN Document Server

    Sharma, Ajeet K; 10.1142/S1793048012300083

    2013-01-01

    DNA, RNA and proteins are among the most important macromolecules in a living cell. These molecules are polymerized by molecular machines. These natural nano-machines polymerize such macromolecules, adding one monomer at a time, using another linear polymer as the corresponding template. The machine utilizes input chemical energy to move along the template which also serves as a track for the movements of the machine. In the Alan Turing year 2012, it is worth pointing out that these machines are "tape-copying Turing machines". We review the operational mechanisms of the polymerizer machines and their collective behavior from the perspective of statistical physics, emphasizing their common features in spite of the crucial differences in their biological functions. We also draw attention of the physics community to another class of modular machines that carry out a different type of template-directed polymerization. We hope this review will inspire new kinetic models for these modular machines.

  11. The Readability of Electronic Cigarette Health Information and Advice: A Quantitative Analysis of Web-Based Information

    Science.gov (United States)

    Zhu, Shu-Hong; Conway, Mike

    2017-01-01

    Background The popularity and use of electronic cigarettes (e-cigarettes) has increased across all demographic groups in recent years. However, little is currently known about the readability of health information and advice aimed at the general public regarding the use of e-cigarettes. Objective The objective of our study was to examine the readability of publicly available health information as well as advice on e-cigarettes. We compared information and advice available from US government agencies, nongovernment organizations, English speaking government agencies outside the United States, and for-profit entities. Methods A systematic search for health information and advice on e-cigarettes was conducted using search engines. We manually verified search results and converted to plain text for analysis. We then assessed readability of the collected documents using 4 readability metrics followed by pairwise comparisons of groups with adjustment for multiple comparisons. Results A total of 54 documents were collected for this study. All 4 readability metrics indicate that all information and advice on e-cigarette use is written at a level higher than that recommended for the general public by National Institutes of Health (NIH) communication guidelines. However, health information and advice written by for-profit entities, many of which were promoting e-cigarettes, were significantly easier to read. Conclusions A substantial proportion of potential and current e-cigarette users are likely to have difficulty in fully comprehending Web-based health information regarding e-cigarettes, potentially hindering effective health-seeking behaviors. To comply with NIH communication guidelines, government entities and nongovernment organizations would benefit from improving the readability of e-cigarettes information and advice. PMID:28062390

  12. Augmented reality with image registration, vision correction and sunlight readability via liquid crystal devices.

    Science.gov (United States)

    Wang, Yu-Jen; Chen, Po-Ju; Liang, Xiao; Lin, Yi-Hsin

    2017-03-27

    Augmented reality (AR), which uses computer-aided projected information to augment our senses, has an important impact on human life, especially for elderly people. However, there are three major challenges regarding the optical system in AR: registration, vision correction, and readability under strong ambient light. Here, we solve the three challenges simultaneously for the first time using two liquid crystal (LC) lenses and a polarizer-free attenuator integrated in an optical-see-through AR system. One of the LC lenses is used to electrically adjust the position of the projected virtual image, which is the so-called registration. The other LC lens, with a larger aperture and a polarization-independent characteristic, is in charge of vision correction, such as for myopia and presbyopia. The linearity of the lens powers of the two LC lenses is also discussed. The readability of virtual images under strong ambient light is addressed by the electrically switchable transmittance of the LC attenuator, originating from light scattering and light absorption. The concept demonstrated in this paper could be further extended to other electro-optical devices as long as the devices exhibit the capability of phase modulation and amplitude modulation.
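
    The registration function described above amounts to shifting the apparent distance of the virtual image by changing the tunable lens power. As a back-of-the-envelope illustration only (not the authors' optical design), the Gaussian thin-lens relation shows how a modest change in LC lens power moves the virtual image plane; the distances and powers below are invented.

      def image_distance(object_distance_m, lens_power_diopters):
          # Gaussian thin-lens form 1/f = 1/d_o + 1/d_i with a simplified sign convention;
          # a negative result means a virtual image on the same side as the object.
          return 1.0 / (lens_power_diopters - 1.0 / object_distance_m)

      # A display element effectively 0.5 m away, viewed through a tunable negative LC lens.
      for power in (-0.5, -1.0, -1.5):                  # diopters (invented values)
          d_i = image_distance(0.5, power)
          print(f"P = {power:+.1f} D -> virtual image about {abs(d_i):.2f} m away")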

  13. Readability, suitability, and characteristics of asthma action plans: examination of factors that may impair understanding.

    Science.gov (United States)

    Yin, H Shonna; Gupta, Ruchi S; Tomopoulos, Suzy; Wolf, Michael S; Mendelsohn, Alan L; Antler, Lauren; Sanchez, Dayana C; Lau, Claudia Hillam; Dreyer, Benard P

    2013-01-01

    Recognition of the complexity of asthma management has led to the development of asthma treatment guidelines that include the recommendation that all pediatric asthma patients receive a written asthma action plan. We assessed the readability, suitability, and characteristics of asthma action plans, elements that contribute to the effectiveness of action plan use, particularly for those with limited literacy. This was a descriptive study of 30 asthma action plans (27 state Department of Health (DOH)-endorsed, 3 national action plans endorsed by 6 states). Plans were assessed for (1) readability (Flesch Reading Ease, Flesch-Kincaid, Gunning Fog, Simple Measure of Gobbledygook, FORCAST) and (2) suitability (Suitability Assessment of Materials [SAM]; adequate: ≥0.4, unsuitable: <0.4). SAM factors with unsuitable ratings included typography (30.0%), learning stimulation/motivation (26.7%), and graphics (13.3%). There were no statistically significant differences between the average grade level or SAM score of state DOH-developed action plans and those from or adapted from national organizations. Plans varied with respect to terms used, symptoms included, and recommended actions. Specific improvements in asthma action plans could maximize patient and parent understanding of appropriate asthma management and could particularly benefit individuals with limited literacy skills.

  14. The deleuzian abstract machines

    DEFF Research Database (Denmark)

    Werner Petersen, Erik

    2005-01-01

    production. In Kafka: Toward a Minor Literature, Deleuze and Guattari gave the most comprehensive explanation of the abstract machine in the work of art. Like the war-machines of Virilio, the Kafka-machine operates in three gears or speeds. Furthermore, the machine is connected to spatial diagrams...

  15. Machine drawing

    CERN Document Server

    Narayana, KL; Reddy, K Venkata

    2006-01-01

    About the Book: Written by three distinguished authors with ample academic and teaching experience, this textbook, meant for diploma and degree students of Mechanical Engineering as well as those preparing for the AMIE examination, incorporates the latest standards. The new edition includes the features of assembly drawings, part drawings and computer-aided drawings to cater to the needs of students pursuing various courses. The text of the new edition has been thoroughly revised to include new concepts and practices in the subject. It should prove an ideal textbook. Contents: Introduction

  16. Data Encoding using Periodic Nano-Optical Features

    Science.gov (United States)

    Vosoogh-Grayli, Siamack

    Successful trials have been made, through a designed algorithm, to quantize, compress and optically encode unsigned 8-bit integer values in the form of images using nano-optical features. The periodicity of the nano-scale features (nano-gratings) has been designed and investigated both theoretically and experimentally to create distinct states of variation (three on states and one off state). Easy-to-manufacture, machine-readable encoded data have previously been employed in secured authentication media, in barcodes for bi-state (binary) models and in color barcodes for multiple-state models. This work has focused on implementing 4 states of variation per information unit through periodic nano-optical structures that separate an incident wavelength into distinct colors (variation states) in order to create an encoding system. Compared to barcodes and magnetic stripes in secured finite-length storage media, the proposed system encodes and stores more data. The benefits of multiple states of variation in an encoding unit are (1) an increased numerically representable range, (2) increased storage density and (3) a decreased number of typical-set elements for any ergodic or semi-ergodic source that emits these encoding units. A thorough investigation has targeted the effects of multi-state nano-optical features on data storage density and the consequent data transmission rates. The results show that using nano-optical features for encoding data yields a data storage density of circa 800 Kbits/in2 when commercially available high-resolution flatbed scanner systems are used for readout. Such storage density is far greater than that of commercial finite-length secured storage media such as the barcode family, with a maximum practical density of 1 Kbit/in2, and the highest-density magnetic stripe cards, with a maximum density of circa 3 Kbits/in2. The numerically representable range of the proposed encoding unit for 4 states of variation is [0 255]. The number of
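
    The capacity arithmetic implied above is easy to check: with four distinguishable states a single grating carries log2(4) = 2 bits. A small sketch follows; the feature count per square inch is derived from the reported density, not stated in the record.

      from math import log2

      states_per_feature = 4                       # three "on" colour states plus one "off" state
      bits_per_feature = log2(states_per_feature)  # 2.0 bits per encoding unit

      reported_density = 800e3                     # ~800 Kbits/in^2 claimed for flatbed-scanner readout
      features_per_in2 = reported_density / bits_per_feature

      print(bits_per_feature)                      # 2.0
      print(features_per_in2)                      # 400000.0 gratings per square inch
      print(reported_density / 1e3, reported_density / 3e3)  # gain vs ~1 and ~3 Kbits/in^2 media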

  17. Mineral mining machines

    Energy Technology Data Exchange (ETDEWEB)

    Mc Gaw, B.H.

    1984-01-01

    A machine for mining minerals is patented. It is a cutter-loader with a drum actuating element of the worm type, equipped with a multitude of cutting teeth reinforced with tungsten carbide. A feature of the patented machine is that all of the cutting teeth and holders on the drum have an identical design. This is achieved by selecting a slant angle for the cutting teeth that is the mean between the slant angle of conventional radial teeth and the slant angle of advance teeth. This, in turn, is provided by the corresponding slant of the holders relative to the drum and/or the slant of the cutting part of the teeth relative to their stems. Thus, the advance teeth, which project beyond the surface of the drum on the face side and provide the upper and lateral clearances, have the same angle of attack as the radial teeth, that is, from 20 to 35 degrees. A series of modifications of the cutting teeth is patented. One of the designs allows the cutting tooth to occupy a varying position relative to the drum, from the conventional vertical to an inverted, axially projecting position. In the latter case the tooth, during extraction, provides the upper and lateral clearances for the drum on the face side. Among the different modifications of the cutting teeth, a design is proposed in which the stem is shaped like a truncated cone. This stem is designed for use jointly with a wedge, placed in a holder, that unfastens the teeth. The holder is provided with a transverse slot thanks to which the rear end of the stem is compressed, which simplifies replacement of a tooth. Channels are provided in the patented machine for feeding water to the worm spiral, the holders and the cutting teeth themselves in order to deal with dust.

  18. Feature Engineering for Drug Name Recognition in Biomedical Texts: Feature Conjunction and Feature Selection

    Directory of Open Access Journals (Sweden)

    Shengyu Liu

    2015-01-01

    Drug name recognition (DNR) is a critical step for drug information extraction. Machine learning-based methods have been widely used for DNR with various types of features such as part-of-speech, word shape, and dictionary features. The features used in current machine learning-based methods are usually singleton features, possibly because combining singleton features into conjunction features leads to an explosion in the number of features and introduces many noisy ones. However, singleton features, which can only capture one linguistic characteristic of a word, are not sufficient to describe the information needed for DNR when multiple characteristics should be considered. In this study, we explore feature conjunction and feature selection for DNR, which have not been reported before. We intuitively select 8 types of singleton features and combine them into conjunction features in two ways. Then, chi-square, mutual information, and information gain are used to mine effective features. Experimental results show that feature conjunction and feature selection can improve the performance of the DNR system with a moderate number of features, and our DNR system significantly outperforms the best system in the DDIExtraction 2013 challenge.

  19. Feature engineering for drug name recognition in biomedical texts: feature conjunction and feature selection.

    Science.gov (United States)

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong; Fan, Xiaoming

    2015-01-01

    Drug name recognition (DNR) is a critical step for drug information extraction. Machine learning-based methods have been widely used for DNR with various types of features such as part-of-speech, word shape, and dictionary features. The features used in current machine learning-based methods are usually singleton features, possibly because combining singleton features into conjunction features leads to an explosion in the number of features and introduces many noisy ones. However, singleton features, which can only capture one linguistic characteristic of a word, are not sufficient to describe the information needed for DNR when multiple characteristics should be considered. In this study, we explore feature conjunction and feature selection for DNR, which have not been reported before. We intuitively select 8 types of singleton features and combine them into conjunction features in two ways. Then, chi-square, mutual information, and information gain are used to mine effective features. Experimental results show that feature conjunction and feature selection can improve the performance of the DNR system with a moderate number of features, and our DNR system significantly outperforms the best system in the DDIExtraction 2013 challenge.
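
    Neither record shows the selection step itself; a generic scikit-learn sketch of the idea is given below (not the authors' pipeline). The toy tokens, labels, and the choice of pairwise conjunctions are illustrative assumptions.

      from itertools import combinations
      from sklearn.feature_extraction import DictVectorizer
      from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

      tokens = [
          {"word": "aspirin", "pos": "NN", "shape": "xxx"},
          {"word": "dose", "pos": "NN", "shape": "xxx"},
          {"word": "Ibuprofen", "pos": "NNP", "shape": "Xxx"},
          {"word": "patient", "pos": "NN", "shape": "xxx"},
      ]
      labels = [1, 0, 1, 0]          # 1 = drug name, 0 = other (toy labels)

      def with_conjunctions(feats):
          out = dict(feats)
          for (k1, v1), (k2, v2) in combinations(sorted(feats.items()), 2):
              out[f"{k1}&{k2}"] = f"{v1}|{v2}"   # pairwise conjunction feature
          return out

      X = DictVectorizer(sparse=True).fit_transform(with_conjunctions(t) for t in tokens)
      chi2_scores, _ = chi2(X, labels)                                     # chi-square ranking
      mi_scores = mutual_info_classif(X.toarray(), labels, discrete_features=True)
      X_top = SelectKBest(chi2, k=5).fit_transform(X, labels)              # keep the 5 strongest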

  20. Managing virtual machines with Vac and Vcycle

    Science.gov (United States)

    McNab, A.; Love, P.; MacMahon, E.

    2015-12-01

    We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK and France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host which manages a pool of virtual machines on that host, and a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and has within itself enough information to create the types of virtual machines needed to achieve the desired target shares. Both systems allow unused shares for one experiment to be temporarily taken up by other experiments with work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.
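
    In the Machine/Job Features scheme mentioned above, values are conventionally exposed to the virtual machine as small per-key files in directories named by the MACHINEFEATURES and JOBFEATURES environment variables. The sketch below reads whatever keys are present; the key names in the comments (total_cpu, shutdowntime, wall_limit_secs) are typical examples rather than a guaranteed set.

      import os
      from pathlib import Path

      def read_features(env_var: str) -> dict:
          # Read every key file from the directory named by env_var, if it is set.
          base = os.environ.get(env_var)
          if not base:
              return {}
          return {p.name: p.read_text().strip() for p in Path(base).iterdir() if p.is_file()}

      machine = read_features("MACHINEFEATURES")   # e.g. total_cpu, hs06, shutdowntime
      job = read_features("JOBFEATURES")           # e.g. allocated_cpu, wall_limit_secs
      if "shutdowntime" in machine:                # lets a VM drain itself before the host stops
          print("host scheduled to shut down at", machine["shutdowntime"])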

  1. Utilization of Readability and Readability Analyzer under IT Environment

    Institute of Scientific and Technical Information of China (English)

    许智坚

    2014-01-01

    Readability analysis covers a text's reading ease, grade level, and statistics on sentences and words, which together serve as the main basis for measuring how difficult a text is. In an information technology environment, tools such as Readability Analyzer make readability analysis more accurate, faster, and more convenient. Research findings show that in some countries the readability of newspapers, magazines, and legal documents is subject to certain restrictions. In foreign language teaching, choosing texts of suitable readability is a precondition for improving the effectiveness of reading instruction. With readability analysis, teachers can select reading materials of appropriate difficulty, improve learners' comprehension, broaden learning resources, and cultivate autonomous learning. Readability indices can also be used to gauge the grade level of learners' compositions, helping them identify problems in their writing and improve their writing ability.
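
    In practice, the indices this article describes can be obtained in a few lines, assuming the third-party textstat package is installed (pip install textstat); the input file name is hypothetical.

      import textstat

      text = open("reading_passage.txt", encoding="utf-8").read()   # hypothetical input file
      print("Flesch Reading Ease:", textstat.flesch_reading_ease(text))
      print("Flesch-Kincaid Grade:", textstat.flesch_kincaid_grade(text))
      print("Sentences:", textstat.sentence_count(text))
      print("Words:", textstat.lexicon_count(text))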

  2. Design Features and Application of Booster Pressure Control Circuit for J1140 Type Die-casting Machine

    Institute of Scientific and Technical Information of China (English)

    张海华

    2015-01-01

    According to the requirements of the J1140 type die-casting machine's injection system, a boost pressure control circuit was designed. It solves the problem of injection process parameters varying because of differences in the manually poured metal volume. As a result, the quality stability of parts produced on the die-casting machine is improved, the injection force can be adjusted over a wide range, and the process range of the die-casting machine is extended.

  3. Adaptive machine and its thermodynamic costs

    Science.gov (United States)

    Allahverdyan, Armen E.; Wang, Q. A.

    2013-03-01

    We study the minimal thermodynamically consistent model for an adaptive machine that transfers particles from a reservoir at higher chemical potential to one at lower chemical potential. This model describes the essentials of inhomogeneous catalysis. It is supposed to function with the maximal current under uncertain chemical potentials: if they change, the machine tunes its own structure, fitting it to the maximal current under the new conditions. This adaptation is possible under two limitations: (i) the degree of freedom that controls the machine's structure has to have a stored energy (described via a negative temperature); the origin of this result is traced back to the Le Chatelier principle. (ii) The machine has to malfunction in a constant environment due to structural fluctuations, whose relative magnitude is controlled solely by the stored energy. We argue that several features of the adaptive machine are similar to those of living organisms (energy storage, aging).

  4. [Psychic experience of pathological machine gamblers].

    Science.gov (United States)

    Avtonomov, D A

    2011-01-01

    The author presents the results of a study of the psychopathological phenomena and subjective experience of 38 patients with the verified diagnosis "pathological addiction to gambling" (F63.0) without psychotic disorders. In 84.2% of cases, the patients preferred slot machine gambling. The causes of such preferences were analyzed, and the phenomenology of the psychic experience of patients who are slot machine gamblers is presented. With the formation of the addiction, the gamblers began to think about slot machines as human beings (creatures), feel attachment to them, see individuality in them, and experience slot machines as live and real partners in imaginary or even verbal dialogs. Two main "forms of contact" with slot machines were elicited and described: verbal and non-verbal. The gambler's image of himself gradually becomes depleted, and he experiences a "loss of contact" with his own features, qualities, wishes, and intentions. The data obtained may be helpful in psychotherapeutic and rehabilitative work with such patients.

  5. Decomposition of forging die for high speed machining

    CERN Document Server

    Tapie, Laurent

    2009-01-01

    Today's forging die manufacturing process must be adapted to several evolutions in machining process generation: CAD/CAM models, CAM software solutions and High Speed Machining (HSM). In this context, the adequacy between die shape and the HSM process is at the core of machining preparation and process planning approaches. This paper deals with an original approach to machining preparation that integrates this adequacy into the main tasks carried out. In this approach, the design of the machining process is based on two levels of decomposition of the geometrical model of a given die with respect to HSM cutting conditions (cutting speed and feed rate) and technological constraints (tool selection, feature accessibility). This decomposition assists the process planner in generating an HSM process. The result of this decomposition is the identification of machining features.

  6. Recent advances in micro- and nano-machining technologies

    Science.gov (United States)

    Gao, Shang; Huang, Han

    2016-12-01

    Device miniaturization is an emerging advanced technology of the 21st century. The miniaturization of devices in different fields requires the production of micro- and nano-scale components. The features of these components range from sub-micron to a few hundred microns, with tight tolerances, in many engineering materials. These fields mainly include optics, electronics, medicine, bio-technology, communications, and avionics. This paper reviews the recent advances in micro- and nano-machining technologies, including micro-cutting, micro-electrical-discharge machining, laser micro-machining, and focused ion beam machining. The four machining technologies are also compared in terms of machining efficiency, workpiece materials that can be machined, minimum feature size, maximum aspect ratio, and surface finish.

  7. Evaluation of Quality and Readability of Health Information Websites Identified through India’s Major Search Engines

    Directory of Open Access Journals (Sweden)

    S. Raj

    2016-01-01

    Background. The health information available on websites should be reliable and accurate so that the community can make informed decisions. This study was done to assess the quality and readability of health information websites on the World Wide Web in India. Methods. This cross-sectional study was carried out in June 2014. The key words "Health" and "Information" were used on the search engines "Google" and "Yahoo." Out of 50 websites (25 from each search engine), after exclusion, 32 websites were evaluated. The LIDA tool was used to assess quality, whereas readability was assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), and SMOG. Results. Forty percent of the websites (n=13) were sponsored by government. Health On the Net Code of Conduct (HONcode) certification was present on 50% (n=16) of the websites. The mean LIDA score (74.31) was average. Only 3 websites scored high on the LIDA score. Only five had readability scores at the recommended sixth-grade level. Conclusion. Most health information websites had average quality, especially in terms of usability and reliability, and were written at high readability levels. Efforts are needed to develop health information websites which can help the general population in informed decision making.

  8. Reading and Readability Affect on E-Learning Success in a Fortune 100 Company: A Correlational Study

    Science.gov (United States)

    Finnegan, Denis Michael Thomas

    2010-01-01

    The purpose of this quantitative correlational study was to examine the relationship between employees' reading skills, E-learning readability, student learning, and student satisfaction. The Tests of Adult Basic Education (TABE) Form 10 Level A instrument evaluated students' reading skills. The Flesch-Kincaid Grade Level Index course assessed…

  9. [Systematic Readability Analysis of Medical Texts on Websites of German University Clinics for General and Abdominal Surgery].

    Science.gov (United States)

    Esfahani, B Janghorban; Faron, A; Roth, K S; Grimminger, P P; Luers, J C

    2016-12-01

    Background: Besides functioning as one of the main points of contact, hospital websites serve as medical information portals. As medical information texts should be understood by all patients, independent of their literacy skills and educational level, online texts should be structured appropriately to ease understanding. Materials and Methods: Patient information texts on the websites of clinics for general surgery at German university hospitals (n = 36) were systematically analysed. For 9 different surgical topics, representative medical information texts were extracted from each website. Using common readability tools and 5 different readability indices, the texts were analysed with respect to their readability and structure. The analysis was furthermore stratified by geographical region within Germany. Results: For the definitive analysis, texts from 196 webpages could be used. On average the texts consisted of 25 sentences and 368 words. The readability analysis tools consistently showed that all texts had rather low readability, demanding a high literacy level from readers. Conclusion: Patient information texts on German university hospital websites are difficult to understand for most patients. To fulfil the ambition of adequately informing the general population about medical issues, a revision of most medical texts on the websites of German surgical hospitals is recommended.

  10. Creating Readable Handouts, Worksheets, Overheads, Tests, Review Materials, Study Guides, and Homework Assignments through Effective Typographic Design.

    Science.gov (United States)

    Hoener, Arthur; And Others

    1997-01-01

    This article presents guidelines for using the principles of typography to enhance the readability and legibility of classroom print materials for students with mild disabilities. Different elements of type, line length, page margins, and spacing are addressed. Recommendations for preparing materials that promote student performance are provided.…

  11. Machine tool structures

    CERN Document Server

    Koenigsberger, F

    1970-01-01

    Machine Tool Structures, Volume 1 deals with fundamental theories and calculation methods for machine tool structures. Experimental investigations into stiffness are discussed, along with the application of the results to the design of machine tool structures. Topics covered range from static and dynamic stiffness to chatter in metal cutting, stability in machine tools, and deformations of machine tool structures. This volume is divided into three sections and opens with a discussion on stiffness specifications and the effect of stiffness on the behavior of the machine under forced vibration c

  12. Archetypal Analysis for Machine Learning

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai

    2010-01-01

    Archetypal analysis (AA) proposed by Cutler and Breiman in [1] estimates the principal convex hull of a data set. As such, AA favors features that constitute representative 'corners' of the data, i.e., distinct aspects or archetypes. We will show that AA enjoys the interpretability of clustering … for K-means [2]. We demonstrate that the AA model is relevant for feature extraction and dimensional reduction for a large variety of machine learning problems taken from computer vision, neuroimaging, text mining and collaborative filtering....

  13. Critical Analysis of the Quality, Readability, and Technical Aspects of Online Information Provided for Neck-Lifts.

    Science.gov (United States)

    Rayess, Hani; Zuliani, Giancarlo F; Gupta, Amar; Svider, Peter F; Folbe, Adam J; Eloy, Jean Anderson; Carron, Michael A

    2017-03-01

    The number of patients using the internet to obtain health information is growing. This material is unregulated and heterogeneous and can influence patient decisions. The objective was to compare the quality, readability, and technical aspects of online information about neck-lifts provided by private practice websites vs academic medical centers and reference sources. In this cross-sectional analysis conducted between November 2015 and January 2016, a Google search of the term neck-lift was performed, and the first 45 websites were evaluated. The websites were categorized as private practice vs other. Private websites (PWs) included sites created by private practice physicians. Other websites (OWs) were created by academic medical centers or reference sources. The outcomes were the quality, readability, and technical aspects of online websites related to neck-lifts. Quality was assessed using the DISCERN criteria and the Health on the Net principles (HONcode). Readability was assessed using 7 validated and widely used criteria. A consensus US reading grade level was provided by a website (readabilityformulas.com). Twelve technical aspects were evaluated based on criteria specified by medical website creators. Forty-five websites (8 OWs [18%] and 37 PWs [82%]) were analyzed. There was a significant difference in quality between OWs and PWs based on the DISCERN criteria and HONcode principles. The DISCERN overall mean (SD) scores were 2.3 (0.5) for OWs and 1.3 (0.3) for PWs; in a further analysis, the mean (SD) score was 8.6 (1.8) (range, 5-11) for OWs and 5.8 (1.7) (range, 2-9) for PWs. The mean (SD) readability consensus reading grade level scores were 11.7 (1.9) for OWs and 10.6 (1.9) for PWs. Of a total possible score of 12, the mean (SD) technical scores were 6.3 (1.8) (range, 4-9) for OWs and 6.4 (1.5) (range, 3-9) for PWs. Compared with PWs, OWs had a significantly higher quality score based on both the DISCERN criteria and HONcode principles. The mean readability for OWs and PWs was

  14. Compensation strategy for machining optical freeform surfaces by the combined on- and off-machine measurement.

    Science.gov (United States)

    Zhang, Xiaodong; Zeng, Zhen; Liu, Xianlei; Fang, Fengzhou

    2015-09-21

    Freeform surfaces are promising for next-generation optics; however, they need high form accuracy for excellent performance. A closed loop of fabrication, measurement and compensation is necessary for improving form accuracy. It is difficult to perform an off-machine measurement during freeform machining because the remounting inaccuracy can result in significant form deviations. On the other hand, on-machine measurement may hide the systematic errors of the machine because the measuring device is placed in situ on the machine. This study proposes a new compensation strategy based on the combination of on-machine and off-machine measurement. The freeform surface is measured in off-machine mode with nanometric accuracy, and the on-machine probe achieves an accurate relative position between the workpiece and the machine after remounting. The compensation cutting path is generated according to the calculated relative position and shape errors, avoiding extra manual adjustment or a highly accurate reference-feature fixture. Experimental results verified the effectiveness of the proposed method.

  15. Readability and Suitability of Spanish Language Hypertension and Diabetes Patient Education Materials.

    Science.gov (United States)

    Howe, Carol J; Barnes, Donelle M; Estrada, Griselle B; Godinez, Ignacio

    2016-01-01

    Hispanics who speak Spanish are at risk for low health literacy. We evaluated Spanish-language hypertension (HTN) and diabetes mellitus (DM) patient education materials from U.S. federal agency public sector sources using the Suitability Assessment of Materials (SAM) instrument. Mean readability for HTN materials was grade 7.9 and for DM materials was grade 6.6. The mean SAM score for HTN materials was 43.9 and for DM materials was 63.2. SAM scores were significantly better for DM than for HTN materials in overall score, content, graphics, layout, stimulation/motivation, and cultural appropriateness (p < .05). Clinicians should evaluate the suitability of the Spanish-language HTN and DM materials that they use in patient teaching.

  16. The NEO-PI-3: a more readable revised NEO Personality Inventory.

    Science.gov (United States)

    McCrae, Robert R; Costa Jr, Paul T; Martin, Thomas A

    2005-06-01

    Use of the Revised NEO Personality Inventory (NEO-PI-R; Costa & McCrae, 1992) in adolescent samples has shown that a few respondents have difficulty with a subset of items. We identified 30 items that were not understood by at least 2% of adolescent respondents and 18 additional items with low item-total correlations, and we wrote 2 trial replacement items for each. We used self-report and observer rating data from 500 respondents aged 14 to 20 to select replacement items. The modified instrument retained the intended factor structure and showed slightly better internal consistency, cross-observer agreement, and readability (Flesch-Kincaid grade level = 5.3). The NEO-PI-3 appears to be useful in high school and college samples and may have wider applicability to adults as well.

  17. The Readability of the Books on Materials

    Institute of Scientific and Technical Information of China (English)

    肖纪美

    2001-01-01

    Based on the author's experience in teaching and research, and in analogy with the properties of materials, the readability of books on materials is elucidated in terms of environment and structure, and ways of improving it are illustrated with examples. Similar to the property (P) of a material, readability is expressed through the two fundamental equations of materialogy: P = f(S, e) and S = {E, R}, in which S is the structure of the material, e is the environment, E is the ensemble of the elements in the material and R is the ensemble of the relationships among the elements of E.

  18. Limits of responsiveness concerning human-readable knowledge bases: an operational analysis

    CERN Document Server

    Pentzaropoulos, G C

    2010-01-01

    Introduction. The purpose of this work is the evaluation of responsiveness when remote users communicate with a human-readable knowledge base (KB). Responsiveness [R(s)] is considered here as a measure of service quality. Method. The preferred method is operational analysis, a variation of classical stochastic theory, which allows the study of user-system interaction with minimal computational effort. Analysis. The analysis is based on well-known performance metrics, such as service ability, elapsed time, and throughput; from these metrics, estimates of R(s) are derived analytically. Results. Critical points indicating congestion are obtained: these are limits on the number of admissible requests and the number of connected users. Also obtained is a sufficient condition for achieving flow balance between the KB host and the request-relaying servers. Conclusions. When R(s) is within normal limits, users should appreciate the benefits of using the services offered by their KB host. When bottlenecks are for...
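
    The abstract does not reproduce the paper's formulas, but operational analysis rests on a handful of standard identities (the utilization law and the interactive response-time law, among others) that can be evaluated directly from measured counts. A small sketch with made-up numbers:

      # Standard operational-analysis identities (not the paper's specific model);
      # every quantity is directly measurable over one observation window.
      completions = 1200        # requests completed by the KB host
      window_secs = 600         # length of the observation window
      busy_secs = 420           # time the host was busy serving requests
      users = 40                # connected users
      think_time = 12.0         # average think time between a user's requests (s)

      throughput = completions / window_secs           # X = C / T
      utilization = busy_secs / window_secs            # U = B / T  (utilization law: U = X * S)
      service_time = busy_secs / completions           # S = B / C
      response_time = users / throughput - think_time  # interactive response-time law: R = N/X - Z

      print(f"X={throughput:.2f} req/s, U={utilization:.0%}, S={service_time:.3f} s, R={response_time:.1f} s")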

  19. Preserving medical correctness, readability and consistency in de-identified health records

    DEFF Research Database (Denmark)

    Pantazos, Kostas; Lauesen, Søren; Lippert, Søren

    2016-01-01

    A health record database contains structured data fields that identify the patient, such as patient ID, patient name, e-mail and phone number. These data are fairly easy to de-identify, that is, replace with other identifiers. However, these data also occur in fields with doctors' free-text notes written in an abbreviated style that cannot be analyzed grammatically. If we replace a word that looks like a name, but isn't, we degrade readability and medical correctness. If we fail to replace it when we should, we degrade confidentiality. We de-identified an existing Danish electronic health record database, ending up with 323,122 patient health records. We had to invent many methods for de-identifying potential identifiers in the free-text notes. The de-identified health records should be used with caution for statistical purposes because we removed health records that were so special...

  20. A synthesis of research on color, typography and graphics as they relate to readability

    Science.gov (United States)

    Lamoreaux, M. E.

    1985-09-01

    A foundation for future research on the use of color, typography, and graphics to improve readability is provided. Articles from the broad fields of education and psychology, as well as from the fields of journalism and printing, have been reviewed for research relating color, typography, and graphics to reading ease, speed, or comprehension. The most relevant articles reviewed are presented in an annotated bibliography; the remaining articles are also presented in bibliographic format. This literature review indicates that recognition and recall of printed material may be improved through the use of headings, underlining, color, and, especially, illustrations. Current research suggests that individuals can remember pictures far longer than past research indicates. However, researchers are divided on the usefulness of illustrations for improving reading comprehension. On the other hand, reading comprehension can be improved through the use of statistical graphs and tables if the reader is properly trained in the use of these devices.

  1. Controlled English to facilitate human/machine analytical processing

    Science.gov (United States)

    Braines, Dave; Mott, David; Laws, Simon; de Mel, Geeth; Pham, Tien

    2013-06-01

    Controlled English (CE) is a human-readable information representation format that is implemented using a restricted subset of the English language, but which is unambiguous and directly accessible by simple machine processes. We have been researching the capabilities of CE in a number of contexts, exploring the degree to which a flexible and more human-friendly information representation format could aid the intelligence analyst in a multi-agent collaborative operational environment, especially in cases where the agents are a mixture of other human users and machine processes aimed at assisting the human users. CE itself is built upon a formal logic basis, but allows users to easily specify models for a domain of interest in a human-friendly language. In our research we have been developing an experimental component known as the "CE Store" in which CE information can be quickly and flexibly processed and shared between human and machine agents. The CE Store environment contains a number of specialized machine agents for common processing tasks and also supports execution of logical inference rules that can be defined in the same CE language. This paper outlines the basic architecture of this approach, discusses some of the example machine agents that have been developed, and provides typical examples of the CE language and the way in which it has been used to support complex analytical tasks on synthetic data sources. We highlight the fusion of human and machine processing supported through the use of the CE language and the CE Store environment, and show this environment with examples of highly dynamic extensions to the model(s) and integration between different user-defined models in a collaborative setting.

  2. Fault Diagnosis Based on Fuzzy Support Vector Machine with Parameter Tuning and Feature Selection

    Institute of Scientific and Technical Information of China (English)

    毛勇; 夏铮; 尹征; 孙优贤; 万征

    2007-01-01

    This study describes a classification methodology based on support vector machines (SVMs), which offer superior classification performance for fault diagnosis in chemical process engineering. The method incorporates an efficient parameter tuning procedure (based on minimization of the radius/margin bound for the SVM's leave-one-out errors) into a multi-class classification strategy using a fuzzy decision factor, which is named the fuzzy support vector machine (FSVM). Datasets generated from the Tennessee Eastman process (TEP) simulator were used to evaluate the classification performance. To decrease the negative influence of auto-correlated and irrelevant variables, a key variable identification procedure using recursive feature elimination based on the SVM is implemented, with time lags incorporated, before every classifier is trained, and the number of relatively important variables for every classifier is essentially determined by 10-fold cross-validation. Performance comparisons are carried out among several kinds of multi-class decision machines, by which the effectiveness of the proposed approach is demonstrated.
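
    The recursive-feature-elimination step described above can be sketched generically in scikit-learn (this is not the authors' code; the data are synthetic stand-ins for TEP variables, and the kernels and number of retained variables are assumptions):

      from sklearn.datasets import make_classification
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Synthetic stand-in for process data: 52 variables, 3 fault classes.
      X, y = make_classification(n_samples=300, n_features=52, n_informative=10,
                                 n_classes=3, random_state=0)

      # RFE needs a classifier exposing coef_, so a linear kernel drives the elimination.
      selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10, step=2).fit(X, y)
      X_sel = selector.transform(X)

      # 10-fold cross-validation to judge the retained variable subset.
      scores = cross_val_score(SVC(kernel="rbf", C=10.0, gamma="scale"), X_sel, y, cv=10)
      print(selector.support_.sum(), scores.mean())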

  3. Hemodialysis machine technology: a global overview.

    Science.gov (United States)

    Polaschegg, Hans-Dietrich

    2010-11-01

    The market for hemodialysis machines, its background, the current products of manufacturers and the features of hemodialysis machines are described in this article. In addition to the established companies and their products, Chinese manufacturers and new developments for home hemodialysis are outlined, based on publications available on the internet and patent applications. A critical review of the state of the art questions the medical usefulness of high-tech developments compared with the benefits of more frequent and/or longer dialysis treatment with comparably simple machines.

  4. BADMINTON TRAINING MACHINE WITH IMPACT MECHANISM

    OpenAIRE

    B.F. Yousif; KOK SOON YEH

    2011-01-01

    In the current work, a new machine was designed and fabricated for badminton training purposes. In the design process, CATIA software was used to design and simulate the machine components. The design was based on a direct impact method to launch the shuttle, using a spring as the source of the impact. Hooke's law was used theoretically to determine the initial and maximum lengths of the springs. The main feature of the machine is that it can move in two axes (up and down, left and right). For...
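
    The Hooke's-law sizing mentioned above amounts to a simple energy balance, 0.5*k*x^2 = 0.5*m*v^2. A worked sketch with illustrative numbers (the stiffness and target launch speed are assumptions, not the authors' design values):

      from math import sqrt

      k = 900.0        # spring stiffness, N/m (assumed)
      m = 0.0053       # shuttlecock mass, kg (typical ~5.3 g)
      v_target = 25.0  # desired launch speed, m/s (assumed)

      compression = v_target * sqrt(m / k)   # from 0.5*k*x^2 = 0.5*m*v^2
      force_at_release = k * compression     # Hooke's law: F = k*x
      print(f"compression ≈ {compression*100:.1f} cm, peak force ≈ {force_at_release:.0f} N")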

  5. Operating System For Numerically Controlled Milling Machine

    Science.gov (United States)

    Ray, R. B.

    1992-01-01

    OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
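
    The "equation plotter" idea, sampling a user-supplied equation and emitting machine moves, can be sketched generically as below. This is not OPMILL's actual command set or output format; the curve, feed rates and file name are illustrative assumptions, and standard G-code words (G0/G1 moves, F feed) stand in for the machine-specific program.

      import math

      def curve(t):                    # example path: one period of a sine wave in XY
          return 10.0 * t, 5.0 * math.sin(2 * math.pi * t)

      lines = ["G21 (mm)", "G90 (absolute)", "G0 Z5.0", "G0 X0 Y0", "G1 Z-1.0 F100"]
      steps = 50
      for i in range(steps + 1):
          x, y = curve(i / steps)
          lines.append(f"G1 X{x:.3f} Y{y:.3f} F300")   # straight-line move along the sampled curve
      lines += ["G0 Z5.0", "M2"]

      with open("sine_path.nc", "w") as f:             # hypothetical output file
          f.write("\n".join(lines))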

  7. Readability Trends of Online Information by the American Academy of Otolaryngology-Head and Neck Surgery Foundation.

    Science.gov (United States)

    Wong, Kevin; Levi, Jessica R

    2017-01-01

    Objective Previous studies have shown that patient education materials published by the American Academy of Otolaryngology-Head and Neck Surgery Foundation may be too difficult for the average reader to understand. The purpose of this study was to determine whether current educational materials show improvements in readability. Study Design Cross-sectional analysis. Setting The Patient Health Information section of the American Academy of Otolaryngology-Head and Neck Surgery Foundation website. Subjects and Methods All patient education articles were extracted as plain text. Webpage navigation, references, author information, appointment information, acknowledgments, and disclaimers were removed. Follow-up editing was also performed to remove paragraph breaks, colons, semicolons, numbers, percentages, and bullets. Readability grade was calculated with the Flesch-Kincaid Grade Level, Flesch Reading Ease, Gunning-Fog Index, Coleman-Liau Index, Automated Readability Index, and Simple Measure of Gobbledygook. Intra- and interobserver reliability were assessed. Results A total of 126 articles from 7 topics were analyzed. Readability levels across all 6 tools showed that the difficulty of the patient education materials exceeded the abilities of the average American. Compared with previous studies, current educational materials from the American Academy of Otolaryngology-Head and Neck Surgery Foundation showed a decrease in difficulty. Intra- and interobserver reliability were both excellent, with intraclass coefficients of 0.99 and 0.96, respectively. Conclusion The improvement in readability is an encouraging finding, and one that is consistent with recent trends toward improved health literacy. Nevertheless, online patient educational material is still too difficult for the average reader. Revisions may be necessary for current materials to benefit a larger readership.

  8. How well are health information websites displayed on mobile phones? Implications for the readability of health information.

    Science.gov (United States)

    Cheng, Christina; Dunn, Matthew

    2016-06-02

    Issue addressed: More than 87% of Australians own a mobile phone with Internet access, and 82% of phone owners use their smartphones to search for health information, indicating that mobile phones may be a powerful tool for building health literacy. Yet online health information has been found to be above the reading ability of the general population. As reading on a smaller screen may further complicate the readability of information, this study aimed to examine how health information is displayed on mobile phones and the implications for readability. Methods: Using a cross-sectional design with convenience sampling, a sample of 270 mobile webpages with information on 12 common health conditions was generated for analysis; the pages were categorised based on the design and position of the information display. Results: The results showed that 71.48% of webpages were mobile-friendly, but only 15.93% were mobile-friendly webpages designed in a way that optimises readability, with a paging format and the queried information displayed for immediate viewing. Conclusion: With inadequate evidence and a lack of consensus on how webpage design can best promote reading and comprehension, it is difficult to draw a conclusion on the effect of current mobile health information presentation on readability. So what?: Building mobile-responsive websites should be a priority for health information providers and policy-makers. Research efforts are urgently required to identify how best to enhance the readability of mobile health information and fully capture the capabilities of mobile phones as a useful device for increasing health literacy.

  9. Evaluation of the Quality, Accuracy, and Readability of Online Patient Resources for the Management of Articular Cartilage Defects.

    Science.gov (United States)

    Wang, Dean; Jayakar, Rohit G; Leong, Natalie L; Leathers, Michael P; Williams, Riley J; Jones, Kristofer J

    2017-04-01

    Objective Patients commonly use the Internet to obtain their health-related information. The purpose of this study was to investigate the quality, accuracy, and readability of online patient resources for the management of articular cartilage defects. Design Three search terms ("cartilage defect," "cartilage damage," "cartilage injury") were entered into 3 Internet search engines (Google, Bing, Yahoo). The first 25 websites from each search were collected and reviewed. The quality and accuracy of online information were independently evaluated by 3 reviewers using predetermined scoring criteria. The readability was evaluated using the Flesch-Kincaid (FK) grade score. Results Fifty-three unique websites were evaluated. Quality ratings were significantly higher in websites with an FK score >11 compared to those with a score of ≤11 (P = 0.021). Only 10 websites (19%) differentiated between focal cartilage defects and diffuse osteoarthritis. Of these, 7 (70%) were elicited using the search term "cartilage defect" (P = 0.038). The average accuracy of the websites was high (11.7 out of a maximum of 12), and the average FK grade level (13.4) was several grades higher than the recommended level for readable patient education material (eighth grade level). Conclusions The quality and readability of online patient resources for articular cartilage defects favor readers with a higher level of education. Additionally, the majority of these websites do not distinguish between focal chondral defects and diffuse osteoarthritis, which can fail to provide appropriate patient education and guidance for available treatment. Clinicians should help guide patients toward high-quality, accurate, and readable online patient education material.

  10. Variable Quality and Readability of Patient-oriented Websites on Colorectal Cancer Screening.

    Science.gov (United States)

    Schreuders, Eline H; Grobbee, Esmée J; Kuipers, Ernst J; Spaander, Manon C W; Veldhuyzen van Zanten, Sander J O

    2017-01-01

    The efficacy of colorectal cancer (CRC) screening is dependent on participation and subsequent adherence to surveillance. The internet is increasingly used for health information and is important to support decision making. We evaluated the accuracy, quality, and readability of online information on CRC screening and surveillance. A Website Accuracy Score and Polyp Score were developed, which awarded points for various aspects of CRC screening and surveillance. Websites were also evaluated using validated internet quality instruments (Global Quality Score, LIDA, and DISCERN) and reading scores. Two raters independently assessed the top 30 websites appearing on Google.com. Portals, duplicates, and news articles were excluded. Twenty websites were included. The mean website accuracy score was 26 of 44 (range, 9-41). Websites with the highest scores were www.cancer.org, www.bowelcanceraustralia.org, and www.uptodate.com. The median polyp score was 3 of 10. The median global quality score was 3 of 5 (range, 2-5). The median overall LIDA score was 74% and the median DISCERN score was 45, both indicating moderate quality. The mean Flesch-Kincaid grade level was 11th grade, rating the websites as difficult to read; only 30% had a reading level acceptable for the general public (Flesch Reading Ease > 60). There was no correlation between the Google rank and the website accuracy score (rs = -0.31; P = .18). There is marked variation in the quality and readability of websites on CRC screening. Most websites do not address polyp surveillance. The poor correlation between quality and Google ranking suggests that screenees will miss out on high-quality websites using standard search strategies.

  11. Design of Demining Machines

    CERN Document Server

    Mikulic, Dinko

    2013-01-01

    In a constant effort to eliminate mine danger, the international mine action community has been improving the safety, efficiency and cost-effectiveness of clearance methods. Demining machines have become necessary in humanitarian demining, where mechanization provides greater safety and productivity. Design of Demining Machines describes the development and testing of modern demining machines in humanitarian demining. Relevant data for the design of demining machines are included to explain the machinery implemented, along with some innovative and inspiring development solutions. Development technologies, companies and projects are discussed to provide a comprehensive estimate of the effects of various design factors and to support the proper selection of optimal parameters for designing demining machines. Covering the dynamic processes occurring in machine assemblies and their components, and leading to a broader understanding of the demining machine as a whole, Design of Demining Machines is primarily tailored as a tex...

  12. Applied machining technology

    CERN Document Server

    Tschätsch, Heinz

    2010-01-01

    Machining and cutting technologies are still crucial for many manufacturing processes. This reference presents all important machining processes in a comprehensive and coherent way. It includes many examples of concrete calculations, problems and solutions.

  13. Machining with abrasives

    CERN Document Server

    Jackson, Mark J

    2011-01-01

    Abrasive machining is key to obtaining the desired geometry and surface quality in manufacturing. This book discusses the fundamentals and advances in the abrasive machining processes. It provides a complete overview of developing areas in the field.

  14. Women, Men, and Machines.

    Science.gov (United States)

    Form, William; McMillen, David Byron

    1983-01-01

    Data from the first national study of technological change show that proportionately more women than men operate machines, are more exposed to machines that have alienating effects, and suffer more from the negative effects of technological change. (Author/SSH)

  15. Brain versus Machine Control.

    Directory of Open Access Journals (Sweden)

    Jose M Carmena

    2004-12-01

    Dr. Octopus, the villain of the movie "Spiderman 2", is a fusion of man and machine. Neuroscientist Jose Carmena examines the facts behind this fictional account of a brain-machine interface

  16. Kinematic Analysis of a New Parallel Machine Tool: the Orthoglide

    CERN Document Server

    Wenger, Philippe

    2007-01-01

    This paper describes a new parallel kinematic architecture for machining applications: the orthoglide. This machine features three fixed parallel linear joints which are mounted orthogonally and a mobile platform which moves in the Cartesian x-y-z space with fixed orientation. The main interest of the orthoglide is that it benefits from the advantages of the popular PPP serial machines (regular Cartesian workspace shape and uniform performance) as well as from the parallel kinematic arrangement of the links (less inertia and better dynamic performance), which makes the orthoglide well suited to high-speed machining applications. A possible extension of the orthoglide to 5-axis machining is also investigated.

  17. A New Three-DOF Parallel Mechanism: Milling Machine Applications

    CERN Document Server

    Chablat, Damien

    2000-01-01

    This paper describes a new parallel kinematic architecture for machining applications, namely, the orthoglide. This machine features three fixed parallel linear joints which are mounted orthogonally and a mobile platform which moves in the Cartesian x-y-z space with fixed orientation. The main interest of the orthoglide is that it benefits from the advantages of the popular PPP serial machines (regular Cartesian workspace shape and uniform performance) as well as from the parallel kinematic arrangement of the links (less inertia and better dynamic performance), which makes the orthoglide well suited to high-speed machining applications. A possible extension of the orthoglide to 5-axis machining is also investigated.

  18. A Universal Reactive Machine

    DEFF Research Database (Denmark)

    Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.

    1997-01-01

    Turing showed the existence of a model universal for the set of Turing machines in the sense that, given an encoding of any Turing machine as input, the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...

  19. Meso-scale machining capabilities and issues

    Energy Technology Data Exchange (ETDEWEB)

    BENAVIDES,GILBERT L.; ADAMS,DAVID P.; YANG,PIN

    2000-05-15

    Meso-scale manufacturing processes are bridging the gap between silicon-based MEMS processes and conventional miniature machining. These processes can fabricate two- and three-dimensional parts having micron-size features in traditional materials such as stainless steels, rare earth magnets, ceramics, and glass. Meso-scale processes that are currently available include focused ion beam sputtering, micro-milling, micro-turning, excimer laser ablation, femtosecond laser ablation, and micro electro-discharge machining. These meso-scale processes employ subtractive machining technologies (i.e., material removal), unlike LIGA, which is an additive meso-scale process. Meso-scale processes have different material capabilities and machining performance specifications. Machining performance specifications of interest include minimum feature size, feature tolerance, feature location accuracy, surface finish, and material removal rate. Sandia National Laboratories is developing meso-scale electro-mechanical components, which require meso-scale parts that move relative to one another. The meso-scale parts fabricated by subtractive meso-scale manufacturing processes have unique tribology issues because of the variety of materials and the surface conditions produced by the different meso-scale manufacturing processes.

  20. Laser-assisted machining of difficult-to-machine materials

    Energy Technology Data Exchange (ETDEWEB)

    Incropera, F.P.; Rozzi, J.C.; Pfefferkorn, F.E.; Lei, S.; Shin, Y.C.

    1999-07-01

    Laser-assisted machining (LAM) is a hybrid process in which a difficult-to-machine material, such as a ceramic or superalloy, is irradiated by a laser source prior to material removal by a cutting tool. The process has the potential to significantly increase material removal rates, as well as to improve the geometry and properties of the finished workpiece. Features and limitations of theoretical and experimental procedures for determining the transient thermal response of a workpiece during LAM are described, and representative results are presented for laser-assisted turning of sintered silicon nitride. Significant physical trends are revealed by the calculations, as are guidelines for the selection of appropriate operating conditions.

  1. Asynchronized synchronous machines

    CERN Document Server

    Botvinnik, M M

    1964-01-01

    Asynchronized Synchronous Machines focuses on theoretical research on asynchronized synchronous (AS) machines, which are "hybrids" of synchronous and induction machines that can operate with slip. Topics covered in this book include the initial equations; vector diagram of an AS machine; regulation in cases of deviation from the law of full compensation; parameters of the excitation system; and schematic diagram of an excitation regulator. The possible applications of AS machines and their calculation in certain cases are also discussed. This publication is beneficial for students and indiv

  2. Quantum machine learning.

    Science.gov (United States)

    Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth

    2017-09-13

    Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Quantum systems produce atypical patterns that classical systems are thought not to produce efficiently, so it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers. Recent work has produced quantum algorithms that could act as the building blocks of machine learning programs, but the hardware and software challenges are still considerable.

  3. Precision machine design

    CERN Document Server

    Slocum, Alexander H

    1992-01-01

    This book is a comprehensive engineering exploration of all aspects of precision machine design - both component and system design considerations for precision machines. It addresses both theoretical analysis and practical implementation, providing many real-world design case studies as well as numerous examples of existing components and their characteristics. Fast becoming a classic, this book includes examples of analysis techniques along with the philosophy of the solution method. It explores the physics of errors in machines, how such knowledge can be used to build an error budget for a machine, and how error budgets can be used to design more accurate machines.

  4. Development of precision numerical controlled high vacuum electron beam welding machine

    CERN Document Server

    Li Shao Lin

    2002-01-01

    The structure, main technical parameters and characteristics of the precision numerical controlled high vacuum electron beam welding machine are introduced. The design principles, main features and solutions to key technical problems of this new type of machine are described.

  5. Characterization of machining quality attributes based on spindle probe, coordinate measuring machine, and surface roughness data

    Directory of Open Access Journals (Sweden)

    Tzu-Liang Bill Tseng

    2014-04-01

    Full Text Available This study investigates the effects of machining parameters as they relate to the quality characteristics of machined features. The two most important quality characteristics are taken to be dimensional accuracy and surface roughness. Before any newly acquired machine tool is put to use for production, it is important to test the machine in a systematic way to find out how different parameter settings affect machining quality. The empirical verification was made by conducting a Design of Experiments (DOE) with 3 levels and 3 factors on a state-of-the-art Cincinnati Hawk Arrow 750 Vertical Machining Center (VMC). Data analysis revealed that for dimensional accuracy the significant factor was the hardness of the material and the significant interaction effect was Hardness x Feed, while for surface roughness the significant factor was Speed. Since the capability of the instruments from which the quality characteristics are measured is equally important, a comparison was made between the VMC touch probe readings and the measurements from a Mitutoyo coordinate measuring machine (CMM) on bore diameters. A machine-mounted touch probe has gained wide acceptance in recent years, as it is more suitable for the modern manufacturing environment. The data confirmed that the VMC touch probe has capability suitable for the production environment. The test results can be incorporated in the process plan to help maintain the machining quality in the subsequent runs.
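
    A compact way to reproduce this kind of 3-level, 3-factor analysis is a factorial ANOVA over the DOE results. The sketch below is a hypothetical illustration: the file name, column names (hardness, feed, speed) and response variables are assumptions, not the study's actual data.

```python
# Hypothetical sketch of a 3-level, 3-factor DOE analysis like the one described above.
# File name, column names and data are assumptions for illustration only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

runs = pd.read_csv("vmc_doe_runs.csv")  # columns: hardness, speed, feed, dim_error, roughness

# Factorial ANOVA with the Hardness x Feed interaction highlighted in the abstract.
model = ols("dim_error ~ C(hardness) * C(feed) + C(speed)", data=runs).fit()
print(sm.stats.anova_lm(model, typ=2))   # significant rows ~ significant factors/interactions

# Repeat for surface roughness, where Speed was reported as the significant factor.
model_ra = ols("roughness ~ C(hardness) + C(feed) + C(speed)", data=runs).fit()
print(sm.stats.anova_lm(model_ra, typ=2))
```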

  6. Experimental Investigation of process parameters influence on machining Inconel 800 in the Electrical Spark Eroding Machine

    Science.gov (United States)

    Karunakaran, K.; Chandrasekaran, M.

    2016-11-01

    Electrical spark eroding machining is a well-established, sophisticated machining process for producing complex geometries with close tolerances in hard materials such as superalloys, which are extremely difficult to machine with conventional processes. Among advanced machining processes, it is sometimes offered as a better alternative, or even the only alternative, for generating accurate complex 3D shapes with macro-, micro- and nano-features in such difficult-to-machine materials. Accomplishing such a challenging task with electrical spark eroding machining, or electrical discharge machining (EDM), depends upon the selection of appropriate process parameters. This paper analyzes the influence of process parameters in electrical spark eroding machining of Inconel 800 with electrolytic copper as the tool. The experimental runs were performed with various input conditions to process the Inconel 800 nickel-based superalloy and analyze the responses of material removal rate, surface roughness and tool wear rate. These are the performance measures for individual experimental values of parameters such as pulse-on time, pulse-off time and peak current. A Taguchi full factorial design, implemented in Minitab release 14 software, was employed to meet the manufacturing requirement of preparing a process parameter selection card for Inconel 800 jobs. The individual parameters' contributions towards surface roughness ranged from 13.68% to 64.66%.
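
    The percent-contribution figures quoted above typically come from a Taguchi-style analysis: a signal-to-noise (S/N) ratio is computed per run and each factor's contribution is its share of the total sum of squares. The sketch below illustrates this with made-up parameter levels and roughness values; it is not the paper's data.

```python
# Hypothetical sketch of a Taguchi-style analysis (smaller-the-better S/N ratio and
# percent contribution); parameter levels and response values are invented for illustration.
import numpy as np
import pandas as pd

runs = pd.DataFrame({
    "pulse_on":  [50, 50, 50, 100, 100, 100, 150, 150, 150],
    "pulse_off": [20, 40, 60, 20, 40, 60, 20, 40, 60],
    "current":   [6, 9, 12, 9, 12, 6, 12, 6, 9],
    "Ra":        [2.1, 2.4, 2.9, 3.0, 3.6, 2.7, 4.1, 2.8, 3.3],  # surface roughness, um
})

# Smaller-the-better signal-to-noise ratio for each run: -10*log10(mean(y^2)).
runs["sn"] = -10 * np.log10(runs["Ra"] ** 2)

# Percent contribution of each factor from its between-level sum of squares.
total_ss = ((runs["sn"] - runs["sn"].mean()) ** 2).sum()
for factor in ["pulse_on", "pulse_off", "current"]:
    level_means = runs.groupby(factor)["sn"].mean()
    counts = runs.groupby(factor)["sn"].count()
    ss = (counts * (level_means - runs["sn"].mean()) ** 2).sum()
    print(f"{factor}: {100 * ss / total_ss:.1f}% contribution")
```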

  7. Mastering Virtual Machine Manager 2008 R2

    CERN Document Server

    Michael, Michael

    2009-01-01

    One-of-a-kind guide from Microsoft insiders on Virtual Machine Manager 2008 R2! What better way to learn VMM 2008 R2 than from the high-powered Microsoft program managers themselves? This stellar author team takes you under the hood of VMM 2008 R2, providing intermediate and advanced coverage of all features: walks you through Microsoft's new System Center Virtual Machine Manager 2008, a unified system for managing all virtual and physical assets; VMM 2008 not only supports Windows Server 2008 Hyper-V, but also VMware ESX as well; features a winning author team behind the new VMM.

  8. The method and efficacy of support vector machine classifiers based on texture features and multi-resolution histogram from {sup 18}F-FDG PET-CT images for the evaluation of mediastinal lymph nodes in patients with lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Xuan [Center of PET/CT, The Third Affiliated Hospital of Harbin Medical University, The Affiliated Tumor Hospital of Harbin Medical University, Harbin (China); Chu, Chunyu [HIT–INSA Sino French Research Centre for Biomedical Imaging, Harbin Institute of Technology, Harbin (China); Li, Yingci; Lu, Peiou; Wang, Wenzhi [Center of PET/CT, The Third Affiliated Hospital of Harbin Medical University, The Affiliated Tumor Hospital of Harbin Medical University, Harbin (China); Liu, Wanyu [HIT–INSA Sino French Research Centre for Biomedical Imaging, Harbin Institute of Technology, Harbin (China); Yu, Lijuan, E-mail: yulijuan2003@126.com [Center of PET/CT, The Third Affiliated Hospital of Harbin Medical University, The Affiliated Tumor Hospital of Harbin Medical University, Harbin (China)

    2015-02-15

    Highlights: • Three support vector machine classifiers were constructed from PET-CT images. • The areas under the ROC curve for SVM1, SVM2, and SVM3 were 0.689, 0.579, and 0.685, respectively. • The areas under the curves for maximum short diameter and SUV{sub max} were 0.684 and 0.652, respectively. • The SVM-based algorithm showed potential in the diagnosis of mediastinal lymph nodes. - Abstract: Objectives: In clinical practice, image analysis depends on simple visual perception, and the diagnostic efficacy of this analysis pattern is limited for mediastinal lymph nodes in patients with lung cancer. In order to improve diagnostic efficacy, we developed a new computer-based algorithm and tested its diagnostic efficacy. Methods: 132 consecutive patients with lung cancer underwent {sup 18}F-FDG PET/CT examination before treatment. After all data were imported into the database of an on-line medical image analysis platform, the diagnostic efficacy of visual analysis was first evaluated without knowing pathological results, and the maximum short diameter and maximum standardized uptake value (SUV{sub max}) were measured. Then lymph nodes were segmented manually. Three classifiers based on support vector machine (SVM) were constructed from CT, PET, and combined PET-CT images, respectively. The diagnostic efficacy of the SVM classifiers was obtained and evaluated. Results: According to ROC curves, the areas under the curves for maximum short diameter and SUV{sub max} were 0.684 and 0.652, respectively. The areas under the ROC curve for SVM1, SVM2, and SVM3 were 0.689, 0.579, and 0.685, respectively. Conclusion: The SVM-based algorithm showed potential in the diagnosis of mediastinal lymph nodes.
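
    The overall workflow (train one SVM per feature set, then compare areas under the ROC curve) can be sketched as follows. The feature matrices and labels below are random placeholders standing in for the per-node texture and histogram features; this is an illustration of the evaluation pattern, not the study's implementation.

```python
# Hypothetical sketch of the SVM-plus-ROC workflow described above; the feature matrices
# (texture and multi-resolution histogram features per lymph node) and labels are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=132)                 # benign vs. malignant node (placeholder)
X_ct, X_pet = rng.normal(size=(132, 30)), rng.normal(size=(132, 30))
X_petct = np.hstack([X_ct, X_pet])               # combined PET-CT feature vector

for name, X in [("SVM-CT", X_ct), ("SVM-PET", X_pet), ("SVM-PET/CT", X_petct)]:
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    print(name, "AUC =", round(roc_auc_score(y, scores), 3))
```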

  9. Perspex machine: VII. The universal perspex machine

    Science.gov (United States)

    Anderson, James A. D. W.

    2006-01-01

    The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity which lie at the extremes of the number line, and a number nullity that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relates to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties. In particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general
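
    A toy sketch of the division rules summarized above (signed infinities at the extremes, nullity for 0/0, and nullity propagating through operations) is given below. It is only an illustration of the described arithmetic, not the authors' perspex machine, and NaN is used merely as a stand-in for nullity even though the paper argues nullity has cleaner semantics than IEEE NaN.

```python
# Toy sketch of transreal division as summarized above: division by zero yields a signed
# infinity, 0/0 yields nullity, and nullity propagates. This only illustrates the described
# rules; it is not the authors' implementation, and NaN merely stands in for nullity here.
import math

NULLITY = float("nan")      # stand-in for "no information"

def trans_div(a: float, b: float) -> float:
    if math.isnan(a) or math.isnan(b):           # nullity in, nullity out
        return NULLITY
    if b == 0.0:
        if a == 0.0:
            return NULLITY                       # 0/0 is the number nullity
        return math.inf if a > 0 else -math.inf  # a/0 is a signed infinity
    return a / b

print(trans_div(1.0, 0.0))    # inf
print(trans_div(-2.0, 0.0))   # -inf
print(trans_div(0.0, 0.0))    # nan (standing in for nullity)
```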

  10. Broiler chickens can benefit from machine learning: support vector machine analysis of observational epidemiological data.

    Science.gov (United States)

    Hepworth, Philip J; Nefedov, Alexey V; Muchnik, Ilya B; Morgan, Kenton L

    2012-08-07

    Machine-learning algorithms pervade our daily lives. In epidemiology, supervised machine learning has the potential for classification, diagnosis and risk factor identification. Here, we report the use of support vector machine learning to identify the features associated with hock burn on commercial broiler farms, using routinely collected farm management data. These data lend themselves to analysis using machine-learning techniques. Hock burn, dermatitis of the skin over the hock, is an important indicator of broiler health and welfare. Remarkably, this classifier can predict the occurrence of high hock burn prevalence with accuracy of 0.78 on unseen data, as measured by the area under the receiver operating characteristic curve. We also compare the results with those obtained by standard multi-variable logistic regression and suggest that this technique provides new insights into the data. This novel application of a machine-learning algorithm, embedded in poultry management systems could offer significant improvements in broiler health and welfare worldwide.

  11. A study of operators' computing efficiency with special focus on the readability under different viewing angles of a desktop

    Science.gov (United States)

    Maillck, Z.; Asjad, Mohammad

    2015-09-01

    The main objective of this work is to determine the reading performance of operators under different viewing angles of a desktop computer. The effects of text/background color, viewing distance and character size on the speed of reading were investigated. The text and/or background color combination was varied, with constant luminance contrast. Performance was recorded in terms of words per minute. Standard workplace design recommendations to position the center of the visual display terminal 15° and 40° below horizontal eye level were taken up for a visually intensive readability task. An orthogonal array, signal-to-noise ratios and analysis of variance were used to investigate the above-mentioned operating parameters and determine optimum readability performance. The results suggested that performance was better at a 15° viewing angle as compared to 40°.

  12. BADMINTON TRAINING MACHINE WITH IMPACT MECHANISM

    Directory of Open Access Journals (Sweden)

    B. F. YOUSIF

    2011-02-01

    Full Text Available In the current work, a new machine was designed and fabricated for badminton training purposes. In the design process, CATIA software was used to design and simulate the machine components. The design was based on a direct impact method to launch the shuttle, using a spring as the source of the impact. Hooke's law was used theoretically to determine the initial and maximum lengths of the springs. The main feature of the machine is that it can move in two axes (up and down, left and right). For the control system, an infra-red sensor and a touch switch were integrated with a microcontroller. The final product was fabricated locally, and testing proved that the machine can operate properly.
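
    A minimal worked example of the Hooke's-law sizing mentioned above is sketched below; the spring constant, compression and shuttle mass are assumed values chosen for illustration, not the machine's actual parameters.

```python
# Minimal Hooke's-law sizing sketch for a spring-driven launcher; all numbers are
# assumed for illustration and are not the actual machine parameters.
import math

k = 800.0        # spring constant, N/m (assumed)
x = 0.06         # compression = initial length - compressed length, m (assumed)
m = 0.0053       # badminton shuttle mass, kg (approximate)

force = k * x                          # peak force at release, F = k*x
energy = 0.5 * k * x ** 2              # stored elastic energy, E = (1/2)*k*x^2
v_launch = math.sqrt(2 * energy / m)   # ideal launch speed, ignoring losses

print(f"F = {force:.1f} N, E = {energy:.2f} J, v = {v_launch:.1f} m/s")
```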

  13. Acute low back pain information online: an evaluation of quality, content accuracy and readability of related websites.

    Science.gov (United States)

    Hendrick, Paul A; Ahmed, Osman H; Bankier, Shane S; Chan, Tze Jieh; Crawford, Sarah A; Ryder, Catherine R; Welsh, Lisa J; Schneiders, Anthony G

    2012-08-01

    The internet is increasingly being used as a source of health information by the general public. Numerous websites exist that provide advice and information on the diagnosis and management of acute low back pain (ALBP); however, the accuracy and utility of this information has yet to be established. The aim of this study was to establish the quality, content and readability of online information relating to the treatment and management of ALBP. The internet was systematically searched using Google search engines from six major English-speaking countries. In addition, relevant national and international low back pain-related professional organisations were also searched. A total of 22 relevant websites were identified. The accuracy of the content of the ALBP information was established using a 13-point guide developed from international guidelines. Website quality was evaluated using the HONcode, and the Flesch-Kincaid Grade Level (FKGL) was used to establish readability. The majority of websites lacked accurate information, resulting in an overall mean content accuracy score of 6.3/17. Only 3 websites had a high content accuracy score (>14/17) along with an acceptable readability score (FKGL 6-8), with the majority of websites providing information which exceeded the recommended level for the average person to comprehend. The most accurately reported category was "Education and reassurance" (98%), while information regarding "manipulation" (50%), "massage" (9%) and "exercise" (0%) were amongst the lowest scoring categories. These results demonstrate the need for more accurate and readable internet-based ALBP information specifically centred on evidence-based guidelines.
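
    The FKGL score used here is a simple formula over words, sentences and syllables: FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59. The sketch below uses a naive vowel-group syllable counter, so treat its output as an approximation rather than the study's tooling.

```python
# Flesch-Kincaid Grade Level:
# FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59.
# The syllable counter below is a naive vowel-group heuristic, so results are approximate.
import re

def count_syllables(word: str) -> int:
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fkgl(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

sample = "Rest in bed for a day or two. Keep moving and return to normal activity as soon as you can."
print(round(fkgl(sample), 1))
```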

  14. Reduced-size microchips for identification of horses: response to implantation and readability during a six-month period.

    Science.gov (United States)

    Wulf, M; Aurich, C; von Lewinski, M; Möstl, E; Aurich, J E

    2013-11-09

    In this study, readability of reduced-size microchips in horses and the response to implantation were analysed. It was hypothesised that small microchips can be implanted stress-free but are less readable than larger microchips. Adult mares (n=40) were implanted with a reduced-size microchip (10.9×1.6 mm) at the left side of the neck (size of conventional microchips 11.4×2.2 mm). Microchips were identified with three different scanners (A, B, C) immediately, and at 6, 12 and 28 weeks after implantation. Twelve out of the 40 mares were submitted to microchip implantation and control treatments, and cortisol, heart rate and heart rate variability (HRV) were determined. From the chip-bearing side of the neck, microchips were identified with all scanners in all horses at all times. From the contralateral side, correct readings were always 100 per cent with scanner C, and with scanners A and B they ranged between 60 and 100 per cent. Heart rate and the HRV variable SD of beat-to-beat interval increased slightly (P<0.05) in horses. Compared with conventional microchips, the reduction in size did not impair readability. Microchip implantation is not a pronounced stressor for horses.

  15. Text Readability in Head-Worn Displays: Color and Style Optimization in Video vs. Optical See-Through Devices.

    Science.gov (United States)

    Debernardis, Saverio; Fiorentino, Michele; Gattullo, Michele; Monno, Giuseppe; Uva, Antonello E

    2013-05-24

    Efficient text visualization in head-worn Augmented Reality displays is critical because it is sensitive to display technology, text style and color, ambient illumination, etc. The main problem for the developer is to know the optimal text style for the specific display and for applications where color coding must be strictly followed because it is regulated by laws or internal practices. In this work we experimentally evaluated the effects on readability of two head-worn devices (optical and video see-through), two backgrounds (light and dark), five colors (white, black, red, green, and blue) and two text styles (plain text and billboarded text). Font type and size were kept constant. We measured the performance of 15 subjects by collecting about 5000 measurements using a specific test application, followed by qualitative interviews. Reading turned out to be quicker on the optical see-through device. For the video see-through device, background affects readability only in case of text without billboard. Finally, our tests suggest that a good combination for indoor augmented reality applications, regardless of device and background, could be white text and blue billboard, while a mandatory color should be displayed as billboard with a white text message.

  16. Text readability in head-worn displays: color and style optimization in video versus optical see-through devices.

    Science.gov (United States)

    Debernardis, Saverio; Fiorentino, Michele; Gattullo, Michele; Monno, Giuseppe; Uva, Antonio Emmanuele

    2014-01-01

    Efficient text visualization in head-worn augmented reality (AR) displays is critical because it is sensitive to display technology, text style and color, ambient illumination and so on. The main problem for the developer is to know the optimal text style for the specific display and for applications where color coding must be strictly followed because it is regulated by laws or internal practices. In this work, we experimentally evaluated the effects on readability of two head-worn devices (optical and video see-through), two backgrounds (light and dark), five colors (white, black, red, green, and blue), and two text styles (plain text and billboarded text). Font type and size were kept constant. We measured the performance of 15 subjects by collecting about 5,000 measurements using a specific test application, followed by qualitative interviews. Reading turned out to be quicker on the optical see-through device. For the video see-through device, background affects readability only in case of text without billboard. Finally, our tests suggest that a good combination for indoor augmented reality applications, regardless of device and background, could be white text and blue billboard, while a mandatory color should be displayed as billboard with a white text message.

  17. Vane Pump Casing Machining of Dumpling Machine Based on CAD/CAM

    Science.gov (United States)

    Huang, Yusen; Li, Shilong; Li, Chengcheng; Yang, Zhen

    The automatic dumpling forming machine, also called a dumpling machine, makes dumplings through mechanical motions. This paper adopts a stuffing delivery mechanism featuring an improved, specially designed vane pump casing, which contributes to the formation of dumplings. Its 3D modeling in Pro/E software, machining process planning, milling path optimization, simulation based on UG and post-processing program generation are introduced and verified. The results indicated that adoption of CAD/CAM offers firms the potential to pursue new innovative strategies.

  18. Machinability of advanced materials

    CERN Document Server

    Davim, J Paulo

    2014-01-01

    Machinability of Advanced Materials addresses the level of difficulty involved in machining a material, or multiple materials, with the appropriate tooling and cutting parameters.  A variety of factors determine a material's machinability, including tool life rate, cutting forces and power consumption, surface integrity, limiting rate of metal removal, and chip shape. These topics, among others, and multiple examples comprise this research resource for engineering students, academics, and practitioners.

  19. Support vector machines applications

    CERN Document Server

    Guo, Guodong

    2014-01-01

    Support vector machines (SVM) have both a solid mathematical background and good performance in practical applications. This book focuses on the recent advances and applications of the SVM in different areas, such as image processing, medical practice, computer vision, pattern recognition, machine learning, applied statistics, business intelligence, and artificial intelligence. The aim of this book is to create a comprehensive source on support vector machine applications, especially some recent advances.

  20. Machining of titanium alloys

    CERN Document Server

    2014-01-01

    This book presents a collection of examples illustrating the recent research advances in the machining of titanium alloys. These materials have excellent strength and fracture toughness as well as low density and good corrosion resistance; however, machinability is still poor due to their low thermal conductivity and high chemical reactivity with cutting tool materials. This book presents solutions to enhance machinability in titanium-based alloys and serves as a useful reference to professionals and researchers in aerospace, automotive and biomedical fields.

  1. The generic simulation cell method for developing extensible, efficient and readable parallel computational models

    Science.gov (United States)

    Honkonen, I.

    2014-07-01

    I present a method for developing extensible and modular computational models without sacrificing serial or parallel performance or source code readability. By using a generic simulation cell method I show that it is possible to combine several distinct computational models to run in the same computational grid without requiring any modification of existing code. This is an advantage for the development and testing of computational modeling software as each submodel can be developed and tested independently and subsequently used without modification in a more complex coupled program. Support for parallel programming is also provided by allowing users to select which simulation variables to transfer between processes via a Message Passing Interface library. This allows the communication strategy of a program to be formalized by explicitly stating which variables must be transferred between processes for the correct functionality of each submodel and the entire program. The generic simulation cell class presented here requires a C++ compiler that supports variadic templates which were standardized in 2011 (C++11). The code is available at: https://github.com/nasailja/gensimcell for everyone to use, study, modify and redistribute; those that do are kindly requested to cite this work.
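
    The actual gensimcell implementation is C++11 (variadic templates); the short Python sketch below only illustrates the idea it describes, namely a generic cell that holds variables from several submodels and declares which of them must be transferred between processes. All names here are invented for illustration.

```python
# Conceptual illustration of the generic simulation cell idea described above: one cell
# type holds the variables of several submodels, and each submodel flags which variables
# must be transferred between processes. The real gensimcell library is C++11; the names
# and structure below are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Cell:
    variables: dict = field(default_factory=dict)   # name -> value
    transfer: set = field(default_factory=set)      # names to send over MPI

    def add_variable(self, name, value, transferred=False):
        self.variables[name] = value
        if transferred:
            self.transfer.add(name)

    def mpi_payload(self):
        """Only the variables flagged for transfer are serialized between processes."""
        return {name: self.variables[name] for name in self.transfer}

# Two independent submodels populate the same cell without modifying each other's code.
cell = Cell()
cell.add_variable("density", 1.0, transferred=True)         # fluid submodel
cell.add_variable("magnetic_field", (0.0, 0.0, 1e-9), transferred=True)
cell.add_variable("particle_list", [], transferred=False)   # particle submodel, local only
print(cell.mpi_payload())
```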

  2. The generic simulation cell method for developing extensible, efficient and readable parallel computational models

    Directory of Open Access Journals (Sweden)

    I. Honkonen

    2014-07-01

    Full Text Available I present a method for developing extensible and modular computational models without sacrificing serial or parallel performance or source code readability. By using a generic simulation cell method I show that it is possible to combine several distinct computational models to run in the same computational grid without requiring any modification of existing code. This is an advantage for the development and testing of computational modeling software as each submodel can be developed and tested independently and subsequently used without modification in a more complex coupled program. Support for parallel programming is also provided by allowing users to select which simulation variables to transfer between processes via a Message Passing Interface library. This allows the communication strategy of a program to be formalized by explicitly stating which variables must be transferred between processes for the correct functionality of each submodel and the entire program. The generic simulation cell class presented here requires a C++ compiler that supports variadic templates which were standardized in 2011 (C++11). The code is available at: https://github.com/nasailja/gensimcell for everyone to use, study, modify and redistribute; those that do are kindly requested to cite this work.

  3. A generic simulation cell method for developing extensible, efficient and readable parallel computational models

    Science.gov (United States)

    Honkonen, I.

    2015-03-01

    I present a method for developing extensible and modular computational models without sacrificing serial or parallel performance or source code readability. By using a generic simulation cell method I show that it is possible to combine several distinct computational models to run in the same computational grid without requiring modification of existing code. This is an advantage for the development and testing of, e.g., geoscientific software as each submodel can be developed and tested independently and subsequently used without modification in a more complex coupled program. An implementation of the generic simulation cell method presented here, generic simulation cell class (gensimcell), also includes support for parallel programming by allowing model developers to select which simulation variables of, e.g., a domain-decomposed model to transfer between processes via a Message Passing Interface (MPI) library. This allows the communication strategy of a program to be formalized by explicitly stating which variables must be transferred between processes for the correct functionality of each submodel and the entire program. The generic simulation cell class requires a C++ compiler that supports a version of the language standardized in 2011 (C++11). The code is available at https://github.com/nasailja/gensimcell for everyone to use, study, modify and redistribute; those who do are kindly requested to acknowledge and cite this work.

  4. Performance and Benchmarking of Multisurface UHF RFID Tags for Readability and Reliability

    Directory of Open Access Journals (Sweden)

    Joshua Bolton

    2017-01-01

    Full Text Available As the price of passive radio frequency identification (RFID) tags continues to decrease, more and more companies are considering item-level tagging. Although the use of RFID is simple, its proper application should be studied to achieve maximum efficiency and utilization in the industry. This paper is intended to demonstrate the test results of various multisurface UHF tags from different manufacturers for their readability under varying conditions such as orientation of tags with respect to the reader, distance of tag from the reader, and materials used for embedding tags. These conditions could affect the reliability of RFID systems used for varied applications. In this paper, we implement a Design for Six Sigma Research (DFSS-R) methodology that allows for reliability testing of RFID systems. We also showcase our results on the benchmarking of UHF RFID tags and put forward an important observation about the blind spots observed at different distances and orientations along different surfaces, which are primarily due to the polarity of the antenna chosen.
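
    One simple way to surface the blind spots described above is to tabulate read rate by distance and orientation from the raw test log. The column names and values below are invented for illustration, not the paper's benchmarking data.

```python
# Hypothetical sketch of summarizing read-rate reliability from raw test logs to locate
# blind spots; the column names and data are invented for illustration only.
import pandas as pd

log = pd.DataFrame({
    "distance_m":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "orientation": ["0", "45", "90"] * 3,
    "reads":       [30, 29, 12, 28, 25, 3, 20, 14, 0],   # successful reads
    "attempts":    [30] * 9,
})

log["read_rate"] = log["reads"] / log["attempts"]
pivot = log.pivot(index="distance_m", columns="orientation", values="read_rate")
print(pivot)                       # cells near 0 indicate blind spots
```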

  5. Readability, Suitability and Health Content Assessment of Cancer Screening Announcements in Municipal Newspapers in Japan.

    Science.gov (United States)

    Okuhara, Tsuyoshi; Ishikawa, Hirono; Okada, Hiroko; Kiuchi, Takahiro

    2015-01-01

    The objective of this study was to assess the readability, suitability, and health content of cancer screening information in municipal newspapers in Japan. Suitability Assessment of Materials (SAM) and the framework of Health Belief Model (HBM) were used for assessment of municipal newspapers that were published in central Tokyo (23 wards) from January to December 2013. The mean domain SAM scores of content, literacy demand, and layout/typography were considered superior. The SAM scores of interaction with readers, an indication of the models of desirable actions, and elaboration to enhance readers' self-efficacy were low. According to the HBM coding, messages of medical/clinical severity, of social severity, of social benefits, and of barriers of fear were scarce. The articles were generally well written and suitable. However, learning stimulation/motivation was scarce and the HBM constructs were not fully addressed. Articles can be improved to motivate readers to obtain cancer screening by increasing interaction with readers, introducing models of desirable actions and devices to raise readers' self-efficacy, and providing statements of perceived barriers of fear for pain and time constraints, perceived severity, and social benefits and losses.

  6. Readability of branding symbols in horses and histomorphological alterations at the branding site.

    Science.gov (United States)

    Aurich, J E; Wohlsein, P; Wulf, M; Nees, M; Baumgärtner, W; Becker-Birck, M; Aurich, C

    2013-03-01

    Identification of horses has traditionally been facilitated by hot iron branding, but the extent by which branding symbols and numbers can be identified has not been investigated. The local pathological changes induced by branding are also unknown. This study analysed the readability of branding symbols and histomorphological alterations at the branding sites. A total of 248 horses in an equestrian championship were available for identification of symbols and numbers. A further 28 horses, euthanased for other reasons, provided histological examination of the branding site. All except one horse had evidence of histological changes at the brand site, including epidermal hyperplasia, increase of dermal collagenous fibrous tissue and loss of adnexal structures. In two foals, an ulcerative to necrotizing dermatitis was observed and interpreted as a complication of recent branding lesions. Despite the fact that hot iron branding caused lesions compatible with third degree thermal injury, it did not allow unambiguous identification of a large proportion of older horses. While the breed-specific symbol was consistently identified by three independent investigators in 84% of the horses, the double-digit branding number was read correctly by all three investigators in less than 40%. In conclusion, hot iron branding in horses causes lesions compatible with third degree thermal injury but does not always allow identification of horses. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Photometric Supernova Classification With Machine Learning

    CERN Document Server

    Lochner, Michelle; Peiris, Hiranya V; Lahav, Ofer; Winter, Max K

    2016-01-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST), given that spectroscopic confirmation of type for all supernovae discovered with these surveys will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques fitting parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks and boosted decision trees. We test the pipeline on simulated multi-band light curves.
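
    One of the feature-extraction routes mentioned above is fitting a parametric model to each light curve and using the fitted parameters as classifier features. The sketch below illustrates that step with a simple rise/decay flux model and synthetic data; the functional form and values are assumptions, not the paper's exact model.

```python
# Illustrative sketch of parametric feature extraction from a supernova light curve:
# fit a simple rise/decay model and use the fitted parameters as classifier features.
# The functional form and the synthetic data are assumptions, not the paper's exact model.
import numpy as np
from scipy.optimize import curve_fit

def light_curve(t, A, t0, tau_rise, tau_fall):
    """Simple flux model: exponential decline modulated by a sigmoid-like rise."""
    return A * np.exp(-(t - t0) / tau_fall) / (1.0 + np.exp(-(t - t0) / tau_rise))

t_obs = np.linspace(0, 100, 40)
true = (10.0, 30.0, 5.0, 25.0)
flux = light_curve(t_obs, *true) + np.random.default_rng(1).normal(0, 0.3, t_obs.size)

params, _ = curve_fit(light_curve, t_obs, flux, p0=(5.0, 20.0, 3.0, 20.0), maxfev=5000)
features = dict(zip(["A", "t0", "tau_rise", "tau_fall"], params))
print(features)   # per-band parameters like these feed the machine learning classifiers
```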

  8. Implementation of Java Card Virtual Machine

    Institute of Scientific and Technical Information of China (English)

    刘嵩岩; 毛志刚; 叶以正

    2000-01-01

    Java Card is a new system for programming smart cards, which is based on the Java language and Virtual Machine. Java Card programs (applets) run in the Java Card Runtime Environment (JCRE), which includes the Java Card Virtual Machine (JCVM), the framework, the associated native methods and the API (Application Programming Interface). The JCVM is implemented as two separate pieces: an off-card VM (Virtual Machine) and an on-card VM. The stack model and heap memory structure used by the on-card VM, and its exception handling, are introduced. Because resources are limited within the smart card environment, and garbage collection is not supported in the JCVM, the preferred approach to exception handling does not directly involve the use of throw, although the throw keyword is supported. Security is the most important feature of smart cards. The Java Card applet security features are also discussed.

  9. Rotating electrical machines

    CERN Document Server

    Le Doeuff, René

    2013-01-01

    In this book a general matrix-based approach to modeling electrical machines is promulgated. The model uses instantaneous quantities for key variables and enables the user to easily take into account associations between rotating machines and static converters (such as in variable speed drives). General equations of electromechanical energy conversion are established early in the treatment of the topic and then applied to synchronous, induction and DC machines. The primary characteristics of these machines are established for steady state behavior as well as for variable speed scenarios.

  10. Chaotic Boltzmann machines.

    Science.gov (United States)

    Suzuki, Hideyuki; Imura, Jun-ichi; Horio, Yoshihiko; Aihara, Kazuyuki

    2013-01-01

    The chaotic Boltzmann machine proposed in this paper is a chaotic pseudo-billiard system that works as a Boltzmann machine. Chaotic Boltzmann machines are shown numerically to have computing abilities comparable to conventional (stochastic) Boltzmann machines. Since no randomness is required, efficient hardware implementation is expected. Moreover, the ferromagnetic phase transition of the Ising model is shown to be characterised by the largest Lyapunov exponent of the proposed system. In general, a method to relate probabilistic models to nonlinear dynamics by derandomising Gibbs sampling is presented.
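
    For reference, the conventional stochastic Boltzmann machine update that the chaotic variant derandomizes is a Gibbs sampling step over binary units. The sketch below shows that standard update with made-up weights; the chaotic pseudo-billiard dynamics themselves are not reproduced here.

```python
# Standard stochastic Boltzmann machine (Gibbs sampling) update, shown for reference:
# the chaotic Boltzmann machine described above replaces this random sampling with
# deterministic pseudo-billiard dynamics. Weights and sizes are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(0, 0.5, (n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
b = rng.normal(0, 0.1, n)
s = rng.integers(0, 2, n).astype(float)         # binary unit states
T = 1.0                                          # temperature

def gibbs_sweep(s):
    for i in range(n):
        activation = W[i] @ s + b[i]
        p_on = 1.0 / (1.0 + np.exp(-activation / T))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

for _ in range(100):
    s = gibbs_sweep(s)
print(s)
```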

  11. Tribology in machine design

    CERN Document Server

    Stolarski, Tadeusz

    1999-01-01

    ""Tribology in Machine Design is strongly recommended for machine designers, and engineers and scientists interested in tribology. It should be in the engineering library of companies producing mechanical equipment.""Applied Mechanics ReviewTribology in Machine Design explains the role of tribology in the design of machine elements. It shows how algorithms developed from the basic principles of tribology can be used in a range of practical applications within mechanical devices and systems.The computer offers today's designer the possibility of greater stringen

  12. Machine learning with R

    CERN Document Server

    Lantz, Brett

    2013-01-01

    Written as a tutorial to explore and understand the power of R for machine learning, this practical guide covers all of the need-to-know topics in a very systematic way. For each machine learning approach, each step in the process is detailed, from preparing the data for analysis to evaluating the results. These steps will build the knowledge you need to apply them to your own data science tasks. Intended for those who want to learn how to use R's machine learning capabilities and gain insight from their data. Perhaps you already know a bit about machine learning but have never used R, or perhaps you know a little R but are new to machine learning; in either case, this book will get you started quickly.

  13. Induction machine handbook

    CERN Document Server

    Boldea, Ion

    2002-01-01

    Often called the workhorse of industry, the advent of power electronics and advances in digital control are transforming the induction motor into the racehorse of industrial motion control. Now, the classic texts on induction machines are nearly three decades old, while more recent books on electric motors lack the necessary depth and detail on induction machines. The Induction Machine Handbook fills industry's long-standing need for a comprehensive treatise embracing the many intricate facets of induction machine analysis and design. Moving gradually from simple to complex and from standard to

  14. DRIVE AND CONTROL OF VIRTUAL-AXIS NC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The structural features and driving modes of virtual-axis NC machine tools are studied. According to different application requirements, the three-axis control method, the five-axis control method and the six-freedom control method are put forward. These results lay a foundation for the product development of virtual-axis NC machine tools.

  15. An Evolutionary Machine Learning Framework for Big Data Sequence Mining

    Science.gov (United States)

    Kamath, Uday Krishna

    2014-01-01

    Sequence classification is an important problem in many real-world applications. Unlike other machine learning data, there are no "explicit" features or signals in sequence data that can help traditional machine learning algorithms learn and predict from the data. Sequence data exhibits inter-relationships in the elements that are…

  17. SUPPORT VECTOR MACHINE METHOD FOR PREDICTING INVESTMENT MEASURES

    Directory of Open Access Journals (Sweden)

    Olga V. Kitova

    2016-01-01

    Full Text Available Possibilities of applying an intelligent machine learning technique based on support vectors for predicting investment measures are considered in the article. The basic advantages of the support vector method over traditional econometric techniques for improving forecast quality are described. Computer modeling results on tuning support vector machine models, developed in the Python programming language for predicting selected investment measures, are shown.
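
    A hedged sketch of what such a Python model can look like is given below: a scaled support vector regression tuned by cross-validated grid search. The indicator columns, data and hyperparameter grid are placeholders, not the article's actual setup.

```python
# Hypothetical sketch of a support-vector regression forecast in Python, in the spirit
# of the approach described above; indicator names, data and grids are placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 4))              # e.g. lagged macroeconomic indicators
y = X @ np.array([0.8, -0.3, 0.5, 0.1]) + rng.normal(0, 0.2, 120)  # investment measure

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(model, {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1]},
                    cv=TimeSeriesSplit(n_splits=5))
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```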

  18. Bionic machines and systems

    Energy Technology Data Exchange (ETDEWEB)

    Halme, A.; Paanajaervi, J. (eds.)

    2004-07-01

    Today's robotics research is directed towards solving the problems of third-generation intelligent robots. Most of them are no longer intended for working on production lines, as their second-generation predecessors do, but for serving in different tasks related to natural environments or urban structures. Many of them are supposed to work in close cooperation with humans as members of their community. One of the basic features needed is mobility, the capability to go to the work, because the work no longer comes to the machine, as it does in factories; instead the machine has to move. This, in turn, implies the need for other primary functions, such as localization and navigation. Further, because the environment and the details of the task are usually not known beforehand, the control system of the robot has to rely on perceptive information from sensors and senses in order to complete the task satisfactorily. Biological species have developed a large variety of solutions for all these primary functions. The variety of motion control methods also provides many interesting solutions, such as walking, swimming and flying, all of which are worth mimicking in robotics. Learning is still in its infancy in intelligent robotics, especially regarding skilled tasks done with hands or tools. Biological research offers many interesting results on animal learning, which, while being a complex process in its own right, is still simpler than the corresponding human learning and thus easier to mimic. The report is based on the presentations given by the participants. The material has been collected from published references in the literature and on the Web. Besides written material, a video file archive has also been collected and is available as an appendix to this report. The presentation order roughly follows a bottom-up hierarchy of subsystems in biological machines. Chapter 2 introduces the background of biological energy. Chapter 3 deals with motions and motion control.

  19. Error Detection for Statistical Machine Translation Based on Feature Comparison and Maximum Entropy Model Classifier

    Institute of Scientific and Technical Information of China (English)

    杜金华; 王莎

    2013-01-01

    The authors first introduce three typical word posterior probability (WPP) features for translation error detection and classification, namely fixed-position WPP, sliding-window WPP and alignment-based WPP, and analyze their impact on detection performance. Each WPP feature is then combined with three linguistic features (word, POS and LG parsing knowledge) in a maximum entropy classifier to predict translation errors. Experimental results on Chinese-to-English NIST datasets show that the influence of the different WPP features on the classification error rate (CER) is significant, and that combining WPP with linguistic features significantly reduces the CER and improves the prediction capability of the classifier.
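
    Combining a numeric WPP score with categorical linguistic features in a maximum entropy classifier can be sketched as below, with logistic regression standing in as the maximum entropy model. All feature values and column names are invented for illustration; this is not the paper's feature set or data.

```python
# Hypothetical sketch of combining a numeric word posterior probability (WPP) with
# categorical linguistic features (word, POS tag) in a maximum entropy classifier,
# here realised as logistic regression; all feature values are invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

data = pd.DataFrame({
    "wpp":   [0.91, 0.35, 0.78, 0.12, 0.66, 0.20],
    "word":  ["the", "bank", "river", "of", "charge", "in"],
    "pos":   ["DT", "NN", "NN", "IN", "NN", "IN"],
    "error": [0, 1, 0, 1, 0, 1],          # 1 = translation error
})

features = ColumnTransformer([
    ("onehot", OneHotEncoder(handle_unknown="ignore"), ["word", "pos"]),
], remainder="passthrough")               # numeric WPP column passes through unchanged

clf = make_pipeline(features, LogisticRegression(max_iter=1000))
clf.fit(data[["wpp", "word", "pos"]], data["error"])
print(clf.predict(data[["wpp", "word", "pos"]]))
```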

  20. The Application of Support Vector Machine (svm) Using Cielab Color Model, Color Intensity and Color Constancy as Features for Ortho Image Classification of Benthic Habitats in Hinatuan, Surigao del Sur, Philippines

    Science.gov (United States)

    Cubillas, J. E.; Japitana, M.

    2016-06-01

    This study demonstrates the application of CIELAB, Color intensity, and One Dimensional Scalar Constancy as features for image recognition and classifying benthic habitats in an image with the coastal areas of Hinatuan, Surigao Del Sur, Philippines as the study area. The study area is composed of four datasets, namely: (a) Blk66L005, (b) Blk66L021, (c) Blk66L024, and (d) Blk66L0114. SVM optimization was performed in Matlab® software with the help of Parallel Computing Toolbox to hasten the SVM computing speed. The image used for collecting samples for SVM procedure was Blk66L0114 in which a total of 134,516 sample objects of mangrove, possible coral existence with rocks, sand, sea, fish pens and sea grasses were collected and processed. The collected samples were then used as training sets for the supervised learning algorithm and for the creation of class definitions. The learned hyper-planes separating one class from another in the multi-dimensional feature space can be thought of as a super feature which will then be used in developing the C (classifier) rule set in eCognition® software. The classification results of the sampling site yielded an accuracy of 98.85% which confirms the reliability of remote sensing techniques and analysis employed to orthophotos like the CIELAB, Color Intensity and One dimensional scalar constancy and the use of SVM classification algorithm in classifying benthic habitats.